* [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library
@ 2018-11-22 3:30 Honnappa Nagarahalli
2018-11-22 3:30 ` [dpdk-dev] [RFC 1/3] log: add TQS log type Honnappa Nagarahalli
` (14 more replies)
0 siblings, 15 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-11-22 3:30 UTC (permalink / raw)
To: dev; +Cc: nd, honnappa.nagarahalli, dharmik.thakkar, malvika.gupta, gavin.hu
Lock-less data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
(for ex: real-time applications).
In the following paragraphs, the term 'memory' refers to memory allocated
by typical APIs like malloc, or anything that is representative of
memory, e.g. an index into an array of free elements.
Since these data structures are lock-less, writers and readers access
them simultaneously. Hence, while removing an element from a data
structure, the writer cannot return the memory to the allocator
without knowing that no reader is still referencing that
element/memory. The operation of removing an element therefore has to
be split into 2 steps:
Delete: in this step, the writer removes the element from the
data structure but does not return the associated memory to the allocator.
This ensures that new readers will not get a reference to the removed
element. Removing the reference is an atomic operation.
Free: in this step, the writer returns the memory to the
memory allocator only after it knows that all the readers have stopped
referencing the removed element.
This library helps the writer determine when it is safe to free the
memory.
This library makes use of the Thread Quiescent State (TQS). A TQS can be
defined as 'any point in the thread execution where the thread does
not hold a reference to shared memory'. It is up to the application to
determine its quiescent states. Let us consider the following diagram:
Time -------------------------------------------------->
| |
RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
| |
RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
| |
RT3 $++++****D1****+++***|D2***|++++++**D3*****++++$
| |
|<--->|
Del | Free
|
Cannot free memory
during this period
RTx - Reader thread
$ - Start and end of while(1) loop
***Dx*** - Reader thread is accessing the shared data structure Dx.
i.e. critical section.
+++ - Reader thread is not accessing any shared data structure.
i.e. non critical section or quiescent state.
Del - Point in time when the reference to the entry is removed using
atomic operation.
Free - Point in time when the writer can free the entry.
As shown, thread RT1 accesses data structures D1, D2 and D3. While RT1
is accessing D2, if the writer has to remove an element from D2, the
writer cannot return the memory associated with that element to the
allocator. The writer can return the memory to the allocator only after
the reader stops referencing D2; in other words, only after reader
thread RT1 has entered a quiescent state.
Similarly, since thread RT3 is also accessing D2, the writer has to wait
until RT3 enters a quiescent state as well.
However, the writer does not need to wait for RT2 to enter a quiescent
state: thread RT2 was not accessing D2 when the delete operation
happened, so RT2 cannot hold a reference to the deleted entry.
Note that the critical sections for D2 and D3 are quiescent states
for D1. That is, for a given data structure Dx, any point in the thread
execution that does not reference Dx is a quiescent state.
For DPDK applications, the start and end of the while(1) loop (where no
shared data structures are accessed) act as perfect quiescent states. This
combines all the shared data structure accesses into a single critical
section and keeps the overhead introduced by this library to a minimum.
However, the time taken to identify the end of a critical section is
proportional to the length of the critical section and the number of
reader threads. So, if the application desires, it should be possible to
identify the end of the critical section for each data structure
separately.
To provide the required flexibility, this library has a concept of TQS
variable. The application can create one or more TQS variables to help it
track the end of one or more critical sections.
The application can create a TQS variable using the API rte_tqs_alloc.
It takes a mask of lcore IDs that will report their quiescent states
using this variable. This mask can be empty to start with.
The rte_tqs_register_lcore API registers a reader thread to report its
quiescent state. It can be called from any control plane thread or from
the reader thread itself. The application can create a TQS variable with
no reader threads and add the threads dynamically using this API.
The application can trigger the reader threads to report their quiescent
state status by calling the API rte_tqs_start. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
rte_tqs_start returns a token to each caller.
The application has to call the rte_tqs_check API with the token to get
the current status. An option to block until all the threads enter the
quiescent state is provided. If this API indicates that all the threads
have entered the quiescent state, the application can free the deleted
entry.
The separation of triggering the reporting from querying the status provides
the writer threads flexibility to do useful work instead of waiting for the
reader threads to enter the quiescent state.
The rte_tqs_unregister_lcore API removes a reader thread from reporting
its quiescent state on a TQS variable. The rte_tqs_check API will no
longer wait for this reader thread to report its quiescent state status.
Finally, a TQS variable can be deleted by calling the rte_tqs_free API.
The application must make sure that the reader threads are no longer
referencing the TQS variable before deleting it.
The reader threads should call the rte_tqs_update API to indicate that
they have entered a quiescent state. This API checks whether a writer has
triggered a quiescent state query and updates the state accordingly.
Next Steps:
1) Add more test cases
2) Convert to patch
3) Incorporate feedback from community
4) Add documentation
Dharmik Thakkar (1):
test/tqs: Add API and functional tests
Honnappa Nagarahalli (2):
log: add TQS log type
tqs: add thread quiescent state library
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_eal/common/include/rte_log.h | 1 +
lib/librte_tqs/Makefile | 23 +
lib/librte_tqs/meson.build | 5 +
lib/librte_tqs/rte_tqs.c | 249 +++++++++++
lib/librte_tqs/rte_tqs.h | 352 +++++++++++++++
lib/librte_tqs/rte_tqs_version.map | 16 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
test/test/Makefile | 2 +
test/test/autotest_data.py | 6 +
test/test/meson.build | 5 +-
test/test/test_tqs.c | 540 ++++++++++++++++++++++++
14 files changed, 1208 insertions(+), 2 deletions(-)
create mode 100644 lib/librte_tqs/Makefile
create mode 100644 lib/librte_tqs/meson.build
create mode 100644 lib/librte_tqs/rte_tqs.c
create mode 100644 lib/librte_tqs/rte_tqs.h
create mode 100644 lib/librte_tqs/rte_tqs_version.map
create mode 100644 test/test/test_tqs.c
--
2.17.1
* [dpdk-dev] [RFC 1/3] log: add TQS log type
2018-11-22 3:30 [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library Honnappa Nagarahalli
@ 2018-11-22 3:30 ` Honnappa Nagarahalli
2018-11-27 22:24 ` Stephen Hemminger
2018-11-22 3:30 ` [dpdk-dev] [RFC 2/3] tqs: add thread quiescent state library Honnappa Nagarahalli
` (13 subsequent siblings)
14 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-11-22 3:30 UTC (permalink / raw)
To: dev; +Cc: nd, honnappa.nagarahalli, dharmik.thakkar, malvika.gupta, gavin.hu
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
---
lib/librte_eal/common/include/rte_log.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index 2f789cb90..b4e91a4a5 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -61,6 +61,7 @@ extern struct rte_logs rte_logs;
#define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. */
#define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */
#define RTE_LOGTYPE_GSO 20 /**< Log related to GSO. */
+#define RTE_LOGTYPE_TQS 21 /**< Log related to Thread Quiescent State. */
/* these log types can be used in an application */
#define RTE_LOGTYPE_USER1 24 /**< User-defined log type 1. */
--
2.17.1
* [dpdk-dev] [RFC 2/3] tqs: add thread quiescent state library
2018-11-22 3:30 [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library Honnappa Nagarahalli
2018-11-22 3:30 ` [dpdk-dev] [RFC 1/3] log: add TQS log type Honnappa Nagarahalli
@ 2018-11-22 3:30 ` Honnappa Nagarahalli
2018-11-24 12:18 ` Ananyev, Konstantin
2018-11-22 3:30 ` [dpdk-dev] [RFC 3/3] test/tqs: Add API and functional tests Honnappa Nagarahalli
` (12 subsequent siblings)
14 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-11-22 3:30 UTC (permalink / raw)
To: dev; +Cc: nd, honnappa.nagarahalli, dharmik.thakkar, malvika.gupta, gavin.hu
Add Thread Quiescent State (TQS) library. This library helps identify
the quiescent state of the reader threads so that the writers
can free the memory associated with the lock less data structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
---
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_tqs/Makefile | 23 ++
lib/librte_tqs/meson.build | 5 +
lib/librte_tqs/rte_tqs.c | 249 ++++++++++++++++++++
lib/librte_tqs/rte_tqs.h | 352 +++++++++++++++++++++++++++++
lib/librte_tqs/rte_tqs_version.map | 16 ++
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
9 files changed, 655 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_tqs/Makefile
create mode 100644 lib/librte_tqs/meson.build
create mode 100644 lib/librte_tqs/rte_tqs.c
create mode 100644 lib/librte_tqs/rte_tqs.h
create mode 100644 lib/librte_tqs/rte_tqs_version.map
diff --git a/config/common_base b/config/common_base
index d12ae98bc..af40a9f81 100644
--- a/config/common_base
+++ b/config/common_base
@@ -792,6 +792,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_tqs
+#
+CONFIG_RTE_LIBRTE_TQS=y
+CONFIG_RTE_LIBRTE_TQS_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index b7370ef97..7095eac88 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -108,6 +108,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_BPF) += librte_bpf
DEPDIRS-librte_bpf := librte_eal librte_mempool librte_mbuf librte_ethdev
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_TQS) += librte_tqs
+DEPDIRS-librte_tqs := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_tqs/Makefile b/lib/librte_tqs/Makefile
new file mode 100644
index 000000000..059de53e2
--- /dev/null
+++ b/lib/librte_tqs/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_tqs.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_tqs_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_TQS) := rte_tqs.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_TQS)-include := rte_tqs.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_tqs/meson.build b/lib/librte_tqs/meson.build
new file mode 100644
index 000000000..dd696ab07
--- /dev/null
+++ b/lib/librte_tqs/meson.build
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+sources = files('rte_tqs.c')
+headers = files('rte_tqs.h')
diff --git a/lib/librte_tqs/rte_tqs.c b/lib/librte_tqs/rte_tqs.c
new file mode 100644
index 000000000..cbc36864e
--- /dev/null
+++ b/lib/librte_tqs/rte_tqs.c
@@ -0,0 +1,249 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_tqs.h"
+
+TAILQ_HEAD(rte_tqs_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_tqs_tailq = {
+ .name = RTE_TAILQ_TQS_NAME,
+};
+EAL_REGISTER_TAILQ(rte_tqs_tailq)
+
+/* Allocate a new TQS variable with the name *name* in memory. */
+struct rte_tqs * __rte_experimental
+rte_tqs_alloc(const char *name, int socket_id, uint64_t lcore_mask)
+{
+ char tqs_name[RTE_TQS_NAMESIZE];
+ struct rte_tailq_entry *te, *tmp_te;
+ struct rte_tqs_list *tqs_list;
+ struct rte_tqs *v, *tmp_v;
+ int ret;
+
+ if (name == NULL) {
+ RTE_LOG(ERR, TQS, "Invalid input parameters\n");
+ rte_errno = -EINVAL;
+ return NULL;
+ }
+
+ te = rte_zmalloc("TQS_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ RTE_LOG(ERR, TQS, "Cannot reserve memory for tailq\n");
+ rte_errno = -ENOMEM;
+ return NULL;
+ }
+
+ snprintf(tqs_name, sizeof(tqs_name), "%s", name);
+ v = rte_zmalloc_socket(tqs_name, sizeof(struct rte_tqs),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (v == NULL) {
+ RTE_LOG(ERR, TQS, "Cannot reserve memory for TQS variable\n");
+ rte_errno = -ENOMEM;
+ goto alloc_error;
+ }
+
+ ret = snprintf(v->name, sizeof(v->name), "%s", name);
+ if (ret < 0 || ret >= (int)sizeof(v->name)) {
+ rte_errno = -ENAMETOOLONG;
+ goto alloc_error;
+ }
+
+ te->data = (void *) v;
+ v->lcore_mask = lcore_mask;
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ tqs_list = RTE_TAILQ_CAST(rte_tqs_tailq.head, rte_tqs_list);
+
+ /* Search if a TQS variable with the same name exists already */
+ TAILQ_FOREACH(tmp_te, tqs_list, next) {
+ tmp_v = (struct rte_tqs *) tmp_te->data;
+ if (strncmp(name, tmp_v->name, RTE_TQS_NAMESIZE) == 0)
+ break;
+ }
+
+ if (tmp_te != NULL) {
+ rte_errno = -EEXIST;
+ goto tqs_exist;
+ }
+
+ TAILQ_INSERT_TAIL(tqs_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return v;
+
+tqs_exist:
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+alloc_error:
+ rte_free(te);
+ rte_free(v);
+ return NULL;
+}
+
+/* De-allocate all the memory used by a TQS variable. */
+void __rte_experimental
+rte_tqs_free(struct rte_tqs *v)
+{
+ struct rte_tqs_list *tqs_list;
+ struct rte_tailq_entry *te;
+
+ /* Search for the TQS variable in tailq */
+ tqs_list = RTE_TAILQ_CAST(rte_tqs_tailq.head, rte_tqs_list);
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ TAILQ_FOREACH(te, tqs_list, next) {
+ if (te->data == (void *) v)
+ break;
+ }
+
+ if (te != NULL)
+ TAILQ_REMOVE(tqs_list, te, next);
+ else
+ RTE_LOG(ERR, TQS, "TQS variable %s not found\n", v->name);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_free(te);
+ rte_free(v);
+}
+
+/* Add a reader thread, running on an lcore, to the list of threads
+ * reporting their quiescent state on a TQS variable.
+ */
+int __rte_experimental
+rte_tqs_register_lcore(struct rte_tqs *v, unsigned int lcore_id)
+{
+ TQS_RETURN_IF_TRUE((v == NULL || lcore_id >= RTE_TQS_MAX_LCORE),
+ -EINVAL);
+
+ /* Worker thread has to count the quiescent states
+ * only from the current value of token.
+ */
+ v->w[lcore_id].cnt = v->token;
+
+ /* Release the store to initial TQS count so that workers
+ * can use it immediately after this function returns.
+ */
+ __atomic_fetch_or(&v->lcore_mask, (1UL << lcore_id), __ATOMIC_RELEASE);
+
+ return 0;
+}
+
+/* Remove a reader thread, running on an lcore, from the list of threads
+ * reporting their quiescent state on a TQS variable.
+ */
+int __rte_experimental
+rte_tqs_unregister_lcore(struct rte_tqs *v, unsigned int lcore_id)
+{
+ TQS_RETURN_IF_TRUE((v == NULL ||
+ lcore_id >= RTE_TQS_MAX_LCORE), -EINVAL);
+
+ /* This can be a relaxed store. Since this is an API, make sure
+ * the store is not reordered with other memory operations.
+ */
+ __atomic_fetch_and(&v->lcore_mask,
+ ~(1UL << lcore_id), __ATOMIC_RELEASE);
+
+ return 0;
+}
+
+/* Search a TQS variable, given its name. */
+struct rte_tqs * __rte_experimental
+rte_tqs_lookup(const char *name)
+{
+ struct rte_tqs_list *tqs_list;
+ struct rte_tailq_entry *te;
+ struct rte_tqs *v;
+
+ if (name == NULL) {
+ RTE_LOG(ERR, TQS, "Invalid input parameters\n");
+ rte_errno = -EINVAL;
+ return NULL;
+ }
+
+ v = NULL;
+ tqs_list = RTE_TAILQ_CAST(rte_tqs_tailq.head, rte_tqs_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* find out tailq entry */
+ TAILQ_FOREACH(te, tqs_list, next) {
+ v = (struct rte_tqs *) te->data;
+ if (strncmp(name, v->name, RTE_TQS_NAMESIZE) == 0)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_errno = -ENOENT;
+ v = NULL;
+ }
+
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return v;
+}
+
+/* Dump the details of a single TQS variable to a file. */
+void __rte_experimental
+rte_tqs_dump(FILE *f, struct rte_tqs *v)
+{
+ uint64_t tmp_mask;
+ uint32_t i;
+
+ TQS_ERR_LOG_IF_TRUE(v == NULL || f == NULL);
+
+ fprintf(f, "\nTQS <%s>@%p\n", v->name, v);
+ fprintf(f, " lcore mask = 0x%lx\n", v->lcore_mask);
+ fprintf(f, " token = %u\n", v->token);
+
+ if (v->lcore_mask != 0) {
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ tmp_mask = v->lcore_mask;
+ while (tmp_mask) {
+ i = __builtin_ctzl(tmp_mask);
+ fprintf(f, "lcore # = %d, count = %u\n",
+ i, v->w[i].cnt);
+ tmp_mask &= ~(1UL << i);
+ }
+ }
+}
+
+/* Dump the details of all the TQS variables to a file. */
+void __rte_experimental
+rte_tqs_list_dump(FILE *f)
+{
+ const struct rte_tailq_entry *te;
+ struct rte_tqs_list *tqs_list;
+
+ TQS_ERR_LOG_IF_TRUE(f == NULL);
+
+ tqs_list = RTE_TAILQ_CAST(rte_tqs_tailq.head, rte_tqs_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ TAILQ_FOREACH(te, tqs_list, next) {
+ rte_tqs_dump(f, (struct rte_tqs *) te->data);
+ }
+
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+}
diff --git a/lib/librte_tqs/rte_tqs.h b/lib/librte_tqs/rte_tqs.h
new file mode 100644
index 000000000..9136418d2
--- /dev/null
+++ b/lib/librte_tqs/rte_tqs.h
@@ -0,0 +1,352 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_TQS_H_
+#define _RTE_TQS_H_
+
+/**
+ * @file
+ * RTE Thread Quiescent State
+ *
+ * Thread Quiescent State (TQS) is any point in the thread execution
+ * where the thread does not hold a reference to shared memory, i.e.
+ * a non-critical section. A critical section for a data structure can
+ * be a quiescent state for another data structure.
+ *
+ * An application can identify the quiescent state according to its
+ * needs. It can identify 1 quiescent state for each data structure or
+ * 1 quiescent state for a group/all of data structures.
+ *
+ * This library provides the flexibility for these use cases.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+
+#define RTE_TAILQ_TQS_NAME "RTE_TQS"
+
+/** The maximum length of a TQS variable name. */
+#define RTE_TQS_NAMESIZE 32
+
+/** Maximum number of lcores supported. */
+#if (RTE_MAX_LCORE > 64)
+#define RTE_TQS_MAX_LCORE 64
+#else
+#define RTE_TQS_MAX_LCORE RTE_MAX_LCORE
+#endif
+
+/* Macro for run-time checking of function parameters */
+#if defined(RTE_LIBRTE_TQS_DEBUG)
+#define TQS_RETURN_IF_TRUE(cond, retval) do { \
+ if ((cond)) \
+ return retval; \
+} while (0)
+#else
+#define TQS_RETURN_IF_TRUE(cond, retval)
+#endif
+
+/* Macro to log error message */
+#define TQS_ERR_LOG_IF_TRUE(cond) do { \
+ if ((cond)) { \
+ RTE_LOG(ERR, TQS, "Invalid parameters\n"); \
+ return; \
+ } \
+} while (0)
+
+/* Worker thread counter */
+struct rte_tqs_cnt {
+ volatile uint32_t cnt; /**< Quiescent state counter. */
+} __rte_cache_aligned;
+
+/**
+ * RTE Thread Quiescent State structure.
+ */
+struct rte_tqs {
+ char name[RTE_TQS_NAMESIZE];
+ /**< Name of the TQS variable. */
+ uint64_t lcore_mask;
+ /**< Worker lcores reporting on this TQS */
+
+ uint32_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple simultaneous TQS queries */
+
+ struct rte_tqs_cnt w[RTE_TQS_MAX_LCORE] __rte_cache_aligned;
+ /**< TQS counter for each worker thread, counts up to
+ * current value of token.
+ */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Allocate a new TQS variable with the name *name* in memory.
+ *
+ * The TQS variable is added in RTE_TAILQ_TQS list.
+ *
+ * @param name
+ * The name of the TQS variable.
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ * @param lcore_mask
+ * Data plane reader threads in this mask will report their quiescent
+ * state on this TQS variable.
+ * @return
+ * On success, the pointer to the new allocated TQS variable. NULL on
+ * error with rte_errno set appropriately. Possible errno values include:
+ * - EINVAL - invalid input parameters
+ * - ENAMETOOLONG - TQS variable name is longer than RTE_TQS_NAMESIZE
+ * - EEXIST - a TQS variable with the same name already exists
+ * - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+struct rte_tqs * __rte_experimental
+rte_tqs_alloc(const char *name, int socket_id, uint64_t lcore_mask);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * De-allocate all the memory used by a TQS variable. It is the
+ * application's responsibility to make sure that no other thread
+ * is using the TQS variable.
+ *
+ * The TQS variable is removed from RTE_TAILQ_TQS list.
+ *
+ * @param v
+ * TQS variable to free.
+ */
+void __rte_experimental rte_tqs_free(struct rte_tqs *v);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a worker thread, running on an lcore, to the list of threads
+ * reporting their quiescent state on a TQS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe. This API can be called from the worker threads during
+ * initialization. Any ongoing TQS queries may wait for the
+ * status from this registered worker thread.
+ *
+ * @param v
+ * TQS variable
+ * @param lcore_id
+ * Worker thread on this lcore will report its quiescent state on
+ * this TQS variable.
+ * @return
+ * - 0 if registered successfully.
+ * - -EINVAL if the parameters are invalid (debug mode compilation only).
+ */
+int __rte_experimental
+rte_tqs_register_lcore(struct rte_tqs *v, unsigned int lcore_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a worker thread, running on an lcore, from the list of threads
+ * reporting their quiescent state on a TQS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the worker threads during shutdown.
+ * Any ongoing TQS queries may stop waiting for the status from this
+ * unregistered worker thread.
+ *
+ * @param v
+ * TQS variable
+ * @param lcore_id
+ * Worker thread on this lcore will stop reporting its quiescent state
+ * on this TQS variable.
+ * @return
+ * - 0 if un-registered successfully.
+ * - -EINVAL if the parameters are invalid (debug mode compilation only).
+ */
+int __rte_experimental
+rte_tqs_unregister_lcore(struct rte_tqs *v, unsigned int lcore_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Trigger the worker threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * @param v
+ * TQS variable
+ * @param n
+ * Expected number of times the quiescent state is entered
+ * @param t
+ * - If successful, this is the token for this call of the API.
+ * This should be passed to rte_tqs_check API.
+ * @return
+ * - -EINVAL if the parameters are invalid (debug mode compilation only).
+ * - 0 Otherwise and always (non-debug mode compilation).
+ */
+static __rte_always_inline int __rte_experimental
+rte_tqs_start(struct rte_tqs *v, unsigned int n, uint32_t *t)
+{
+ TQS_RETURN_IF_TRUE((v == NULL || t == NULL), -EINVAL);
+
+ /* This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ *t = __atomic_add_fetch(&v->token, n, __ATOMIC_RELEASE);
+
+ return 0;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for the worker thread on a lcore.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the worker threads registered to report their quiescent state
+ * on the TQS variable must call this API.
+ *
+ * @param v
+ * TQS variable
+ */
+static __rte_always_inline void __rte_experimental
+rte_tqs_update(struct rte_tqs *v, unsigned int lcore_id)
+{
+ uint32_t t;
+
+ TQS_ERR_LOG_IF_TRUE(v == NULL || lcore_id >= RTE_TQS_MAX_LCORE);
+
+ /* Load the token before the worker thread loads any other
+ * (lock-free) data structure. This ensures that updates
+ * to the data structures are visible if the update
+ * to token is visible.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+ if (v->w[lcore_id].cnt != t)
+ v->w[lcore_id].cnt++;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the worker threads have entered the quiescent state
+ * 'n' number of times. 'n' is provided in rte_tqs_start API.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * @param v
+ * TQS variable
+ * @param t
+ * Token returned by rte_tqs_start API
+ * @param wait
+ * If true, block till all the worker threads have completed entering
+ * the quiescent state 'n' number of times
+ * @return
+ * - 0 if all worker threads have NOT passed through specified number
+ * of quiescent states.
+ * - 1 if all worker threads have passed through specified number
+ * of quiescent states.
+ * - -EINVAL if the parameters are invalid (debug mode compilation only).
+ */
+static __rte_always_inline int __rte_experimental
+rte_tqs_check(struct rte_tqs *v, uint32_t t, bool wait)
+{
+ uint64_t l;
+ uint64_t lcore_mask;
+
+ TQS_RETURN_IF_TRUE((v == NULL), -EINVAL);
+
+ do {
+ /* Load the current lcore_mask before loading the
+ * worker thread quiescent state counters.
+ */
+ lcore_mask = __atomic_load_n(&v->lcore_mask, __ATOMIC_ACQUIRE);
+
+ while (lcore_mask) {
+ l = __builtin_ctzl(lcore_mask);
+ if (v->w[l].cnt != t)
+ break;
+
+ lcore_mask &= ~(1UL << l);
+ }
+
+ if (lcore_mask == 0)
+ return 1;
+
+ rte_pause();
+ } while (wait);
+
+ return 0;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Search a TQS variable, given its name.
+ *
+ * It is multi-thread safe.
+ *
+ * @param name
+ * The name of the TQS variable.
+ * @return
+ * On success, the pointer to the TQS variable. NULL on
+ * error with rte_errno set appropriately. Possible errno values include:
+ * - EINVAL - invalid input parameters.
+ * - ENOENT - entry not found.
+ */
+struct rte_tqs * __rte_experimental
+rte_tqs_lookup(const char *name);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single TQS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * TQS variable
+ */
+void __rte_experimental
+rte_tqs_dump(FILE *f, struct rte_tqs *v);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of all the TQS variables to a file.
+ *
+ * It is multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ */
+void __rte_experimental
+rte_tqs_list_dump(FILE *f);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_TQS_H_ */
diff --git a/lib/librte_tqs/rte_tqs_version.map b/lib/librte_tqs/rte_tqs_version.map
new file mode 100644
index 000000000..2e4d5c094
--- /dev/null
+++ b/lib/librte_tqs/rte_tqs_version.map
@@ -0,0 +1,16 @@
+EXPERIMENTAL {
+ global:
+
+ rte_tqs_alloc;
+ rte_tqs_free;
+ rte_tqs_register_lcore;
+ rte_tqs_unregister_lcore;
+ rte_tqs_start;
+ rte_tqs_update;
+ rte_tqs_check;
+ rte_tqs_lookup;
+ rte_tqs_list_dump;
+ rte_tqs_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index bb7f443f9..ee19c483e 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -21,7 +21,7 @@ libraries = [ 'compat', # just a header, used for versioning
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'meter', 'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'tqs', 'vhost',
# add pkt framework libs which use other libs from above
'port', 'table', 'pipeline',
# flow_classify lib depends on pkt framework table lib
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 3ebc4e64c..6e19e669a 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -92,6 +92,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_TQS) += -lrte_tqs
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
* [dpdk-dev] [RFC 3/3] test/tqs: Add API and functional tests
2018-11-22 3:30 [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library Honnappa Nagarahalli
2018-11-22 3:30 ` [dpdk-dev] [RFC 1/3] log: add TQS log type Honnappa Nagarahalli
2018-11-22 3:30 ` [dpdk-dev] [RFC 2/3] tqs: add thread quiescent state library Honnappa Nagarahalli
@ 2018-11-22 3:30 ` Honnappa Nagarahalli
[not found] ` <CGME20181122073110eucas1p17592400af6c0b807dc87e90d136575af@eucas1p1.samsung.com>
` (11 subsequent siblings)
14 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-11-22 3:30 UTC (permalink / raw)
To: dev; +Cc: nd, honnappa.nagarahalli, dharmik.thakkar, malvika.gupta, gavin.hu
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases and functional tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
---
test/test/Makefile | 2 +
test/test/autotest_data.py | 6 +
test/test/meson.build | 5 +-
test/test/test_tqs.c | 540 +++++++++++++++++++++++++++++++++++++
4 files changed, 552 insertions(+), 1 deletion(-)
create mode 100644 test/test/test_tqs.c
diff --git a/test/test/Makefile b/test/test/Makefile
index ab4fec34a..7a07039e7 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -207,6 +207,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_TQS) += test_tqs.c
+
CFLAGS += -DALLOW_EXPERIMENTAL_API
CFLAGS += -O3
diff --git a/test/test/autotest_data.py b/test/test/autotest_data.py
index 0fb7866db..e676757cd 100644
--- a/test/test/autotest_data.py
+++ b/test/test/autotest_data.py
@@ -676,6 +676,12 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "Thread Quiescent State autotest",
+ "Command": "tqs_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/test/test/meson.build b/test/test/meson.build
index 554e9945f..b80a449ad 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -100,6 +100,7 @@ test_sources = files('commands.c',
'test_timer.c',
'test_timer_perf.c',
'test_timer_racecond.c',
+ 'test_tqs.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -122,7 +123,8 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
- 'timer'
+ 'timer',
+ 'tqs'
]
test_names = [
@@ -228,6 +230,7 @@ test_names = [
'timer_autotest',
'timer_perf__autotest',
'timer_racecond_autotest',
+ 'tqs_autotest',
'user_delay_us',
'version_autotest',
]
diff --git a/test/test/test_tqs.c b/test/test/test_tqs.c
new file mode 100644
index 000000000..8633a80a6
--- /dev/null
+++ b/test/test/test_tqs.c
@@ -0,0 +1,540 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_tqs.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TQS_RETURN_IF_ERROR(tqs, cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ if (tqs) \
+ rte_tqs_free(tqs); \
+ return -1; \
+ } \
+} while (0)
+
+#define RTE_TQS_MAX_LCORE 64
+uint16_t enabled_core_ids[RTE_TQS_MAX_LCORE];
+uint64_t core_mask;
+uint32_t sw_token;
+uint16_t num_1qs = 1; /* Number of quiescent states = 1 */
+
+struct node {
+ int data;
+ struct node *next;
+};
+
+struct node *head = NULL, *lastNode;
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint32_t i;
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > RTE_TQS_MAX_LCORE) {
+ printf("Number of cores exceed %d\n", RTE_TQS_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ core_mask = 0;
+ i = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[i] = core_id;
+ i++;
+ core_mask |= 1UL << core_id;
+ }
+
+ return 0;
+}
+
+/*
+ * rte_tqs_alloc: Allocate a TQS variable
+ */
+static int
+test_tqs_alloc(void)
+{
+ const char name32[] = "xyzxyzxyzxyzxyzxyzxyzxyzxyzx123";
+ const char name33[] = "xyzxyizxyzxyzxyzxyzxyzxyzxyzxyzxyz";
+ const char name3[] = "xyz";
+ struct rte_tqs *t;
+
+ printf("Test rte_tqs_alloc()\n");
+
+ t = rte_tqs_alloc(NULL, SOCKET_ID_ANY, core_mask);
+ TQS_RETURN_IF_ERROR(t, (t != NULL), "NULL TQS variable");
+
+ t = rte_tqs_alloc(name3, SOCKET_ID_ANY, core_mask);
+ TQS_RETURN_IF_ERROR(t, (t == NULL), "Variable name < %d",
+ RTE_TQS_NAMESIZE);
+ rte_tqs_free(t);
+
+ t = rte_tqs_alloc(name32, SOCKET_ID_ANY, core_mask);
+ TQS_RETURN_IF_ERROR(t, (t == NULL), "Variable name < %d",
+ RTE_TQS_NAMESIZE);
+ rte_tqs_free(t);
+
+ t = rte_tqs_alloc(name33, SOCKET_ID_ANY, core_mask);
+ TQS_RETURN_IF_ERROR(t, (t != NULL), "Variable name > %d",
+ RTE_TQS_NAMESIZE);
+
+ t = rte_tqs_alloc(name3, 0, core_mask);
+ TQS_RETURN_IF_ERROR(t, (t == NULL), "Valid socket ID");
+ rte_tqs_free(t);
+
+ t = rte_tqs_alloc(name3, 10000, core_mask);
+ TQS_RETURN_IF_ERROR(t, (t != NULL), "Invalid socket ID");
+
+ t = rte_tqs_alloc(name3, SOCKET_ID_ANY, 0);
+ TQS_RETURN_IF_ERROR(t, (t == NULL), "0 core mask");
+ rte_tqs_free(t);
+
+ return 0;
+}
+
+/*
+ * rte_tqs_register_lcore: Register threads
+ */
+static int
+test_tqs_register_lcore(void)
+{
+ struct rte_tqs *t;
+ const char *name = "TQS";
+ int ret;
+
+ printf("\nTest rte_tqs_register_lcore()\n");
+
+ t = rte_tqs_alloc(name, SOCKET_ID_ANY, 0);
+ TQS_RETURN_IF_ERROR(t, (t == NULL), "Failed to alloc TQS variable");
+
+ ret = rte_tqs_register_lcore(t, enabled_core_ids[0]);
+ TQS_RETURN_IF_ERROR(t, (ret == -EINVAL),
+ "lcore_id < RTE_TQS_MAX_LCORE");
+ rte_tqs_free(t);
+ return 0;
+}
+
+/*
+ * rte_tqs_unregister_lcore: Unregister threads
+ */
+static int
+test_tqs_unregister_lcore(void)
+{
+ struct rte_tqs *t;
+ const char *name = "TQS";
+ int i, ret;
+
+ printf("\nTest rte_tqs_unregister_lcore()\n");
+
+ t = rte_tqs_alloc(name, SOCKET_ID_ANY, 0);
+ TQS_RETURN_IF_ERROR(t, (t == NULL), "Failed to alloc TQS variable");
+
+ ret = rte_tqs_register_lcore(t, enabled_core_ids[0]);
+ TQS_RETURN_IF_ERROR(t, (ret == -EINVAL), "Register lcore failed");
+
+ /* Find first disabled core */
+ for (i = 0; i < RTE_TQS_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ ret = rte_tqs_unregister_lcore(t, i);
+ TQS_RETURN_IF_ERROR(t, (ret != 0), "lcore disabled");
+
+ ret = rte_tqs_unregister_lcore(t, enabled_core_ids[0]);
+ TQS_RETURN_IF_ERROR(t, (ret == -EINVAL), "Valid lcore_id");
+
+ rte_tqs_free(t);
+ return 0;
+}
+
+/*
+ * rte_tqs_start: Trigger reader threads to count the number of times they enter
+ * the quiescent state
+ */
+static int
+test_tqs_start(void)
+{
+ struct rte_tqs *t;
+ const char *name = "TQS";
+ uint32_t token;
+ int i, ret;
+
+ printf("\nTest rte_tqs_start()\n");
+
+ t = rte_tqs_alloc(name, SOCKET_ID_ANY, 0);
+ TQS_RETURN_IF_ERROR(t, (t == NULL), "Failed to alloc TQS variable");
+
+ for (i = 0; i < 3; i++) {
+ ret = rte_tqs_register_lcore(t, enabled_core_ids[i]);
+ TQS_RETURN_IF_ERROR(t, (ret == -EINVAL),
+ "Register lcore failed");
+ }
+
+ ret = rte_tqs_start(t, 1, &token);
+ TQS_RETURN_IF_ERROR(t, (ret == -EINVAL), "1 quiescent states");
+
+ rte_tqs_free(t);
+ return 0;
+}
+
+/*
+ * rte_tqs_check: Check if all reader threads have completed entering the
+ * quiescent state 'n' times
+ */
+static int
+test_tqs_check(void)
+{
+ struct rte_tqs *t;
+ const char *name = "TQS";
+ int i, ret;
+ uint32_t token;
+
+ printf("\nTest rte_tqs_check()\n");
+
+ t = rte_tqs_alloc(name, SOCKET_ID_ANY, 0);
+ TQS_RETURN_IF_ERROR(t, (t == NULL), "Failed to alloc TQS variable");
+
+ ret = rte_tqs_start(t, 1, &token);
+ TQS_RETURN_IF_ERROR(t, (ret == -EINVAL), "TQS Start failed");
+
+ ret = rte_tqs_check(t, 0, true);
+ TQS_RETURN_IF_ERROR(t, (ret == -EINVAL), "Token = 0");
+
+ ret = rte_tqs_check(t, token, true);
+ TQS_RETURN_IF_ERROR(t, (ret == -EINVAL), "Blocking TQS check");
+
+ for (i = 0; i < 3; i++) {
+ ret = rte_tqs_register_lcore(t, enabled_core_ids[i]);
+ TQS_RETURN_IF_ERROR(t, (ret == -EINVAL),
+ "Register lcore failed");
+ }
+
+ ret = rte_tqs_check(t, token, false);
+ TQS_RETURN_IF_ERROR(t, (ret == -EINVAL), "Non-blocking TQS check");
+
+ rte_tqs_free(t);
+ return 0;
+}
+
+/*
+ * rte_tqs_lookup: Lookup a TQS variable by its name
+ */
+static int
+test_tqs_lookup(void)
+{
+ struct rte_tqs *t, *ret;
+ const char *name1 = "TQS";
+ const char *name2 = "NO_TQS";
+
+ printf("\nTest rte_tqs_lookup()\n");
+
+ t = rte_tqs_alloc(name1, SOCKET_ID_ANY, 0);
+ TQS_RETURN_IF_ERROR(t, (t == NULL), "Failed to alloc TQS variable");
+
+ ret = rte_tqs_lookup(name1);
+ TQS_RETURN_IF_ERROR(t, (ret != t), "Allocated TQS variable name");
+
+ ret = rte_tqs_lookup(name2);
+ TQS_RETURN_IF_ERROR(t, (ret != NULL), "Unallocated TQS variable name");
+
+ rte_tqs_free(t);
+ return 0;
+}
+
+/*
+ * rte_tqs_list_dump: Dump the status of all TQS variables to a file
+ */
+static int
+test_tqs_list_dump(void)
+{
+ struct rte_tqs *t1, *t2;
+ const char name1[] = "TQS_1";
+ const char name2[] = "TQS_2";
+ int i, ret;
+
+ printf("\nTest rte_tqs_list_dump()\n");
+
+ /* File pointer NULL */
+ rte_tqs_list_dump(NULL);
+
+ /* Dump an empty list */
+ rte_tqs_list_dump(stdout);
+
+ t1 = rte_tqs_alloc(name1, SOCKET_ID_ANY, 0);
+ TQS_RETURN_IF_ERROR(t1, (t1 == NULL), "Failed to alloc TQS variable");
+
+ /* Dump a list with TQS variable that has no readers */
+ rte_tqs_list_dump(stdout);
+
+ t2 = rte_tqs_alloc(name2, SOCKET_ID_ANY, 0);
+ TQS_RETURN_IF_ERROR(t2, (t2 == NULL), "Failed to alloc TQS variable");
+
+ ret = rte_tqs_register_lcore(t1, enabled_core_ids[0]);
+ if (ret != 0) {
+ printf("ERROR file %s, line %d: Failed to register lcore\n",
+ __FILE__, __LINE__);
+ return -1;
+ }
+
+ for (i = 1; i < 3; i++) {
+ ret = rte_tqs_register_lcore(t2, enabled_core_ids[i]);
+ if (ret != 0) {
+ printf("ERROR file %s, line %d: Failed to register lcore\n",
+ __FILE__, __LINE__);
+ return -1;
+ }
+ }
+
+ rte_tqs_list_dump(stdout);
+
+ rte_tqs_free(t1);
+ rte_tqs_free(t2);
+
+ return 0;
+}
+
+/*
+ * rte_tqs_dump: Dump status of a single TQS variable to a file
+ */
+static int
+test_tqs_dump(void)
+{
+ struct rte_tqs *t1, *t2;
+ const char name1[] = "TQS_1";
+ const char name2[] = "TQS_2";
+ int i, ret;
+
+ printf("\nTest rte_tqs_dump()\n");
+
+ /* NULL TQS variable */
+ rte_tqs_dump(stdout, NULL);
+
+ t1 = rte_tqs_alloc(name1, SOCKET_ID_ANY, 0);
+ TQS_RETURN_IF_ERROR(t1, (t1 == NULL), "Failed to alloc TQS variable");
+
+ /* NULL file pointer */
+ rte_tqs_dump(NULL, t1);
+
+ /* TQS variable with 0 core mask */
+ rte_tqs_dump(stdout, t1);
+
+ t2 = rte_tqs_alloc(name2, SOCKET_ID_ANY, 0);
+ TQS_RETURN_IF_ERROR(t2, (t2 == NULL), "Failed to alloc TQS variable");
+
+ ret = rte_tqs_register_lcore(t1, enabled_core_ids[0]);
+ if (ret != 0) {
+ printf("ERROR file %s, line %d: Failed to register lcore\n",
+ __FILE__, __LINE__);
+ return -1;
+ }
+
+ for (i = 1; i < 3; i++) {
+ ret = rte_tqs_register_lcore(t2, enabled_core_ids[i]);
+ if (ret != 0) {
+ printf("ERROR file %s, line %d: Failed to register lcore\n",
+ __FILE__, __LINE__);
+ return -1;
+ }
+ }
+
+ rte_tqs_dump(stdout, t1);
+ rte_tqs_dump(stdout, t2);
+
+ return 0;
+}
+
+struct rte_hash *handle;
+static uint32_t *keys;
+#define TOTAL_ENTRY (1 * 8)
+#define COUNTER_VALUE 4096
+uint32_t *hash_data[TOTAL_ENTRY];
+uint8_t writer_done;
+
+static int
+test_tqs_reader(__attribute__((unused)) void *arg)
+{
+ struct rte_tqs *t;
+ int i, ret;
+ uint32_t lcore_id = rte_lcore_id();
+
+ t = rte_tqs_lookup("TQS");
+ TQS_RETURN_IF_ERROR(t, (t == NULL), "TQS variable lookup failed");
+
+ do {
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ uint32_t *pdata;
+ ret = rte_hash_lookup_data(handle, keys+i,
+ (void **)&pdata);
+ if (ret != -ENOENT)
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+
+ /* Update quiescent state counter */
+ rte_tqs_update(t, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+init_hash(void)
+{
+ int i, ret;
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = "tests",
+ };
+
+ handle = rte_hash_create(&hash_params);
+ TQS_RETURN_IF_ERROR(NULL, (handle == NULL), "Hash create Failed");
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ TQS_RETURN_IF_ERROR(NULL, (hash_data[i] == NULL), "No memory");
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ TQS_RETURN_IF_ERROR(NULL, (keys == NULL), "No memory");
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ ret = rte_hash_add_key_data(handle, keys + i,
+ (void *)((uintptr_t)hash_data[i]));
+ TQS_RETURN_IF_ERROR(NULL, (ret < 0),
+ "Hash key add Failed #%d\n", i);
+ }
+ return 0;
+}
+
+/*
+ * Single writer, 1 TQS variable, 1 quiescent state
+ */
+static int
+test_tqs_sw_sv_1qs(void)
+{
+ struct rte_tqs *t;
+ const char *name = "TQS";
+ static uint32_t token;
+ int i, ret;
+ int32_t pos;
+ writer_done = 0;
+
+ printf("\nTest Single writer, 1 TQS variable, pass 1 quiescent state\n");
+
+ /* TQS variable is allocated */
+ t = rte_tqs_alloc(name, SOCKET_ID_ANY, 0);
+ TQS_RETURN_IF_ERROR(t, (t == NULL), "Failed to alloc TQS variable");
+
+ /* Register worker threads on 4 cores */
+ for (i = 0; i < 4; i++) {
+ ret = rte_tqs_register_lcore(t, enabled_core_ids[i]);
+ TQS_RETURN_IF_ERROR(t, (ret == -EINVAL),
+ "Register lcore failed");
+ }
+
+ /* Shared data structure created */
+ ret = init_hash();
+ TQS_RETURN_IF_ERROR(t, (ret != 0), "Hash init failed");
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_tqs_reader, NULL,
+ enabled_core_ids[i]);
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(handle, keys + i);
+ TQS_RETURN_IF_ERROR(t, (pos < 0), "Delete key failed #%d",
+ keys[i]);
+ /* Start the quiescent state query process */
+ ret = rte_tqs_start(t, num_1qs, &token);
+ TQS_RETURN_IF_ERROR(t, (ret != 0), "TQS Start Failed");
+
+ /* Check the quiescent state status */
+ rte_tqs_check(t, token, true);
+ TQS_RETURN_IF_ERROR(t, (*hash_data[i] != COUNTER_VALUE),
+ "Reader did not complete %d",
+ *hash_data[i]);
+
+ rte_hash_free_key_with_position(handle, pos);
+ TQS_RETURN_IF_ERROR(t, (ret < 0),
+ "Failed to free the key #%d", keys[i]);
+ rte_free(hash_data[i]);
+ }
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ /* Free TQS variable */
+ rte_tqs_free(t);
+ rte_hash_free(handle);
+ rte_free(keys);
+
+ return 0;
+}
+
+static int
+test_tqs_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ /* Error-checking test cases */
+ if (test_tqs_alloc() < 0)
+ goto test_fail;
+
+ if (test_tqs_register_lcore() < 0)
+ goto test_fail;
+
+ if (test_tqs_unregister_lcore() < 0)
+ goto test_fail;
+
+ if (test_tqs_start() < 0)
+ goto test_fail;
+
+ if (test_tqs_check() < 0)
+ goto test_fail;
+
+ if (test_tqs_lookup() < 0)
+ goto test_fail;
+
+ if (test_tqs_list_dump() < 0)
+ goto test_fail;
+
+ if (test_tqs_dump() < 0)
+ goto test_fail;
+
+
+ /* Functional test cases */
+ if (test_tqs_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ printf("\n");
+ return 0;
+
+ test_fail:
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(tqs_autotest, test_tqs_main);
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library
[not found] ` <CGME20181122073110eucas1p17592400af6c0b807dc87e90d136575af@eucas1p1.samsung.com>
@ 2018-11-22 7:31 ` Ilya Maximets
0 siblings, 0 replies; 260+ messages in thread
From: Ilya Maximets @ 2018-11-22 7:31 UTC (permalink / raw)
To: dev; +Cc: Thomas Monjalon, Ferruh Yigit
Hi.
Are there any differentiation points from liburcu [1]?
Is there any benefit to having our own implementation inside DPDK?
[1] http://liburcu.org/
https://lwn.net/Articles/573424/
Best regards, Ilya Maximets.
> Lock-less data structures provide scalability and determinism.
> They enable use cases where locking may not be allowed
> (for ex: real-time applications).
>
> In the following paras, the term 'memory' refers to memory allocated
> by typical APIs like malloc or anything that is representative of
> memory, for ex: an index of a free element array.
>
> Since these data structures are lock less, the writers and readers
> are accessing the data structures simultaneously. Hence, while removing
> an element from a data structure, the writers cannot return the memory
> to the allocator, without knowing that the readers are not
> referencing that element/memory anymore. Hence, it is required to
> separate the operation of removing an element into 2 steps:
>
> Delete: in this step, the writer removes the element from the
> data structure but does not return the associated memory to the allocator.
> This will ensure that new readers will not get a reference to the removed
> element. Removing the reference is an atomic operation.
>
> Free: in this step, the writer returns the memory to the
> memory allocator, only after knowing that all the readers have stopped
> referencing the removed element.
>
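The two-step removal described above can be sketched with a toy fragment (illustrative names and list layout are assumptions here, not the library's code):

```c
#include <stddef.h>

/* Toy illustration of delete vs. free: "delete" atomically unlinks the
 * head node so that new readers cannot obtain a reference to it; the
 * actual free must wait until existing readers are quiescent.
 * (Illustrative names only -- this is not the rte_tqs code.)
 */
struct node {
	int data;
	struct node *next;
};

/* Step 1, "delete": atomically publish the new head.  The removed
 * node is returned to the caller, who must NOT free it yet. */
static struct node *
toy_delete_head(struct node **head)
{
	struct node *old = *head;

	__atomic_store_n(head, old->next, __ATOMIC_RELEASE);
	return old;
}
```

Step 2, the free, runs only after the library reports that all registered readers have passed through a quiescent state.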
> This library helps the writer determine when it is safe to free the
> memory.
>
> This library makes use of Thread Quiescent State (TQS). TQS can be
> defined as 'any point in the thread execution where the thread does
> not hold a reference to shared memory'. It is up to the application to
> determine its quiescent state. Let us consider the following diagram:
>
> Time -------------------------------------------------->
>
> | |
> RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
> | |
> RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
> | |
> RT3 $++++****D1****+++***|D2***|++++++**D2*****++++$
> | |
> |<--->|
> Del | Free
> |
> Cannot free memory
> during this period
>
> RTx - Reader thread
> < and > - Start and end of while(1) loop
> ***Dx*** - Reader thread is accessing the shared data structure Dx.
> i.e. critical section.
> +++ - Reader thread is not accessing any shared data structure.
> i.e. non critical section or quiescent state.
> Del - Point in time when the reference to the entry is removed using
> atomic operation.
> Free - Point in time when the writer can free the entry.
>
> As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
> accessing D2, if the writer has to remove an element from D2, the
> writer cannot return the memory associated with that element to the
> allocator. The writer can return the memory to the allocator only after
> the reader stops referencing D2. In other words, reader thread RT1
> has to enter a quiescent state.
>
> Similarly, since thread RT3 is also accessing D2, the writer has to wait till
> RT3 enters quiescent state as well.
>
> However, the writer does not need to wait for RT2 to enter quiescent state.
> Thread RT2 was not accessing D2 when the delete operation happened.
> So, RT2 will not get a reference to the deleted entry.
>
> It can be noted that the critical sections for D2 and D3 are quiescent states
> for D1. i.e. for a given data structure Dx, any point in the thread execution
> that does not reference Dx is a quiescent state.
>
> For DPDK applications, the start and end of while(1) loop (where no shared
> data structures are getting accessed) act as perfect quiescent states. This
> will combine all the shared data structure accesses into a single critical
> section and keeps the overhead introduced by this library to a minimum.
>
> However, the time taken to identify the end of the critical section is
> proportional to its length and the number of reader threads.
> So, if the application desires, it should be possible to identify the end
> of the critical section for each data structure.
>
> To provide the required flexibility, this library has a concept of TQS
> variable. The application can create one or more TQS variables to help it
> track the end of one or more critical sections.
>
> The application can create a TQS variable using the API rte_tqs_alloc.
> It takes a mask of lcore IDs that will report their quiescent states
> using this variable. This mask can be empty to start with.
>
> rte_tqs_register_lcore API will register a reader thread to report its
> quiescent state. This can be called from any control plane thread or from
> the reader thread. The application can create a TQS variable with no reader
> threads and add the threads dynamically using this API.
>
> The application can trigger the reader threads to report their quiescent
> state status by calling the API rte_tqs_start. It is possible for multiple
> writer threads to query the quiescent state status simultaneously. Hence,
> rte_tqs_start returns a token to each caller.
>
> The application has to call rte_tqs_check API with the token to get the
> current status. Option to block till all the threads enter the quiescent
> state is provided. If this API indicates that all the threads have entered
> the quiescent state, the application can free the deleted entry.
>
> The separation of triggering the reporting from querying the status provides
> the writer threads flexibility to do useful work instead of waiting for the
> reader threads to enter the quiescent state.
>
> rte_tqs_unregister_lcore API will remove a reader thread from reporting its
> quiescent state using a TQS variable. The rte_tqs_check API will not wait
> for this reader thread to report the quiescent state status anymore.
>
> Finally, a TQS variable can be deleted by calling rte_tqs_free API.
> Application must make sure that the reader threads are not referencing the
> TQS variable anymore before deleting it.
>
> The reader threads should call rte_tqs_update API to indicate that they
> entered a quiescent state. This API checks if a writer has triggered a
> quiescent state query and updates the state accordingly.
>
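Putting the APIs above together, the writer/reader interaction can be modeled single-threaded as below (a hedged sketch of the token/counter protocol only; the toy_* names are assumptions, deliberately not the rte_tqs API, and a real deployment uses atomics across lcores):

```c
#include <stdint.h>

/* Single-threaded model of the token/counter protocol described above. */
#define TOY_READERS 2

struct toy_tqs {
	uint32_t token;              /* bumped by the writer's start() */
	uint32_t cnt[TOY_READERS];   /* per-reader quiescent-state count */
};

/* Writer: trigger reporting of 'n' quiescent states; returns the
 * token to pass later to toy_check(). */
static uint32_t
toy_start(struct toy_tqs *v, uint32_t n)
{
	v->token += n;
	return v->token;
}

/* Reader: report passing through one quiescent state. */
static void
toy_update(struct toy_tqs *v, int reader)
{
	if (v->cnt[reader] != v->token)
		v->cnt[reader]++;
}

/* Writer: non-blocking check -- 1 once every reader caught up to 't'. */
static int
toy_check(const struct toy_tqs *v, uint32_t t)
{
	for (int i = 0; i < TOY_READERS; i++)
		if (v->cnt[i] < t)
			return 0;
	return 1;
}
```

The writer deletes an entry, calls start(), does other useful work, and frees the entry only once check() returns 1.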
> Next Steps:
> 1) Add more test cases
> 2) Convert to patch
> 3) Incorporate feedback from community
> 4) Add documentation
>
> Dharmik Thakkar (1):
> test/tqs: Add API and functional tests
>
> Honnappa Nagarahalli (2):
> log: add TQS log type
> tqs: add thread quiescent state library
>
> config/common_base | 6 +
> lib/Makefile | 2 +
> lib/librte_eal/common/include/rte_log.h | 1 +
> lib/librte_tqs/Makefile | 23 +
> lib/librte_tqs/meson.build | 5 +
> lib/librte_tqs/rte_tqs.c | 249 +++++++++++
> lib/librte_tqs/rte_tqs.h | 352 +++++++++++++++
> lib/librte_tqs/rte_tqs_version.map | 16 +
> lib/meson.build | 2 +-
> mk/rte.app.mk | 1 +
> test/test/Makefile | 2 +
> test/test/autotest_data.py | 6 +
> test/test/meson.build | 5 +-
> test/test/test_tqs.c | 540 ++++++++++++++++++++++++
> 14 files changed, 1208 insertions(+), 2 deletions(-)
> create mode 100644 lib/librte_tqs/Makefile
> create mode 100644 lib/librte_tqs/meson.build
> create mode 100644 lib/librte_tqs/rte_tqs.c
> create mode 100644 lib/librte_tqs/rte_tqs.h
> create mode 100644 lib/librte_tqs/rte_tqs_version.map
> create mode 100644 test/test/test_tqs.c
>
> --
> 2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC 2/3] tqs: add thread quiescent state library
2018-11-22 3:30 ` [dpdk-dev] [RFC 2/3] tqs: add thread quiescent state library Honnappa Nagarahalli
@ 2018-11-24 12:18 ` Ananyev, Konstantin
2018-11-27 21:32 ` Honnappa Nagarahalli
0 siblings, 1 reply; 260+ messages in thread
From: Ananyev, Konstantin @ 2018-11-24 12:18 UTC (permalink / raw)
To: 'Honnappa Nagarahalli', dev
Cc: nd, dharmik.thakkar, malvika.gupta, gavin.hu
Hi Honnappa,
> +
> +/* Allocate a new TQS variable with the name *name* in memory. */
> +struct rte_tqs * __rte_experimental
> +rte_tqs_alloc(const char *name, int socket_id, uint64_t lcore_mask)
> +{
> + char tqs_name[RTE_TQS_NAMESIZE];
> + struct rte_tailq_entry *te, *tmp_te;
> + struct rte_tqs_list *tqs_list;
> + struct rte_tqs *v, *tmp_v;
> + int ret;
> +
> + if (name == NULL) {
> + RTE_LOG(ERR, TQS, "Invalid input parameters\n");
> + rte_errno = -EINVAL;
> + return NULL;
> + }
> +
> + te = rte_zmalloc("TQS_TAILQ_ENTRY", sizeof(*te), 0);
> + if (te == NULL) {
> + RTE_LOG(ERR, TQS, "Cannot reserve memory for tailq\n");
> + rte_errno = -ENOMEM;
> + return NULL;
> + }
> +
> + snprintf(tqs_name, sizeof(tqs_name), "%s", name);
> + v = rte_zmalloc_socket(tqs_name, sizeof(struct rte_tqs),
> + RTE_CACHE_LINE_SIZE, socket_id);
> + if (v == NULL) {
> + RTE_LOG(ERR, TQS, "Cannot reserve memory for TQS variable\n");
> + rte_errno = -ENOMEM;
> + goto alloc_error;
> + }
> +
> + ret = snprintf(v->name, sizeof(v->name), "%s", name);
> + if (ret < 0 || ret >= (int)sizeof(v->name)) {
> + rte_errno = -ENAMETOOLONG;
> + goto alloc_error;
> + }
> +
> + te->data = (void *) v;
> + v->lcore_mask = lcore_mask;
> +
> + rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
> +
> + tqs_list = RTE_TAILQ_CAST(rte_tqs_tailq.head, rte_tqs_list);
> +
> + /* Search if a TQS variable with the same name exists already */
> + TAILQ_FOREACH(tmp_te, tqs_list, next) {
> + tmp_v = (struct rte_tqs *) tmp_te->data;
> + if (strncmp(name, tmp_v->name, RTE_TQS_NAMESIZE) == 0)
> + break;
> + }
> +
> + if (tmp_te != NULL) {
> + rte_errno = -EEXIST;
> + goto tqs_exist;
> + }
> +
> + TAILQ_INSERT_TAIL(tqs_list, te, next);
> +
> + rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
> +
> + return v;
> +
> +tqs_exist:
> + rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
> +
> +alloc_error:
> + rte_free(te);
> + rte_free(v);
> + return NULL;
> +}
That seems like quite a heavy-weight function just to allocate a sync variable.
As the size of struct rte_tqs is constant and known to the user, it might be better
to just provide rte_tqs_init(struct rte_tqs *tqs, ...) and let the user allocate/free
the memory for it themselves.
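A minimal sketch of that suggestion, against a simplified stand-in struct (the struct layout and names below are illustrative assumptions, not DPDK's definitions):

```c
#include <stdint.h>
#include <string.h>

/* Sketch: the sync variable has a fixed, known size, so the caller
 * owns the storage (static, stack, or embedded in another object) and
 * the library only initializes it -- no allocation, no tailq. */
#define TQS_SKETCH_NAMESIZE 32

struct tqs_var {
	char name[TQS_SKETCH_NAMESIZE];
	uint64_t lcore_mask;
	uint32_t token;
};

static int
tqs_sketch_init(struct tqs_var *v, const char *name, uint64_t lcore_mask)
{
	if (v == NULL || name == NULL)
		return -1;                   /* -EINVAL in the real API */
	if (strlen(name) >= sizeof(v->name))
		return -1;                   /* -ENAMETOOLONG */
	memset(v, 0, sizeof(*v));
	strcpy(v->name, name);
	v->lcore_mask = lcore_mask;
	return 0;
}
```

The caller can then place the variable in whatever memory (and on whatever socket) it likes, which also removes the need for the alloc/free/lookup trio.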
> +
> +/* Add a reader thread, running on an lcore, to the list of threads
> + * reporting their quiescent state on a TQS variable.
> + */
> +int __rte_experimental
> +rte_tqs_register_lcore(struct rte_tqs *v, unsigned int lcore_id)
> +{
> + TQS_RETURN_IF_TRUE((v == NULL || lcore_id >= RTE_TQS_MAX_LCORE),
> + -EINVAL);
It is not very good practice to make a function return different values and behave
differently in debug and non-debug modes.
I'd say that for slow-path (functions in .c) it is always good to check input parameters.
For fast-path (functions in .h) we sometimes skip such checking,
but debug mode can probably use RTE_ASSERT() or so.
lcore_id >= RTE_TQS_MAX_LCORE
Is this limitation really necessary?
First, it means that only lcores can use that API (at least the data-path part); second,
even today many machines have more than 64 cores.
I think you can easily avoid such limitation, if instead of requiring lcore_id as
input parameter, you'll just make it return index of next available entry in w[].
Then tqs_update() can take that index as input parameter.
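One possible shape of that index-based registration (a hedged sketch; the names and sizes are assumptions, and a production version would also need to handle unregistration and slot reuse):

```c
#include <stdint.h>

/* Sketch: register() returns the next free slot index instead of
 * taking an lcore_id, so any thread can participate and the API is
 * not tied to a 64-lcore bitmask.  Illustrative names only. */
#define TQS_SKETCH_MAX_THREADS 128

struct tqs_slots {
	uint32_t token;
	uint32_t nslots;                      /* next free index in cnt[] */
	uint32_t cnt[TQS_SKETCH_MAX_THREADS];
};

/* Returns the slot index the caller later passes to update(),
 * or -1 when all slots are taken. */
static int
tqs_sketch_register(struct tqs_slots *v)
{
	uint32_t i = __atomic_fetch_add(&v->nslots, 1, __ATOMIC_RELAXED);

	if (i >= TQS_SKETCH_MAX_THREADS)
		return -1;
	v->cnt[i] = v->token;   /* count from the current token onwards */
	return (int)i;
}

/* Reader side: report a quiescent state for its own slot. */
static void
tqs_sketch_update(struct tqs_slots *v, int slot)
{
	uint32_t t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);

	if (v->cnt[slot] != t)
		v->cnt[slot]++;
}
```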
> +
> + /* Worker thread has to count the quiescent states
> + * only from the current value of token.
> + */
> + v->w[lcore_id].cnt = v->token;
Wonder what would happen, if new reader will
call register(), after writer calls start()?
Looks like a race-condition.
Or such pattern is not supported?
> +
> + /* Release the store to initial TQS count so that workers
> + * can use it immediately after this function returns.
> + */
> + __atomic_fetch_or(&v->lcore_mask, (1UL << lcore_id), __ATOMIC_RELEASE);
> +
> + return 0;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Trigger the worker threads to report the quiescent state
> + * status.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe and can be called from the worker threads as well.
> + *
> + * @param v
> + * TQS variable
> + * @param n
> + * Expected number of times the quiescent state is entered
> + * @param t
> + * - If successful, this is the token for this call of the API.
> + * This should be passed to rte_tqs_check API.
> + * @return
> + * - -EINVAL if the parameters are invalid (debug mode compilation only).
> + * - 0 Otherwise and always (non-debug mode compilation).
> + */
> +static __rte_always_inline int __rte_experimental
> +rte_tqs_start(struct rte_tqs *v, unsigned int n, uint32_t *t)
> +{
> + TQS_RETURN_IF_TRUE((v == NULL || t == NULL), -EINVAL);
> +
> + /* This store release will ensure that changes to any data
> + * structure are visible to the workers before the token
> + * update is visible.
> + */
> + *t = __atomic_add_fetch(&v->token, n, __ATOMIC_RELEASE);
> +
> + return 0;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Update quiescent state for the worker thread on a lcore.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * All the worker threads registered to report their quiescent state
> + * on the TQS variable must call this API.
> + *
> + * @param v
> + * TQS variable
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_tqs_update(struct rte_tqs *v, unsigned int lcore_id)
> +{
> + uint32_t t;
> +
> + TQS_ERR_LOG_IF_TRUE(v == NULL || lcore_id >= RTE_TQS_MAX_LCORE);
> +
> + /* Load the token before the worker thread loads any other
> + * (lock-free) data structure. This ensures that updates
> + * to the data structures are visible if the update
> + * to token is visible.
> + */
> + t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
Hmm, I am not very familiar with the C11 model, but it looks like a race
condition to me:
as I understand, update() is supposed to be called at the end of the reader's
critical section, correct?
But ACQUIRE is only a hoist barrier, which means the compiler and CPU
are free to move earlier reads (and writes) after it.
It probably needs to be a full ACQ_REL here.
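The reordering being described can be made concrete with a small fragment (illustrative names: 'shared_data' and 'token' stand in for a lock-free entry and v->token, and the commented fence is an assumption about one possible fix, not the patch's code):

```c
#include <stdint.h>

static uint32_t token;
static uint32_t shared_data;

/* End of a reader's critical section followed by the token load.
 * An ACQUIRE load only keeps *later* accesses from moving up; it does
 * not stop the critical-section read below from sinking past it. */
static uint32_t
end_of_critical_section(void)
{
	uint32_t d = shared_data;   /* final read of the critical section */

	/* Uncommenting a full fence here would pin the read of
	 * 'shared_data' before the token load:
	 * __atomic_thread_fence(__ATOMIC_SEQ_CST);
	 */
	uint32_t t = __atomic_load_n(&token, __ATOMIC_ACQUIRE);

	return d + t;
}
```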
> + if (v->w[lcore_id].cnt != t)
> + v->w[lcore_id].cnt++;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Checks if all the worker threads have entered the quiescent state
> + * 'n' number of times. 'n' is provided in rte_tqs_start API.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe and can be called from the worker threads as well.
> + *
> + * @param v
> + * TQS variable
> + * @param t
> + * Token returned by rte_tqs_start API
> + * @param wait
> + * If true, block till all the worker threads have completed entering
> + * the quiescent state 'n' number of times
> + * @return
> + * - 0 if all worker threads have NOT passed through specified number
> + * of quiescent states.
> + * - 1 if all worker threads have passed through specified number
> + * of quiescent states.
> + * - -EINVAL if the parameters are invalid (debug mode compilation only).
> + */
> +static __rte_always_inline int __rte_experimental
> +rte_tqs_check(struct rte_tqs *v, uint32_t t, bool wait)
> +{
> + uint64_t l;
> + uint64_t lcore_mask;
> +
> + TQS_RETURN_IF_TRUE((v == NULL), -EINVAL);
> +
> + do {
> + /* Load the current lcore_mask before loading the
> + * worker thread quiescent state counters.
> + */
> + lcore_mask = __atomic_load_n(&v->lcore_mask, __ATOMIC_ACQUIRE);
What would happen if a reader calls unregister() simultaneously
with check() and updates lcore_mask straight after that load?
As I understand, check() might hang in such a case.
> +
> + while (lcore_mask) {
> + l = __builtin_ctz(lcore_mask);
> + if (v->w[l].cnt != t)
> + break;
As I understand, that makes the control-path function's progress dependent
on simultaneous invocation of the data-path functions.
In some cases that might cause the control-path to hang.
Let's say the data-path function is never called, or the user invokes
the control-path and data-path functions from the same thread.
> +
> + lcore_mask &= ~(1UL << l);
> + }
> +
> + if (lcore_mask == 0)
> + return 1;
> +
> + rte_pause();
> + } while (wait);
> +
> + return 0;
> +}
> +
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC 2/3] tqs: add thread quiescent state library
2018-11-24 12:18 ` Ananyev, Konstantin
@ 2018-11-27 21:32 ` Honnappa Nagarahalli
2018-11-28 15:25 ` Ananyev, Konstantin
0 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-11-27 21:32 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: nd, Dharmik Thakkar, Malvika Gupta,
Gavin Hu (Arm Technology China),
Honnappa Nagarahalli, nd
>
> Hi Honnappa,
Thank you for reviewing the patch, appreciate your comments.
>
> > +
> > +/* Allocate a new TQS variable with the name *name* in memory. */
> > +struct rte_tqs * __rte_experimental rte_tqs_alloc(const char *name,
> > +int socket_id, uint64_t lcore_mask) {
> > + char tqs_name[RTE_TQS_NAMESIZE];
> > + struct rte_tailq_entry *te, *tmp_te;
> > + struct rte_tqs_list *tqs_list;
> > + struct rte_tqs *v, *tmp_v;
> > + int ret;
> > +
> > + if (name == NULL) {
> > + RTE_LOG(ERR, TQS, "Invalid input parameters\n");
> > + rte_errno = -EINVAL;
> > + return NULL;
> > + }
> > +
> > + te = rte_zmalloc("TQS_TAILQ_ENTRY", sizeof(*te), 0);
> > + if (te == NULL) {
> > + RTE_LOG(ERR, TQS, "Cannot reserve memory for tailq\n");
> > + rte_errno = -ENOMEM;
> > + return NULL;
> > + }
> > +
> > + snprintf(tqs_name, sizeof(tqs_name), "%s", name);
> > + v = rte_zmalloc_socket(tqs_name, sizeof(struct rte_tqs),
> > + RTE_CACHE_LINE_SIZE, socket_id);
> > + if (v == NULL) {
> > + RTE_LOG(ERR, TQS, "Cannot reserve memory for TQS
> variable\n");
> > + rte_errno = -ENOMEM;
> > + goto alloc_error;
> > + }
> > +
> > + ret = snprintf(v->name, sizeof(v->name), "%s", name);
> > + if (ret < 0 || ret >= (int)sizeof(v->name)) {
> > + rte_errno = -ENAMETOOLONG;
> > + goto alloc_error;
> > + }
> > +
> > + te->data = (void *) v;
> > + v->lcore_mask = lcore_mask;
> > +
> > + rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
> > +
> > + tqs_list = RTE_TAILQ_CAST(rte_tqs_tailq.head, rte_tqs_list);
> > +
> > + /* Search if a TQS variable with the same name exists already */
> > + TAILQ_FOREACH(tmp_te, tqs_list, next) {
> > + tmp_v = (struct rte_tqs *) tmp_te->data;
> > + if (strncmp(name, tmp_v->name, RTE_TQS_NAMESIZE) == 0)
> > + break;
> > + }
> > +
> > + if (tmp_te != NULL) {
> > + rte_errno = -EEXIST;
> > + goto tqs_exist;
> > + }
> > +
> > + TAILQ_INSERT_TAIL(tqs_list, te, next);
> > +
> > + rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
> > +
> > + return v;
> > +
> > +tqs_exist:
> > + rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
> > +
> > +alloc_error:
> > + rte_free(te);
> > + rte_free(v);
> > + return NULL;
> > +}
>
> That seems quite heavy-weight function just to allocate sync variable.
> As size of struct rte_tqs is constant and known to the user, might be better just
> provide rte_tqs_init(struct rte_tqs *tqs, ...) and let user allocate/free memory
> for it by himself.
>
I believe, when you say heavy-weight, you are referring to adding the tqs variable to the TAILQ and allocating the memory for it. Agreed. I also do not expect a whole lot of tqs variables to be used in an application. Even in rte_tqs_free, there is similar overhead.
The extra part is due to the way the TQS variable will get identified by data plane threads. I am thinking that a data plane thread will use the rte_tqs_lookup API to identify a TQS variable. However, it is possible to share this with data plane threads via a simple shared structure as well.
Along with not allocating the memory, are you suggesting that we could skip maintaining a list of TQS variables in the TAILQ? That would remove the rte_tqs_lookup, rte_tqs_free and rte_tqs_list_dump APIs. I am fine with this approach.
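For what it's worth, the init-style alternative suggested above could look roughly like this. This is a hypothetical sketch using a simplified stand-in struct (field names are assumptions, not the actual patch):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define TQS_MAX_LCORE 64

/* Simplified stand-in for struct rte_tqs; field names are assumptions. */
struct tqs {
	uint64_t lcore_mask;         /* registered reader lcores */
	uint32_t token;              /* writer-side generation counter */
	uint32_t cnt[TQS_MAX_LCORE]; /* last token seen per reader */
};

/* Init-style API: the caller owns the memory (static, stack or heap),
 * so no tailq entry, no rte_zmalloc and no name lookup are needed. */
static int
tqs_init(struct tqs *v, uint64_t lcore_mask)
{
	if (v == NULL)
		return -1;
	memset(v, 0, sizeof(*v));
	v->lcore_mask = lcore_mask;
	return 0;
}
```

The caller can then embed the variable in its own per-data-structure state and share the pointer directly, which removes the allocation and lookup overhead discussed above.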
> > +
> > +/* Add a reader thread, running on an lcore, to the list of threads
> > + * reporting their quiescent state on a TQS variable.
> > + */
> > +int __rte_experimental
> > +rte_tqs_register_lcore(struct rte_tqs *v, unsigned int lcore_id) {
> > + TQS_RETURN_IF_TRUE((v == NULL || lcore_id >=
> RTE_TQS_MAX_LCORE),
> > + -EINVAL);
>
> It is not very good practice to make function return different values and behave
> in a different way in debug/non-debug mode.
> I'd say that for slow-path (functions in .c) it is always good to check input
> parameters.
> For fast-path (functions in .h) we sometimes skip such checking, but debug
> mode can probably use RTE_ASSERT() or so.
Makes sense, I will change this in the next version.
>
>
> lcore_id >= RTE_TQS_MAX_LCORE
>
> Is this limitation really necessary?
I added this limitation because currently a DPDK application cannot take a mask more than 64 bits wide. Otherwise, this should be as big as RTE_MAX_LCORE.
I see that with the '-lcores' option, the number of lcores can be more than the number of PEs. In that case, we still need a MAX limit (but it can be bigger than 64).
> First it means that only lcores can use that API (at least data-path part), second
> even today many machines have more than 64 cores.
> I think you can easily avoid such limitation, if instead of requiring lcore_id as
> input parameter, you'll just make it return index of next available entry in w[].
> Then tqs_update() can take that index as input parameter.
I had thought about a similar approach based on IDs. I was concerned that an ID would be one more thing for the application to manage, but I see the limitations of the current approach now. I will change it to allocation based; this will support non-EAL pthreads as well.
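The allocation-based registration could be sketched roughly as below. This is an assumption about the next version, not existing code: register() hands back the next free index into w[] instead of taking an lcore_id bounded by a 64-bit mask, so any pthread (EAL or not) can use it:

```c
#include <assert.h>
#include <stdatomic.h>

#define TQS_MAX_THREADS 128 /* assumption: a count, no longer a 64-bit mask */

/* Next free slot in the per-thread counter array w[]. */
static atomic_uint nb_threads;

/* Returns an index the caller later passes to update(), or -1 if full. */
static int
tqs_register_thread(void)
{
	unsigned int id =
	    atomic_fetch_add_explicit(&nb_threads, 1, memory_order_relaxed);
	if (id >= TQS_MAX_THREADS)
		return -1; /* table full */
	return (int)id;
}
```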
>
> > +
>
> > + /* Worker thread has to count the quiescent states
> > + * only from the current value of token.
> > + */
> > + v->w[lcore_id].cnt = v->token;
>
> Wonder what would happen, if new reader will call register(), after writer calls
> start()?
> Looks like a race-condition.
> Or such pattern is not supported?
The start should be called only after the reference to the entry in the data structure has been 'deleted'. Hence the new reader will not get a reference to the deleted entry and does not have to increment its counter. When rte_tqs_check is called, it will see that the counter is already up to date. (I am missing a load-acquire on the token; I will correct that in the next version.)
>
> > +
> > + /* Release the store to initial TQS count so that workers
> > + * can use it immediately after this function returns.
> > + */
> > + __atomic_fetch_or(&v->lcore_mask, (1UL << lcore_id),
> > +__ATOMIC_RELEASE);
> > +
> > + return 0;
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Trigger the worker threads to report the quiescent state
> > + * status.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe and can be called from the worker threads as well.
> > + *
> > + * @param v
> > + * TQS variable
> > + * @param n
> > + * Expected number of times the quiescent state is entered
> > + * @param t
> > + * - If successful, this is the token for this call of the API.
> > + * This should be passed to rte_tqs_check API.
> > + * @return
> > + * - -EINVAL if the parameters are invalid (debug mode compilation only).
> > + * - 0 Otherwise and always (non-debug mode compilation).
> > + */
> > +static __rte_always_inline int __rte_experimental
> > +rte_tqs_start(struct rte_tqs *v, unsigned int n, uint32_t *t) {
> > + TQS_RETURN_IF_TRUE((v == NULL || t == NULL), -EINVAL);
> > +
> > + /* This store release will ensure that changes to any data
> > + * structure are visible to the workers before the token
> > + * update is visible.
> > + */
> > + *t = __atomic_add_fetch(&v->token, n, __ATOMIC_RELEASE);
> > +
> > + return 0;
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Update quiescent state for the worker thread on a lcore.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread safe.
> > + * All the worker threads registered to report their quiescent state
> > + * on the TQS variable must call this API.
> > + *
> > + * @param v
> > + * TQS variable
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_tqs_update(struct rte_tqs *v, unsigned int lcore_id) {
> > + uint32_t t;
> > +
> > + TQS_ERR_LOG_IF_TRUE(v == NULL || lcore_id >=
> RTE_TQS_MAX_LCORE);
> > +
> > + /* Load the token before the worker thread loads any other
> > + * (lock-free) data structure. This ensures that updates
> > + * to the data structures are visible if the update
> > + * to token is visible.
> > + */
> > + t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
>
> Hmm, I am not very familiar with C11 model, but it looks like a race condition
> to me:
> as I understand, update() supposed be called at the end of reader's critical
> section, correct?
Yes, the understanding is correct.
> But ACQUIRE is only a hoist barrier, which means compiler and cpu are free to
> move earlier reads (and writes) after it.
Yes, your understanding is correct.
> It probably needs to be a full ACQ_REL here.
>
The sequence of operations is as follows:
1) Writer 'deletes' an entry from a lock-free data structure
2) Writer calls rte_tqs_start - this API increments the 'token' with a store-release. So, any earlier stores will be visible if the store to 'token' is visible (to the data plane threads).
3) Reader calls rte_tqs_update - this API load-acquires the 'token'.
a) If this 'token' is the updated value from 2), then the entry deleted in 1) will not be available for the reader to reference (even if earlier reads were moved after the load-acquire of 'token').
b) If this 'token' is not the updated value from 2), then the entry deleted in 1) may or may not be available for the reader to reference. In this case w[lcore_id].cnt is not updated, hence the writer will wait to 'free' the entry deleted in 1).
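The ordering argument above can be condensed into a single-variable toy model. This is an illustration only, not the patch code, and it runs single-threaded so the outcome is deterministic:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

static _Atomic uint32_t token; /* stands in for v->token */
static uint32_t reader_cnt;    /* stands in for v->w[lcore_id].cnt */

/* 2) writer: the store-release on token publishes the earlier 'delete' */
static uint32_t
writer_start(void)
{
	return atomic_fetch_add_explicit(&token, 1, memory_order_release) + 1;
}

/* 3) reader: load-acquire on token; if the new token is seen, the
 * deleted entry was already unreachable when this read began */
static void
reader_update(void)
{
	uint32_t t = atomic_load_explicit(&token, memory_order_acquire);
	if (reader_cnt != t)
		reader_cnt = t; /* report quiescent state */
}

/* writer: 'free' is safe once the reader's cnt has caught up to t */
static bool
writer_check(uint32_t t)
{
	return reader_cnt == t;
}
```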
> > + if (v->w[lcore_id].cnt != t)
> > + v->w[lcore_id].cnt++;
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Checks if all the worker threads have entered the quiescent state
> > + * 'n' number of times. 'n' is provided in rte_tqs_start API.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe and can be called from the worker threads as well.
> > + *
> > + * @param v
> > + * TQS variable
> > + * @param t
> > + * Token returned by rte_tqs_start API
> > + * @param wait
> > + * If true, block till all the worker threads have completed entering
> > + * the quiescent state 'n' number of times
> > + * @return
> > + * - 0 if all worker threads have NOT passed through specified number
> > + * of quiescent states.
> > + * - 1 if all worker threads have passed through specified number
> > + * of quiescent states.
> > + * - -EINVAL if the parameters are invalid (debug mode compilation only).
> > + */
> > +static __rte_always_inline int __rte_experimental
> > +rte_tqs_check(struct rte_tqs *v, uint32_t t, bool wait) {
> > + uint64_t l;
> > + uint64_t lcore_mask;
> > +
> > + TQS_RETURN_IF_TRUE((v == NULL), -EINVAL);
> > +
> > + do {
> > + /* Load the current lcore_mask before loading the
> > + * worker thread quiescent state counters.
> > + */
> > + lcore_mask = __atomic_load_n(&v->lcore_mask,
> __ATOMIC_ACQUIRE);
>
> What would happen if reader will call unregister() simultaneously with check()
> and will update lcore_mask straight after that load?
> As I understand check() might hang in such case.
If the 'lcore_mask' is updated after this load, it will affect only the current iteration of the while loop below. In the next iteration the 'lcore_mask' is loaded again.
>
> > +
> > + while (lcore_mask) {
> > + l = __builtin_ctz(lcore_mask);
> > + if (v->w[l].cnt != t)
> > + break;
>
> As I understand, that makes control-path function progress dependent on
> simultaneous invocation of data-path functions.
I agree that the control-path function's progress (for ex: how long it waits before freeing the memory) depends on invocation of the data-path functions. The separation of 'start' and 'check', and the option not to block in 'check', give the control path the flexibility to do other work if it chooses.
> In some cases that might cause control-path to hang.
> Let say if data-path function wouldn't be called, or user invokes control-path
> and data-path functions from the same thread.
I agree with the case of the data-path function not getting called; I would consider that a programming error. I can document that warning in the rte_tqs_check API.
In the case of the same thread calling both control-path and data-path functions, it depends on the sequence of the calls. The following sequence should not cause any hangs:
Worker thread
1) 'deletes' an entry from a lock-free data structure
2) rte_tqs_start
3) rte_tqs_update
4) rte_tqs_check (wait == 1 or wait == 0)
5) 'free' the entry deleted in 1)
If 3) and 4) are interchanged, then there will be a hang if wait is set to 1. If wait is set to 0, there should not be a hang.
I can document this as part of the documentation (I do not think API documentation is required for this).
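The safe and hanging orderings above can be shown with a non-blocking toy model (a sketch, not the RFC code; check() here has wait == 0 semantics so nothing actually blocks):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy single-reader model: one token, one per-thread counter. */
static uint32_t token, cnt;

static uint32_t tqs_start(void)       { return ++token; }
static void     tqs_update(void)      { cnt = token; }
static bool     tqs_check(uint32_t t) { return cnt == t; } /* wait == 0 */

/* Steps 1)-5): update before check, so a wait == 1 check could not hang. */
static bool
safe_order(void)
{
	uint32_t t = tqs_start(); /* 2), after the 'delete' in 1) */
	tqs_update();             /* 3) report own quiescent state */
	return tqs_check(t);      /* 4) already true, so 5) free is safe */
}

/* 3) and 4) interchanged: check sees a stale cnt; with wait == 1 the
 * same thread would spin here forever. */
static bool
swapped_order(void)
{
	uint32_t t = tqs_start();
	bool done = tqs_check(t); /* false: own cnt not yet updated */
	tqs_update();
	return done;
}
```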
>
> > +
> > + lcore_mask &= ~(1UL << l);
> > + }
> > +
> > + if (lcore_mask == 0)
> > + return 1;
> > +
> > + rte_pause();
> > + } while (wait);
> > +
> > + return 0;
> > +}
> > +
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC 1/3] log: add TQS log type
2018-11-22 3:30 ` [dpdk-dev] [RFC 1/3] log: add TQS log type Honnappa Nagarahalli
@ 2018-11-27 22:24 ` Stephen Hemminger
2018-11-28 5:58 ` Honnappa Nagarahalli
0 siblings, 1 reply; 260+ messages in thread
From: Stephen Hemminger @ 2018-11-27 22:24 UTC (permalink / raw)
To: Honnappa Nagarahalli; +Cc: dev, nd, dharmik.thakkar, malvika.gupta, gavin.hu
On Wed, 21 Nov 2018 21:30:53 -0600
Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> wrote:
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Steve Capper <steve.capper@arm.com>
> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> ---
> lib/librte_eal/common/include/rte_log.h | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
> index 2f789cb90..b4e91a4a5 100644
> --- a/lib/librte_eal/common/include/rte_log.h
> +++ b/lib/librte_eal/common/include/rte_log.h
> @@ -61,6 +61,7 @@ extern struct rte_logs rte_logs;
> #define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. */
> #define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */
> #define RTE_LOGTYPE_GSO 20 /**< Log related to GSO. */
> +#define RTE_LOGTYPE_TQS 21 /**< Log related to Thread Quiescent State. */
>
> /* these log types can be used in an application */
> #define RTE_LOGTYPE_USER1 24 /**< User-defined log type 1. */
Sorry, I don't think this is the right way now.
All new logging should be using dynamic log types.
We should work on getting rid of others (EFD, EVENTDEV, GSO).
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library
2018-11-22 3:30 [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library Honnappa Nagarahalli
` (3 preceding siblings ...)
[not found] ` <CGME20181122073110eucas1p17592400af6c0b807dc87e90d136575af@eucas1p1.samsung.com>
@ 2018-11-27 22:28 ` Stephen Hemminger
2018-11-27 22:49 ` Van Haaren, Harry
2018-12-22 2:14 ` [dpdk-dev] [RFC v2 0/2] rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
` (9 subsequent siblings)
14 siblings, 1 reply; 260+ messages in thread
From: Stephen Hemminger @ 2018-11-27 22:28 UTC (permalink / raw)
To: Honnappa Nagarahalli; +Cc: dev, nd, dharmik.thakkar, malvika.gupta, gavin.hu
On Wed, 21 Nov 2018 21:30:52 -0600
Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> wrote:
> Lock-less data structures provide scalability and determinism.
> They enable use cases where locking may not be allowed
> (for ex: real-time applications).
>
> In the following paras, the term 'memory' refers to memory allocated
> by typical APIs like malloc or anything that is representative of
> memory, for ex: an index of a free element array.
>
> Since these data structures are lock less, the writers and readers
> are accessing the data structures simultaneously. Hence, while removing
> an element from a data structure, the writers cannot return the memory
> to the allocator, without knowing that the readers are not
> referencing that element/memory anymore. Hence, it is required to
> separate the operation of removing an element into 2 steps:
>
> Delete: in this step, the writer removes the element from the
> data structure but does not return the associated memory to the allocator.
> This will ensure that new readers will not get a reference to the removed
> element. Removing the reference is an atomic operation.
>
> Free: in this step, the writer returns the memory to the
> memory allocator, only after knowing that all the readers have stopped
> referencing the removed element.
>
> This library helps the writer determine when it is safe to free the
> memory.
>
> This library makes use of Thread Quiescent State (TQS). TQS can be
> defined as 'any point in the thread execution where the thread does
> not hold a reference to shared memory'. It is up to the application to
> determine its quiescent state. Let us consider the following diagram:
>
> Time -------------------------------------------------->
>
> | |
> RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
> | |
> RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
> | |
> RT3 $++++****D1****+++***|D2***|++++++**D2*****++++$
> | |
> |<--->|
> Del | Free
> |
> Cannot free memory
> during this period
>
> RTx - Reader thread
> < and > - Start and end of while(1) loop
> ***Dx*** - Reader thread is accessing the shared data structure Dx.
> i.e. critical section.
> +++ - Reader thread is not accessing any shared data structure.
> i.e. non critical section or quiescent state.
> Del - Point in time when the reference to the entry is removed using
> atomic operation.
> Free - Point in time when the writer can free the entry.
>
> As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
> accessing D2, if the writer has to remove an element from D2, the
> writer cannot return the memory associated with that element to the
> allocator. The writer can return the memory to the allocator only after
> the reader stops referencing D2. In other words, reader thread RT1
> has to enter a quiescent state.
>
> Similarly, since thread RT3 is also accessing D2, writer has to wait till
> RT3 enters quiescent state as well.
>
> However, the writer does not need to wait for RT2 to enter quiescent state.
> Thread RT2 was not accessing D2 when the delete operation happened.
> So, RT2 will not get a reference to the deleted entry.
>
> It can be noted that, the critical sections for D2 and D3 are quiescent states
> for D1. i.e. for a given data structure Dx, any point in the thread execution
> that does not reference Dx is a quiescent state.
>
> For DPDK applications, the start and end of while(1) loop (where no shared
> data structures are getting accessed) act as perfect quiescent states. This
> will combine all the shared data structure accesses into a single critical
> section and keep the overhead introduced by this library to a minimum.
>
> However, the time taken to identify the end of a critical section is
> proportional to the length of the critical section and the number of
> reader threads. So, if the application desires, it should be possible
> to identify the end of the critical section for each data structure.
>
> To provide the required flexibility, this library has a concept of TQS
> variable. The application can create one or more TQS variables to help it
> track the end of one or more critical sections.
>
> The application can create a TQS variable using the API rte_tqs_alloc.
> It takes a mask of lcore IDs that will report their quiescent states
> using this variable. This mask can be empty to start with.
>
> rte_tqs_register_lcore API will register a reader thread to report its
> quiescent state. This can be called from any control plane thread or from
> the reader thread. The application can create a TQS variable with no reader
> threads and add the threads dynamically using this API.
>
> The application can trigger the reader threads to report their quiescent
> state status by calling the API rte_tqs_start. It is possible for multiple
> writer threads to query the quiescent state status simultaneously. Hence,
> rte_tqs_start returns a token to each caller.
>
> The application has to call rte_tqs_check API with the token to get the
> current status. Option to block till all the threads enter the quiescent
> state is provided. If this API indicates that all the threads have entered
> the quiescent state, the application can free the deleted entry.
>
> The separation of triggering the reporting from querying the status provides
> the writer threads flexibility to do useful work instead of waiting for the
> reader threads to enter the quiescent state.
>
> rte_tqs_unregister_lcore API will remove a reader thread from reporting its
> quiescent state using a TQS variable. The rte_tqs_check API will not wait
> for this reader thread to report the quiescent state status anymore.
>
> Finally, a TQS variable can be deleted by calling rte_tqs_free API.
> Application must make sure that the reader threads are not referencing the
> TQS variable anymore before deleting it.
>
> The reader threads should call rte_tqs_update API to indicate that they
> entered a quiescent state. This API checks if a writer has triggered a
> quiescent state query and updates the state accordingly.
>
> Next Steps:
> 1) Add more test cases
> 2) Convert to patch
> 3) Incorporate feedback from community
> 4) Add documentation
>
> Dharmik Thakkar (1):
> test/tqs: Add API and functional tests
>
> Honnappa Nagarahalli (2):
> log: add TQS log type
> tqs: add thread quiescent state library
>
> config/common_base | 6 +
> lib/Makefile | 2 +
> lib/librte_eal/common/include/rte_log.h | 1 +
> lib/librte_tqs/Makefile | 23 +
> lib/librte_tqs/meson.build | 5 +
> lib/librte_tqs/rte_tqs.c | 249 +++++++++++
> lib/librte_tqs/rte_tqs.h | 352 +++++++++++++++
> lib/librte_tqs/rte_tqs_version.map | 16 +
> lib/meson.build | 2 +-
> mk/rte.app.mk | 1 +
> test/test/Makefile | 2 +
> test/test/autotest_data.py | 6 +
> test/test/meson.build | 5 +-
> test/test/test_tqs.c | 540 ++++++++++++++++++++++++
> 14 files changed, 1208 insertions(+), 2 deletions(-)
> create mode 100644 lib/librte_tqs/Makefile
> create mode 100644 lib/librte_tqs/meson.build
> create mode 100644 lib/librte_tqs/rte_tqs.c
> create mode 100644 lib/librte_tqs/rte_tqs.h
> create mode 100644 lib/librte_tqs/rte_tqs_version.map
> create mode 100644 test/test/test_tqs.c
>
Mixed feelings about this one.
Love to see RCU used for more things since it is much better than reader/writer
locks for many applications. But hate to see DPDK reinventing every other library
and not reusing code. Userspace RCU https://liburcu.org/ is widely supported by
distros, more thoroughly tested and documented, and more flexible.
The issue with many of these reinventions is a tradeoff of DPDK growing
another dependency on external library versus using common code.
For RCU, the big issue for me is the testing and documentation of how to do RCU
safely. Many people get it wrong!
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library
2018-11-27 22:28 ` Stephen Hemminger
@ 2018-11-27 22:49 ` Van Haaren, Harry
2018-11-28 5:31 ` Honnappa Nagarahalli
2018-11-30 2:25 ` Honnappa Nagarahalli
0 siblings, 2 replies; 260+ messages in thread
From: Van Haaren, Harry @ 2018-11-27 22:49 UTC (permalink / raw)
To: Stephen Hemminger, Honnappa Nagarahalli
Cc: dev, nd, dharmik.thakkar, malvika.gupta, gavin.hu
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Stephen Hemminger
> Sent: Tuesday, November 27, 2018 2:28 PM
> To: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Cc: dev@dpdk.org; nd@arm.com; dharmik.thakkar@arm.com; malvika.gupta@arm.com;
> gavin.hu@arm.com
> Subject: Re: [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library
>
> On Wed, 21 Nov 2018 21:30:52 -0600
> Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> wrote:
>
> > Lock-less data structures provide scalability and determinism.
> > They enable use cases where locking may not be allowed
> > (for ex: real-time applications).
> >
<snip detailed RFC commit message>
> > Dharmik Thakkar (1):
> > test/tqs: Add API and functional tests
> >
> > Honnappa Nagarahalli (2):
> > log: add TQS log type
> > tqs: add thread quiescent state library
> >
> > config/common_base | 6 +
> > lib/Makefile | 2 +
> > lib/librte_eal/common/include/rte_log.h | 1 +
> > lib/librte_tqs/Makefile | 23 +
> > lib/librte_tqs/meson.build | 5 +
> > lib/librte_tqs/rte_tqs.c | 249 +++++++++++
> > lib/librte_tqs/rte_tqs.h | 352 +++++++++++++++
> > lib/librte_tqs/rte_tqs_version.map | 16 +
> > lib/meson.build | 2 +-
> > mk/rte.app.mk | 1 +
> > test/test/Makefile | 2 +
> > test/test/autotest_data.py | 6 +
> > test/test/meson.build | 5 +-
> > test/test/test_tqs.c | 540 ++++++++++++++++++++++++
> > 14 files changed, 1208 insertions(+), 2 deletions(-)
> > create mode 100644 lib/librte_tqs/Makefile
> > create mode 100644 lib/librte_tqs/meson.build
> > create mode 100644 lib/librte_tqs/rte_tqs.c
> > create mode 100644 lib/librte_tqs/rte_tqs.h
> > create mode 100644 lib/librte_tqs/rte_tqs_version.map
> > create mode 100644 test/test/test_tqs.c
> >
>
> Mixed feelings about this one.
>
> Love to see RCU used for more things since it is much better than
> reader/writer
> locks for many applications. But hate to see DPDK reinventing every other
> library
> and not reusing code. Userspace RCU https://liburcu.org/ is widely supported
> by
> distros, more thoroughly tested and documented, and more flexible.
>
> The issue with many of these reinventions is a tradeoff of DPDK growing
> another dependency on external library versus using common code.
>
> For RCU, the big issue for me is the testing and documentation of how to do
> RCU
> safely. Many people get it wrong!
Some notes on liburcu (and my amateur understanding of LGPL, I'm not a license lawyer :)
Liburcu is LGPL, which AFAIK means we must dynamically link it into applications whose code is under BSD or another permissive license.
The side effect of this is that urcu function calls must be "real" function calls and inlining them is not possible. Therefore using liburcu in LGPL mode could have a performance impact in this case. I expect estimating the performance cost would be difficult, as it's pretty much case-by-case depending on what you're doing in the surrounding code.
Generally I'm in favour of using established libraries (particularly for complex functionality like RCU) but in this case I think there's a tradeoff with raw performance.
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library
2018-11-27 22:49 ` Van Haaren, Harry
@ 2018-11-28 5:31 ` Honnappa Nagarahalli
2018-11-28 23:23 ` Stephen Hemminger
2018-11-30 2:25 ` Honnappa Nagarahalli
1 sibling, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-11-28 5:31 UTC (permalink / raw)
To: Van Haaren, Harry, Stephen Hemminger
Cc: dev, nd, Dharmik Thakkar, Malvika Gupta,
Gavin Hu (Arm Technology China),
Honnappa Nagarahalli, nd
> >
> > On Wed, 21 Nov 2018 21:30:52 -0600
> > Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> wrote:
> >
> > > Lock-less data structures provide scalability and determinism.
> > > They enable use cases where locking may not be allowed (for ex:
> > > real-time applications).
> > >
>
> <snip detailed RFC commit message>
>
> > > Dharmik Thakkar (1):
> > > test/tqs: Add API and functional tests
> > >
> > > Honnappa Nagarahalli (2):
> > > log: add TQS log type
> > > tqs: add thread quiescent state library
> > >
> > > config/common_base | 6 +
> > > lib/Makefile | 2 +
> > > lib/librte_eal/common/include/rte_log.h | 1 +
> > > lib/librte_tqs/Makefile | 23 +
> > > lib/librte_tqs/meson.build | 5 +
> > > lib/librte_tqs/rte_tqs.c | 249 +++++++++++
> > > lib/librte_tqs/rte_tqs.h | 352 +++++++++++++++
> > > lib/librte_tqs/rte_tqs_version.map | 16 +
> > > lib/meson.build | 2 +-
> > > mk/rte.app.mk | 1 +
> > > test/test/Makefile | 2 +
> > > test/test/autotest_data.py | 6 +
> > > test/test/meson.build | 5 +-
> > > test/test/test_tqs.c | 540 ++++++++++++++++++++++++
> > > 14 files changed, 1208 insertions(+), 2 deletions(-) create mode
> > > 100644 lib/librte_tqs/Makefile create mode 100644
> > > lib/librte_tqs/meson.build create mode 100644
> > > lib/librte_tqs/rte_tqs.c create mode 100644
> > > lib/librte_tqs/rte_tqs.h create mode 100644
> > > lib/librte_tqs/rte_tqs_version.map
> > > create mode 100644 test/test/test_tqs.c
> > >
> >
> > Mixed feelings about this one.
> >
> > Love to see RCU used for more things since it is much better than
> > reader/writer locks for many applications. But hate to see DPDK
> > reinventing every other library and not reusing code. Userspace RCU
> > https://liburcu.org/ is widely supported by distros, more thoroughly
> > tested and documented, and more flexible.
> >
> > The issue with many of these reinventions is a tradeoff of DPDK
> > growing another dependency on external library versus using common code.
> >
Agree with the dependency issues. Sometimes flexibility also brings confusion and features that are not necessarily required for a targeted use case. I have seen that much of the functionality that could be left to the application gets implemented as part of the library.
I think having it in DPDK will give us control over the amount of capability this library will have and freedom over changes we would like to make to such a library. I also view DPDK as one package where all things required for data plane development are available.
> > For RCU, the big issue for me is the testing and documentation of how
> > to do RCU safely. Many people get it wrong!
Hopefully, we all will do a better job collectively :)
>
>
> Some notes on liburcu (and my amateur understanding of LGPL, I'm not a
> license lawyer :)
>
> Liburcu is LGPL, which AFAIK means we must dynamically link applications if
> the application code is BSD or other permissive licenses.
>
> The side effect of this is that urcu function calls must be "real" function calls
> and inlining them is not possible. Therefore using liburcu in LGPL mode could
> have a performance impact in this case. I expect estimating the performance
> cost would be
> difficult as its pretty much a case-by-case depending on what you're doing in
> the surrounding code.
>
> Generally I'm in favour of using established libraries (particularly for complex
> functionality like RCU) but in this case I think there's a tradeoff with raw
> performance.
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC 1/3] log: add TQS log type
2018-11-27 22:24 ` Stephen Hemminger
@ 2018-11-28 5:58 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-11-28 5:58 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, nd, Dharmik Thakkar, Malvika Gupta,
Gavin Hu (Arm Technology China),
nd
>
> On Wed, 21 Nov 2018 21:30:53 -0600
> Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> wrote:
>
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > ---
> > lib/librte_eal/common/include/rte_log.h | 1 +
> > 1 file changed, 1 insertion(+)
> >
> > diff --git a/lib/librte_eal/common/include/rte_log.h
> b/lib/librte_eal/common/include/rte_log.h
> > index 2f789cb90..b4e91a4a5 100644
> > --- a/lib/librte_eal/common/include/rte_log.h
> > +++ b/lib/librte_eal/common/include/rte_log.h
> > @@ -61,6 +61,7 @@ extern struct rte_logs rte_logs;
> > #define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. */
> > #define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */
> > #define RTE_LOGTYPE_GSO 20 /**< Log related to GSO. */
> > +#define RTE_LOGTYPE_TQS 21 /**< Log related to Thread Quiescent State.
> */
> >
> > /* these log types can be used in an application */
> > #define RTE_LOGTYPE_USER1 24 /**< User-defined log type 1. */
>
> Sorry, I don't think this is the right way now.
Ok. I see some examples for the libraries already. I will change it in next version.
>
> All new logging should be using dynamic log types.
> We should work on getting rid of others (EFD, EVENTDEV, GSO).
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC 2/3] tqs: add thread quiescent state library
2018-11-27 21:32 ` Honnappa Nagarahalli
@ 2018-11-28 15:25 ` Ananyev, Konstantin
2018-12-07 7:27 ` Honnappa Nagarahalli
0 siblings, 1 reply; 260+ messages in thread
From: Ananyev, Konstantin @ 2018-11-28 15:25 UTC (permalink / raw)
To: Honnappa Nagarahalli, dev
Cc: nd, Dharmik Thakkar, Malvika Gupta, Gavin Hu (Arm Technology China), nd
> >
> > Hi Honnappa,
> Thank you for reviewing the patch, appreciate your comments.
>
> >
> > > +
> > > +/* Allocate a new TQS variable with the name *name* in memory. */
> > > +struct rte_tqs * __rte_experimental rte_tqs_alloc(const char *name,
> > > +int socket_id, uint64_t lcore_mask) {
> > > + char tqs_name[RTE_TQS_NAMESIZE];
> > > + struct rte_tailq_entry *te, *tmp_te;
> > > + struct rte_tqs_list *tqs_list;
> > > + struct rte_tqs *v, *tmp_v;
> > > + int ret;
> > > +
> > > + if (name == NULL) {
> > > + RTE_LOG(ERR, TQS, "Invalid input parameters\n");
> > > + rte_errno = -EINVAL;
> > > + return NULL;
> > > + }
> > > +
> > > + te = rte_zmalloc("TQS_TAILQ_ENTRY", sizeof(*te), 0);
> > > + if (te == NULL) {
> > > + RTE_LOG(ERR, TQS, "Cannot reserve memory for tailq\n");
> > > + rte_errno = -ENOMEM;
> > > + return NULL;
> > > + }
> > > +
> > > + snprintf(tqs_name, sizeof(tqs_name), "%s", name);
> > > + v = rte_zmalloc_socket(tqs_name, sizeof(struct rte_tqs),
> > > + RTE_CACHE_LINE_SIZE, socket_id);
> > > + if (v == NULL) {
> > > + RTE_LOG(ERR, TQS, "Cannot reserve memory for TQS variable\n");
> > > + rte_errno = -ENOMEM;
> > > + goto alloc_error;
> > > + }
> > > +
> > > + ret = snprintf(v->name, sizeof(v->name), "%s", name);
> > > + if (ret < 0 || ret >= (int)sizeof(v->name)) {
> > > + rte_errno = -ENAMETOOLONG;
> > > + goto alloc_error;
> > > + }
> > > +
> > > + te->data = (void *) v;
> > > + v->lcore_mask = lcore_mask;
> > > +
> > > + rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
> > > +
> > > + tqs_list = RTE_TAILQ_CAST(rte_tqs_tailq.head, rte_tqs_list);
> > > +
> > > + /* Search if a TQS variable with the same name exists already */
> > > + TAILQ_FOREACH(tmp_te, tqs_list, next) {
> > > + tmp_v = (struct rte_tqs *) tmp_te->data;
> > > + if (strncmp(name, tmp_v->name, RTE_TQS_NAMESIZE) == 0)
> > > + break;
> > > + }
> > > +
> > > + if (tmp_te != NULL) {
> > > + rte_errno = -EEXIST;
> > > + goto tqs_exist;
> > > + }
> > > +
> > > + TAILQ_INSERT_TAIL(tqs_list, te, next);
> > > +
> > > + rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
> > > +
> > > + return v;
> > > +
> > > +tqs_exist:
> > > + rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
> > > +
> > > +alloc_error:
> > > + rte_free(te);
> > > + rte_free(v);
> > > + return NULL;
> > > +}
> >
> > That seems like quite a heavy-weight function just to allocate a sync variable.
> > As the size of struct rte_tqs is constant and known to the user, it might be better to
> > just provide rte_tqs_init(struct rte_tqs *tqs, ...) and let the user allocate/free memory
> > for it himself.
> >
> I believe, when you say heavy-weight, you are referring to adding tqs variable to the TAILQ and allocating the memory for it.
Yes.
> Agree. I also
> do not expect that there are a whole lot of tqs variables used in an application. Even in rte_tqs_free, there is similar overhead.
>
> The extra part is due to the way the TQS variable will get identified by data plane threads. I am thinking that a data plane thread will use the
> rte_tqs_lookup API to identify a TQS variable. However, it is possible to share this with data plane threads via a simple shared structure as
> well.
>
> Along with not allocating the memory, are you suggesting that we could skip maintaining a list of TQS variables in the TAILQ? This will
> remove rte_tqs_lookup, rte_tqs_free, rte_tqs_list_dump APIs. I am fine with this approach.
Yes, that's what I suggest.
My thought was - it is just another data structure used for synchronization (as spinlock, rwlock, etc.).
So it should be possible to allocate it statically, and we probably don't need the ability to look up
such a variable by name via the tailq.
>
> > > +
> > > +/* Add a reader thread, running on an lcore, to the list of threads
> > > + * reporting their quiescent state on a TQS variable.
> > > + */
> > > +int __rte_experimental
> > > +rte_tqs_register_lcore(struct rte_tqs *v, unsigned int lcore_id) {
> > > + TQS_RETURN_IF_TRUE((v == NULL || lcore_id >= RTE_TQS_MAX_LCORE),
> > > + -EINVAL);
> >
> > It is not very good practice to make a function return different values and behave
> > in a different way in debug/non-debug mode.
> > I'd say that for slow-path (functions in .c) it is always good to check input
> > parameters.
> > For fast-path (functions in .h) we sometimes skip such checking, but debug
> > mode can probably use RTE_ASSERT() or so.
> Makes sense, I will change this in the next version.
>
> >
> >
> > lcore_id >= RTE_TQS_MAX_LCORE
> >
> > Is this limitation really necessary?
> I added this limitation because currently a DPDK application cannot take a mask more than 64 bits wide. Otherwise, this should be as big as
> RTE_MAX_LCORE.
> I see that in the case of '-lcores' option, the number of lcores can be more than the number of PEs. In this case, we still need a MAX limit
> (but can be bigger than 64).
>
> > First it means that only lcores can use that API (at least data-path part), second
> > even today many machines have more than 64 cores.
> > I think you can easily avoid such limitation, if instead of requiring lcore_id as
> > input parameter, you'll just make it return index of next available entry in w[].
> > Then tqs_update() can take that index as input parameter.
> I had thought about a similar approach based on IDs. I was concerned that ID will be one more thing to manage for the application. But, I
> see the limitations of the current approach now. I will change it to allocation based. This will support even non-EAL pthreads as well.
Yes, with such approach non-lcore threads will be able to use it also.
> >
> > > +
> >
> > > + /* Worker thread has to count the quiescent states
> > > + * only from the current value of token.
> > > + */
> > > + v->w[lcore_id].cnt = v->token;
> >
> > I wonder what would happen if a new reader calls register() after the writer
> > calls start()?
> > Looks like a race condition.
> > Or is such a pattern not supported?
> The start should be called only after the reference to the entry in the data structure is 'deleted'. Hence the new reader will not get the
> reference to the deleted entry and does not have to increment its counter. When rte_tqs_check is called, it will see that the counter is
> already up to date. (I am missing a load-acquire on the token, I will correct that in the next version).
Yes, with _acquire_ in place it seems to be good here.
>
> >
> > > +
> > > + /* Release the store to initial TQS count so that workers
> > > + * can use it immediately after this function returns.
> > > + */
> > > + __atomic_fetch_or(&v->lcore_mask, (1UL << lcore_id),
> > > +__ATOMIC_RELEASE);
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Trigger the worker threads to report the quiescent state
> > > + * status.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread
> > > + * safe and can be called from the worker threads as well.
> > > + *
> > > + * @param v
> > > + * TQS variable
> > > + * @param n
> > > + * Expected number of times the quiescent state is entered
> > > + * @param t
> > > + * - If successful, this is the token for this call of the API.
> > > + * This should be passed to rte_tqs_check API.
> > > + * @return
> > > + * - -EINVAL if the parameters are invalid (debug mode compilation only).
> > > + * - 0 Otherwise and always (non-debug mode compilation).
> > > + */
> > > +static __rte_always_inline int __rte_experimental
> > > +rte_tqs_start(struct rte_tqs *v, unsigned int n, uint32_t *t) {
> > > + TQS_RETURN_IF_TRUE((v == NULL || t == NULL), -EINVAL);
> > > +
> > > + /* This store release will ensure that changes to any data
> > > + * structure are visible to the workers before the token
> > > + * update is visible.
> > > + */
> > > + *t = __atomic_add_fetch(&v->token, n, __ATOMIC_RELEASE);
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Update quiescent state for the worker thread on a lcore.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread safe.
> > > + * All the worker threads registered to report their quiescent state
> > > + * on the TQS variable must call this API.
> > > + *
> > > + * @param v
> > > + * TQS variable
> > > + */
> > > +static __rte_always_inline void __rte_experimental
> > > +rte_tqs_update(struct rte_tqs *v, unsigned int lcore_id) {
> > > + uint32_t t;
> > > +
> > > + TQS_ERR_LOG_IF_TRUE(v == NULL || lcore_id >= RTE_TQS_MAX_LCORE);
> > > +
> > > + /* Load the token before the worker thread loads any other
> > > + * (lock-free) data structure. This ensures that updates
> > > + * to the data structures are visible if the update
> > > + * to token is visible.
> > > + */
> > > + t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
> >
> > Hmm, I am not very familiar with C11 model, but it looks like a race condition
> > to me:
> > as I understand, update() supposed be called at the end of reader's critical
> > section, correct?
> Yes, the understanding is correct.
>
> > But ACQUIRE is only a hoist barrier, which means compiler and cpu are free to
> > move earlier reads (and writes) after it.
> Yes, your understanding is correct.
>
> > It probably needs to be a full ACQ_REL here.
> >
> The sequence of operations is as follows:
> 1) Writer 'deletes' an entry from a lock-free data structure
> 2) Writer calls rte_tqs_start - This API increments the 'token' and does a store-release. So, any earlier stores would be visible if the store to
> 'token' is visible (to the data plane threads).
> 3) Reader calls rte_tqs_update - This API load-acquires the 'token'.
> a) If this 'token' is the updated value from 2) then the entry deleted from 1) will not be available for the reader to reference (even if
> that reference is due to earlier reads being moved after load-acquire of 'token').
> b) If this 'token' is not the updated value from 2) then the entry deleted from 1) may or may not be available for the reader to
> reference. In this case the w[lcore_id].cnt is not updated, hence the writer will wait to 'free' the deleted entry from 1)
Yes, you're right, it's me being confused.
>
>
> > > + if (v->w[lcore_id].cnt != t)
> > > + v->w[lcore_id].cnt++;
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Checks if all the worker threads have entered the quiescent state
> > > + * 'n' number of times. 'n' is provided in rte_tqs_start API.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread
> > > + * safe and can be called from the worker threads as well.
> > > + *
> > > + * @param v
> > > + * TQS variable
> > > + * @param t
> > > + * Token returned by rte_tqs_start API
> > > + * @param wait
> > > + * If true, block till all the worker threads have completed entering
> > > + * the quiescent state 'n' number of times
> > > + * @return
> > > + * - 0 if all worker threads have NOT passed through specified number
> > > + * of quiescent states.
> > > + * - 1 if all worker threads have passed through specified number
> > > + * of quiescent states.
> > > + * - -EINVAL if the parameters are invalid (debug mode compilation only).
> > > + */
> > > +static __rte_always_inline int __rte_experimental
> > > +rte_tqs_check(struct rte_tqs *v, uint32_t t, bool wait) {
> > > + uint64_t l;
> > > + uint64_t lcore_mask;
> > > +
> > > + TQS_RETURN_IF_TRUE((v == NULL), -EINVAL);
> > > +
> > > + do {
> > > + /* Load the current lcore_mask before loading the
> > > + * worker thread quiescent state counters.
> > > + */
> > > + lcore_mask = __atomic_load_n(&v->lcore_mask, __ATOMIC_ACQUIRE);
> >
> > What would happen if a reader calls unregister() simultaneously with check()
> > and updates lcore_mask straight after that load?
> > As I understand it, check() might hang in such a case.
> If the 'lcore_mask' is updated after this load, it will affect only the current iteration of the while loop below. In the next iteration the
> 'lcore_mask' is loaded again.
True, my confusion again.
>
> >
> > > +
> > > + while (lcore_mask) {
> > > + l = __builtin_ctz(lcore_mask);
> > > + if (v->w[l].cnt != t)
> > > + break;
> >
> > As I understand, that makes control-path function progress dependent on
> > simultaneous invocation of data-path functions.
> I agree that the control-path function progress (for ex: how long to wait for freeing the memory) depends on invocation of the data-path
> functions. The separation of 'start', 'check' and the option not to block in 'check' provide the flexibility for control-path to do some other
> work if it chooses to.
>
> > In some cases that might cause control-path to hang.
> > Let say if data-path function wouldn't be called, or user invokes control-path
> > and data-path functions from the same thread.
> I agree with the case of data-path function not getting called. I would consider that as programming error. I can document that warning in
> the rte_tqs_check API.
Sure, it can be documented.
Though that means that each data-path thread would have to do an explicit update() call
for every tqs it might use.
I just think that it would complicate things and might limit usage of the library quite significantly.
>
> In the case of same thread calling both control-path and data-path functions, it would depend on the sequence of the calls. The following
> sequence should not cause any hangs:
> Worker thread
> 1) 'deletes' an entry from a lock-free data structure
> 2) rte_tqs_start
> 3) rte_tqs_update
> 4) rte_tqs_check (wait == 1 or wait == 0)
> 5) 'free' the entry deleted in 1)
That an interesting idea, and that should help, I think.
Probably worth having the {2,3,4} sequence as a new high-level function.
>
> If 3) and 4) are interchanged, then there will be a hang if wait is set to 1. If wait is set to 0, there should not be a hang.
> I can document this as part of the documentation (I do not think API documentation is required for this).
>
> >
> > > +
> > > + lcore_mask &= ~(1UL << l);
> > > + }
> > > +
> > > + if (lcore_mask == 0)
> > > + return 1;
> > > +
> > > + rte_pause();
> > > + } while (wait);
> > > +
> > > + return 0;
> > > +}
> > > +
* Re: [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library
2018-11-28 5:31 ` Honnappa Nagarahalli
@ 2018-11-28 23:23 ` Stephen Hemminger
2018-11-30 2:13 ` Honnappa Nagarahalli
0 siblings, 1 reply; 260+ messages in thread
From: Stephen Hemminger @ 2018-11-28 23:23 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: Van Haaren, Harry, dev, nd, Dharmik Thakkar, Malvika Gupta,
Gavin Hu (Arm Technology China)
On Wed, 28 Nov 2018 05:31:56 +0000
Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
> > > Mixed feelings about this one.
> > >
> > > Love to see RCU used for more things since it is much better than
> > > reader/writer locks for many applications. But hate to see DPDK
> > > reinventing every other library and not reusing code. Userspace RCU
> > > https://liburcu.org/ is widely supported by distros, more thoroughly
> > > tested and documented, and more flexible.
> > >
> > > The issue with many of these reinventions is a tradeoff of DPDK
> > > growing another dependency on external library versus using common code.
> > >
> Agree with the dependency issues. Sometimes flexibility also causes confusion and features that are not necessarily required for a targeted use case. I have seen that much of the functionality that can be left to the application is implemented as part of the library.
> I think having it in DPDK will give us control over the amount of capability this library will have and freedom over changes we would like to make to such a library. I also view DPDK as one package where all things required for data plane development are available.
>
> > > For RCU, the big issue for me is the testing and documentation of how
> > > to do RCU safely. Many people get it wrong!
> Hopefully, we all will do a better job collectively :)
>
> >
Reinventing RCU is not helping anyone.
DPDK needs to fix its dependency model, and just admit that it is ok
to build off of more than glibc.
Having used liburcu, it can be done in a small manner and really isn't that
confusing.
Is your real issue the LGPL license of liburcu for your skittish customers?
* Re: [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library
2018-11-28 23:23 ` Stephen Hemminger
@ 2018-11-30 2:13 ` Honnappa Nagarahalli
2018-11-30 16:26 ` Luca Boccassi
2018-11-30 20:56 ` Mattias Rönnblom
0 siblings, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-11-30 2:13 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Van Haaren, Harry, dev, nd, Dharmik Thakkar, Malvika Gupta,
Gavin Hu (Arm Technology China),
Honnappa Nagarahalli, nd
>
> > > > Mixed feelings about this one.
> > > >
> > > > Love to see RCU used for more things since it is much better than
> > > > reader/writer locks for many applications. But hate to see DPDK
> > > > reinventing every other library and not reusing code. Userspace
> > > > RCU https://liburcu.org/ is widely supported by distros, more
> > > > thoroughly tested and documented, and more flexible.
> > > >
> > > > The issue with many of these reinventions is a tradeoff of DPDK
> > > > growing another dependency on external library versus using common
> code.
> > > >
> > Agree with the dependency issues. Sometimes flexibility also causes confusion
> and features that are not necessarily required for a targeted use case. I have
> seen that much of the functionality that can be left to the application is
> implemented as part of the library.
> > I think having it in DPDK will give us control over the amount of capability this
> library will have and freedom over changes we would like to make to such a
> library. I also view DPDK as one package where all things required for data
> plane development are available.
> >
> > > > For RCU, the big issue for me is the testing and documentation of
> > > > how to do RCU safely. Many people get it wrong!
> > Hopefully, we all will do a better job collectively :)
> >
> > >
>
> Reinventing RCU is not helping anyone.
IMO, this depends on what the rte_tqs has to offer and what the requirements are. Before starting this patch, I looked at the liburcu APIs. I have to say, fairly quickly (no offense) I concluded that this does not address DPDK's needs. I took a deeper look at the APIs/code in the past day and I still concluded the same. My partial analysis (analysis of more APIs can be done, I do not have cycles at this point) is as follows:
The reader threads' information is maintained in a linked list[1]. This linked list is protected by a mutex lock[2]. Any additions/deletions/traversals of this list are blocking and cannot happen in parallel.
The API, 'synchronize_rcu' [3] (similar functionality to rte_tqs_check call) is a blocking call. There is no option provided to make it non-blocking. The writer spins cycles while waiting for the grace period to get over.
'synchronize_rcu' also has a grace period lock [4]. If I have multiple writers running on data plane threads, I cannot call this API to reclaim the memory in the worker threads as it will block other worker threads. This means there is an extra thread required (on the control plane?) which does garbage collection, and a method to push the pointers from worker threads to the garbage collection thread. This also means the time duration from delete to free increases, putting pressure on the amount of memory held up.
Since this API cannot be called concurrently by multiple writers, each writer has to wait for other writer's grace period to get over (i.e. multiple writer threads cannot overlap their grace periods).
This API also has to traverse the linked list which is not very well suited for calling on data plane.
I have not gone too much into rcu_thread_offline[5] API. This again needs to be used in worker cores and does not look to be very optimal.
I have glanced at rcu_quiescent_state [6]; it wakes up the thread calling 'synchronize_rcu', which seems like a good amount of code for the data plane.
[1] https://github.com/urcu/userspace-rcu/blob/master/include/urcu/static/urcu-qsbr.h#L85
[2] https://github.com/urcu/userspace-rcu/blob/master/src/urcu-qsbr.c#L68
[3] https://github.com/urcu/userspace-rcu/blob/master/src/urcu-qsbr.c#L344
[4] https://github.com/urcu/userspace-rcu/blob/master/src/urcu-qsbr.c#L58
[5] https://github.com/urcu/userspace-rcu/blob/master/include/urcu/static/urcu-qsbr.h#L211
[6] https://github.com/urcu/userspace-rcu/blob/master/include/urcu/static/urcu-qsbr.h#L193
Coming to what is provided in rte_tqs:
The synchronize_rcu functionality is split in to 2 APIs: rte_tqs_start and rte_tqs_check. The reader data is maintained as an array.
Both the APIs are lock-free, allowing them to be called from multiple threads concurrently. This allows multiple writers to wait for their grace periods concurrently as well as overlap their grace periods. rte_tqs_start API returns a token which provides the ability to separate the quiescent state waiting of different writers. Hence, no writer waits for other writer's grace period to get over.
Since these 2 APIs are lock-free, they can be called from writers running on worker cores as well without the need for a separate thread to do garbage collection.
The separation into 2 APIs provides the ability for writers to not spin cycles waiting for the grace period to get over. This enables different ways of doing garbage collection. For ex: a data structure delete API could remove the entry from the data structure, call rte_tqs_start and return back to the caller. On the invocation of next API call of the library, the API can call rte_tqs_check (which will mostly indicate that the grace period is complete) and free the previously deleted entry.
rte_tqs_update (mapping to rcu_quiescent_state) is pretty small and simple.
The rte_tqs_register and rte_tqs_unregister APIs are lock free. Hence additional APIs like rcu_thread_online and rcu_thread_offline are not required. The rte_tqs_unregister API (when compared to rcu_thread_offline) is much simpler and more conducive to use in worker threads.
>
>
> DPDK needs to fix its dependency model, and just admit that it is ok to build off
> of more than glibc.
>
> Having used liburcu, it can be done in a small manner and really isn't that
> confusing.
>
> Is your real issue the LGPL license of liburcu for your skittish customers?
I have not had any discussions on this. Customers are mainly focused on having a solution on which they have meaningful control. They want to be able to submit a patch and change things if required. For ex: barriers for Arm [7] are not optimal. How easy is it to change this and get it into distros (there are both internal and external factors here)?
[7] https://github.com/urcu/userspace-rcu/blob/master/include/urcu/arch/arm.h#L44
* Re: [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library
2018-11-27 22:49 ` Van Haaren, Harry
2018-11-28 5:31 ` Honnappa Nagarahalli
@ 2018-11-30 2:25 ` Honnappa Nagarahalli
2018-11-30 21:03 ` Mattias Rönnblom
1 sibling, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-11-30 2:25 UTC (permalink / raw)
To: Van Haaren, Harry, Stephen Hemminger
Cc: dev, nd, Dharmik Thakkar, Malvika Gupta,
Gavin Hu (Arm Technology China),
nd
> > >
> >
> > Mixed feelings about this one.
> >
> > Love to see RCU used for more things since it is much better than
> > reader/writer locks for many applications. But hate to see DPDK
> > reinventing every other library and not reusing code. Userspace RCU
> > https://liburcu.org/ is widely supported by distros, more thoroughly
> > tested and documented, and more flexible.
> >
> > The issue with many of these reinventions is a tradeoff of DPDK
> > growing another dependency on external library versus using common code.
> >
> > For RCU, the big issue for me is the testing and documentation of how
> > to do RCU safely. Many people get it wrong!
>
>
> Some notes on liburcu (and my amateur understanding of LGPL, I'm not a
> license lawyer :)
>
> Liburcu is LGPL, which AFAIK means we must dynamically link applications if
> the application code is BSD or other permissive licenses.
>
> The side effect of this is that urcu function calls must be "real" function calls
> and inlining them is not possible. Therefore using liburcu in LGPL mode could
> have a performance impact in this case. I expect estimating the performance
> cost would be
> difficult as its pretty much a case-by-case depending on what you're doing in
> the surrounding code.
>
> Generally I'm in favour of using established libraries (particularly for complex
> functionality like RCU) but in this case I think there's a tradeoff with raw
> performance.
The licensing info [1] is very interesting. Again I am no lawyer :)
[1] https://github.com/urcu/userspace-rcu/blob/master/include/urcu/static/urcu-qsbr.h#L184
* Re: [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library
2018-11-30 2:13 ` Honnappa Nagarahalli
@ 2018-11-30 16:26 ` Luca Boccassi
2018-11-30 18:32 ` Stephen Hemminger
2018-11-30 20:20 ` Honnappa Nagarahalli
2018-11-30 20:56 ` Mattias Rönnblom
1 sibling, 2 replies; 260+ messages in thread
From: Luca Boccassi @ 2018-11-30 16:26 UTC (permalink / raw)
To: Honnappa Nagarahalli, Stephen Hemminger
Cc: Van Haaren, Harry, dev, nd, Dharmik Thakkar, Malvika Gupta,
Gavin Hu (Arm Technology China)
On Fri, 2018-11-30 at 02:13 +0000, Honnappa Nagarahalli wrote:
> >
> > > > > Mixed feelings about this one.
> > > > >
> > > > > Love to see RCU used for more things since it is much better
> > > > > than
> > > > > reader/writer locks for many applications. But hate to see
> > > > > DPDK
> > > > > reinventing every other library and not reusing code.
> > > > > Userspace
> > > > > RCU https://liburcu.org/ is widely supported by distros,
> > > > > more
> > > > > thoroughly tested and documented, and more flexible.
> > > > >
> > > > > The issue with many of these reinventions is a tradeoff of
> > > > > DPDK
> > > > > growing another dependency on external library versus using
> > > > > common
> >
> > code.
> > > > >
> > >
> > > Agree with the dependency issues. Sometimes flexibility also
> > > causes confusion
> >
> > and features that are not necessarily required for a targeted use
> > case. I have
> > seen that much of the functionality that can be left to the
> > application is
> > implemented as part of the library.
> > > I think having it in DPDK will give us control over the amount of
> > > capability this
> >
> > library will have and freedom over changes we would like to make to
> > such a
> > library. I also view DPDK as one package where all things required
> > for data
> > plane development are available.
> > >
> > > > > For RCU, the big issue for me is the testing and
> > > > > documentation of
> > > > > how to do RCU safely. Many people get it wrong!
> > >
> > > Hopefully, we all will do a better job collectively :)
> > >
> > > >
> >
> > Reinventing RCU is not helping anyone.
>
> IMO, this depends on what the rte_tqs has to offer and what the
> requirements are. Before starting this patch, I looked at the liburcu
> APIs. I have to say, fairly quickly (no offense) I concluded that
> this does not address DPDK's needs. I took a deeper look at the
> APIs/code in the past day and I still concluded the same. My partial
> analysis (analysis of more APIs can be done, I do not have cycles at
> this point) is as follows:
>
> The reader threads' information is maintained in a linked list[1].
> This linked list is protected by a mutex lock[2]. Any
> additions/deletions/traversals of this list are blocking and cannot
> happen in parallel.
>
> The API, 'synchronize_rcu' [3] (similar functionality to
> rte_tqs_check call) is a blocking call. There is no option provided
> to make it non-blocking. The writer spins cycles while waiting for
> the grace period to get over.
>
> 'synchronize_rcu' also has grace period lock [4]. If I have multiple
> writers running on data plane threads, I cannot call this API to
> reclaim the memory in the worker threads as it will block other
> worker threads. This means, there is an extra thread required (on the
> control plane?) which does garbage collection and a method to push
> the pointers from worker threads to the garbage collection thread.
> This also means the time duration from delete to free increases
> putting pressure on amount of memory held up.
> Since this API cannot be called concurrently by multiple writers,
> each writer has to wait for other writer's grace period to get over
> (i.e. multiple writer threads cannot overlap their grace periods).
>
> This API also has to traverse the linked list which is not very well
> suited for calling on data plane.
>
> I have not gone too much into rcu_thread_offline[5] API. This again
> needs to be used in worker cores and does not look to be very
> optimal.
>
> I have glanced at rcu_quiescent_state [6], it wakes up the thread
> calling 'synchronize_rcu' which seems good amount of code for the
> data plane.
>
> [1] https://github.com/urcu/userspace-rcu/blob/master/include/urcu/static/urcu-qsbr.h#L85
> [2] https://github.com/urcu/userspace-rcu/blob/master/src/urcu-qsbr.c#L68
> [3] https://github.com/urcu/userspace-rcu/blob/master/src/urcu-qsbr.c#L344
> [4] https://github.com/urcu/userspace-rcu/blob/master/src/urcu-qsbr.c#L58
> [5] https://github.com/urcu/userspace-rcu/blob/master/include/urcu/static/urcu-qsbr.h#L211
> [6] https://github.com/urcu/userspace-rcu/blob/master/include/urcu/static/urcu-qsbr.h#L193
>
> Coming to what is provided in rte_tqs:
>
> The synchronize_rcu functionality is split in to 2 APIs:
> rte_tqs_start and rte_tqs_check. The reader data is maintained as an
> array.
>
> Both the APIs are lock-free, allowing them to be called from multiple
> threads concurrently. This allows multiple writers to wait for their
> grace periods concurrently as well as overlap their grace periods.
> rte_tqs_start API returns a token which provides the ability to
> separate the quiescent state waiting of different writers. Hence, no
> writer waits for other writer's grace period to get over.
> Since these 2 APIs are lock-free, they can be called from writers
> running on worker cores as well without the need for a separate
> thread to do garbage collection.
>
> The separation into 2 APIs provides the ability for writers to not
> spin cycles waiting for the grace period to get over. This enables
> different ways of doing garbage collection. For ex: a data structure
> delete API could remove the entry from the data structure, call
> rte_tqs_start and return back to the caller. On the invocation of
> next API call of the library, the API can call rte_tqs_check (which
> will mostly indicate that the grace period is complete) and free the
> previously deleted entry.
>
> rte_tqs_update (mapping to rcu_quiescent_state) is pretty small and
> simple.
>
> rte_tqs_register and rte_tqs_unregister APIs are lock free. Hence
> additional APIs like rcu_thread_online and rcu_thread_offline are not
> required. The rte_tqs_unregister API (when compared to
> rcu_thread_offline) is much simple and conducive to be used in worker
> threads.
liburcu has many flavours already, qsbr being one of them. If none of
those are optimal for this use case, why not work with upstream to
either improve the existing flavours, or add a new one?
You have the specific knowledge and expertise about the requirements and
needs of the implementation for this use case, and they have the long-
time and extensive experience of maintaining the library on a wide range
of systems and use cases. Why not combine both?
I might be wrong, but to me, nothing described above seems to be
particularly or uniquely tied to implementing a software dataplane or
to DPDK. IMHO we should pool and share wherever possible, rather than
build an ecosystem closed onto itself.
Just my 2c of course!
> > DPDK needs to fix its dependency model, and just admit that it is
> > ok to build off
> > of more than glibc.
> >
> > Having used liburcu, it can be done in a small manner and really
> > isn't that
> > confusing.
> >
> > Is your real issue the LGPL license of liburcu for your skittish
> > customers?
>
> I have not had any discussions on this. Customers are mainly focused
> on having a solution on which they have meaningful control. They want
> to be able to submit a patch and change things if required. For ex:
> barriers for Arm [7] are not optimal. How easy is it to change this
> and get it into distros (there are both internal and external factors
> here)?
It's just as easy (or as hard) as it is with DPDK. So it's either wait
for the distros to update, or rebuild locally.
--
Kind regards,
Luca Boccassi
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library
2018-11-30 16:26 ` Luca Boccassi
@ 2018-11-30 18:32 ` Stephen Hemminger
2018-11-30 20:20 ` Honnappa Nagarahalli
1 sibling, 0 replies; 260+ messages in thread
From: Stephen Hemminger @ 2018-11-30 18:32 UTC (permalink / raw)
To: Luca Boccassi
Cc: Honnappa Nagarahalli, Van Haaren, Harry, dev, nd,
Dharmik Thakkar, Malvika Gupta, Gavin Hu (Arm Technology China)
On Fri, 30 Nov 2018 16:26:25 +0000
Luca Boccassi <bluca@debian.org> wrote:
> On Fri, 2018-11-30 at 02:13 +0000, Honnappa Nagarahalli wrote:
> > >
> > > > > > Mixed feelings about this one.
> > > > > >
> > > > > > Love to see RCU used for more things since it is much better
> > > > > > than
> > > > > > reader/writer locks for many applications. But hate to see
> > > > > > DPDK
> > > > > > reinventing every other library and not reusing code.
> > > > > > Userspace
> > > > > > RCU https://liburcu.org/ is widely supported by distros,
> > > > > > more thoroughly tested and documented, and more flexible.
> > > > > >
> > > > > > The issue with many of these reinventions is a tradeoff of
> > > > > > DPDK growing another dependency on an external library versus
> > > > > > using common code.
> > > > > >
> > > >
> > > > Agree with the dependency issues. Sometimes flexibility also
> > > > causes confusion and brings features that are not necessarily
> > > > required for a targeted use case. I have seen that much of the
> > > > functionality that can be left to the application is implemented
> > > > as part of the library.
> > > > I think having it in DPDK will give us control over the amount of
> > > > capability this library will have and freedom over the changes we
> > > > would like to make to such a library. I also view DPDK as one
> > > > package where all things required for data plane development are
> > > > available.
> > > >
> > > > > > For RCU, the big issue for me is the testing and
> > > > > > documentation of
> > > > > > how to do RCU safely. Many people get it wrong!
> > > >
> > > > Hopefully, we all will do a better job collectively :)
> > > >
> > > > >
> > >
> > > Reinventing RCU is not helping anyone.
> >
> > IMO, this depends on what the rte_tqs has to offer and what the
> > requirements are. Before starting this patch, I looked at the liburcu
> > APIs. I have to say, fairly quickly (no offense) I concluded that
> > this does not address DPDK's needs. I took a deeper look at the
> > APIs/code in the past day and I still concluded the same. My partial
> > analysis (analysis of more APIs can be done, I do not have cycles at
> > this point) is as follows:
> >
> > The reader threads' information is maintained in a linked list[1].
> > This linked list is protected by a mutex lock[2]. Any
> > additions/deletions/traversals of this list are blocking and cannot
> > happen in parallel.
> >
> > The API, 'synchronize_rcu' [3] (similar functionality to
> > rte_tqs_check call) is a blocking call. There is no option provided
> > to make it non-blocking. The writer spins cycles while waiting for
> > the grace period to get over.
> >
> > 'synchronize_rcu' also has grace period lock [4]. If I have multiple
> > writers running on data plane threads, I cannot call this API to
> > reclaim the memory in the worker threads as it will block other
> > worker threads. This means, there is an extra thread required (on the
> > control plane?) which does garbage collection and a method to push
> > the pointers from worker threads to the garbage collection thread.
> > This also means the time duration from delete to free increases
> > putting pressure on amount of memory held up.
> > Since this API cannot be called concurrently by multiple writers,
> > each writer has to wait for other writer's grace period to get over
> > (i.e. multiple writer threads cannot overlap their grace periods).
> >
> > This API also has to traverse the linked list which is not very well
> > suited for calling on data plane.
> >
> > I have not gone too much into rcu_thread_offline[5] API. This again
> > needs to be used in worker cores and does not look to be very
> > optimal.
> >
> > I have glanced at rcu_quiescent_state [6]: it wakes up the thread
> > calling 'synchronize_rcu', which seems like a good amount of code
> > for the data plane.
> >
> > [1] https://github.com/urcu/userspace-rcu/blob/master/include/urcu/static/urcu-qsbr.h#L85
> > [2] https://github.com/urcu/userspace-rcu/blob/master/src/urcu-qsbr.c#L68
> > [3] https://github.com/urcu/userspace-rcu/blob/master/src/urcu-qsbr.c#L344
> > [4] https://github.com/urcu/userspace-rcu/blob/master/src/urcu-qsbr.c#L58
> > [5] https://github.com/urcu/userspace-rcu/blob/master/include/urcu/static/urcu-qsbr.h#L211
> > [6] https://github.com/urcu/userspace-rcu/blob/master/include/urcu/static/urcu-qsbr.h#L193
> >
> > Coming to what is provided in rte_tqs:
> >
> > The synchronize_rcu functionality is split into 2 APIs:
> > rte_tqs_start and rte_tqs_check. The reader data is maintained as an
> > array.
> >
> > Both the APIs are lock-free, allowing them to be called from multiple
> > threads concurrently. This allows multiple writers to wait for their
> > grace periods concurrently as well as overlap their grace periods.
> > rte_tqs_start API returns a token which provides the ability to
> > separate the quiescent state waiting of different writers. Hence, no
> > writer waits for other writer's grace period to get over.
> > Since these 2 APIs are lock-free, they can be called from writers
> > running on worker cores as well without the need for a separate
> > thread to do garbage collection.
> >
> > The separation into 2 APIs provides the ability for writers to not
> > spin cycles waiting for the grace period to get over. This enables
> > different ways of doing garbage collection. For ex: a data structure
> > delete API could remove the entry from the data structure, call
> > rte_tqs_start and return to the caller. On the next invocation of a
> > library API, that API can call rte_tqs_check (which will mostly
> > indicate that the grace period is complete) and free the previously
> > deleted entry.
> >
> > rte_tqs_update (mapping to rcu_quiescent_state) is pretty small and
> > simple.
> >
> > rte_tqs_register and rte_tqs_unregister APIs are lock free. Hence
> > additional APIs like rcu_thread_online and rcu_thread_offline are not
> > required. The rte_tqs_unregister API (when compared to
> > rcu_thread_offline) is much simpler and more conducive to use in
> > worker threads.
>
> liburcu has many flavours already, qsbr being one of them. If none of
> those are optimal for this use case, why not work with upstream to
> either improve the existing flavours, or add a new one?
>
> You have the specific knowledge and expertise about the requirements
> and needs of the implementation for this use case, and they have the
> long-time, extensive experience of maintaining the library on a wide
> range of systems and use cases. Why not combine both?
> I might be wrong, but to me, nothing described above seems to be
> particularly or uniquely tied to implementing a software dataplane or
> to DPDK. IMHO we should pool and share wherever possible, rather than
> build an ecosystem closed onto itself.
>
> Just my 2c of course!
>
> > > DPDK needs to fix its dependency model, and just admit that it is
> > > ok to build off
> > > of more than glibc.
> > >
> > > Having used liburcu, it can be done in a small manner and really
> > > isn't that
> > > confusing.
> > >
> > > Is your real issue the LGPL license of liburcu for your skittish
> > > customers?
> >
> > I have not had any discussions on this. Customers are mainly focused
> > on having a solution on which they have meaningful control. They want
> > to be able to submit a patch and change things if required. For ex:
> > barriers for Arm [7] are not optimal. How easy is it to change this
> > and get it into distros (there are both internal and external factors
> > here)?
>
> It's just as easy (or as hard) as it is with DPDK. So it's either wait
> for the distros to update, or rebuild locally.
>
Either way it would be useful to have the broader RCU community
in on the discussion.
* Re: [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library
2018-11-30 16:26 ` Luca Boccassi
2018-11-30 18:32 ` Stephen Hemminger
@ 2018-11-30 20:20 ` Honnappa Nagarahalli
1 sibling, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-11-30 20:20 UTC (permalink / raw)
To: Luca Boccassi, Stephen Hemminger
Cc: Van Haaren, Harry, dev, nd, Dharmik Thakkar, Malvika Gupta,
Gavin Hu (Arm Technology China),
Honnappa Nagarahalli, nd
Hi Luca,
Appreciate your comments.
>
> liburcu has many flavours already, qsbr being one of them. If none of those
> are optimal for this use case, why not work with upstream to either improve
> the existing flavours, or add a new one?
>
> You have the specific knowledge and expertise about the requirements
> and needs of the implementation for this use case, and they have the
> long-time, extensive experience of maintaining the library on a wide
> range of systems and use cases. Why not combine both?
> I might be wrong, but to me, nothing described above seems to be
> particularly or uniquely tied to implementing a software dataplane or to
> DPDK.
This comment does not help much; I would prefer a more concrete comment. If possible, can you please mention where else these features are required? IMO, if these were required for other use cases, they would have been present in liburcu already.
> IMHO we should pool and share wherever possible, rather than build
> an ecosystem closed onto itself.
>
> Just my 2c of course!
I would say this is true for some of the other libraries in DPDK as well [1], or in fact for the whole of DPDK. The Linux kernel has introduced XDP, and I would say the Linux kernel has a much more vibrant history and community. If XDP lacks some of the features we want, why not work with the Linux community to improve it rather than do DPDK?
[1] https://github.com/urcu/userspace-rcu/blob/master/doc/uatomic-api.md
IMO, we should focus our discussion on how relevant rte_tqs is to DPDK, whether it solves the problems people face while using DPDK and the value add it brings. This should be the basis for the acceptance rather than what is available elsewhere.
>
> > > DPDK needs to fix its dependency model, and just admit that it is ok
> > > to build off of more than glibc.
> > >
> > > Having used liburcu, it can be done in a small manner and really
> > > isn't that confusing.
> > >
> > > Is your real issue the LGPL license of liburcu for your skittish
> > > customers?
> >
> > I have not had any discussions on this. Customers are mainly focused
> > on having a solution on which they have meaningful control. They want
> > to be able to submit a patch and change things if required. For ex:
> > barriers for Arm [7] are not optimal. How easy is it to change this
> > and get it into distros (there are both internal and external factors
> > here)?
>
> It's just as easy (or as hard) as it is with DPDK. So it's either wait for the
> distros to update, or rebuild locally.
>
I have not worked with that community, hence I cannot comment on that. Apologies for asking some hard questions, but do you know how easy it is to change the license of liburcu? Can the DPDK governing board take a decision to change the license of liburcu?
> --
> Kind regards,
> Luca Boccassi
* Re: [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library
2018-11-30 2:13 ` Honnappa Nagarahalli
2018-11-30 16:26 ` Luca Boccassi
@ 2018-11-30 20:56 ` Mattias Rönnblom
2018-11-30 23:44 ` Stephen Hemminger
1 sibling, 1 reply; 260+ messages in thread
From: Mattias Rönnblom @ 2018-11-30 20:56 UTC (permalink / raw)
To: Honnappa Nagarahalli, Stephen Hemminger
Cc: Van Haaren, Harry, dev, nd, Dharmik Thakkar, Malvika Gupta,
Gavin Hu (Arm Technology China)
On 2018-11-30 03:13, Honnappa Nagarahalli wrote:
>>
>> Reinventing RCU is not helping anyone.
> IMO, this depends on what the rte_tqs has to offer and what the requirements are. Before starting this patch, I looked at the liburcu APIs. I have to say, fairly quickly (no offense) I concluded that this does not address DPDK's needs. I took a deeper look at the APIs/code in the past day and I still concluded the same. My partial analysis (analysis of more APIs can be done, I do not have cycles at this point) is as follows:
>
> The reader threads' information is maintained in a linked list[1]. This linked list is protected by a mutex lock[2]. Any additions/deletions/traversals of this list are blocking and cannot happen in parallel.
>
> The API, 'synchronize_rcu' [3] (similar functionality to rte_tqs_check call) is a blocking call. There is no option provided to make it non-blocking. The writer spins cycles while waiting for the grace period to get over.
>
Wouldn't the options be call_rcu, which rarely blocks, or defer_rcu()
which never? Why would the average application want to wait for the
grace period to be over anyway?
> 'synchronize_rcu' also has grace period lock [4]. If I have multiple writers running on data plane threads, I cannot call this API to reclaim the memory in the worker threads as it will block other worker threads. This means, there is an extra thread required (on the control plane?) which does garbage collection and a method to push the pointers from worker threads to the garbage collection thread. This also means the time duration from delete to free increases putting pressure on amount of memory held up.
> Since this API cannot be called concurrently by multiple writers, each writer has to wait for other writer's grace period to get over (i.e. multiple writer threads cannot overlap their grace periods).
"Real" DPDK applications typically have to interact with the outside
world using interfaces beyond DPDK packet I/O, and this is best done via
an intermediate "control plane" thread running in the DPDK application.
Typically, this thread would also be the RCU writer and "garbage
collector", I would say.
>
> This API also has to traverse the linked list which is not very well suited for calling on data plane.
>
> I have not gone too much into rcu_thread_offline[5] API. This again needs to be used in worker cores and does not look to be very optimal.
>
> I have glanced at rcu_quiescent_state [6]: it wakes up the thread calling 'synchronize_rcu', which seems like a good amount of code for the data plane.
>
Wouldn't the typical DPDK lcore worker call rcu_quiescent_state() after
processing a burst of packets? If so, I would more lean toward
"negligible overhead", than "a good amount of code".
I must admit I didn't look at your library in detail, but I must still
ask: if TQS is basically RCU, why isn't it called RCU? And why aren't
the API calls named in a similar manner?
* Re: [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library
2018-11-30 2:25 ` Honnappa Nagarahalli
@ 2018-11-30 21:03 ` Mattias Rönnblom
0 siblings, 0 replies; 260+ messages in thread
From: Mattias Rönnblom @ 2018-11-30 21:03 UTC (permalink / raw)
To: Honnappa Nagarahalli, Van Haaren, Harry, Stephen Hemminger
Cc: dev, nd, Dharmik Thakkar, Malvika Gupta, Gavin Hu (Arm Technology China)
On 2018-11-30 03:25, Honnappa Nagarahalli wrote:
>> Generally I'm in favour of using established libraries (particularly for complex
>> functionality like RCU) but in this case I think there's a tradeoff with raw
>> performance.
> The licensing info [1] is very interesting. Again I am no lawyer :)
>
> [1] https://github.com/urcu/userspace-rcu/blob/master/include/urcu/static/urcu-qsbr.h#L184
>
If you don't know the macro/inline function exception of LGPL 2.1, maybe
it's time to read the license text. Lawyer or not.
* Re: [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library
2018-11-30 20:56 ` Mattias Rönnblom
@ 2018-11-30 23:44 ` Stephen Hemminger
2018-12-01 18:37 ` Honnappa Nagarahalli
0 siblings, 1 reply; 260+ messages in thread
From: Stephen Hemminger @ 2018-11-30 23:44 UTC (permalink / raw)
To: Mattias Rönnblom
Cc: Honnappa Nagarahalli, Van Haaren, Harry, dev, nd,
Dharmik Thakkar, Malvika Gupta, Gavin Hu (Arm Technology China)
On Fri, 30 Nov 2018 21:56:30 +0100
Mattias Rönnblom <mattias.ronnblom@ericsson.com> wrote:
> On 2018-11-30 03:13, Honnappa Nagarahalli wrote:
> >>
> >> Reinventing RCU is not helping anyone.
> > IMO, this depends on what the rte_tqs has to offer and what the requirements are. Before starting this patch, I looked at the liburcu APIs. I have to say, fairly quickly (no offense) I concluded that this does not address DPDK's needs. I took a deeper look at the APIs/code in the past day and I still concluded the same. My partial analysis (analysis of more APIs can be done, I do not have cycles at this point) is as follows:
> >
> > The reader threads' information is maintained in a linked list[1]. This linked list is protected by a mutex lock[2]. Any additions/deletions/traversals of this list are blocking and cannot happen in parallel.
> >
> > The API, 'synchronize_rcu' [3] (similar functionality to rte_tqs_check call) is a blocking call. There is no option provided to make it non-blocking. The writer spins cycles while waiting for the grace period to get over.
> >
>
> Wouldn't the options be call_rcu, which rarely blocks, or defer_rcu()
> which never? Why would the average application want to wait for the
> grace period to be over anyway?
>
> > 'synchronize_rcu' also has grace period lock [4]. If I have multiple writers running on data plane threads, I cannot call this API to reclaim the memory in the worker threads as it will block other worker threads. This means, there is an extra thread required (on the control plane?) which does garbage collection and a method to push the pointers from worker threads to the garbage collection thread. This also means the time duration from delete to free increases putting pressure on amount of memory held up.
> > Since this API cannot be called concurrently by multiple writers, each writer has to wait for other writer's grace period to get over (i.e. multiple writer threads cannot overlap their grace periods).
>
> "Real" DPDK applications typically have to interact with the outside
> world using interfaces beyond DPDK packet I/O, and this is best done via
> an intermediate "control plane" thread running in the DPDK application.
> Typically, this thread would also be the RCU writer and "garbage
> collector", I would say.
>
> >
> > This API also has to traverse the linked list which is not very well suited for calling on data plane.
> >
> > I have not gone too much into rcu_thread_offline[5] API. This again needs to be used in worker cores and does not look to be very optimal.
> >
> > I have glanced at rcu_quiescent_state [6], it wakes up the thread calling 'synchronize_rcu' which seems good amount of code for the data plane.
> >
>
> Wouldn't the typical DPDK lcore worker call rcu_quiescent_state() after
> processing a burst of packets? If so, I would more lean toward
> "negligible overhead", than "a good amount of code".
>
> I must admit I didn't look at your library in detail, but I must still
> ask: if TQS is basically RCU, why isn't it called RCU? And why aren't
> the API calls named in a similar manner?
We used liburcu at Brocade with DPDK. It was just a case of putting rcu_quiescent_state in the packet handling
loop. There were a bunch more cases where control thread needed to register/unregister as part of RCU.
I think any library would have that issue with user supplied threads. You need a "worry about me" and
a "don't worry about me" API in the library.
There is also a tradeoff between call_rcu and defer_rcu about what context the RCU callback happens.
You really need a control thread to handle the RCU cleanup.
The point is that RCU steps into the application design, and liburcu seems to be flexible enough
and well documented enough to allow for more options.
* Re: [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library
2018-11-30 23:44 ` Stephen Hemminger
@ 2018-12-01 18:37 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-12-01 18:37 UTC (permalink / raw)
To: Stephen Hemminger, Mattias Rönnblom
Cc: Van Haaren, Harry, dev, nd, Dharmik Thakkar, Malvika Gupta,
Gavin Hu (Arm Technology China),
Honnappa Nagarahalli, nd
>
> On Fri, 30 Nov 2018 21:56:30 +0100
> Mattias Rönnblom <mattias.ronnblom@ericsson.com> wrote:
>
> > On 2018-11-30 03:13, Honnappa Nagarahalli wrote:
> > >>
> > >> Reinventing RCU is not helping anyone.
> > > IMO, this depends on what the rte_tqs has to offer and what the
> requirements are. Before starting this patch, I looked at the liburcu APIs. I
> have to say, fairly quickly (no offense) I concluded that this does not address
> DPDK's needs. I took a deeper look at the APIs/code in the past day and I still
> concluded the same. My partial analysis (analysis of more APIs can be done, I
> do not have cycles at this point) is as follows:
> > >
> > > The reader threads' information is maintained in a linked list[1]. This
> linked list is protected by a mutex lock[2]. Any additions/deletions/traversals
> of this list are blocking and cannot happen in parallel.
> > >
> > > The API, 'synchronize_rcu' [3] (similar functionality to rte_tqs_check call)
> is a blocking call. There is no option provided to make it non-blocking. The
> writer spins cycles while waiting for the grace period to get over.
> > >
> >
> > Wouldn't the options be call_rcu, which rarely blocks, or defer_rcu()
> > which never?
call_rcu (I do not know about defer_rcu, have you looked at the implementation to verify your claim?) requires a separate thread that does garbage collection (this forces a programming model, the thread is even launched by the library). call_rcu() allows you to batch and defer the work to the garbage collector thread. In the garbage collector thread, when 'synchronize_rcu' is called, it spins for at least 1 grace period. Deferring and batching also have the side effect that memory is being held up for longer time.
Why would the average application want to wait for the
> > grace period to be over anyway?
I assume when you say 'average application', you mean the writer(s) are on control plane.
It has been agreed (in the context of rte_hash) that writer(s) can be on data plane. In this case, 'synchronize_rcu' cannot be called from data plane. If call_rcu has to be called, it adds additional cycles to push the pointers (or any data) to the garbage collector thread to the data plane. I kindly suggest you to take a look for more details in liburcu code and the rte_tqs code.
Additionally, call_rcu function is more than 10 lines.
> >
> > > 'synchronize_rcu' also has grace period lock [4]. If I have multiple writers
> running on data plane threads, I cannot call this API to reclaim the memory in
> the worker threads as it will block other worker threads. This means, there is
> an extra thread required (on the control plane?) which does garbage
> collection and a method to push the pointers from worker threads to the
> garbage collection thread. This also means the time duration from delete to
> free increases putting pressure on amount of memory held up.
> > > Since this API cannot be called concurrently by multiple writers, each
> writer has to wait for other writer's grace period to get over (i.e. multiple
> writer threads cannot overlap their grace periods).
> >
> > "Real" DPDK applications typically have to interact with the outside
> > world using interfaces beyond DPDK packet I/O, and this is best done
> > via an intermediate "control plane" thread running in the DPDK application.
> > Typically, this thread would also be the RCU writer and "garbage
> > collector", I would say.
> >
Agree, that is one way to do it and it comes with its own issues as I described above.
> > >
> > > This API also has to traverse the linked list which is not very well suited for
> calling on data plane.
> > >
> > > I have not gone too much into rcu_thread_offline[5] API. This again needs
> to be used in worker cores and does not look to be very optimal.
> > >
> > > I have glanced at rcu_quiescent_state [6], it wakes up the thread calling
> 'synchronize_rcu' which seems good amount of code for the data plane.
> > >
> >
> > Wouldn't the typical DPDK lcore worker call rcu_quiescent_state()
> > after processing a burst of packets? If so, I would more lean toward
> > "negligible overhead", than "a good amount of code".
DPDK is being used in embedded and real time applications as well. There, processing a burst of packets is not possible due to low latency requirements. Hence it is not possible to amortize the cost.
> >
> > I must admit I didn't look at your library in detail, but I must still
> > ask: if TQS is basically RCU, why isn't it called RCU? And why isn't
> > the API calls named in a similar manner?
I kindly request you to take a look at the patch. More than that, if you have not done already, please take a look at the liburcu implementation as well.
TQS is not RCU (Read-Copy-Update). TQS helps implement RCU. TQS helps to understand when the threads have passed through the quiescent state.
I am also not sure why the name liburcu has RCU in it. It does not do any Read-Copy-Update.
>
>
> We used liburcu at Brocade with DPDK. It was just a case of putting
> rcu_quiescent_state in the packet handling
> loop. There were a bunch more cases where control thread needed to
> register/unregister as part of RCU.
I assume that the packet handling loop was a polling loop (correct me if I am wrong). With the support of event dev, we have rte_event_dequeue_burst API which supports blocking till the packets are available (or blocking for an extended period of time). This means that, before calling this API, the thread needs to inform "don't worry about me". Once, this API returns, it needs to inform "worry about me". So, these two APIs need to be efficient. Please look at rte_tqs_register/unregister APIs.
> I think any library would have that issue with user supplied threads. You need
> a "worry about me" and
> a "don't worry about me" API in the library.
>
> There is also a tradeoff between call_rcu and defer_rcu about what context
> the RCU callback happens.
> You really need a control thread to handle the RCU cleanup.
That is if you choose to use liburcu. rte_tqs provides the ability to do cleanup efficiently without the need for a control plane thread in DPDK use cases.
>
> The point is that RCU steps into the application design, and liburcu seems to
> be flexible enough
> and well documented enough to allow for more options.
Agree that RCU steps into application design. That is the reason rte_tqs just does enough and provides the flexibility to the application to implement the RCU however it feels like. DPDK has also stepped into application design by providing libraries like hash, LPM etc.
I do not understand why you think liburcu is flexible enough for DPDK use cases. I mentioned the specific use cases where liburcu is not useful. I did not find anything in the documentation to help me solve these use cases. Appreciate if you could help me understand how I can use liburcu to solve these use cases.
* Re: [dpdk-dev] [RFC 2/3] tqs: add thread quiescent state library
2018-11-28 15:25 ` Ananyev, Konstantin
@ 2018-12-07 7:27 ` Honnappa Nagarahalli
2018-12-07 17:29 ` Stephen Hemminger
2018-12-12 9:29 ` Ananyev, Konstantin
0 siblings, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-12-07 7:27 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: nd, Dharmik Thakkar, Malvika Gupta,
Gavin Hu (Arm Technology China),
Honnappa Nagarahalli, nd
> >
> > > > +
> > > > +/* Add a reader thread, running on an lcore, to the list of
> > > > +threads
> > > > + * reporting their quiescent state on a TQS variable.
> > > > + */
> > > > +int __rte_experimental
> > > > +rte_tqs_register_lcore(struct rte_tqs *v, unsigned int lcore_id) {
> > > > + TQS_RETURN_IF_TRUE((v == NULL || lcore_id >=
> > > RTE_TQS_MAX_LCORE),
> > > > + -EINVAL);
> > >
> > > It is not very good practice to make function return different
> > > values and behave in a different way in debug/non-debug mode.
> > > I'd say that for slow-path (functions in .c) it is always good to
> > > check input parameters.
> > > For fast-path (functions in .h) we sometimes skip such checking, but
> > > debug mode can probably use RTE_ASSERT() or so.
> > Makes sense, I will change this in the next version.
> >
> > >
> > >
> > > lcore_id >= RTE_TQS_MAX_LCORE
> > >
> > > Is this limitation really necessary?
> > I added this limitation because currently DPDK application cannot take
> > a mask more than 64bit wide. Otherwise, this should be as big as
> RTE_MAX_LCORE.
> > I see that in the case of '-lcores' option, the number of lcores can
> > be more than the number of PEs. In this case, we still need a MAX limit (but
> can be bigger than 64).
> >
> > > First it means that only lcores can use that API (at least data-path
> > > part), second even today many machines have more than 64 cores.
> > > I think you can easily avoid such limitation, if instead of
> > > requiring lcore_id as input parameter, you'll just make it return index of
> next available entry in w[].
> > > Then tqs_update() can take that index as input parameter.
> > I had thought about a similar approach based on IDs. I was concerned
> > that ID will be one more thing to manage for the application. But, I see the
> limitations of the current approach now. I will change it to allocation based.
> This will support even non-EAL pthreads as well.
>
> Yes, with such approach non-lcore threads will be able to use it also.
>
I realized that rte_tqs_register_lcore/ rte_tqs_unregister_lcore need to be efficient as they can be called from the worker's packet processing loop (rte_event_dequeue_burst allows blocking. So, the worker thread needs to call rte_tqs_unregister_lcore before calling rte_event_dequeue_burst and rte_tqs_register_lcore before starting packet processing). Allocating the thread ID in these functions will make them more complex.
I suggest that we change the argument 'lcore_id' to 'thread_id'. The application could use 'lcore_id' as 'thread_id' if threads are mapped to physical cores 1:1.
If the threads are not mapped 1:1 to physical cores, the threads need to use a thread_id in the range of 0 - RTE_TQS_MAX_THREADS. I do not see that DPDK has a thread_id concept. For TQS, the thread IDs are global (i.e. not per TQS variable). I could provide APIs to do the thread ID allocation, but I think the thread ID allocation should not be part of this library. Such a thread ID might be useful for other libraries.
<snip>
>
> >
> > >
> > > > +
> > > > + while (lcore_mask) {
> > > > + l = __builtin_ctz(lcore_mask);
> > > > + if (v->w[l].cnt != t)
> > > > + break;
> > >
> > > As I understand, that makes control-path function progress dependent
> > > on simultaneous invocation of data-path functions.
> > I agree that the control-path function progress (for ex: how long to
> > wait for freeing the memory) depends on invocation of the data-path
> > functions. The separation of 'start', 'check' and the option not to block in
> 'check' provide the flexibility for control-path to do some other work if it
> chooses to.
> >
> > > In some cases that might cause control-path to hang.
> > > Let say if data-path function wouldn't be called, or user invokes
> > > control-path and data-path functions from the same thread.
> > I agree with the case of data-path function not getting called. I
> > would consider that as programming error. I can document that warning in
> the rte_tqs_check API.
>
> Sure, it can be documented.
> Though that means, that each data-path thread would have to do explicit
> update() call for every tqs it might use.
> I just think that it would complicate things and might limit usage of the library
> quite significantly.
Each data path thread has to report its quiescent state. Hence, each data-path thread has to call update() (similar to how rte_timer_manage() has to be called periodically on the worker thread).
Do you have any particular use case in mind where this fails?
>
> >
> > In the case of same thread calling both control-path and data-path
> > functions, it would depend on the sequence of the calls. The following
> sequence should not cause any hangs:
> > Worker thread
> > 1) 'deletes' an entry from a lock-free data structure
> > 2) rte_tqs_start
> > 3) rte_tqs_update
> > 4) rte_tqs_check (wait == 1 or wait == 0)
> > 5) 'free' the entry deleted in 1)
>
> That an interesting idea, and that should help, I think.
> Probably worth to have {2,3,4} sequence as a new high level function.
>
Yes, this is a good idea. Such a function would be applicable only in the worker thread. I would prefer to leave it to the application to take care.
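As a concrete illustration of the 1)-5) sequence above, here is a toy single-variable mock-up of the three primitives involved (the tqs_* names mirror the RFC, but the bodies are illustrative stand-ins, not the actual implementation):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Toy stand-in for a TQS variable tracking N_THREADS readers. */
#define N_THREADS 2

struct tqs {
	_Atomic uint64_t token;          /* bumped by start() */
	_Atomic uint64_t cnt[N_THREADS]; /* copied by each update() */
};

/* Writer: begin waiting for readers; returns the token to check against. */
static uint64_t tqs_start(struct tqs *v)
{
	return atomic_fetch_add(&v->token, 1) + 1;
}

/* Reader: report quiescent state by copying the current token. */
static void tqs_update(struct tqs *v, int tid)
{
	atomic_store(&v->cnt[tid], atomic_load(&v->token));
}

/* Writer: non-blocking check (wait == 0); 1 if all readers caught up. */
static int tqs_check(struct tqs *v, uint64_t t)
{
	for (int i = 0; i < N_THREADS; i++)
		if (atomic_load(&v->cnt[i]) < t)
			return 0;
	return 1;
}
```

The worker-thread sequence is then: delete the entry, t = tqs_start(&v), tqs_update(&v, self_id), poll tqs_check(&v, t), and finally free the entry.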
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC 2/3] tqs: add thread quiescent state library
2018-12-07 7:27 ` Honnappa Nagarahalli
@ 2018-12-07 17:29 ` Stephen Hemminger
2018-12-11 6:40 ` Honnappa Nagarahalli
2018-12-12 9:29 ` Ananyev, Konstantin
1 sibling, 1 reply; 260+ messages in thread
From: Stephen Hemminger @ 2018-12-07 17:29 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: Ananyev, Konstantin, dev, nd, Dharmik Thakkar, Malvika Gupta,
Gavin Hu (Arm Technology China)
On Fri, 7 Dec 2018 07:27:16 +0000
Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
> > >
> > > > > +
> > > > > +/* Add a reader thread, running on an lcore, to the list of
> > > > > +threads
> > > > > + * reporting their quiescent state on a TQS variable.
> > > > > + */
> > > > > +int __rte_experimental
> > > > > +rte_tqs_register_lcore(struct rte_tqs *v, unsigned int lcore_id) {
> > > > > + TQS_RETURN_IF_TRUE((v == NULL || lcore_id >=
> > > > RTE_TQS_MAX_LCORE),
> > > > > + -EINVAL);
> > > >
> > > > It is not very good practice to make function return different
> > > > values and behave in a different way in debug/non-debug mode.
> > > > I'd say that for slow-path (functions in .c) it is always good to
> > > > check input parameters.
> > > > For fast-path (functions in .h) we sometimes skip such checking, but
> > > > debug mode can probably use RTE_ASSERT() or so.
> > > Makes sense, I will change this in the next version.
> > >
> > > >
> > > >
> > > > lcore_id >= RTE_TQS_MAX_LCORE
> > > >
> > > > Is this limitation really necessary?
> > > I added this limitation because currently DPDK application cannot take
> > > a mask more than 64bit wide. Otherwise, this should be as big as
> > RTE_MAX_LCORE.
> > > I see that in the case of '-lcores' option, the number of lcores can
> > > be more than the number of PEs. In this case, we still need a MAX limit (but
> > can be bigger than 64).
> > >
> > > > First it means that only lcores can use that API (at least data-path
> > > > part), second even today many machines have more than 64 cores.
> > > > I think you can easily avoid such limitation, if instead of
> > > > requiring lcore_id as input parameter, you'll just make it return index of
> > next available entry in w[].
> > > > Then tqs_update() can take that index as input parameter.
> > > I had thought about a similar approach based on IDs. I was concerned
> > > that ID will be one more thing to manage for the application. But, I see the
> > limitations of the current approach now. I will change it to allocation based.
> > This will support even non-EAL pthreads as well.
> >
> > Yes, with such approach non-lcore threads will be able to use it also.
> >
> I realized that rte_tqs_register_lcore/ rte_tqs_unregister_lcore need to be efficient as they can be called from the worker's packet processing loop (rte_event_dequeue_burst allows blocking. So, the worker thread needs to call rte_tqs_unregister_lcore before calling rte_event_dequeue_burst and rte_tqs_register_lcore before starting packet processing). Allocating the thread ID in these functions will make them more complex.
>
> I suggest that we change the argument 'lcore_id' to 'thread_id'. The application could use 'lcore_id' as 'thread_id' if threads are mapped to physical cores 1:1.
>
> If the threads are not mapped 1:1 to physical cores, the threads need to use a thread_id in the range of 0 - RTE_TQS_MAX_THREADS. I do not see that DPDK has a thread_id concept. For TQS, the thread IDs are global (i.e. not per TQS variable). I could provide APIs to do the thread ID allocation, but I think the thread ID allocation should not be part of this library. Such thread ID might be useful for other libraries.
>
> <snip
Thread id is problematic since Glibc doesn't want to give it out.
You have to roll your own function to do gettid().
It is not as easy as just that. Plus what about preemption?
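On Linux, the usual workaround at the time was a raw syscall (glibc did not yet ship a gettid() wrapper); a minimal sketch:

```c
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

/* Linux-only: fetch the kernel thread ID directly, since glibc did not
 * expose a gettid() wrapper when this thread was written. */
static pid_t my_gettid(void)
{
	return (pid_t)syscall(SYS_gettid);
}
```

Note the returned IDs live in the kernel's PID space, which is exactly why they cannot be used directly to index a small per-thread array such as w[].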
* Re: [dpdk-dev] [RFC 2/3] tqs: add thread quiescent state library
2018-12-07 17:29 ` Stephen Hemminger
@ 2018-12-11 6:40 ` Honnappa Nagarahalli
2018-12-13 12:26 ` Burakov, Anatoly
0 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-12-11 6:40 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Ananyev, Konstantin, dev, nd, Dharmik Thakkar, Malvika Gupta,
Gavin Hu (Arm Technology China),
nd
>
> > > >
> > > > > > +
> > > > > > +/* Add a reader thread, running on an lcore, to the list of
> > > > > > +threads
> > > > > > + * reporting their quiescent state on a TQS variable.
> > > > > > + */
> > > > > > +int __rte_experimental
> > > > > > +rte_tqs_register_lcore(struct rte_tqs *v, unsigned int lcore_id) {
> > > > > > + TQS_RETURN_IF_TRUE((v == NULL || lcore_id >=
> > > > > RTE_TQS_MAX_LCORE),
> > > > > > + -EINVAL);
> > > > >
> > > > > It is not very good practice to make function return different
> > > > > values and behave in a different way in debug/non-debug mode.
> > > > > I'd say that for slow-path (functions in .c) it is always good
> > > > > to check input parameters.
> > > > > For fast-path (functions in .h) we sometimes skip such checking,
> > > > > but debug mode can probably use RTE_ASSERT() or so.
> > > > Makes sense, I will change this in the next version.
> > > >
> > > > >
> > > > >
> > > > > lcore_id >= RTE_TQS_MAX_LCORE
> > > > >
> > > > > Is this limitation really necessary?
> > > > I added this limitation because currently DPDK application cannot
> > > > take a mask more than 64bit wide. Otherwise, this should be as big

> > > > as
> > > RTE_MAX_LCORE.
> > > > I see that in the case of '-lcores' option, the number of lcores
> > > > can be more than the number of PEs. In this case, we still need a
> > > > MAX limit (but
> > > can be bigger than 64).
> > > >
> > > > > First it means that only lcores can use that API (at least
> > > > > data-path part), second even today many machines have more than 64
> cores.
> > > > > I think you can easily avoid such limitation, if instead of
> > > > > requiring lcore_id as input parameter, you'll just make it
> > > > > return index of
> > > next available entry in w[].
> > > > > Then tqs_update() can take that index as input parameter.
> > > > I had thought about a similar approach based on IDs. I was
> > > > concerned that ID will be one more thing to manage for the
> > > > application. But, I see the
> > > limitations of the current approach now. I will change it to allocation
> based.
> > > This will support even non-EAL pthreads as well.
> > >
> > > Yes, with such approach non-lcore threads will be able to use it also.
> > >
> > I realized that rte_tqs_register_lcore/ rte_tqs_unregister_lcore need to be
> efficient as they can be called from the worker's packet processing loop
> (rte_event_dequeue_burst allows blocking. So, the worker thread needs to
> call rte_tqs_unregister_lcore before calling rte_event_dequeue_burst and
> rte_tqs_register_lcore before starting packet processing). Allocating the
> thread ID in these functions will make them more complex.
> >
> > I suggest that we change the argument 'lcore_id' to 'thread_id'. The
> application could use 'lcore_id' as 'thread_id' if threads are mapped to
> physical cores 1:1.
> >
> > If the threads are not mapped 1:1 to physical cores, the threads need to use
> a thread_id in the range of 0 - RTE_TQS_MAX_THREADS. I do not see that
> DPDK has a thread_id concept. For TQS, the thread IDs are global (i.e. not per
> TQS variable). I could provide APIs to do the thread ID allocation, but I think
> the thread ID allocation should not be part of this library. Such thread ID
> might be useful for other libraries.
> >
> > <snip
>
>
> Thread id is problematic since Glibc doesn't want to give it out.
> You have to roll your own function to do gettid().
> It is not as easy as just that. Plus what about preemption?
Agree. I looked into this further. The rte_gettid function uses a system call (BSD and Linux). I am not clear on the space of the ID returned (as well). I do not think it is guaranteed that it will be within the narrow range that is required here.
My suggestion would be to add a set of APIs that allow for allocation of thread IDs within a given range of 0 to <predefined MAX>.
* Re: [dpdk-dev] [RFC 2/3] tqs: add thread quiescent state library
2018-12-07 7:27 ` Honnappa Nagarahalli
2018-12-07 17:29 ` Stephen Hemminger
@ 2018-12-12 9:29 ` Ananyev, Konstantin
2018-12-13 7:39 ` Honnappa Nagarahalli
1 sibling, 1 reply; 260+ messages in thread
From: Ananyev, Konstantin @ 2018-12-12 9:29 UTC (permalink / raw)
To: Honnappa Nagarahalli, dev
Cc: nd, Dharmik Thakkar, Malvika Gupta, Gavin Hu (Arm Technology China), nd
> > >
> > > > > +
> > > > > +/* Add a reader thread, running on an lcore, to the list of
> > > > > +threads
> > > > > + * reporting their quiescent state on a TQS variable.
> > > > > + */
> > > > > +int __rte_experimental
> > > > > +rte_tqs_register_lcore(struct rte_tqs *v, unsigned int lcore_id) {
> > > > > + TQS_RETURN_IF_TRUE((v == NULL || lcore_id >=
> > > > RTE_TQS_MAX_LCORE),
> > > > > + -EINVAL);
> > > >
> > > > It is not very good practice to make function return different
> > > > values and behave in a different way in debug/non-debug mode.
> > > > I'd say that for slow-path (functions in .c) it is always good to
> > > > check input parameters.
> > > > For fast-path (functions in .h) we sometimes skip such checking, but
> > > > debug mode can probably use RTE_ASSERT() or so.
> > > Makes sense, I will change this in the next version.
> > >
> > > >
> > > >
> > > > lcore_id >= RTE_TQS_MAX_LCORE
> > > >
> > > > Is this limitation really necessary?
> > > I added this limitation because currently DPDK application cannot take
> > > a mask more than 64bit wide. Otherwise, this should be as big as
> > RTE_MAX_LCORE.
> > > I see that in the case of '-lcores' option, the number of lcores can
> > > be more than the number of PEs. In this case, we still need a MAX limit (but
> > can be bigger than 64).
> > >
> > > > First it means that only lcores can use that API (at least data-path
> > > > part), second even today many machines have more than 64 cores.
> > > > I think you can easily avoid such limitation, if instead of
> > > > requiring lcore_id as input parameter, you'll just make it return index of
> > next available entry in w[].
> > > > Then tqs_update() can take that index as input parameter.
> > > I had thought about a similar approach based on IDs. I was concerned
> > > that ID will be one more thing to manage for the application. But, I see the
> > limitations of the current approach now. I will change it to allocation based.
> > This will support even non-EAL pthreads as well.
> >
> > Yes, with such approach non-lcore threads will be able to use it also.
> >
> I realized that rte_tqs_register_lcore/ rte_tqs_unregister_lcore need to be efficient as they can be called from the worker's packet
> processing loop (rte_event_dequeue_burst allows blocking. So, the worker thread needs to call rte_tqs_unregister_lcore before calling
> rte_event_dequeue_burst and rte_tqs_register_lcore before starting packet processing). Allocating the thread ID in these functions will
> make them more complex.
>
> I suggest that we change the argument 'lcore_id' to 'thread_id'. The application could use 'lcore_id' as 'thread_id' if threads are mapped to
> physical cores 1:1.
>
> If the threads are not mapped 1:1 to physical cores, the threads need to use a thread_id in the range of 0 - RTE_TQS_MAX_THREADS. I do
> not see that DPDK has a thread_id concept. For TQS, the thread IDs are global (i.e. not per TQS variable). I could provide APIs to do the
> thread ID allocation, but I think the thread ID allocation should not be part of this library. Such thread ID might be useful for other libraries.
I don't think there is any point to introduce new thread_id concept just for that library.
After all we already have a concept of lcore_id which pretty much serves the same purpose.
I still think that we need to either:
a) make register/unregister to work with any valid lcore_id (<= RTE_MAX_LCORE)
b) make register/unregister to return index in w[]
For a) will need mask bigger than 64bits.
b) would allow to use data-path API by non-lcores threads too,
plus w[] would occupy less space, and check() might be faster.
Though yes, as a drawback, for b) register/unregister probably would need
extra 'while(CAS(...));' loop.
I suppose the question here is: do you foresee a lot of concurrent register/unregister
at data-path?
>
> <snip>
>
> >
> > >
> > > >
> > > > > +
> > > > > + while (lcore_mask) {
> > > > > + l = __builtin_ctz(lcore_mask);
> > > > > + if (v->w[l].cnt != t)
> > > > > + break;
> > > >
> > > > As I understand, that makes control-path function progress dependent
> > > > on simultaneous invocation of data-path functions.
> > > I agree that the control-path function progress (for ex: how long to
> > > wait for freeing the memory) depends on invocation of the data-path
> > > functions. The separation of 'start', 'check' and the option not to block in
> > 'check' provide the flexibility for control-path to do some other work if it
> > chooses to.
> > >
> > > > In some cases that might cause control-path to hang.
> > > > Let say if data-path function wouldn't be called, or user invokes
> > > > control-path and data-path functions from the same thread.
> > > I agree with the case of data-path function not getting called. I
> > > would consider that as programming error. I can document that warning in
> > the rte_tqs_check API.
> >
> > Sure, it can be documented.
> > Though that means, that each data-path thread would have to do explicit
> > update() call for every tqs it might use.
> > I just think that it would complicate things and might limit usage of the library
> > quite significantly.
> Each data path thread has to report its quiescent state. Hence, each data-path thread has to call update() (similar to how
> rte_timer_manage() has to be called periodically on the worker thread).
I understand that.
Though that means that each data-path thread has to know explicitly what rcu vars it accesses.
Would be hard to adopt such API with rcu vars used inside some library.
But ok, as I understand people do use QSBR approach in their apps and
find it useful.
> Do you have any particular use case in mind where this fails?
Let say it means that library can't be used to add/del RX/TX ethdev callbacks
in a safe manner.
BTW, two side questions:
1) As I understand what you propose is very similar to QSBR main concept.
Wouldn't it be better to name it accordingly to avoid confusion (or at least document it somewhere).
I think someone else already raised that question.
2) Would QSBR be the only technique in that lib?
Any plans to add something similar to GP one too (with MBs at reader-side)?
>
> >
> > >
> > > In the case of same thread calling both control-path and data-path
> > > functions, it would depend on the sequence of the calls. The following
> > sequence should not cause any hangs:
> > > Worker thread
> > > 1) 'deletes' an entry from a lock-free data structure
> > > 2) rte_tqs_start
> > > 3) rte_tqs_update
> > > 4) rte_tqs_check (wait == 1 or wait == 0)
> > > 5) 'free' the entry deleted in 1)
> >
> > That an interesting idea, and that should help, I think.
> > Probably worth to have {2,3,4} sequence as a new high level function.
> >
> Yes, this is a good idea. Such a function would be applicable only in the worker thread. I would prefer to leave it to the application to take
> care.
Yes, it would be applicable only to worker thread, but why we can't have a function for it?
Let say it could be 2 different functions: one doing {2,3,4} - for worker threads,
and second doing just {2,4} - for control threads.
Or it could be just one function that takes extra parameter: lcore_id/w[] index.
If it is some predefined invalid value (-1 or so), step #3 will be skipped.
Konstantin
* Re: [dpdk-dev] [RFC 2/3] tqs: add thread quiescent state library
2018-12-12 9:29 ` Ananyev, Konstantin
@ 2018-12-13 7:39 ` Honnappa Nagarahalli
2018-12-17 13:14 ` Ananyev, Konstantin
0 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-12-13 7:39 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: nd, Dharmik Thakkar, Malvika Gupta,
Gavin Hu (Arm Technology China),
nd, nd
>
>
> > > >
> > > > > > +
> > > > > > +/* Add a reader thread, running on an lcore, to the list of
> > > > > > +threads
> > > > > > + * reporting their quiescent state on a TQS variable.
> > > > > > + */
> > > > > > +int __rte_experimental
> > > > > > +rte_tqs_register_lcore(struct rte_tqs *v, unsigned int lcore_id) {
> > > > > > + TQS_RETURN_IF_TRUE((v == NULL || lcore_id >=
> > > > > RTE_TQS_MAX_LCORE),
> > > > > > + -EINVAL);
> > > > >
> > > > > It is not very good practice to make function return different
> > > > > values and behave in a different way in debug/non-debug mode.
> > > > > I'd say that for slow-path (functions in .c) it is always good
> > > > > to check input parameters.
> > > > > For fast-path (functions in .h) we sometimes skip such checking,
> > > > > but debug mode can probably use RTE_ASSERT() or so.
> > > > Makes sense, I will change this in the next version.
> > > >
> > > > >
> > > > >
> > > > > lcore_id >= RTE_TQS_MAX_LCORE
> > > > >
> > > > > Is this limitation really necessary?
> > > > I added this limitation because currently DPDK application cannot
> > > > take a mask more than 64bit wide. Otherwise, this should be as big
> > > > as
> > > RTE_MAX_LCORE.
> > > > I see that in the case of '-lcores' option, the number of lcores
> > > > can be more than the number of PEs. In this case, we still need a
> > > > MAX limit (but
> > > can be bigger than 64).
> > > >
> > > > > First it means that only lcores can use that API (at least
> > > > > data-path part), second even today many machines have more than 64
> cores.
> > > > > I think you can easily avoid such limitation, if instead of
> > > > > requiring lcore_id as input parameter, you'll just make it
> > > > > return index of
> > > next available entry in w[].
> > > > > Then tqs_update() can take that index as input parameter.
> > > > I had thought about a similar approach based on IDs. I was
> > > > concerned that ID will be one more thing to manage for the
> > > > application. But, I see the
> > > limitations of the current approach now. I will change it to allocation
> based.
> > > This will support even non-EAL pthreads as well.
> > >
> > > Yes, with such approach non-lcore threads will be able to use it also.
> > >
> > I realized that rte_tqs_register_lcore/ rte_tqs_unregister_lcore need
> > to be efficient as they can be called from the worker's packet
> > processing loop (rte_event_dequeue_burst allows blocking. So, the
> > worker thread needs to call rte_tqs_unregister_lcore before calling
> rte_event_dequeue_burst and rte_tqs_register_lcore before starting packet
> processing). Allocating the thread ID in these functions will make them more
> complex.
> >
> > I suggest that we change the argument 'lcore_id' to 'thread_id'. The
> > application could use 'lcore_id' as 'thread_id' if threads are mapped to
> physical cores 1:1.
> >
> > If the threads are not mapped 1:1 to physical cores, the threads need
> > to use a thread_id in the range of 0 - RTE_TQS_MAX_THREADS. I do not
> > see that DPDK has a thread_id concept. For TQS, the thread IDs are global
> (i.e. not per TQS variable). I could provide APIs to do the thread ID allocation,
> but I think the thread ID allocation should not be part of this library. Such
> thread ID might be useful for other libraries.
>
> I don't think there is any point to introduce new thread_id concept just for
> that library.
Currently, we have rte_gettid API. It is being used by rte_spinlock. However, the thread ID returned here is the thread ID as defined by OS. rte_spinlock APIs do not care who defines the thread ID as long as those IDs are unique per thread. I think, if we have a thread_id concept that covers non-eal threads as well, it might help other libraries too. For ex: [1] talks about the limitation of per-lcore cache. I think this limitation can be removed easily if we could have a thread_id that is in a small, well defined space (rather than OS defined thread ID which may be an arbitrary number). I see similar issues mentioned for rte_timer.
It might be useful in the dynamic threads Bruce talked about at the Dublin summit (I am not sure on this one, just speculating).
[1] https://doc.dpdk.org/guides/prog_guide/env_abstraction_layer.html#known-issue-label
> After all we already have a concept of lcore_id which pretty much serves the
> same purpose.
> I still think that we need to either:
> a) make register/unregister to work with any valid lcore_id (<=
> RTE_MAX_LCORE)
I have made this change already, it will be there in the next version.
> b) make register/unregister to return index in w[]
>
> For a) will need mask bigger than 64bits.
> b) would allow to use data-path API by non-lcores threads too, plus w[]
> would occupy less space, and check() might be faster.
> Though yes, as a drawback, for b) register/unregister probably would need
> extra 'while(CAS(...));' loop.
Along with the CAS, we also need to search for available index in the array.
> I suppose the question here is: do you foresee a lot of concurrent
> register/unregister at data-path?
IMO, yes, because of the event dev API being blocking.
We can solve this by providing separate APIs for allocation/freeing of the IDs. I am just questioning where these APIs should be.
>
> >
> > <snip>
> >
> > >
> > > >
> > > > >
> > > > > > +
> > > > > > + while (lcore_mask) {
> > > > > > + l = __builtin_ctz(lcore_mask);
> > > > > > + if (v->w[l].cnt != t)
> > > > > > + break;
> > > > >
> > > > > As I understand, that makes control-path function progress
> > > > > dependent on simultaneous invocation of data-path functions.
> > > > I agree that the control-path function progress (for ex: how long
> > > > to wait for freeing the memory) depends on invocation of the
> > > > data-path functions. The separation of 'start', 'check' and the
> > > > option not to block in
> > > 'check' provide the flexibility for control-path to do some other
> > > work if it chooses to.
> > > >
> > > > > In some cases that might cause control-path to hang.
> > > > > Let say if data-path function wouldn't be called, or user
> > > > > invokes control-path and data-path functions from the same thread.
> > > > I agree with the case of data-path function not getting called. I
> > > > would consider that as programming error. I can document that
> > > > warning in
> > > the rte_tqs_check API.
> > >
> > > Sure, it can be documented.
> > > Though that means, that each data-path thread would have to do
> > > explicit
> > > update() call for every tqs it might use.
> > > I just think that it would complicate things and might limit usage
> > > of the library quite significantly.
> > Each data path thread has to report its quiescent state. Hence, each
> > data-path thread has to call update() (similar to how
> > rte_timer_manage() has to be called periodically on the worker thread).
>
> I understand that.
> Though that means that each data-path thread has to know explicitly what rcu
> vars it accesses.
Yes. That is correct. It is both good and bad. It provides flexibility to reduce the overhead. For ex: in pipeline mode, it may be that a particular data structure is accessed only by some of the threads in the application. In this case, this library allows for per data structure vars, which reduces the overhead. This applies to service cores as well.
> Would be hard to adopt such API with rcu vars used inside some library.
> But ok, as I understand people do use QSBR approach in their apps and find it
> useful.
It can be adopted in the library with different levels of assumptions/constraints.
1) With the assumption that the data plane threads will update the quiescent state. For ex: for rte_hash library we could ask the user to pass the TQS variable as input and rte_hash writer APIs can call rte_tqs_start and rte_tqs_check APIs.
2) If the assumption in 1) is not good, updating of the quiescent state can be hidden in the library, but again with the assumption that the data plane library API is called on a regular basis. For ex: the rte_tqs_update can be called within rte_hash_lookup API.
3) If we do not want to assume that the data plane API will be called on a regular basis, then the rte_tqs_register/unregister APIs need to be used before and after entering the critical section along with calling rte_tqs_update API. For ex: rte_hash_lookup should have the sequence rte_tqs_register, <critical section>, rte_tqs_unregister, rte_tqs_update. (very similar to GP)
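Option 3) could look roughly like this inside a reader-side library call; the names, the bit-mask registration, and the lookup() wrapper are all illustrative assumptions, not the rte_hash code:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative sketch of option 3: register/unregister wrapped around the
 * critical section inside a lookup-style call.  'reg_mask', 'token' and
 * 'cnt' stand in for the TQS variable's state; names are hypothetical. */
static _Atomic uint32_t reg_mask;  /* currently registered readers */
static _Atomic uint64_t token;     /* writer's grace-period counter */
static _Atomic uint64_t cnt[32];   /* per-thread quiescent counters */

static void tqs_register(int tid)
{
	atomic_fetch_or(&reg_mask, UINT32_C(1) << tid);
}

static void tqs_unregister(int tid)
{
	atomic_fetch_and(&reg_mask, ~(UINT32_C(1) << tid));
}

static void tqs_update(int tid)
{
	atomic_store(&cnt[tid], atomic_load(&token));
}

/* Reader-side sequence: register, critical section, unregister, update. */
static int lookup(int tid, const int *table, int idx)
{
	int val;

	tqs_register(tid);   /* writer now waits for this thread */
	val = table[idx];    /* critical section: entry cannot be freed */
	tqs_unregister(tid);
	tqs_update(tid);     /* report quiescent state */
	return val;
}
```

The cost of this option is two extra atomic read-modify-write operations per lookup, which is the overhead trade-off the three levels above are weighing.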
>
> > Do you have any particular use case in mind where this fails?
>
> Let say it means that library can't be used to add/del RX/TX ethdev callbacks
> in a safe manner.
I need to understand this better. I will look at rte_ethdev library.
>
> BTW, two side questions:
> 1) As I understand what you propose is very similar to QSBR main concept.
> Wouldn't it be better to name it accordingly to avoid confusion (or at least
> document it somewhere).
> I think someone else already raised that question.
QSBR stands for Quiescent State Based Reclamation. This library already has 'Thread Quiescent State' in the name. Others have questioned/suggested why not use RCU instead. I called it thread quiescent state as this library just helps determine if all the readers have entered the quiescent state. It does not do anything else.
However, you are also bringing up an important point: 'will we add other methods of memory reclamation'? With that in mind, maybe we should not call it RCU. But maybe call it rte_rcu_qsbr_xxx? It will also future-proof the API in case we want to add additional RCU types.
> 2) Would QSBR be the only technique in that lib?
> Any plans to add something similar to GP one too (with MBs at reader-side)?
I believe, by GP, you mean general-purpose RCU. In my understanding, QSBR is the one with the least overhead. For DPDK applications, I think reducing that overhead is important. The GP flavor adds additional overhead on the reader side. I did not see a need to add any additional ones as of now. But if there are use cases that cannot be achieved with the proposed APIs, we can definitely expand it.
>
> >
> > >
> > > >
> > > > In the case of same thread calling both control-path and data-path
> > > > functions, it would depend on the sequence of the calls. The
> > > > following
> > > sequence should not cause any hangs:
> > > > Worker thread
> > > > 1) 'deletes' an entry from a lock-free data structure
> > > > 2) rte_tqs_start
> > > > 3) rte_tqs_update
> > > > 4) rte_tqs_check (wait == 1 or wait == 0)
> > > > 5) 'free' the entry deleted in 1)
> > >
> > > That an interesting idea, and that should help, I think.
> > > Probably worth to have {2,3,4} sequence as a new high level function.
> > >
> > Yes, this is a good idea. Such a function would be applicable only in
> > the worker thread. I would prefer to leave it to the application to take care.
>
> Yes, it would be applicable only to worker thread, but why we can't have a
> function for it?
> Let say it could be 2 different functions: one doing {2,3,4} - for worker threads,
> and second doing just {2,4} - for control threads.
> Or it could be just one function that takes extra parameter: lcore_id/w[] index.
> If it is some predefined invalid value (-1 or so), step #3 will be skipped.
The rte_tqs_start and rte_tqs_check are separated into 2 APIs so that the writers do not have to spend CPU/memory cycles polling for the readers' quiescent state. In the context of DPDK, this overhead will be significant (at least equal to the length of 1 while loop on the worker core). This is one of the key features of this library. Combining 2,[3], 4 will defeat this purpose. For ex: in the rte_hash library, whenever a writer on the data path calls rte_hash_add, (with 2,3,4 combined) it will wait for the rest of the readers to enter quiescent state. i.e. the performance will come down whenever a rte_hash_add is called.
I am trying to understand your concern. If it is that 'programmers may not use the right sequence', I would prefer to treat that as a programming error. Maybe it is better addressed by providing debugging capabilities.
>
> Konstantin
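The non-blocking check described above enables a deferred-free pattern on the writer side, sketched here (the defer queue and all names are illustrative, not part of the RFC): the writer records the token taken at delete time and frees lazily whenever a later check shows the grace period completed.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative deferred-free queue for a single writer. */
struct defer_entry {
	uint64_t token; /* value returned by start() at delete time */
	void *ptr;      /* memory 'deleted' but not yet freed */
};

#define DEFER_MAX 64

static struct defer_entry defer_q[DEFER_MAX];
static size_t defer_n;

/* Record a deleted entry without blocking for readers. */
static void defer_free(uint64_t token, void *ptr)
{
	defer_q[defer_n].token = token;
	defer_q[defer_n].ptr = ptr;
	defer_n++;
}

/* Free every deferred entry whose grace period has completed, i.e.
 * token <= done_token as reported by the writer's non-blocking checks.
 * Returns the number of entries actually freed. */
static size_t defer_reclaim(uint64_t done_token)
{
	size_t i = 0, freed = 0;

	while (i < defer_n) {
		if (defer_q[i].token <= done_token) {
			free(defer_q[i].ptr);
			defer_q[i] = defer_q[--defer_n]; /* swap-remove */
			freed++;
		} else {
			i++;
		}
	}
	return freed;
}
```

This keeps rte_hash_add-style writer calls cheap: the delete path only enqueues, and reclamation happens whenever the writer chooses to poll.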
* Re: [dpdk-dev] [RFC 2/3] tqs: add thread quiescent state library
2018-12-11 6:40 ` Honnappa Nagarahalli
@ 2018-12-13 12:26 ` Burakov, Anatoly
2018-12-18 4:30 ` Honnappa Nagarahalli
0 siblings, 1 reply; 260+ messages in thread
From: Burakov, Anatoly @ 2018-12-13 12:26 UTC (permalink / raw)
To: Honnappa Nagarahalli, Stephen Hemminger
Cc: Ananyev, Konstantin, dev, nd, Dharmik Thakkar, Malvika Gupta,
Gavin Hu (Arm Technology China)
On 11-Dec-18 6:40 AM, Honnappa Nagarahalli wrote:
>>
>>>>>
>>>>>>> +
>>>>>>> +/* Add a reader thread, running on an lcore, to the list of
>>>>>>> +threads
>>>>>>> + * reporting their quiescent state on a TQS variable.
>>>>>>> + */
>>>>>>> +int __rte_experimental
>>>>>>> +rte_tqs_register_lcore(struct rte_tqs *v, unsigned int lcore_id) {
>>>>>>> + TQS_RETURN_IF_TRUE((v == NULL || lcore_id >=
>>>>>> RTE_TQS_MAX_LCORE),
>>>>>>> + -EINVAL);
>>>>>>
>>>>>> It is not very good practice to make function return different
>>>>>> values and behave in a different way in debug/non-debug mode.
>>>>>> I'd say that for slow-path (functions in .c) it is always good
>>>>>> to check input parameters.
>>>>>> For fast-path (functions in .h) we sometimes skip such checking,
>>>>>> but debug mode can probably use RTE_ASSERT() or so.
>>>>> Makes sense, I will change this in the next version.
>>>>>
>>>>>>
>>>>>>
>>>>>> lcore_id >= RTE_TQS_MAX_LCORE
>>>>>>
>>>>>> Is this limitation really necessary?
>>>>> I added this limitation because currently DPDK application cannot
>>>>> take a mask more than 64bit wide. Otherwise, this should be as big
>>>>> as
>>>> RTE_MAX_LCORE.
>>>>> I see that in the case of '-lcores' option, the number of lcores
>>>>> can be more than the number of PEs. In this case, we still need a
>>>>> MAX limit (but
>>>> can be bigger than 64).
>>>>>
>>>>>> First it means that only lcores can use that API (at least
>>>>>> data-path part), second even today many machines have more than 64
>> cores.
>>>>>> I think you can easily avoid such limitation, if instead of
>>>>>> requiring lcore_id as input parameter, you'll just make it
>>>>>> return index of
>>>> next available entry in w[].
>>>>>> Then tqs_update() can take that index as input parameter.
>>>>> I had thought about a similar approach based on IDs. I was
>>>>> concerned that ID will be one more thing to manage for the
>>>>> application. But, I see the
>>>> limitations of the current approach now. I will change it to allocation
>> based.
>>>> This will support even non-EAL pthreads as well.
>>>>
>>>> Yes, with such approach non-lcore threads will be able to use it also.
>>>>
>>> I realized that rte_tqs_register_lcore/ rte_tqs_unregister_lcore need to be
>> efficient as they can be called from the worker's packet processing loop
>> (rte_event_dequeue_burst allows blocking. So, the worker thread needs to
>> call rte_tqs_unregister_lcore before calling rte_event_dequeue_burst and
>> rte_tqs_register_lcore before starting packet processing). Allocating the
>> thread ID in these functions will make them more complex.
>>>
>>> I suggest that we change the argument 'lcore_id' to 'thread_id'. The
>> application could use 'lcore_id' as 'thread_id' if threads are mapped to
>> physical cores 1:1.
>>>
>>> If the threads are not mapped 1:1 to physical cores, the threads need to use
>> a thread_id in the range of 0 - RTE_TQS_MAX_THREADS. I do not see that
>> DPDK has a thread_id concept. For TQS, the thread IDs are global (i.e. not per
>> TQS variable). I could provide APIs to do the thread ID allocation, but I think
>> the thread ID allocation should not be part of this library. Such thread ID
>> might be useful for other libraries.
>>>
>>> <snip
>>
>>
>> Thread id is problematic since Glibc doesn't want to give it out.
>> You have to roll your own function to do gettid().
>> It is not as easy as just that. Plus what about preemption?
>
> Agree. I looked into this further. The rte_gettid function uses a system call (BSD and Linux). I am not clear on the space of the ID returned (as well). I do not think it is guaranteed that it will be within the narrow range that is required here.
>
> My suggestion would be to add a set of APIs that would allow for allocation of thread IDs which are within a given range of 0 to <predefined MAX>
>
System-provided thread IDs would probably also be potentially
non-unique in a multiprocess scenario?
--
Thanks,
Anatoly
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC 2/3] tqs: add thread quiescent state library
2018-12-13 7:39 ` Honnappa Nagarahalli
@ 2018-12-17 13:14 ` Ananyev, Konstantin
0 siblings, 0 replies; 260+ messages in thread
From: Ananyev, Konstantin @ 2018-12-17 13:14 UTC (permalink / raw)
To: Honnappa Nagarahalli, dev
Cc: nd, Dharmik Thakkar, Malvika Gupta,
Gavin Hu (Arm Technology China),
nd, nd
> >
> > > > >
> > > > > > > +
> > > > > > > +/* Add a reader thread, running on an lcore, to the list of
> > > > > > > +threads
> > > > > > > + * reporting their quiescent state on a TQS variable.
> > > > > > > + */
> > > > > > > +int __rte_experimental
> > > > > > > +rte_tqs_register_lcore(struct rte_tqs *v, unsigned int lcore_id) {
> > > > > > > + TQS_RETURN_IF_TRUE((v == NULL || lcore_id >=
> > > > > > RTE_TQS_MAX_LCORE),
> > > > > > > + -EINVAL);
> > > > > >
> > > > > > It is not very good practice to make function return different
> > > > > > values and behave in a different way in debug/non-debug mode.
> > > > > > I'd say that for slow-path (functions in .c) it is always good
> > > > > > to check input parameters.
> > > > > > For fast-path (functions in .h) we sometimes skip such checking,
> > > > > > but debug mode can probably use RTE_ASSERT() or so.
> > > > > Makes sense, I will change this in the next version.
> > > > >
> > > > > >
> > > > > >
> > > > > > lcore_id >= RTE_TQS_MAX_LCORE
> > > > > >
> > > > > > Is this limitation really necessary?
> > > > > I added this limitation because currently DPDK application cannot
> > > > > take a mask more than 64bit wide. Otherwise, this should be as big
> > > > > as
> > > > RTE_MAX_LCORE.
> > > > > I see that in the case of '-lcores' option, the number of lcores
> > > > > can be more than the number of PEs. In this case, we still need a
> > > > > MAX limit (but
> > > > can be bigger than 64).
> > > > >
> > > > > > First it means that only lcores can use that API (at least
> > > > > > data-path part), second even today many machines have more than 64
> > cores.
> > > > > > I think you can easily avoid such limitation, if instead of
> > > > > > requiring lcore_id as input parameter, you'll just make it
> > > > > > return index of
> > > > next available entry in w[].
> > > > > > Then tqs_update() can take that index as input parameter.
> > > > > I had thought about a similar approach based on IDs. I was
> > > > > concerned that ID will be one more thing to manage for the
> > > > > application. But, I see the
> > > > limitations of the current approach now. I will change it to allocation
> > based.
> > > > This will support even non-EAL pthreads as well.
> > > >
> > > > Yes, with such approach non-lcore threads will be able to use it also.
> > > >
> > > I realized that rte_tqs_register_lcore/ rte_tqs_unregister_lcore need
> > > to be efficient as they can be called from the worker's packet
> > > processing loop (rte_event_dequeue_burst allows blocking. So, the
> > > worker thread needs to call rte_tqs_unregister_lcore before calling
> > rte_event_dequeue_burst and rte_tqs_register_lcore before starting packet
> > processing). Allocating the thread ID in these functions will make them more
> > complex.
> > >
> > > I suggest that we change the argument 'lcore_id' to 'thread_id'. The
> > > application could use 'lcore_id' as 'thread_id' if threads are mapped to
> > physical cores 1:1.
> > >
> > > If the threads are not mapped 1:1 to physical cores, the threads need
> > > to use a thread_id in the range of 0 - RTE_TQS_MAX_THREADS. I do not
> > > see that DPDK has a thread_id concept. For TQS, the thread IDs are global
> > (i.e. not per TQS variable). I could provide APIs to do the thread ID allocation,
> > but I think the thread ID allocation should not be part of this library. Such
> > thread ID might be useful for other libraries.
> >
> > I don't think there is any point to introduce new thread_id concept just for
> > that library.
> Currently, we have rte_gettid API. It is being used by rte_spinlock. However, the thread ID returned here is the thread ID as defined by OS.
> rte_spinlock APIs do not care who defines the thread ID as long as those IDs are unique per thread. I think, if we have a thread_id concept
> that covers non-eal threads as well, it might help other libraries too. For ex: [1] talks about the limitation of per-lcore cache.
> I think this
> limitation can be removed easily if we could have a thread_id that is in a small, well defined space (rather than OS defined thread ID which
> may be an arbitrary number). I see similar issues mentioned for rte_timer.
If we just introduce a new ID (let's name it thread_id), then we'll just replace one limitation with another.
It would still be a local_cache[], now indexed by some thread_id instead of the current lcore_id.
I don't see how that would be better than the current scheme.
To let any arbitrary thread use the mempool's cache we need something smarter
than just a local_cache[] per ID, but without loss of performance.
> It might be useful in the dynamic threads Bruce talked about at the Dublin summit (I am not sure on this one, just speculating).
That's probably about make lcore_id allocation/freeing to be dynamic.
>
> [1] https://doc.dpdk.org/guides/prog_guide/env_abstraction_layer.html#known-issue-label
>
> > After all we already have a concept of lcore_id which pretty much serves the
> > same purpose.
> > I still think that we need to either:
> > a) make register/unregister to work with any valid lcore_id (<=
> > RTE_MAX_LCORE)
> I have made this change already, it will be there in the next version.
Ok.
>
> > b) make register/unregister to return index in w[]
> >
> > For a) will need mask bigger than 64bits.
> > b) would allow to use data-path API by non-lcores threads too, plus w[]
> > would occupy less space, and check() might be faster.
> > Though yes, as a drawback, for b) register/unregister probably would need
> > extra 'while(CAS(...));' loop.
> Along with the CAS, we also need to search for available index in the array.
Sure, but I thought that one is relatively cheap compared to the CAS itself
(probably not, as the cache line with the data will be shared between cores).
>
> > I suppose the question here do you foresee a lot of concurrent
> > register/unregister at data-path?
> IMO, yes, because of the event dev API being blocking.
> We can solve this by providing separate APIs for allocation/freeing of the IDs. I am just questioning where these APIs should be.
>
> >
> > >
> > > <snip>
> > >
> > > >
> > > > >
> > > > > >
> > > > > > > +
> > > > > > > + while (lcore_mask) {
> > > > > > > + l = __builtin_ctz(lcore_mask);
> > > > > > > + if (v->w[l].cnt != t)
> > > > > > > + break;
> > > > > >
> > > > > > As I understand, that makes control-path function progress
> > > > > > dependent on simultaneous invocation of data-path functions.
> > > > > I agree that the control-path function progress (for ex: how long
> > > > > to wait for freeing the memory) depends on invocation of the
> > > > > data-path functions. The separation of 'start', 'check' and the
> > > > > option not to block in
> > > > 'check' provide the flexibility for control-path to do some other
> > > > work if it chooses to.
> > > > >
> > > > > > In some cases that might cause control-path to hang.
> > > > > > Let say if data-path function wouldn't be called, or user
> > > > > > invokes control-path and data-path functions from the same thread.
> > > > > I agree with the case of data-path function not getting called. I
> > > > > would consider that as programming error. I can document that
> > > > > warning in
> > > > the rte_tqs_check API.
> > > >
> > > > Sure, it can be documented.
> > > > Though that means, that each data-path thread would have to do
> > > > explicit
> > > > update() call for every tqs it might use.
> > > > I just think that it would complicate things and might limit usage
> > > > of the library quite significantly.
> > > Each data path thread has to report its quiescent state. Hence, each
> > > data-path thread has to call update() (similar to how
> > > rte_timer_manage() has to be called periodically on the worker thread).
> >
> > I understand that.
> > Though that means that each data-path thread has to know explicitly what rcu
> > vars it accesses.
> Yes. That is correct. It is both good and bad. It is providing flexibility to reduce the overhead. For ex: in pipeline mode, it may be that a
> particular data structure is accessed only by some of the threads in the application. In this case, this library allows for per data structure
> vars, which reduces the overhead. This applies for service cores as well.
>
> > Would be hard to adopt such API with rcu vars used inside some library.
> > But ok, as I understand people do use QSBR approach in their apps and find it
> > useful.
> It can be adopted in the library with different levels of assumptions/constraints.
> 1) With the assumption that the data plane threads will update the quiescent state. For ex: for rte_hash library we could ask the user to pass
> the TQS variable as input and rte_hash writer APIs can call rte_tqs_start and rte_tqs_check APIs.
> 2) If the assumption in 1) is not good, updating of the quiescent state can be hidden in the library, but again with the assumption that the
> data plane library API is called on a regular basis. For ex: the rte_tqs_update can be called within rte_hash_lookup API.
> 3) If we do not want to assume that the data plane API will be called on a regular basis, then the rte_tqs_register/unregister APIs need to be
> used before and after entering the critical section along with calling rte_tqs_update API. For ex: rte_hash_lookup should have the sequence
> rte_tqs_register, <critical section>, rte_tqs_unregister, rte_tqs_update. (very similar to GP)
#3 is surely possible, but it seems quite expensive.
Anyway, as I said before, people do use the QSBR approach -
it has a small overhead for readers and is relatively straightforward.
So let's start with that one, and keep the ability to extend the lib
with new methods in the future.
>
> >
> > > Do you have any particular use case in mind where this fails?
> >
> > Let say it means that library can't be used to add/del RX/TX ethdev callbacks
> > in a safe manner.
> I need to understand this better. I will look at rte_ethdev library.
Ok, you can also have a look at: lib/librte_bpf/bpf_pkt.c
to check how we overcome it now.
>
> >
> > BTW, two side questions:
> > 1) As I understand what you propose is very similar to QSBR main concept.
> > Wouldn't it be better to name it accordingly to avoid confusion (or at least
> > document it somewhere).
> > I think someone else already raised that question.
> QSBR stands for Quiescent State Based Reclamation. This library already has 'Thread Quiescent State' in the name. Others have
> questioned/suggested why not use RCU instead. I called it thread quiescent state as this library just helps determine if all the readers have
> entered the quiescent state. It does not do anything else.
>
> However, you are also bringing up an important point, 'will we add other methods of memory reclamation'? With that in mind, may be we
> should not call it RCU. But, may be call it as rte_rcu_qsbr_xxx? It will also future proof the API incase we want to add additional RCU types.
Yep, that sounds like a good approach to me.
>
> > 2) Would QSBR be the only technique in that lib?
> > Any plans to add something similar to GP one too (with MBs at reader-side)?
> I believe, by GP, you mean general-purpose RCU.
Yes.
> In my understanding QSBR is the one with least overhead. For DPDK applications, I think
> reducing that overhead is important. The GP adds additional over head on the reader side.
Yes, but it provides better flexibility.
> I did not see a need to add any additional ones as of now.
Take your #3 solution above, for example.
I think GP will be cheaper than register/unregister for each library call.
> But if there are use cases that cannot be achieved with the proposed APIs, we can definitely expand it.
>
> >
> > >
> > > >
> > > > >
> > > > > In the case of same thread calling both control-path and data-path
> > > > > functions, it would depend on the sequence of the calls. The
> > > > > following
> > > > sequence should not cause any hangs:
> > > > > Worker thread
> > > > > 1) 'deletes' an entry from a lock-free data structure
> > > > > 2) rte_tqs_start
> > > > > 3) rte_tqs_update
> > > > > 4) rte_tqs_check (wait == 1 or wait == 0)
> > > > > 5) 'free' the entry deleted in 1)
> > > >
> > > > That an interesting idea, and that should help, I think.
> > > > Probably worth to have {2,3,4} sequence as a new high level function.
> > > >
> > > Yes, this is a good idea. Such a function would be applicable only in
> > > the worker thread. I would prefer to leave it to the application to take care.
> >
> > Yes, it would be applicable only to worker thread, but why we can't have a
> > function for it?
> > Let say it could be 2 different functions: one doing {2,3,4} - for worker threads,
> > and second doing just {2,4} - for control threads.
> > Or it could be just one function that takes extra parameter: lcore_id/w[] index.
> > If it is some predefined invalid value (-1 or so), step #3 will be skipped.
> The rte_tqs_start and rte_tqs_check are separated into 2 APIs so that the writers do not have to spend CPU/memory cycles polling for the
> readers' quiescent state. In the context of DPDK, this overhead will be significant (at least equal to the length of 1 while loop on the worker
> core). This is one of the key features of this library. Combining 2,[3], 4 will defeat this purpose. For ex: in the rte_hash library, whenever a
> writer on the data path calls rte_hash_add, (with 2,3,4 combined) it will wait for the rest of the readers to enter quiescent state. i.e. the
> performance will come down whenever a rte_hash_add is called.
I am not suggesting to replace start+[update+]check with one mega-function.
No problem with the currently defined API.
I am talking about an additional function for users where performance is not the main concern -
they just need a function that does things in the proper way for them.
I think having such an extra function will simplify their lives;
again, they can use it as a reference to understand the proper sequence of calls they need to make on their own.
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC 2/3] tqs: add thread quiescent state library
2018-12-13 12:26 ` Burakov, Anatoly
@ 2018-12-18 4:30 ` Honnappa Nagarahalli
2018-12-18 6:31 ` Stephen Hemminger
0 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-12-18 4:30 UTC (permalink / raw)
To: Burakov, Anatoly, Stephen Hemminger
Cc: Ananyev, Konstantin, dev, nd, Dharmik Thakkar, Malvika Gupta,
Gavin Hu (Arm Technology China),
nd
>
> On 11-Dec-18 6:40 AM, Honnappa Nagarahalli wrote:
> >>
> >>>>>
> >>>>>>> +
> >>>>>>> +/* Add a reader thread, running on an lcore, to the list of
> >>>>>>> +threads
> >>>>>>> + * reporting their quiescent state on a TQS variable.
> >>>>>>> + */
> >>>>>>> +int __rte_experimental
> >>>>>>> +rte_tqs_register_lcore(struct rte_tqs *v, unsigned int lcore_id) {
> >>>>>>> + TQS_RETURN_IF_TRUE((v == NULL || lcore_id >=
> >>>>>> RTE_TQS_MAX_LCORE),
> >>>>>>> + -EINVAL);
> >>>>>>
> >>>>>> It is not very good practice to make function return different
> >>>>>> values and behave in a different way in debug/non-debug mode.
> >>>>>> I'd say that for slow-path (functions in .c) it is always good to
> >>>>>> check input parameters.
> >>>>>> For fast-path (functions in .h) we sometimes skip such checking,
> >>>>>> but debug mode can probably use RTE_ASSERT() or so.
> >>>>> Makes sense, I will change this in the next version.
> >>>>>
> >>>>>>
> >>>>>>
> >>>>>> lcore_id >= RTE_TQS_MAX_LCORE
> >>>>>>
> >>>>>> Is this limitation really necessary?
> >>>>> I added this limitation because currently DPDK application cannot
> >>>>> take a mask more than 64bit wide. Otherwise, this should be as big
> >>>>> as
> >>>> RTE_MAX_LCORE.
> >>>>> I see that in the case of '-lcores' option, the number of lcores
> >>>>> can be more than the number of PEs. In this case, we still need a
> >>>>> MAX limit (but
> >>>> can be bigger than 64).
> >>>>>
> >>>>>> First it means that only lcores can use that API (at least
> >>>>>> data-path part), second even today many machines have more than
> >>>>>> 64
> >> cores.
> >>>>>> I think you can easily avoid such limitation, if instead of
> >>>>>> requiring lcore_id as input parameter, you'll just make it return
> >>>>>> index of
> >>>> next available entry in w[].
> >>>>>> Then tqs_update() can take that index as input parameter.
> >>>>> I had thought about a similar approach based on IDs. I was
> >>>>> concerned that ID will be one more thing to manage for the
> >>>>> application. But, I see the
> >>>> limitations of the current approach now. I will change it to
> >>>> allocation
> >> based.
> >>>> This will support even non-EAL pthreads as well.
> >>>>
> >>>> Yes, with such approach non-lcore threads will be able to use it also.
> >>>>
> >>> I realized that rte_tqs_register_lcore/ rte_tqs_unregister_lcore
> >>> need to be
> >> efficient as they can be called from the worker's packet processing
> >> loop (rte_event_dequeue_burst allows blocking. So, the worker thread
> >> needs to call rte_tqs_unregister_lcore before calling
> >> rte_event_dequeue_burst and rte_tqs_register_lcore before starting
> >> packet processing). Allocating the thread ID in these functions will make
> them more complex.
> >>>
> >>> I suggest that we change the argument 'lcore_id' to 'thread_id'. The
> >> application could use 'lcore_id' as 'thread_id' if threads are mapped
> >> to physical cores 1:1.
> >>>
> >>> If the threads are not mapped 1:1 to physical cores, the threads
> >>> need to use
> >> a thread_id in the range of 0 - RTE_TQS_MAX_THREADS. I do not see
> >> that DPDK has a thread_id concept. For TQS, the thread IDs are global
> >> (i.e. not per TQS variable). I could provide APIs to do the thread ID
> >> allocation, but I think the thread ID allocation should not be part
> >> of this library. Such thread ID might be useful for other libraries.
> >>>
> >>> <snip
> >>
> >>
> >> Thread id is problematic since Glibc doesn't want to give it out.
> >> You have to roll your own function to do gettid().
> >> It is not as easy as just that. Plus what about preemption?
> >
> > Agree. I looked into this further. The rte_gettid function uses a system call
> (BSD and Linux). I am not clear on the space of the ID returned (as well). I do
> not think it is guaranteed that it will be with in a narrow range that is required
> here.
> >
> > My suggestion would be to add a set of APIs that would allow for
> > allocation of thread IDs which are within a given range of 0 to
> > <predefined MAX>
> >
>
> System-provided thread-ID's would probably also be potentially non-unique in
> multiprocess scenario?
For Linux, rte_gettid is implemented as:

int rte_sys_gettid(void)
{
	return (int)syscall(SYS_gettid);
}
Came across [1], which states that thread IDs are unique across the system.
For BSD, thr_self is used. [2] says it provides system-wide unique thread IDs.
[1] https://stackoverflow.com/questions/6372102/what-is-the-difference-between-pthread-self-and-gettid-which-one-should-i-u
[2] https://nxmnpg.lemoda.net/2/thr_self
>
> --
> Thanks,
> Anatoly
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC 2/3] tqs: add thread quiescent state library
2018-12-18 4:30 ` Honnappa Nagarahalli
@ 2018-12-18 6:31 ` Stephen Hemminger
0 siblings, 0 replies; 260+ messages in thread
From: Stephen Hemminger @ 2018-12-18 6:31 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: Burakov, Anatoly, Ananyev, Konstantin, dev, nd, Dharmik Thakkar,
Malvika Gupta, Gavin Hu (Arm Technology China)
On Tue, 18 Dec 2018 04:30:39 +0000
Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
> >
> > On 11-Dec-18 6:40 AM, Honnappa Nagarahalli wrote:
> > >>
> > >>>>>
> > >>>>>>> +
> > >>>>>>> +/* Add a reader thread, running on an lcore, to the list of
> > >>>>>>> +threads
> > >>>>>>> + * reporting their quiescent state on a TQS variable.
> > >>>>>>> + */
> > >>>>>>> +int __rte_experimental
> > >>>>>>> +rte_tqs_register_lcore(struct rte_tqs *v, unsigned int lcore_id) {
> > >>>>>>> + TQS_RETURN_IF_TRUE((v == NULL || lcore_id >=
> > >>>>>> RTE_TQS_MAX_LCORE),
> > >>>>>>> + -EINVAL);
> > >>>>>>
> > >>>>>> It is not very good practice to make function return different
> > >>>>>> values and behave in a different way in debug/non-debug mode.
> > >>>>>> I'd say that for slow-path (functions in .c) it is always good to
> > >>>>>> check input parameters.
> > >>>>>> For fast-path (functions in .h) we sometimes skip such checking,
> > >>>>>> but debug mode can probably use RTE_ASSERT() or so.
> > >>>>> Makes sense, I will change this in the next version.
> > >>>>>
> > >>>>>>
> > >>>>>>
> > >>>>>> lcore_id >= RTE_TQS_MAX_LCORE
> > >>>>>>
> > >>>>>> Is this limitation really necessary?
> > >>>>> I added this limitation because currently DPDK application cannot
> > >>>>> take a mask more than 64bit wide. Otherwise, this should be as big
> > >>>>> as
> > >>>> RTE_MAX_LCORE.
> > >>>>> I see that in the case of '-lcores' option, the number of lcores
> > >>>>> can be more than the number of PEs. In this case, we still need a
> > >>>>> MAX limit (but
> > >>>> can be bigger than 64).
> > >>>>>
> > >>>>>> First it means that only lcores can use that API (at least
> > >>>>>> data-path part), second even today many machines have more than
> > >>>>>> 64
> > >> cores.
> > >>>>>> I think you can easily avoid such limitation, if instead of
> > >>>>>> requiring lcore_id as input parameter, you'll just make it return
> > >>>>>> index of
> > >>>> next available entry in w[].
> > >>>>>> Then tqs_update() can take that index as input parameter.
> > >>>>> I had thought about a similar approach based on IDs. I was
> > >>>>> concerned that ID will be one more thing to manage for the
> > >>>>> application. But, I see the
> > >>>> limitations of the current approach now. I will change it to
> > >>>> allocation
> > >> based.
> > >>>> This will support even non-EAL pthreads as well.
> > >>>>
> > >>>> Yes, with such approach non-lcore threads will be able to use it also.
> > >>>>
> > >>> I realized that rte_tqs_register_lcore/ rte_tqs_unregister_lcore
> > >>> need to be
> > >> efficient as they can be called from the worker's packet processing
> > >> loop (rte_event_dequeue_burst allows blocking. So, the worker thread
> > >> needs to call rte_tqs_unregister_lcore before calling
> > >> rte_event_dequeue_burst and rte_tqs_register_lcore before starting
> > >> packet processing). Allocating the thread ID in these functions will make
> > them more complex.
> > >>>
> > >>> I suggest that we change the argument 'lcore_id' to 'thread_id'. The
> > >> application could use 'lcore_id' as 'thread_id' if threads are mapped
> > >> to physical cores 1:1.
> > >>>
> > >>> If the threads are not mapped 1:1 to physical cores, the threads
> > >>> need to use
> > >> a thread_id in the range of 0 - RTE_TQS_MAX_THREADS. I do not see
> > >> that DPDK has a thread_id concept. For TQS, the thread IDs are global
> > >> (i.e. not per TQS variable). I could provide APIs to do the thread ID
> > >> allocation, but I think the thread ID allocation should not be part
> > >> of this library. Such thread ID might be useful for other libraries.
> > >>>
> > >>> <snip
> > >>
> > >>
> > >> Thread id is problematic since Glibc doesn't want to give it out.
> > >> You have to roll your own function to do gettid().
> > >> It is not as easy as just that. Plus what about preemption?
> > >
> > > Agree. I looked into this further. The rte_gettid function uses a system call
> > (BSD and Linux). I am not clear on the space of the ID returned (as well). I do
> > not think it is guaranteed that it will be with in a narrow range that is required
> > here.
> > >
> > > My suggestion would be to add a set of APIs that would allow for
> > > allocation of thread IDs which are within a given range of 0 to
> > > <predefined MAX>
> > >
> >
> > System-provided thread-ID's would probably also be potentially non-unique in
> > multiprocess scenario?
> For linux, rte_gettid is implemented as:
> int rte_sys_gettid(void)
> {
> return (int)syscall(SYS_gettid);
> }
>
> Came across [1] which states, thread-IDs are unique across the system.
>
> For BSD, thr_self is used. [2] says it provides system wide unique thread IDs.
>
> [1] https://stackoverflow.com/questions/6372102/what-is-the-difference-between-pthread-self-and-gettid-which-one-should-i-u
> [2] https://nxmnpg.lemoda.net/2/thr_self
>
> >
> > --
> > Thanks,
> > Anatoly
Using the thread ID directly on Linux is battling against the glibc gods' wishes.
Bad things may come to those who disobey them :-)
But really, many libraries need to do the same thing; it is worth looking around.
The bigger issue is pid and thread ID recycling within the limited range allowed.
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [RFC v2 0/2] rcu: add RCU library supporting QSBR mechanism
2018-11-22 3:30 [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library Honnappa Nagarahalli
` (4 preceding siblings ...)
2018-11-27 22:28 ` Stephen Hemminger
@ 2018-12-22 2:14 ` Honnappa Nagarahalli
2018-12-22 2:14 ` [dpdk-dev] [RFC v2 1/2] " Honnappa Nagarahalli
` (2 more replies)
2019-03-19 4:52 ` [dpdk-dev] [PATCH 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
` (8 subsequent siblings)
14 siblings, 3 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-12-22 2:14 UTC (permalink / raw)
To: dev, konstantin.ananyev, stephen, paulmck, honnappa.nagarahalli
Cc: gavin.hu, dharmik.thakkar, nd
Lock-less data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
(for ex: real-time applications).
In the following paras, the term 'memory' refers to memory allocated
by typical APIs like malloc or anything that is representative of
memory, for ex: an index of a free element array.
Since these data structures are lock-less, writers and readers access
them simultaneously. Hence, while removing an element from a data
structure, the writer cannot return the memory to the allocator without
knowing that the readers are no longer referencing that element/memory.
This requires separating the operation of removing an element into 2 steps:
Delete: in this step, the writer removes the reference to the element from
the data structure but does not return the associated memory to the
allocator. This will ensure that new readers will not get a reference to
the removed element. Removing the reference is an atomic operation.
Free(Reclaim): in this step, the writer returns the memory to the
memory allocator, only after knowing that all the readers have stopped
referencing the deleted element.
This library helps the writer determine when it is safe to free the
memory.
This library makes use of thread Quiescent State (QS). QS can be
defined as 'any point in the thread execution where the thread does
not hold a reference to shared memory'. It is up to the application to
determine its quiescent state. Let us consider the following diagram:
Time -------------------------------------------------->
| |
RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
| |
RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
| |
RT3 $++++****D1****+++***|D2***|++++++**D2*****++++$
| |
|<--->|
Del | Free
|
Cannot free memory
during this period
(Grace Period)
RTx - Reader thread
< and > - Start and end of while(1) loop
***Dx*** - Reader thread is accessing the shared data structure Dx.
i.e. critical section.
+++ - Reader thread is not accessing any shared data structure.
i.e. non critical section or quiescent state.
Del - Point in time when the reference to the entry is removed using
atomic operation.
Free - Point in time when the writer can free the entry.
Grace Period - Time duration between Del and Free, during which memory cannot
be freed.
As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
accessing D2, if the writer has to remove an element from D2, the
writer cannot free the memory associated with that element immediately.
The writer can return the memory to the allocator only after the reader
stops referencing D2. In other words, reader thread RT1 has to enter
a quiescent state.
Similarly, since thread RT3 is also accessing D2, the writer has to wait
till RT3 enters a quiescent state as well.
However, the writer does not need to wait for RT2 to enter quiescent state.
Thread RT2 was not accessing D2 when the delete operation happened.
So, RT2 will not get a reference to the deleted entry.
Note that the critical sections for D2 and D3 are quiescent states
for D1, i.e. for a given data structure Dx, any point in the thread
execution that does not reference Dx is a quiescent state.
Since memory is not freed immediately, there might be a need for
provisioning of additional memory, depending on the application requirements.
It is important that this library keeps the overhead of
identifying the end of the grace period, and of the subsequent freeing of
memory, to a minimum. One has to understand how the grace period and
critical section affect this overhead.
The writer has to poll the readers to identify the end of the grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during the grace period. Longer grace periods
exacerbate these conditions.
The duration of the grace period is proportional to the length of the
critical sections and the number of reader threads. Keeping the critical
sections smaller will keep the grace period smaller. However, keeping the
critical sections smaller requires additional CPU cycles (due to additional
reporting) in the readers.
Hence, we need the combination of a small grace period and a large critical
section. This library addresses this by allowing the writer to do
other work without having to block till the readers report their quiescent
state.
For DPDK applications, the start and end of the while(1) loop (where no
shared data structures are accessed) act as perfect quiescent states. This
combines all the shared data structure accesses into a single, large
critical section, which helps keep the overhead on the reader side to
a minimum.
DPDK supports the pipeline model of packet processing and service cores.
In these use cases, a given data structure may not be used by all the
workers in the application. To provide the required flexibility, this
library has a concept of QS variable. The application can create one
QS variable per data structure to help it track the end of grace
period for each data structure.
The application can initialize a QS variable using the API rte_rcu_qsbr_init.
Each reader thread is assumed to have a unique thread ID. Currently, the
management of the thread ID (for ex: allocation/free) is left to the
application. The thread ID should be in the range of 0 to
RTE_RCU_MAX_THREADS. The application could also use lcore_id as the
thread ID where applicable.
rte_rcu_qsbr_register_thread API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
However, the application must ensure that the reader thread is ready to
report the QS status before the writer checks the QS.
The application can trigger the reader threads to report their QS
status by calling the API rte_rcu_qsbr_start. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
rte_rcu_qsbr_start returns a token to each caller.
The application has to call the rte_rcu_qsbr_check API with the token to get
the current QS status. An option to block till all the reader threads enter
the QS is provided. If this API indicates that all the reader threads have
entered the QS, the application can free the deleted entry.
The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock free. Hence, they
can be called concurrently from multiple writers even while running
as worker threads.
The separation of triggering the reporting from querying the status gives
the writer threads the flexibility to do useful work instead of blocking till
the reader threads enter the QS. This also reduces the memory accesses due
to continuous polling for the status.
rte_rcu_qsbr_unregister_thread API will remove a reader thread from reporting
its QS. The rte_rcu_qsbr_check API will not wait for this reader thread to
report the QS status anymore.
The reader threads should call the rte_rcu_qsbr_update API to indicate that
they entered a quiescent state. This API checks if a writer has triggered a
quiescent state query and updates the state accordingly.
v2:
1) Cover letter changes
a) Explain the parameters that affect the overhead of using RCU
and their effect
b) Explain how this library addresses these effects to keep
the overhead to a minimum
2) Library changes
a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
b) Simplify the code/remove APIs to keep this library inline with
other synchronisation mechanisms like locks (Konstantin)
c) Change the design to support more than 64 threads (Konstantin)
d) Fixed version map to remove static inline functions
3) Testcase changes
a) Add boundary and additional functional test cases
b) Add stress test cases (Paul E. McKenney)
Next Steps:
1) rte_rcu_qsbr_register_thread/rte_rcu_qsbr_unregister_thread can be
optimized to avoid accessing the common bitmap array. This is required
as these are data plane APIs. Plan is to introduce
rte_rcu_qsbr_thread_online/rte_rcu_qsbr_thread_offline which will not
touch the common bitmap array.
2) Add debug logs to enable debugging
3) Documentation
4) Convert to patch
Dharmik Thakkar (1):
test/rcu_qsbr: add API and functional tests
Honnappa Nagarahalli (1):
rcu: add RCU library supporting QSBR mechanism
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 63 +++
lib/librte_rcu/rte_rcu_qsbr.h | 321 ++++++++++++
lib/librte_rcu/rte_rcu_version.map | 8 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
test/test/Makefile | 2 +
test/test/autotest_data.py | 6 +
test/test/meson.build | 5 +-
test/test/test_rcu_qsbr.c | 801 +++++++++++++++++++++++++++++
13 files changed, 1243 insertions(+), 2 deletions(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
create mode 100644 test/test/test_rcu_qsbr.c
--
2.17.1
* [dpdk-dev] [RFC v2 1/2] rcu: add RCU library supporting QSBR mechanism
2018-12-22 2:14 ` [dpdk-dev] [RFC v2 0/2] rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
@ 2018-12-22 2:14 ` Honnappa Nagarahalli
2019-01-15 11:39 ` Ananyev, Konstantin
2018-12-22 2:14 ` [dpdk-dev] [RFC v2 2/2] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 0/5] rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-12-22 2:14 UTC (permalink / raw)
To: dev, konstantin.ananyev, stephen, paulmck, honnappa.nagarahalli
Cc: gavin.hu, dharmik.thakkar, nd
Add RCU library supporting quiescent state based memory reclamation method.
This library helps identify the quiescent state of the reader threads so
that the writers can free the memory associated with the lock less data
structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
---
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +++
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 63 ++++++
lib/librte_rcu/rte_rcu_qsbr.h | 321 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 8 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
9 files changed, 430 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
diff --git a/config/common_base b/config/common_base
index d12ae98bc..e148549d8 100644
--- a/config/common_base
+++ b/config/common_base
@@ -792,6 +792,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_rcu
+#
+CONFIG_RTE_LIBRTE_RCU=y
+CONFIG_RTE_LIBRTE_RCU_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index b7370ef97..b674662c8 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -108,6 +108,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_BPF) += librte_bpf
DEPDIRS-librte_bpf := librte_eal librte_mempool librte_mbuf librte_ethdev
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
+DEPDIRS-librte_rcu := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
new file mode 100644
index 000000000..6aa677bd1
--- /dev/null
+++ b/lib/librte_rcu/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_rcu.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_rcu_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
new file mode 100644
index 000000000..c009ae4b7
--- /dev/null
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..3c2577ee2
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Initialize a quiescent state variable */
+void __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v)
+{
+ memset(v, 0, sizeof(struct rte_rcu_qsbr));
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+void __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t;
+
+ RTE_ASSERT(v == NULL || f == NULL);
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " Registered thread ID mask = 0x");
+ for (i = 0; i < RTE_QSBR_BIT_MAP_ELEMS; i++)
+ fprintf(f, "%lx", __atomic_load_n(&v->reg_thread_id[i],
+ __ATOMIC_ACQUIRE));
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %lu\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < RTE_QSBR_BIT_MAP_ELEMS; i++) {
+ bmap = __atomic_load_n(&v->reg_thread_id[i],
+ __ATOMIC_ACQUIRE);
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %lu\n", t,
+ __atomic_load_n(&v->w[i].cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..c818e77fd
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,321 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to shared memory.
+ * A critical section for a data structure can be a quiescent state for
+ * another data structure.
+ *
+ * This library provides the ability to identify quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+
+/**< Maximum number of reader threads supported. */
+#define RTE_RCU_MAX_THREADS 128
+
+#if !RTE_IS_POWER_OF_2(RTE_RCU_MAX_THREADS)
+#error RTE_RCU_MAX_THREADS must be a power of 2
+#endif
+
+/**< Number of array elements required for the bit-map */
+#define RTE_QSBR_BIT_MAP_ELEMS (RTE_RCU_MAX_THREADS/(sizeof(uint64_t) * 8))
+
+/* Thread IDs are stored as a bitmap of 64b element array. Given thread id
+ * needs to be converted to index into the array and the id within
+ * the array element.
+ */
+#define RTE_QSBR_THR_INDEX_SHIFT 6
+#define RTE_QSBR_THR_ID_MASK 0x3f
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt; /**< Quiescent state counter. */
+} __rte_cache_aligned;
+
+/**
+ * RTE thread Quiescent State structure.
+ */
+struct rte_rcu_qsbr {
+ uint64_t reg_thread_id[RTE_QSBR_BIT_MAP_ELEMS] __rte_cache_aligned;
+ /**< Registered reader thread IDs - reader threads reporting
+ * on this QS variable represented in a bit map.
+ */
+
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple simultaneous QS queries */
+
+ struct rte_rcu_qsbr_cnt w[RTE_RCU_MAX_THREADS] __rte_cache_aligned;
+ /**< QS counter for each reader thread, counts upto
+ * current value of token.
+ */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ *
+ */
+void __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a reader thread, to the list of threads reporting their quiescent
+ * state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_update. This can be called
+ * during initialization or as part of the packet processing loop.
+ * Any ongoing QS queries may wait for the status from this registered
+ * thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_register_thread(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id;
+
+ RTE_ASSERT(v == NULL || thread_id >= RTE_RCU_MAX_THREADS);
+
+ id = thread_id & RTE_QSBR_THR_ID_MASK;
+ i = thread_id >> RTE_QSBR_THR_INDEX_SHIFT;
+
+ /* Worker thread has to count the quiescent states
+ * only from the current value of token.
+ * __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->w[thread_id].cnt,
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE),
+ __ATOMIC_RELAXED);
+
+ /* Release the store to initial TQS count so that readers
+ * can use it immediately after this function returns.
+ */
+ __atomic_fetch_or(&v->reg_thread_id[i], 1UL << id, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread, from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing QS queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_unregister_thread(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id;
+
+ RTE_ASSERT(v == NULL || thread_id >= RTE_RCU_MAX_THREADS);
+
+ id = thread_id & RTE_QSBR_THR_ID_MASK;
+ i = thread_id >> RTE_QSBR_THR_INDEX_SHIFT;
+
+ /* Make sure the removal of the thread from the list of
+ * reporting threads is visible before the thread
+ * does anything else.
+ */
+ __atomic_fetch_and(&v->reg_thread_id[i],
+ ~(1UL << id), __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Trigger the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * TQS variable
+ * @param n
+ * Expected number of times the quiescent state is entered
+ * @param t
+ * - If successful, this is the token for this call of the API.
+ * This should be passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v, unsigned int n, uint64_t *t)
+{
+ RTE_ASSERT(v == NULL || t == NULL);
+
+ /* This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ *t = __atomic_add_fetch(&v->token, n, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_update(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v == NULL || thread_id >= RTE_RCU_MAX_THREADS);
+
+ /* Load the token before the reader thread loads any other
+ * (lock-free) data structure. This ensures that updates
+ * to the data structures are visible if the update
+ * to token is visible.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Relaxed load/store on the counter is enough as we are
+ * reporting an already completed quiescent state.
+ * __atomic_load_n(cnt, __ATOMIC_RELAXED) is used as 'cnt' (64b)
+ * is accessed atomically.
+ * Copy the current token value. This will end grace period
+ * of multiple concurrent writers.
+ */
+ if (__atomic_load_n(&v->w[thread_id].cnt, __ATOMIC_RELAXED) != t)
+ __atomic_store_n(&v->w[thread_id].cnt, t, __ATOMIC_RELAXED);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * 'n' number of times. 'n' is provided in rte_rcu_qsbr_start API.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state 'n' number of times
+ * @return
+ * - 0 if all reader threads have NOT passed through specified number
+ * of quiescent states.
+ * - 1 if all reader threads have passed through specified number
+ * of quiescent states.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+
+ RTE_ASSERT(v == NULL);
+
+ i = 0;
+ do {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(&v->reg_thread_id[i], __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THR_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED)
+ * is used to ensure 'cnt' (64b) is accessed
+ * atomically.
+ */
+ if (unlikely(__atomic_load_n(&v->w[id + j].cnt,
+ __ATOMIC_RELAXED) < t)) {
+ /* This thread is not in QS */
+ if (!wait)
+ return 0;
+
+ /* Loop till this thread enters QS */
+ rte_pause();
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+
+ i++;
+ } while (i < RTE_QSBR_BIT_MAP_ELEMS);
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variables to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ */
+void __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..0df2071be
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,8 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index bb7f443f9..d0e49d4e1 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -21,7 +21,7 @@ libraries = [ 'compat', # just a header, used for versioning
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'meter', 'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'rcu', 'vhost',
# add pkt framework libs which use other libs from above
'port', 'table', 'pipeline',
# flow_classify lib depends on pkt framework table lib
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 3ebc4e64c..d4a1d436a 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -92,6 +92,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
* [dpdk-dev] [RFC v2 2/2] test/rcu_qsbr: add API and functional tests
2018-12-22 2:14 ` [dpdk-dev] [RFC v2 0/2] rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2018-12-22 2:14 ` [dpdk-dev] [RFC v2 1/2] " Honnappa Nagarahalli
@ 2018-12-22 2:14 ` Honnappa Nagarahalli
2018-12-23 7:30 ` Stephen Hemminger
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 0/5] rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2018-12-22 2:14 UTC (permalink / raw)
To: dev, konstantin.ananyev, stephen, paulmck, honnappa.nagarahalli
Cc: gavin.hu, dharmik.thakkar, nd, Malvika Gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases and functional tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
---
test/test/Makefile | 2 +
test/test/autotest_data.py | 6 +
test/test/meson.build | 5 +-
test/test/test_rcu_qsbr.c | 801 +++++++++++++++++++++++++++++++++++++
4 files changed, 813 insertions(+), 1 deletion(-)
create mode 100644 test/test/test_rcu_qsbr.c
diff --git a/test/test/Makefile b/test/test/Makefile
index ab4fec34a..dfc0325e4 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -207,6 +207,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c
+
CFLAGS += -DALLOW_EXPERIMENTAL_API
CFLAGS += -O3
diff --git a/test/test/autotest_data.py b/test/test/autotest_data.py
index 0fb7866db..cbd1f94ad 100644
--- a/test/test/autotest_data.py
+++ b/test/test/autotest_data.py
@@ -676,6 +676,12 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/test/test/meson.build b/test/test/meson.build
index 554e9945f..3be21be27 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -100,6 +100,7 @@ test_sources = files('commands.c',
'test_timer.c',
'test_timer_perf.c',
'test_timer_racecond.c',
+ 'test_rcu_qsbr.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -122,7 +123,8 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
- 'timer'
+ 'timer',
+ 'rcu'
]
test_names = [
@@ -228,6 +230,7 @@ test_names = [
'timer_autotest',
'timer_perf__autotest',
'timer_racecond_autotest',
+ 'rcu_qsbr_autotest',
'user_delay_us',
'version_autotest',
]
diff --git a/test/test/test_rcu_qsbr.c b/test/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..c9efd3e21
--- /dev/null
+++ b/test/test/test_rcu_qsbr.c
@@ -0,0 +1,801 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+#define RTE_RCU_MAX_LCORE 64
+uint16_t enabled_core_ids[RTE_RCU_MAX_LCORE];
+uint8_t num_cores;
+uint16_t num_1qs = 1; /* Number of quiescent states = 1 */
+uint16_t num_2qs = 2; /* Number of quiescent states = 2 */
+uint16_t num_3qs = 3; /* Number of quiescent states = 3 */
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+uint32_t *hash_data[RTE_RCU_MAX_LCORE][TOTAL_ENTRY];
+uint8_t writer_done;
+
+struct rte_rcu_qsbr t[RTE_RCU_MAX_LCORE];
+struct rte_hash *h[RTE_RCU_MAX_LCORE];
+char hash_name[RTE_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > RTE_RCU_MAX_LCORE) {
+ printf("Number of cores exceed %d\n", RTE_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_register_thread: Add a reader thread, to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_register_thread(void)
+{
+ printf("\nTest rte_rcu_qsbr_register_thread()\n");
+
+ rte_rcu_qsbr_init(&t[0]);
+
+ rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[0]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_unregister_thread: Remove a reader thread, from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_unregister_thread(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, RTE_RCU_MAX_THREADS, 1};
+
+ printf("\nTest rte_rcu_qsbr_unregister_thread()\n");
+
+ rte_rcu_qsbr_init(&t[0]);
+
+ rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[0]);
+
+ /* Find first disabled core */
+ for (i = 0; i < RTE_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ rte_rcu_qsbr_unregister_thread(&t[0], i);
+
+ /* Test with enabled lcore */
+ rte_rcu_qsbr_unregister_thread(&t[0], enabled_core_ids[0]);
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to RTE_RCU_MAX_THREADS
+ * 3 - thread_id = RTE_RCU_MAX_THREADS - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(&t[0]);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_register_thread(&t[0],
+ (j == 2) ? (RTE_RCU_MAX_THREADS - 1) : i);
+
+ rte_rcu_qsbr_start(&t[0], 1, &token);
+ RCU_QSBR_RETURN_IF_ERROR((token != 1), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == 72)
+ continue;
+ rte_rcu_qsbr_update(&t[0],
+ (j == 2) ? (RTE_RCU_MAX_THREADS - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(&t[0], token, false);
+ RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_update(&t[0], 72);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(&t[0], token, false);
+ RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_unregister_thread(&t[0],
+ (j == 2) ? (RTE_RCU_MAX_THREADS - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(&t[0], token, true);
+ RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Trigger the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(&t[0]);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_start(&t[0], 1, &token);
+ RCU_QSBR_RETURN_IF_ERROR((token != 1), "QSBR Start");
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the worker threads have entered the
+ * quiescent state 'n' number of times. 'n' is provided in rte_rcu_qsbr_start API.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(&t[0]);
+
+ rte_rcu_qsbr_start(&t[0], 1, &token);
+ RCU_QSBR_RETURN_IF_ERROR((token != 1), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(&t[0], 0, true);
+ RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(&t[0], token, true);
+ RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(&t[0], token, false);
+ RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ rte_rcu_qsbr_start(&t[0], 1, &token);
+ RCU_QSBR_RETURN_IF_ERROR((token != 2), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(&t[0], token, false);
+ RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_unregister_thread(&t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(&t[0], token, true);
+ RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ rte_rcu_qsbr_init(&t[0]);
+ rte_rcu_qsbr_init(&t[1]);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, &t[0]);
+
+ rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_register_thread(&t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, &t[0]);
+ rte_rcu_qsbr_dump(stdout, &t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = &t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_register_thread(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_update(temp, lcore_id);
+ rte_rcu_qsbr_unregister_thread(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = &t[(writer_type/2) % RTE_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % RTE_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /*
+ * Start the quiescent state query process
+ * Note: Expected Quiescent states kept greater than 1 for test only
+ */
+ rte_rcu_qsbr_start(temp, writer_type + 1, &token);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % RTE_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % RTE_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % RTE_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % RTE_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % RTE_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token;
+ int i;
+ int32_t pos;
+ writer_done = 0;
+
+ printf("\nTest: 1 writer, 1 QSBR variable, 1 QSBR Query, "
+ "Blocking QSBR Check\n");
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(&t[0]);
+
+ /* Register worker threads on 4 cores */
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[i]);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ rte_rcu_qsbr_start(&t[0], num_1qs, &token);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(&t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token;
+ int i, ret;
+ int32_t pos;
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, 1 QSBR Query, "
+ "Non-Blocking QSBR check\n");
+
+ rte_rcu_qsbr_init(&t[0]);
+ /* Register worker threads on 4 cores */
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[i]);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ rte_rcu_qsbr_start(&t[0], num_1qs, &token);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(&t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(&t[0]);
+
+ /* Register worker threads on 4 cores */
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[i]);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ rte_rcu_qsbr_start(&t[0], num_1qs, &token[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /*
+ * Start the quiescent state query process
+ * Note: num_2qs kept greater than 1 for test only
+ */
+ rte_rcu_qsbr_start(&t[0], num_2qs, &token[1]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /*
+ * Start the quiescent state query process
+ * Note: num_3qs kept greater than 1 for test only
+ */
+ rte_rcu_qsbr_start(&t[0], num_3qs, &token[2]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(&t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(&t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(&t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Functional test:
+ * Multiple writers, Multiple QS variables, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ writer_done = 0;
+ uint8_t test_cores;
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+ printf("Test: %d writers, %d QSBR variable, Simultaneous QSBR queries\n"
+ , test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(&t[i]);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Register 2 reader threads per QS variable */
+ for (i = 0; i < test_cores / 2; i += 2) {
+ rte_rcu_qsbr_register_thread(&t[i / 2], enabled_core_ids[i]);
+ rte_rcu_qsbr_register_thread(&t[i / 2],
+ enabled_core_ids[i + 1]);
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < RTE_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_register_thread() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_unregister_thread() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ /* Functional test cases */
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC v2 2/2] test/rcu_qsbr: add API and functional tests
2018-12-22 2:14 ` [dpdk-dev] [RFC v2 2/2] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2018-12-23 7:30 ` Stephen Hemminger
2018-12-23 16:25 ` Paul E. McKenney
0 siblings, 1 reply; 260+ messages in thread
From: Stephen Hemminger @ 2018-12-23 7:30 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: dev, konstantin.ananyev, paulmck, gavin.hu, dharmik.thakkar, nd,
Malvika Gupta
On Fri, 21 Dec 2018 20:14:20 -0600
Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> wrote:
> From: Dharmik Thakkar <dharmik.thakkar@arm.com>
>
> Add API positive/negative test cases and functional tests.
>
> Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
> Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Just a thought, could you build stress tests like the kernel RCU tests?
One worry is that RCU does not play well with blocking threads (and sometimes preemption).
* Re: [dpdk-dev] [RFC v2 2/2] test/rcu_qsbr: add API and functional tests
2018-12-23 7:30 ` Stephen Hemminger
@ 2018-12-23 16:25 ` Paul E. McKenney
2019-01-18 7:04 ` Honnappa Nagarahalli
0 siblings, 1 reply; 260+ messages in thread
From: Paul E. McKenney @ 2018-12-23 16:25 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Honnappa Nagarahalli, dev, konstantin.ananyev, gavin.hu,
dharmik.thakkar, nd, Malvika Gupta
On Sat, Dec 22, 2018 at 11:30:51PM -0800, Stephen Hemminger wrote:
> On Fri, 21 Dec 2018 20:14:20 -0600
> Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> wrote:
>
> > From: Dharmik Thakkar <dharmik.thakkar@arm.com>
> >
> > Add API positive/negative test cases and functional tests.
> >
> > Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
> > Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
>
> Just a thought, could you build stress tests like the kernel RCU tests?
> One worry is that RCU does not play well with blocking threads (and sometimes preemption).
There are similar tests in the userspace RCU library, as well, which
can be found at http://liburcu.
Thanx, Paul
* Re: [dpdk-dev] [RFC v2 1/2] rcu: add RCU library supporting QSBR mechanism
2018-12-22 2:14 ` [dpdk-dev] [RFC v2 1/2] " Honnappa Nagarahalli
@ 2019-01-15 11:39 ` Ananyev, Konstantin
2019-01-15 20:43 ` Honnappa Nagarahalli
0 siblings, 1 reply; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-01-15 11:39 UTC (permalink / raw)
To: Honnappa Nagarahalli, dev, stephen, paulmck; +Cc: gavin.hu, dharmik.thakkar, nd
Hi Honnappa,
> Add RCU library supporting quiescent state based memory reclamation method.
> This library helps identify the quiescent state of the reader threads so
> that the writers can free the memory associated with the lock less data
> structures.
>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Steve Capper <steve.capper@arm.com>
> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> ---
...
> diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
> new file mode 100644
> index 000000000..c818e77fd
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> @@ -0,0 +1,321 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (c) 2018 Arm Limited
> + */
> +
> +#ifndef _RTE_RCU_QSBR_H_
> +#define _RTE_RCU_QSBR_H_
> +
> +/**
> + * @file
> + * RTE Quiescent State Based Reclamation (QSBR)
> + *
> + * Quiescent State (QS) is any point in the thread execution
> + * where the thread does not hold a reference to shared memory.
> + * A critical section for a data structure can be a quiescent state for
> + * another data structure.
> + *
> + * This library provides the ability to identify quiescent state.
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <errno.h>
> +#include <rte_common.h>
> +#include <rte_memory.h>
> +#include <rte_lcore.h>
> +#include <rte_debug.h>
> +
> +/**< Maximum number of reader threads supported. */
> +#define RTE_RCU_MAX_THREADS 128
> +
> +#if !RTE_IS_POWER_OF_2(RTE_RCU_MAX_THREADS)
> +#error RTE_RCU_MAX_THREADS must be a power of 2
> +#endif
> +
> +/**< Number of array elements required for the bit-map */
> +#define RTE_QSBR_BIT_MAP_ELEMS (RTE_RCU_MAX_THREADS/(sizeof(uint64_t) * 8))
> +
> +/* Thread IDs are stored as a bitmap of 64b element array. Given thread id
> + * needs to be converted to index into the array and the id within
> + * the array element.
> + */
> +#define RTE_QSBR_THR_INDEX_SHIFT 6
> +#define RTE_QSBR_THR_ID_MASK 0x3f
> +
> +/* Worker thread counter */
> +struct rte_rcu_qsbr_cnt {
> + uint64_t cnt; /**< Quiescent state counter. */
> +} __rte_cache_aligned;
> +
> +/**
> + * RTE thread Quiescent State structure.
> + */
> +struct rte_rcu_qsbr {
> + uint64_t reg_thread_id[RTE_QSBR_BIT_MAP_ELEMS] __rte_cache_aligned;
> + /**< Registered reader thread IDs - reader threads reporting
> + * on this QS variable represented in a bit map.
> + */
> +
> + uint64_t token __rte_cache_aligned;
> + /**< Counter to allow for multiple simultaneous QS queries */
> +
> + struct rte_rcu_qsbr_cnt w[RTE_RCU_MAX_THREADS] __rte_cache_aligned;
> + /**< QS counter for each reader thread, counts up to
> + * current value of token.
As I understand it, you decided to stick with a neutral thread_id and let the user
define what exactly thread_id is (lcore, system thread id, something else)?
If so, could you perhaps get rid of the RTE_RCU_MAX_THREADS limitation?
I.e. use struct rte_rcu_qsbr_cnt w[] and allow the user to define the max
number of threads allowed at init time.
Or something like:
#define RTE_RCU_QSBR_DEF(name, max_thread) struct name { \
uint64_t reg_thread_id[ALIGN_CEIL(max_thread, 64) >> 6]; \
...
struct rte_rcu_qsbr_cnt w[max_thread]; \
}
> + */
> +} __rte_cache_aligned;
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Initialize a Quiescent State (QS) variable.
> + *
> + * @param v
> + * QS variable
> + *
> + */
> +void __rte_experimental
> +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Add a reader thread, to the list of threads reporting their quiescent
> + * state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + * Any reader thread that wants to report its quiescent state must
> + * call this API before calling rte_rcu_qsbr_update. This can be called
> + * during initialization or as part of the packet processing loop.
> + * Any ongoing QS queries may wait for the status from this registered
> + * thread.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will report its quiescent state on
> + * the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_register_thread(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id;
> +
> + RTE_ASSERT(v != NULL && thread_id < RTE_RCU_MAX_THREADS);
> +
> + id = thread_id & RTE_QSBR_THR_ID_MASK;
> + i = thread_id >> RTE_QSBR_THR_INDEX_SHIFT;
> +
> + /* Worker thread has to count the quiescent states
> + * only from the current value of token.
> + * __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> + * 'cnt' (64b) is accessed atomically.
> + */
> + __atomic_store_n(&v->w[thread_id].cnt,
> + __atomic_load_n(&v->token, __ATOMIC_ACQUIRE),
> + __ATOMIC_RELAXED);
> +
> + /* Release the store to initial QS count so that readers
> + * can use it immediately after this function returns.
> + */
> + __atomic_fetch_or(&v->reg_thread_id[i], 1UL << id, __ATOMIC_RELEASE);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Remove a reader thread, from the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * This API can be called from the reader threads during shutdown.
> + * Ongoing QS queries will stop waiting for the status from this
> + * unregistered reader thread.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will stop reporting its quiescent
> + * state on the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_unregister_thread(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id;
> +
> + RTE_ASSERT(v != NULL && thread_id < RTE_RCU_MAX_THREADS);
> +
> + id = thread_id & RTE_QSBR_THR_ID_MASK;
> + i = thread_id >> RTE_QSBR_THR_INDEX_SHIFT;
> +
> + /* Make sure the removal of the thread from the list of
> + * reporting threads is visible before the thread
> + * does anything else.
> + */
> + __atomic_fetch_and(&v->reg_thread_id[i],
> + ~(1UL << id), __ATOMIC_RELEASE);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Trigger the reader threads to report the quiescent state
> + * status.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe and can be called from worker threads.
> + *
> + * @param v
> + * QS variable
> + * @param n
> + * Expected number of times the quiescent state is entered
> + * @param t
> + * - If successful, this is the token for this call of the API.
> + * This should be passed to rte_rcu_qsbr_check API.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_start(struct rte_rcu_qsbr *v, unsigned int n, uint64_t *t)
> +{
> + RTE_ASSERT(v != NULL && t != NULL);
> +
> + /* This store release will ensure that changes to any data
> + * structure are visible to the workers before the token
> + * update is visible.
> + */
> + *t = __atomic_add_fetch(&v->token, n, __ATOMIC_RELEASE);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Update quiescent state for a reader thread.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * All the reader threads registered to report their quiescent state
> + * on the QS variable must call this API.
> + *
> + * @param v
> + * QS variable
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_update(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL && thread_id < RTE_RCU_MAX_THREADS);
> +
> + /* Load the token before the reader thread loads any other
> + * (lock-free) data structure. This ensures that updates
> + * to the data structures are visible if the update
> + * to token is visible.
> + */
> + t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
> +
> + /* Relaxed load/store on the counter is enough as we are
> + * reporting an already completed quiescent state.
> + * __atomic_load_n(cnt, __ATOMIC_RELAXED) is used as 'cnt' (64b)
> + * is accessed atomically.
> + * Copy the current token value. This will end grace period
> + * of multiple concurrent writers.
> + */
> + if (__atomic_load_n(&v->w[thread_id].cnt, __ATOMIC_RELAXED) != t)
> + __atomic_store_n(&v->w[thread_id].cnt, t, __ATOMIC_RELAXED);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Checks if all the reader threads have entered the quiescent state
> + * 'n' number of times. 'n' is provided in rte_rcu_qsbr_start API.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe and can be called from the worker threads as well.
> + *
> + * @param v
> + * QS variable
> + * @param t
> + * Token returned by rte_rcu_qsbr_start API
> + * @param wait
> + * If true, block till all the reader threads have completed entering
> + * the quiescent state 'n' number of times
> + * @return
> + * - 0 if all reader threads have NOT passed through specified number
> + * of quiescent states.
> + * - 1 if all reader threads have passed through specified number
> + * of quiescent states.
> + */
> +static __rte_always_inline int __rte_experimental
> +rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> +{
> + uint32_t i, j, id;
> + uint64_t bmap;
> +
> + RTE_ASSERT(v != NULL);
> +
> + i = 0;
> + do {
> + /* Load the current registered thread bit map before
> + * loading the reader thread quiescent state counters.
> + */
> + bmap = __atomic_load_n(&v->reg_thread_id[i], __ATOMIC_ACQUIRE);
> + id = i << RTE_QSBR_THR_INDEX_SHIFT;
> +
> + while (bmap) {
> + j = __builtin_ctzl(bmap);
> +
> + /* __atomic_store_n(cnt, __ATOMIC_RELAXED)
> + * is used to ensure 'cnt' (64b) is accessed
> + * atomically.
> + */
> + if (unlikely(__atomic_load_n(&v->w[id + j].cnt,
> + __ATOMIC_RELAXED) < t)) {
> + /* This thread is not in QS */
> + if (!wait)
> + return 0;
> +
> + /* Loop till this thread enters QS */
> + rte_pause();
> + continue;
Shouldn't you re-read reg_thread_id[i] here?
Konstantin
> + }
> +
> + bmap &= ~(1UL << j);
> + }
> +
> + i++;
> + } while (i < RTE_QSBR_BIT_MAP_ELEMS);
> +
> + return 1;
> +}
> +
* Re: [dpdk-dev] [RFC v2 1/2] rcu: add RCU library supporting QSBR mechanism
2019-01-15 11:39 ` Ananyev, Konstantin
@ 2019-01-15 20:43 ` Honnappa Nagarahalli
2019-01-16 15:56 ` Ananyev, Konstantin
0 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-01-15 20:43 UTC (permalink / raw)
To: Ananyev, Konstantin, dev, stephen, paulmck
Cc: Gavin Hu (Arm Technology China),
Dharmik Thakkar, nd, Honnappa Nagarahalli, nd
> Hi Honnappa,
>
>
> > Add RCU library supporting quiescent state based memory reclamation
> method.
> > This library helps identify the quiescent state of the reader threads
> > so that the writers can free the memory associated with the lock less
> > data structures.
> >
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > ---
> ...
>
> > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > b/lib/librte_rcu/rte_rcu_qsbr.h new file mode 100644 index
> > 000000000..c818e77fd
> > --- /dev/null
> > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > @@ -0,0 +1,321 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright (c) 2018 Arm Limited
> > + */
> > +
> > +#ifndef _RTE_RCU_QSBR_H_
> > +#define _RTE_RCU_QSBR_H_
> > +
> > +/**
> > + * @file
> > + * RTE Quiescent State Based Reclamation (QSBR)
> > + *
> > + * Quiescent State (QS) is any point in the thread execution
> > + * where the thread does not hold a reference to shared memory.
> > + * A critical section for a data structure can be a quiescent state
> > +for
> > + * another data structure.
> > + *
> > + * This library provides the ability to identify quiescent state.
> > + */
> > +
> > +#ifdef __cplusplus
> > +extern "C" {
> > +#endif
> > +
> > +#include <stdio.h>
> > +#include <stdint.h>
> > +#include <errno.h>
> > +#include <rte_common.h>
> > +#include <rte_memory.h>
> > +#include <rte_lcore.h>
> > +#include <rte_debug.h>
> > +
> > +/**< Maximum number of reader threads supported. */ #define
> > +RTE_RCU_MAX_THREADS 128
> > +
> > +#if !RTE_IS_POWER_OF_2(RTE_RCU_MAX_THREADS)
> > +#error RTE_RCU_MAX_THREADS must be a power of 2 #endif
> > +
> > +/**< Number of array elements required for the bit-map */ #define
> > +RTE_QSBR_BIT_MAP_ELEMS (RTE_RCU_MAX_THREADS/(sizeof(uint64_t)
> * 8))
> > +
> > +/* Thread IDs are stored as a bitmap of 64b element array. Given
> > +thread id
> > + * needs to be converted to index into the array and the id within
> > + * the array element.
> > + */
> > +#define RTE_QSBR_THR_INDEX_SHIFT 6
> > +#define RTE_QSBR_THR_ID_MASK 0x3f
> > +
> > +/* Worker thread counter */
> > +struct rte_rcu_qsbr_cnt {
> > + uint64_t cnt; /**< Quiescent state counter. */ }
> > +__rte_cache_aligned;
> > +
> > +/**
> > + * RTE thread Quiescent State structure.
> > + */
> > +struct rte_rcu_qsbr {
> > + uint64_t reg_thread_id[RTE_QSBR_BIT_MAP_ELEMS]
> __rte_cache_aligned;
> > + /**< Registered reader thread IDs - reader threads reporting
> > + * on this QS variable represented in a bit map.
> > + */
> > +
> > + uint64_t token __rte_cache_aligned;
> > + /**< Counter to allow for multiple simultaneous QS queries */
> > +
> > + struct rte_rcu_qsbr_cnt w[RTE_RCU_MAX_THREADS]
> __rte_cache_aligned;
> > + /**< QS counter for each reader thread, counts up to
> > + * current value of token.
>
> As I understand it, you decided to stick with a neutral thread_id and let the user
> define what exactly thread_id is (lcore, system thread id, something else)?
Yes, that is correct. I will reply to the other thread to continue the discussion.
> If so, could you perhaps get rid of the RTE_RCU_MAX_THREADS limitation?
I am not seeing this as a limitation. The user can change this if required. Maybe I should change it as follows:
#ifndef RTE_RCU_MAX_THREADS
#define RTE_RCU_MAX_THREADS 128
#endif
> I.e. use struct rte_rcu_qsbr_cnt w[] and allow the user to define the max
> number of threads allowed at init time.
> Or something like:
> #define RTE_RCU_QSBR_DEF(name, max_thread) struct name { \
> uint64_t reg_thread_id[ALIGN_CEIL(max_thread, 64) >> 6]; \
> ...
> struct rte_rcu_qsbr_cnt w[max_thread]; \ }
I am trying to understand this. I am not following why 'name' is required. Would the user call 'RTE_RCU_QSBR_DEF' in the application header file?
>
>
> > + */
> > +} __rte_cache_aligned;
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Initialize a Quiescent State (QS) variable.
> > + *
> > + * @param v
> > + * QS variable
> > + *
> > + */
> > +void __rte_experimental
> > +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Add a reader thread, to the list of threads reporting their
> > +quiescent
> > + * state on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe.
> > + * Any reader thread that wants to report its quiescent state must
> > + * call this API before calling rte_rcu_qsbr_update. This can be
> > +called
> > + * during initialization or as part of the packet processing loop.
> > + * Any ongoing QS queries may wait for the status from this
> > +registered
> > + * thread.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Reader thread with this thread ID will report its quiescent state on
> > + * the QS variable.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_register_thread(struct rte_rcu_qsbr *v, unsigned int
> > +thread_id) {
> > + unsigned int i, id;
> > +
> > + RTE_ASSERT(v != NULL && thread_id < RTE_RCU_MAX_THREADS);
> > +
> > + id = thread_id & RTE_QSBR_THR_ID_MASK;
> > + i = thread_id >> RTE_QSBR_THR_INDEX_SHIFT;
> > +
> > + /* Worker thread has to count the quiescent states
> > + * only from the current value of token.
> > + * __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> > + * 'cnt' (64b) is accessed atomically.
> > + */
> > + __atomic_store_n(&v->w[thread_id].cnt,
> > + __atomic_load_n(&v->token, __ATOMIC_ACQUIRE),
> > + __ATOMIC_RELAXED);
> > +
> > + /* Release the store to initial QS count so that readers
> > + * can use it immediately after this function returns.
> > + */
> > + __atomic_fetch_or(&v->reg_thread_id[i], 1UL << id,
> > +__ATOMIC_RELEASE); }
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Remove a reader thread, from the list of threads reporting their
> > + * quiescent state on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread safe.
> > + * This API can be called from the reader threads during shutdown.
> > + * Ongoing QS queries will stop waiting for the status from this
> > + * unregistered reader thread.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Reader thread with this thread ID will stop reporting its quiescent
> > + * state on the QS variable.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_unregister_thread(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > +{
> > + unsigned int i, id;
> > +
> > + RTE_ASSERT(v != NULL && thread_id < RTE_RCU_MAX_THREADS);
> > +
> > + id = thread_id & RTE_QSBR_THR_ID_MASK;
> > + i = thread_id >> RTE_QSBR_THR_INDEX_SHIFT;
> > +
> > + /* Make sure the removal of the thread from the list of
> > + * reporting threads is visible before the thread
> > + * does anything else.
> > + */
> > + __atomic_fetch_and(&v->reg_thread_id[i],
> > + ~(1UL << id), __ATOMIC_RELEASE);
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Trigger the reader threads to report the quiescent state
> > + * status.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe and can be called from worker threads.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param n
> > + * Expected number of times the quiescent state is entered
> > + * @param t
> > + * - If successful, this is the token for this call of the API.
> > + * This should be passed to rte_rcu_qsbr_check API.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_start(struct rte_rcu_qsbr *v, unsigned int n, uint64_t *t)
> > +{
> > + RTE_ASSERT(v != NULL && t != NULL);
> > +
> > + /* This store release will ensure that changes to any data
> > + * structure are visible to the workers before the token
> > + * update is visible.
> > + */
> > + *t = __atomic_add_fetch(&v->token, n, __ATOMIC_RELEASE);
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Update quiescent state for a reader thread.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread safe.
> > + * All the reader threads registered to report their quiescent state
> > + * on the QS variable must call this API.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Reader thread ID of the thread reporting its quiescent state.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_update(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > +{
> > + uint64_t t;
> > +
> > + RTE_ASSERT(v != NULL && thread_id < RTE_RCU_MAX_THREADS);
> > +
> > + /* Load the token before the reader thread loads any other
> > + * (lock-free) data structure. This ensures that updates
> > + * to the data structures are visible if the update
> > + * to token is visible.
> > + */
> > + t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
> > +
> > + /* Relaxed load/store on the counter is enough as we are
> > + * reporting an already completed quiescent state.
> > + * __atomic_load_n(cnt, __ATOMIC_RELAXED) is used as 'cnt' (64b)
> > + * is accessed atomically.
> > + * Copy the current token value. This will end grace period
> > + * of multiple concurrent writers.
> > + */
> > + if (__atomic_load_n(&v->w[thread_id].cnt, __ATOMIC_RELAXED) != t)
> > + __atomic_store_n(&v->w[thread_id].cnt, t,
> > + __ATOMIC_RELAXED);
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Checks if all the reader threads have entered the quiescent state
> > + * 'n' number of times. 'n' is provided in rte_rcu_qsbr_start API.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe and can be called from the worker threads as well.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param t
> > + * Token returned by rte_rcu_qsbr_start API
> > + * @param wait
> > + * If true, block till all the reader threads have completed entering
> > + * the quiescent state 'n' number of times
> > + * @return
> > + * - 0 if all reader threads have NOT passed through specified number
> > + * of quiescent states.
> > + * - 1 if all reader threads have passed through specified number
> > + * of quiescent states.
> > + */
> > +static __rte_always_inline int __rte_experimental
> > +rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> > +{
> > + uint32_t i, j, id;
> > + uint64_t bmap;
> > +
> > + RTE_ASSERT(v != NULL);
> > +
> > + i = 0;
> > + do {
> > + /* Load the current registered thread bit map before
> > + * loading the reader thread quiescent state counters.
> > + */
> > + bmap = __atomic_load_n(&v->reg_thread_id[i],
> > + __ATOMIC_ACQUIRE);
> > + id = i << RTE_QSBR_THR_INDEX_SHIFT;
> > +
> > + while (bmap) {
> > + j = __builtin_ctzl(bmap);
> > +
> > + /* __atomic_load_n(cnt, __ATOMIC_RELAXED)
> > + * is used to ensure 'cnt' (64b) is accessed
> > + * atomically.
> > + */
> > + if (unlikely(__atomic_load_n(&v->w[id + j].cnt,
> > + __ATOMIC_RELAXED) < t)) {
> > + /* This thread is not in QS */
> > + if (!wait)
> > + return 0;
> > +
> > + /* Loop till this thread enters QS */
> > + rte_pause();
> > + continue;
> Shouldn't you re-read reg_thread_id[i] here?
> Konstantin
Yes, you are right. I will try to add a test case as well to address this.
> > + }
> > +
> > + bmap &= ~(1UL << j);
> > + }
> > +
> > + i++;
> > + } while (i < RTE_QSBR_BIT_MAP_ELEMS);
> > +
> > + return 1;
> > +}
> > +
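The wait-loop fix agreed on above (re-reading reg_thread_id[i] instead of spinning on a stale bitmap copy) can be sketched with a simplified, single-threaded model. Note this uses plain loads instead of the __atomic builtins, and all `model_*` names are invented for illustration, not part of the proposed API:

```c
#include <stdint.h>

#define MODEL_MAX_THREADS 128
#define MODEL_BITMAP_ELEMS (MODEL_MAX_THREADS / 64)
#define MODEL_THR_INDEX_SHIFT 6

/* Toy model of the QS variable: registered-thread bitmap plus one
 * counter per thread. The real code uses __atomic accesses. */
struct model_qsbr {
	uint64_t reg_thread_id[MODEL_BITMAP_ELEMS];
	uint64_t token;
	uint64_t cnt[MODEL_MAX_THREADS];
};

/* Return 1 when every registered thread has observed token 't'.
 * On a miss with wait == 0, return 0 immediately. With wait != 0 the
 * real code would rte_pause() and, per the review comment, re-read
 * reg_thread_id[i] so that a thread unregistering mid-wait cannot
 * stall the loop forever. */
static int model_qsbr_check(struct model_qsbr *v, uint64_t t, int wait)
{
	uint32_t i, j, id;
	uint64_t bmap;

	for (i = 0; i < MODEL_BITMAP_ELEMS; i++) {
		bmap = v->reg_thread_id[i];
		id = i << MODEL_THR_INDEX_SHIFT;

		while (bmap) {
			j = (uint32_t)__builtin_ctzl(bmap);
			if (v->cnt[id + j] < t) {
				if (!wait)
					return 0;
				/* re-read the bitmap rather than spinning
				 * on the stale copy (the fix above) */
				bmap = v->reg_thread_id[i];
				continue;
			}
			bmap &= ~(1UL << j);
		}
	}
	return 1;
}
```

With this change, a reader that unregisters while the writer is waiting simply disappears from the freshly loaded bitmap on the next iteration.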
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC v2 1/2] rcu: add RCU library supporting QSBR mechanism
2019-01-15 20:43 ` Honnappa Nagarahalli
@ 2019-01-16 15:56 ` Ananyev, Konstantin
2019-01-18 6:48 ` Honnappa Nagarahalli
0 siblings, 1 reply; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-01-16 15:56 UTC (permalink / raw)
To: Honnappa Nagarahalli, dev, stephen, paulmck
Cc: Gavin Hu (Arm Technology China), Dharmik Thakkar, nd, nd
> > ...
> >
> > > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > > b/lib/librte_rcu/rte_rcu_qsbr.h new file mode 100644 index
> > > 000000000..c818e77fd
> > > --- /dev/null
> > > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > > @@ -0,0 +1,321 @@
> > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > + * Copyright (c) 2018 Arm Limited
> > > + */
> > > +
> > > +#ifndef _RTE_RCU_QSBR_H_
> > > +#define _RTE_RCU_QSBR_H_
> > > +
> > > +/**
> > > + * @file
> > > + * RTE Quiescent State Based Reclamation (QSBR)
> > > + *
> > > + * Quiescent State (QS) is any point in the thread execution
> > > + * where the thread does not hold a reference to shared memory.
> > > + * A critical section for a data structure can be a quiescent state
> > > +for
> > > + * another data structure.
> > > + *
> > > + * This library provides the ability to identify quiescent state.
> > > + */
> > > +
> > > +#ifdef __cplusplus
> > > +extern "C" {
> > > +#endif
> > > +
> > > +#include <stdio.h>
> > > +#include <stdint.h>
> > > +#include <errno.h>
> > > +#include <rte_common.h>
> > > +#include <rte_memory.h>
> > > +#include <rte_lcore.h>
> > > +#include <rte_debug.h>
> > > +
> > > +/**< Maximum number of reader threads supported. */
> > > +#define RTE_RCU_MAX_THREADS 128
> > > +
> > > +#if !RTE_IS_POWER_OF_2(RTE_RCU_MAX_THREADS)
> > > +#error RTE_RCU_MAX_THREADS must be a power of 2
> > > +#endif
> > > +
> > > +/**< Number of array elements required for the bit-map */
> > > +#define RTE_QSBR_BIT_MAP_ELEMS (RTE_RCU_MAX_THREADS / (sizeof(uint64_t) * 8))
> > > +
> > > +/* Thread IDs are stored as a bitmap of 64b element array. Given
> > > +thread id
> > > + * needs to be converted to index into the array and the id within
> > > + * the array element.
> > > + */
> > > +#define RTE_QSBR_THR_INDEX_SHIFT 6
> > > +#define RTE_QSBR_THR_ID_MASK 0x3f
> > > +
> > > +/* Worker thread counter */
> > > +struct rte_rcu_qsbr_cnt {
> > > + uint64_t cnt; /**< Quiescent state counter. */
> > > +} __rte_cache_aligned;
> > > +
> > > +/**
> > > + * RTE thread Quiescent State structure.
> > > + */
> > > +struct rte_rcu_qsbr {
> > > + uint64_t reg_thread_id[RTE_QSBR_BIT_MAP_ELEMS] __rte_cache_aligned;
> > > + /**< Registered reader thread IDs - reader threads reporting
> > > + * on this QS variable represented in a bit map.
> > > + */
> > > +
> > > + uint64_t token __rte_cache_aligned;
> > > + /**< Counter to allow for multiple simultaneous QS queries */
> > > +
> > > + struct rte_rcu_qsbr_cnt w[RTE_RCU_MAX_THREADS] __rte_cache_aligned;
> > > + /**< QS counter for each reader thread, counts up to
> > > + * current value of token.
> >
> > As I understand you decided to stick with a neutral thread_id and let the user
> > define what exactly thread_id is (lcore, system thread id, something else)?
> Yes, that is correct. I will reply to the other thread to continue the discussion.
>
> > If so, can you probably get rid of RTE_RCU_MAX_THREADS limitation?
> I am not seeing this as a limitation. The user can change this if required. May be I should change it as follows:
> #ifndef RTE_RCU_MAX_THREADS
> #define RTE_RCU_MAX_THREADS 128
> #endif
Yep, that's better, though it would still require the user to rebuild the code
if he would like to increase the total number of threads supported.
Though it seems relatively simple to extend the current code to support a
dynamic max thread number here (2 variable arrays plus a shift value plus a mask).
>
> > I.E. struct rte_rcu_qsbr_cnt w[] and allow user at init time to define max
> > number of threads allowed.
> > Or something like:
> > #define RTE_RCU_QSBR_DEF(name, max_thread) struct name { \
> > uint64_t reg_thread_id[ALIGN_CEIL(max_thread, 64) >> 6]; \
> > ...
> > struct rte_rcu_qsbr_cnt w[max_thread]; \ }
> I am trying to understand this. I am not following why 'name' is required? Would the user call 'RTE_RCU_QSBR_DEF' in the application
> header file?
My thought here was to allow user to define his own structures,
depending on the number of max threads he needs/wants:
RTE_RCU_QSBR_DEF(rte_rcu_qsbr_128, 128);
RTE_RCU_QSBR_DEF(rte_rcu_qsbr_64, 64);
...
Konstantin
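The per-size struct definition suggested here could look roughly like the following. This is only a compile-time sketch of the idea: the `QSBR_DEF` macro and the generated struct names are illustrative, and the cache-alignment attributes of the real structure are omitted for brevity:

```c
#include <stdint.h>

/* Generate a QSBR struct sized for a given max thread count, rounding
 * the registered-thread bitmap up to whole 64-bit words. */
#define QSBR_DEF(name, max_thread)					\
	struct name {							\
		uint64_t reg_thread_id[((max_thread) + 63) / 64];	\
		uint64_t token;						\
		uint64_t cnt[(max_thread)];				\
	}

/* One type per supported size, chosen by the application at compile time. */
QSBR_DEF(qsbr_64, 64);
QSBR_DEF(qsbr_128, 128);
```

The trade-off discussed later in the thread applies: each distinct size is still a compile-time decision, so a truly dynamic max thread count needs a size helper and runtime allocation instead.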
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC v2 1/2] rcu: add RCU library supporting QSBR mechanism
2019-01-16 15:56 ` Ananyev, Konstantin
@ 2019-01-18 6:48 ` Honnappa Nagarahalli
2019-01-18 12:14 ` Ananyev, Konstantin
0 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-01-18 6:48 UTC (permalink / raw)
To: Ananyev, Konstantin, dev, stephen, paulmck
Cc: Gavin Hu (Arm Technology China), Dharmik Thakkar, nd, nd
>
> > > ...
> > >
> > > > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > > > b/lib/librte_rcu/rte_rcu_qsbr.h new file mode 100644 index
> > > > 000000000..c818e77fd
> > > > --- /dev/null
> > > > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > > > @@ -0,0 +1,321 @@
> > > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > > + * Copyright (c) 2018 Arm Limited */
> > > > +
> > > > +#ifndef _RTE_RCU_QSBR_H_
> > > > +#define _RTE_RCU_QSBR_H_
> > > > +
> > > > +/**
> > > > + * @file
> > > > + * RTE Quiescent State Based Reclamation (QSBR)
> > > > + *
> > > > + * Quiescent State (QS) is any point in the thread execution
> > > > + * where the thread does not hold a reference to shared memory.
> > > > + * A critical section for a data structure can be a quiescent
> > > > +state for
> > > > + * another data structure.
> > > > + *
> > > > + * This library provides the ability to identify quiescent state.
> > > > + */
> > > > +
> > > > +#ifdef __cplusplus
> > > > +extern "C" {
> > > > +#endif
> > > > +
> > > > +#include <stdio.h>
> > > > +#include <stdint.h>
> > > > +#include <errno.h>
> > > > +#include <rte_common.h>
> > > > +#include <rte_memory.h>
> > > > +#include <rte_lcore.h>
> > > > +#include <rte_debug.h>
> > > > +
> > > > +/**< Maximum number of reader threads supported. */
> > > > +#define RTE_RCU_MAX_THREADS 128
> > > > +
> > > > +#if !RTE_IS_POWER_OF_2(RTE_RCU_MAX_THREADS)
> > > > +#error RTE_RCU_MAX_THREADS must be a power of 2
> > > > +#endif
> > > > +
> > > > +/**< Number of array elements required for the bit-map */
> > > > +#define RTE_QSBR_BIT_MAP_ELEMS (RTE_RCU_MAX_THREADS / (sizeof(uint64_t) * 8))
> > > > +
> > > > +/* Thread IDs are stored as a bitmap of 64b element array. Given
> > > > +thread id
> > > > + * needs to be converted to index into the array and the id
> > > > +within
> > > > + * the array element.
> > > > + */
> > > > +#define RTE_QSBR_THR_INDEX_SHIFT 6
> > > > +#define RTE_QSBR_THR_ID_MASK 0x3f
> > > > +
> > > > +/* Worker thread counter */
> > > > +struct rte_rcu_qsbr_cnt {
> > > > + uint64_t cnt; /**< Quiescent state counter. */
> > > > +} __rte_cache_aligned;
> > > > +
> > > > +/**
> > > > + * RTE thread Quiescent State structure.
> > > > + */
> > > > +struct rte_rcu_qsbr {
> > > > + uint64_t reg_thread_id[RTE_QSBR_BIT_MAP_ELEMS] __rte_cache_aligned;
> > > > + /**< Registered reader thread IDs - reader threads reporting
> > > > + * on this QS variable represented in a bit map.
> > > > + */
> > > > +
> > > > + uint64_t token __rte_cache_aligned;
> > > > + /**< Counter to allow for multiple simultaneous QS queries */
> > > > +
> > > > + struct rte_rcu_qsbr_cnt w[RTE_RCU_MAX_THREADS] __rte_cache_aligned;
> > > > + /**< QS counter for each reader thread, counts up to
> > > > + * current value of token.
> > >
> > > As I understand you decided to stick with neutral thread_id and let
> > > the user define what exactly thread_id is (lcore, system thread id, something
> else)?
> > Yes, that is correct. I will reply to the other thread to continue the discussion.
> >
> > > If so, can you probably get rid of RTE_RCU_MAX_THREADS limitation?
> > I am not seeing this as a limitation. The user can change this if required. May
> be I should change it as follows:
> > #ifndef RTE_RCU_MAX_THREADS
> > #define RTE_RCU_MAX_THREADS 128
> > #endif
>
> Yep, that's better, though it would still require the user to rebuild the code if he
> would like to increase total number of threads supported.
Agree
> Though it seems relatively simple to extend the current code to support a dynamic
> max thread num here (2 variable arrays plus shift value plus mask).
Agree, supporting a dynamic 'max thread num' is simple. But this means memory needs to be allocated for the arrays. The API 'rte_rcu_qsbr_init' has to take the max thread num as a parameter. We also have to introduce another API to free this memory. This will become very similar to the alloc/free APIs I had in v1.
I hope I am following you well, please correct me if not.
>
> >
> > > I.E. struct rte_rcu_qsbr_cnt w[] and allow user at init time to
> > > define max number of threads allowed.
> > > Or something like:
> > > #define RTE_RCU_QSBR_DEF(name, max_thread) struct name { \
> > > uint64_t reg_thread_id[ALIGN_CEIL(max_thread, 64) >> 6]; \
> > > ...
> > > struct rte_rcu_qsbr_cnt w[max_thread]; \ }
> > I am trying to understand this. I am not following why 'name' is
> > required? Would the user call 'RTE_RCU_QSBR_DEF' in the application
> header file?
>
> My thought here was to allow user to define his own structures, depending on
> the number of max threads he needs/wants:
> RTE_RCU_QSBR_DEF(rte_rcu_qsbr_128, 128);
> RTE_RCU_QSBR_DEF(rte_rcu_qsbr_64, 64); ...
Thank you for the clarification, I follow you now. However, it will not solve the problem of dynamic max thread num. Changes to the max number of threads will require recompilation.
> Konstantin
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC v2 2/2] test/rcu_qsbr: add API and functional tests
2018-12-23 16:25 ` Paul E. McKenney
@ 2019-01-18 7:04 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-01-18 7:04 UTC (permalink / raw)
To: paulmck, Stephen Hemminger
Cc: dev, konstantin.ananyev, Gavin Hu (Arm Technology China),
Dharmik Thakkar, nd, Malvika Gupta, Honnappa Nagarahalli, nd
>
> On Sat, Dec 22, 2018 at 11:30:51PM -0800, Stephen Hemminger wrote:
> > On Fri, 21 Dec 2018 20:14:20 -0600
> > Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> wrote:
> >
> > > From: Dharmik Thakkar <dharmik.thakkar@arm.com>
> > >
> > > Add API positive/negative test cases and functional tests.
> > >
> > > Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
> > > Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> >
> > Just a thought, could you build stress tests like the kernel RCU tests?
> > One worry is that RCU does not play well with blocking threads (and
> sometimes preemption).
Handling blocking threads is supported right now through register_thread/unregister_thread APIs. If a thread has to make a call to a blocking API, it is expected to unregister itself first. It will be improved further in V3.
However, I am not sure what needs to be done for preemption. I would imagine that the threads will be scheduled back at some point (depending on the scheduling policy). If they were using the data structure the updater has to wait.
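The "unregister before blocking" convention described here can be sketched with toy stand-ins for the register/unregister calls. The `reader_*` helpers below only track a registration bitmap; a real reader would call the rte_rcu_qsbr_register_thread/rte_rcu_qsbr_unregister_thread APIs on a shared QS variable:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the registration APIs, tracking only a
 * bitmap of registered reader thread IDs. */
static uint64_t reg_bitmap;

static void reader_register(unsigned int tid)   { reg_bitmap |=  (1UL << tid); }
static void reader_unregister(unsigned int tid) { reg_bitmap &= ~(1UL << tid); }
static int  reader_is_registered(unsigned int tid)
{
	return (int)((reg_bitmap >> tid) & 1);
}

/* Pattern for a reader that must call a blocking API: drop out of the
 * set of threads the writer waits on, block, then re-register.
 * Returns whether the thread was (wrongly) still registered while
 * blocked, so the pattern can be checked. */
static int blocking_section(unsigned int tid)
{
	reader_unregister(tid);
	/* ... blocking call here; ongoing QS checks no longer wait on us ... */
	int registered_while_blocked = reader_is_registered(tid);
	reader_register(tid);
	return registered_while_blocked;
}
```

After re-registering, the reader picks up the current token value again, so it only reports quiescent states from that point onward.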
>
> There are similar tests in the userspace RCU library, as well, which can be
> found at http://liburcu.
I looked at these tests. There is perftest/rperftest(reader only)/uperftest(updater only)/stresstest/benchmark. Currently, we have covered perftest/stresstest/benchmark pretty well (perf numbers need to be added). We will add rperftest and uperftest.
>
> Thanx, Paul
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC v2 1/2] rcu: add RCU library supporting QSBR mechanism
2019-01-18 6:48 ` Honnappa Nagarahalli
@ 2019-01-18 12:14 ` Ananyev, Konstantin
2019-01-24 17:15 ` Honnappa Nagarahalli
0 siblings, 1 reply; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-01-18 12:14 UTC (permalink / raw)
To: Honnappa Nagarahalli, dev, stephen, paulmck
Cc: Gavin Hu (Arm Technology China), Dharmik Thakkar, nd, nd
> > > > > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > > > > b/lib/librte_rcu/rte_rcu_qsbr.h new file mode 100644 index
> > > > > 000000000..c818e77fd
> > > > > --- /dev/null
> > > > > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > > > > @@ -0,0 +1,321 @@
> > > > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > > > + * Copyright (c) 2018 Arm Limited */
> > > > > +
> > > > > +#ifndef _RTE_RCU_QSBR_H_
> > > > > +#define _RTE_RCU_QSBR_H_
> > > > > +
> > > > > +/**
> > > > > + * @file
> > > > > + * RTE Quiescent State Based Reclamation (QSBR)
> > > > > + *
> > > > > + * Quiescent State (QS) is any point in the thread execution
> > > > > + * where the thread does not hold a reference to shared memory.
> > > > > + * A critical section for a data structure can be a quiescent
> > > > > +state for
> > > > > + * another data structure.
> > > > > + *
> > > > > + * This library provides the ability to identify quiescent state.
> > > > > + */
> > > > > +
> > > > > +#ifdef __cplusplus
> > > > > +extern "C" {
> > > > > +#endif
> > > > > +
> > > > > +#include <stdio.h>
> > > > > +#include <stdint.h>
> > > > > +#include <errno.h>
> > > > > +#include <rte_common.h>
> > > > > +#include <rte_memory.h>
> > > > > +#include <rte_lcore.h>
> > > > > +#include <rte_debug.h>
> > > > > +
> > > > > +/**< Maximum number of reader threads supported. */
> > > > > +#define RTE_RCU_MAX_THREADS 128
> > > > > +
> > > > > +#if !RTE_IS_POWER_OF_2(RTE_RCU_MAX_THREADS)
> > > > > +#error RTE_RCU_MAX_THREADS must be a power of 2
> > > > > +#endif
> > > > > +
> > > > > +/**< Number of array elements required for the bit-map */
> > > > > +#define RTE_QSBR_BIT_MAP_ELEMS (RTE_RCU_MAX_THREADS / (sizeof(uint64_t) * 8))
> > > > > +
> > > > > +/* Thread IDs are stored as a bitmap of 64b element array. Given
> > > > > +thread id
> > > > > + * needs to be converted to index into the array and the id
> > > > > +within
> > > > > + * the array element.
> > > > > + */
> > > > > +#define RTE_QSBR_THR_INDEX_SHIFT 6
> > > > > +#define RTE_QSBR_THR_ID_MASK 0x3f
> > > > > +
> > > > > +/* Worker thread counter */
> > > > > +struct rte_rcu_qsbr_cnt {
> > > > > + uint64_t cnt; /**< Quiescent state counter. */
> > > > > +} __rte_cache_aligned;
> > > > > +
> > > > > +/**
> > > > > + * RTE thread Quiescent State structure.
> > > > > + */
> > > > > +struct rte_rcu_qsbr {
> > > > > + uint64_t reg_thread_id[RTE_QSBR_BIT_MAP_ELEMS] __rte_cache_aligned;
> > > > > + /**< Registered reader thread IDs - reader threads reporting
> > > > > + * on this QS variable represented in a bit map.
> > > > > + */
> > > > > +
> > > > > + uint64_t token __rte_cache_aligned;
> > > > > + /**< Counter to allow for multiple simultaneous QS queries */
> > > > > +
> > > > > + struct rte_rcu_qsbr_cnt w[RTE_RCU_MAX_THREADS] __rte_cache_aligned;
> > > > > + /**< QS counter for each reader thread, counts up to
> > > > > + * current value of token.
> > > >
> > > > As I understand you decided to stick with neutral thread_id and let
> > > > user define what exactly thread_id is (lcore, system thread id, something
> > else)?
> > > Yes, that is correct. I will reply to the other thread to continue the discussion.
> > >
> > > > If so, can you probably get rid of RTE_RCU_MAX_THREADS limitation?
> > > I am not seeing this as a limitation. The user can change this if required. May
> > be I should change it as follows:
> > > #ifndef RTE_RCU_MAX_THREADS
> > > #define RTE_RCU_MAX_THREADS 128
> > > #endif
> >
> > Yep, that's better, though it would still require the user to rebuild the code if he
> > would like to increase total number of threads supported.
> Agree
>
> > Though it seems relatively simple to extend the current code to support dynamic
> > max thread num here (2 variable arrays plus shift value plus mask).
> Agree, supporting dynamic 'max thread num' is simple. But this means memory needs to be allocated to the arrays. The API
> 'rte_rcu_qsbr_init' has to take max thread num as the parameter. We also have to introduce another API to free this memory. This will
> become very similar to alloc/free APIs I had in the v1.
> I hope I am following you well, please correct me if not.
I think we can still leave alloc/free tasks to the user.
> We probably just need an extra function rte_rcu_qsbr_size(uint32_t max_threads)
> to help the user calculate the required size.
> rte_rcu_qsbr_init() might take 'size' as an additional parameter to make checks.
Thought about something like that:
size_t sz = rte_rcu_qsbr_size(max_threads);
struct rte_rcu_qsbr *qsbr = alloc_aligned(CACHE_LINE, sz);
rte_rcu_qsbr_init(qsbr, max_threads, sz);
...
Konstantin
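The size-helper idea above could be sketched as follows. This is a simplified layout assumption for illustration only (one 8-byte bitmap word per 64 threads plus a token, padded to a cache line, then one cache line per reader counter); the helper name and constants are not the eventual API:

```c
#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 64

/* Rough size of a QSBR variable for 'max_threads' readers: the
 * registered-thread bitmap and token, rounded up to a cache line,
 * followed by one cache-line-aligned counter per thread. */
static size_t qsbr_size(uint32_t max_threads)
{
	size_t bitmap = ((max_threads + 63) / 64) * sizeof(uint64_t);
	size_t hdr = bitmap + sizeof(uint64_t);		/* + token */

	/* pad the header so the per-thread counters start cache-aligned */
	hdr = (hdr + CACHE_LINE - 1) & ~(size_t)(CACHE_LINE - 1);
	return hdr + (size_t)max_threads * CACHE_LINE;	/* w[] counters */
}
```

With such a helper, the allocation pattern from the message above becomes: compute the size, allocate cache-line-aligned memory of that size anywhere the application likes, and pass both to init.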
>
> >
> > >
> > > > I.E. struct rte_rcu_qsbr_cnt w[] and allow user at init time to
> > > > define max number of threads allowed.
> > > > Or something like:
> > > > #define RTE_RCU_QSBR_DEF(name, max_thread) struct name { \
> > > > uint64_t reg_thread_id[ALIGN_CEIL(max_thread, 64) >> 6]; \
> > > > ...
> > > > struct rte_rcu_qsbr_cnt w[max_thread]; \ }
> > > I am trying to understand this. I am not following why 'name' is
> > > required? Would the user call 'RTE_RCU_QSBR_DEF' in the application
> > header file?
> >
> > My thought here was to allow user to define his own structures, depending on
> > the number of max threads he needs/wants:
> > RTE_RCU_QSBR_DEF(rte_rcu_qsbr_128, 128);
> > RTE_RCU_QSBR_DEF(rte_rcu_qsbr_64, 64); ...
> Thank you for the clarification, I follow you now. However, it will not solve the problem of dynamic max thread num. Changes to the max
> number of threads will require recompilation.
>
> > Konstantin
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC v2 1/2] rcu: add RCU library supporting QSBR mechanism
2019-01-18 12:14 ` Ananyev, Konstantin
@ 2019-01-24 17:15 ` Honnappa Nagarahalli
2019-01-24 18:05 ` Ananyev, Konstantin
0 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-01-24 17:15 UTC (permalink / raw)
To: Ananyev, Konstantin, dev, stephen, paulmck
Cc: Gavin Hu (Arm Technology China), Dharmik Thakkar, nd, nd
<snip>
> > > > > > +/**
> > > > > > + * RTE thread Quiescent State structure.
> > > > > > + */
> > > > > > +struct rte_rcu_qsbr {
> > > > > > + uint64_t reg_thread_id[RTE_QSBR_BIT_MAP_ELEMS] __rte_cache_aligned;
> > > > > > + /**< Registered reader thread IDs - reader threads reporting
> > > > > > + * on this QS variable represented in a bit map.
> > > > > > + */
> > > > > > +
> > > > > > + uint64_t token __rte_cache_aligned;
> > > > > > + /**< Counter to allow for multiple simultaneous QS queries
> > > > > > +*/
> > > > > > +
> > > > > > + struct rte_rcu_qsbr_cnt w[RTE_RCU_MAX_THREADS] __rte_cache_aligned;
> > > > > > + /**< QS counter for each reader thread, counts up to
> > > > > > + * current value of token.
> > > > >
> > > > > As I understand you decided to stick with neutral thread_id and
> > > > > let the user define what exactly thread_id is (lcore, system thread
> > > > > id, something
> > > else)?
> > > > Yes, that is correct. I will reply to the other thread to continue the
> discussion.
> > > >
> > > > > If so, can you probably get rid of RTE_RCU_MAX_THREADS limitation?
> > > > I am not seeing this as a limitation. The user can change this if
> > > > required. May
> > > be I should change it as follows:
> > > > #ifndef RTE_RCU_MAX_THREADS
> > > > #define RTE_RCU_MAX_THREADS 128
> > > > #endif
> > >
> > > Yep, that's better, though it would still require the user to rebuild
> > > the code if he would like to increase total number of threads supported.
> > Agree
> >
> > > > Though it seems relatively simple to extend the current code to support
> > > dynamic max thread num here (2 variable arrays plus shift value plus
> mask).
> > Agree, supporting dynamic 'max thread num' is simple. But this means
> > memory needs to be allocated to the arrays. The API
> > 'rte_rcu_qsbr_init' has to take max thread num as the parameter. We also
> have to introduce another API to free this memory. This will become very
> similar to alloc/free APIs I had in the v1.
> > I hope I am following you well, please correct me if not.
>
> I think we can still leave alloc/free tasks to the user.
> We probably just need an extra function rte_rcu_qsbr_size(uint32_t
> max_threads) to help the user calculate the required size.
> rte_rcu_qsbr_init() might take as an additional parameter 'size' to make
> checks.
The size is returned by an API provided by the library. Why does it need to be validated again? If 'size' is required for rte_rcu_qsbr_init, it could calculate it again.
> Thought about something like that:
>
> size_t sz = rte_rcu_qsbr_size(max_threads); struct rte_rcu_qsbr *qsbr =
> alloc_aligned(CACHE_LINE, sz); rte_rcu_qsbr_init(qsbr, max_threads, sz); ...
>
Do you see any advantage in allowing the user to allocate the memory?
This approach requires the user to call 3 APIs (including memory allocation). These 3 can be abstracted into a single rte_rcu_qsbr_alloc API; the user has to call just 1 API.
> Konstantin
>
> >
> > >
> > > >
> > > > > I.E. struct rte_rcu_qsbr_cnt w[] and allow user at init time to
> > > > > define max number of threads allowed.
> > > > > Or something like:
> > > > > #define RTE_RCU_QSBR_DEF(name, max_thread) struct name { \
> > > > > uint64_t reg_thread_id[ALIGN_CEIL(max_thread, 64) >> 6]; \
> > > > > ...
> > > > > struct rte_rcu_qsbr_cnt w[max_thread]; \ }
> > > > I am trying to understand this. I am not following why 'name' is
> > > > required? Would the user call 'RTE_RCU_QSBR_DEF' in the
> > > > application
> > > header file?
> > >
> > > My thought here was to allow user to define his own structures,
> > > depending on the number of max threads he needs/wants:
> > > RTE_RCU_QSBR_DEF(rte_rcu_qsbr_128, 128);
> > > RTE_RCU_QSBR_DEF(rte_rcu_qsbr_64, 64); ...
> > Thank you for the clarification, I follow you now. However, it will
> > not solve the problem of dynamic max thread num. Changes to the max
> number of threads will require recompilation.
> >
> > > Konstantin
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC v2 1/2] rcu: add RCU library supporting QSBR mechanism
2019-01-24 17:15 ` Honnappa Nagarahalli
@ 2019-01-24 18:05 ` Ananyev, Konstantin
2019-02-22 7:07 ` Honnappa Nagarahalli
0 siblings, 1 reply; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-01-24 18:05 UTC (permalink / raw)
To: Honnappa Nagarahalli, dev, stephen, paulmck
Cc: Gavin Hu (Arm Technology China), Dharmik Thakkar, nd, nd
> <snip>
>
> > > > > > > +/**
> > > > > > > + * RTE thread Quiescent State structure.
> > > > > > > + */
> > > > > > > +struct rte_rcu_qsbr {
> > > > > > > + uint64_t reg_thread_id[RTE_QSBR_BIT_MAP_ELEMS] __rte_cache_aligned;
> > > > > > > + /**< Registered reader thread IDs - reader threads reporting
> > > > > > > + * on this QS variable represented in a bit map.
> > > > > > > + */
> > > > > > > +
> > > > > > > + uint64_t token __rte_cache_aligned;
> > > > > > > + /**< Counter to allow for multiple simultaneous QS queries
> > > > > > > +*/
> > > > > > > +
> > > > > > > + struct rte_rcu_qsbr_cnt w[RTE_RCU_MAX_THREADS] __rte_cache_aligned;
> > > > > > > + /**< QS counter for each reader thread, counts up to
> > > > > > > + * current value of token.
> > > > > >
> > > > > > As I understand you decided to stick with neutral thread_id and
> > > > > > let the user define what exactly thread_id is (lcore, system thread
> > > > > > id, something
> > > > else)?
> > > > > Yes, that is correct. I will reply to the other thread to continue the
> > discussion.
> > > > >
> > > > > > If so, can you probably get rid of RTE_RCU_MAX_THREADS limitation?
> > > > > I am not seeing this as a limitation. The user can change this if
> > > > > required. May
> > > > be I should change it as follows:
> > > > > #ifndef RTE_RCU_MAX_THREADS
> > > > > #define RTE_RCU_MAX_THREADS 128
> > > > > #endif
> > > >
> > > > Yep, that's better, though it would still require user to rebuild
> > > > the code if he would like to increase total number of threads supported.
> > > Agree
> > >
> > > > Though it seems relatively simple to extend the current code to support
> > > > dynamic max thread num here (2 variable arrays plus shift value plus
> > mask).
> > > Agree, supporting dynamic 'max thread num' is simple. But this means
> > > memory needs to be allocated to the arrays. The API
> > > 'rte_rcu_qsbr_init' has to take max thread num as the parameter. We also
> > have to introduce another API to free this memory. This will become very
> > similar to alloc/free APIs I had in the v1.
> > > I hope I am following you well, please correct me if not.
> >
> > I think we can still leave alloc/free tasks to the user.
> > We probably just need an extra function rte_rcu_qsbr_size(uint32_t
> > max_threads) to help the user calculate the required size.
> > rte_rcu_qsbr_init() might take as an additional parameter 'size' to make
> > checks.
> The size is returned by an API provided by the library. Why does it need to be validated again? If 'size' is required for rte_rcu_qsbr_init, it
> could calculate it again.
Just as an extra safety check.
I don't have a strong opinion here - if you think it is overkill, let's drop it.
>
> > Thought about something like that:
> >
> > size_t sz = rte_rcu_qsbr_size(max_threads); struct rte_rcu_qsbr *qsbr =
> > alloc_aligned(CACHE_LINE, sz); rte_rcu_qsbr_init(qsbr, max_threads, sz); ...
> >
> Do you see any advantage for allowing the user to allocate the memory?
So the user can choose where to allocate the memory (eal malloc, normal malloc, stack, something else).
Again, the user might decide to make rcu part of some complex data structure -
in that case he would probably like to allocate one big chunk of memory at once and then
provide part of it for rcu.
Or some other usage scenario that I can't predict.
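The "one big chunk" scenario could look roughly like this. Everything here is illustrative: `qsbr_bytes` stands in for whatever the proposed rte_rcu_qsbr_size() would return, and the application struct is made up:

```c
#include <stdint.h>
#include <stdlib.h>

#define CACHE_LINE 64

/* Hypothetical application context; the RCU state is carved out of
 * the same allocation, starting at the next cache-line boundary. */
struct app_ctx {
	uint64_t route_table[32];	/* application data (illustrative) */
};

static void *alloc_ctx_with_qsbr(size_t qsbr_bytes, void **qsbr_out)
{
	/* round the application part up so the RCU region is aligned */
	size_t off = (sizeof(struct app_ctx) + CACHE_LINE - 1) &
			~(size_t)(CACHE_LINE - 1);
	/* aligned_alloc requires the total to be a multiple of alignment */
	size_t total = (off + qsbr_bytes + CACHE_LINE - 1) &
			~(size_t)(CACHE_LINE - 1);
	void *p = aligned_alloc(CACHE_LINE, total);

	if (p != NULL)
		*qsbr_out = (unsigned char *)p + off;
	return p;
}
```

The application would then pass the carved-out pointer to the init call, and free the whole chunk once in shutdown, which is exactly the flexibility argued for above.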
> This approach requires the user to call 3 APIs (including memory allocation). These 3 can be abstracted in a rte_rcu_qsbr_alloc API, user has
> to call just 1 API.
>
> > Konstantin
> >
> > >
> > > >
> > > > >
> > > > > > I.E. struct rte_rcu_qsbr_cnt w[] and allow user at init time to
> > > > > > define max number of threads allowed.
> > > > > > Or something like:
> > > > > > #define RTE_RCU_QSBR_DEF(name, max_thread) struct name { \
> > > > > > uint64_t reg_thread_id[ALIGN_CEIL(max_thread, 64) >> 6]; \
> > > > > > ...
> > > > > > struct rte_rcu_qsbr_cnt w[max_thread]; \ }
> > > > > I am trying to understand this. I am not following why 'name' is
> > > > > required? Would the user call 'RTE_RCU_QSBR_DEF' in the
> > > > > application
> > > > header file?
> > > >
> > > > My thought here was to allow user to define his own structures,
> > > > depending on the number of max threads he needs/wants:
> > > > RTE_RCU_QSBR_DEF(rte_rcu_qsbr_128, 128);
> > > > RTE_RCU_QSBR_DEF(rte_rcu_qsbr_64, 64); ...
> > > Thank you for the clarification, I follow you now. However, it will
> > > not solve the problem of dynamic max thread num. Changes to the max
> > number of threads will require recompilation.
> > >
> > > > Konstantin
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [RFC v3 0/5] rcu: add RCU library supporting QSBR mechanism
2018-12-22 2:14 ` [dpdk-dev] [RFC v2 0/2] rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2018-12-22 2:14 ` [dpdk-dev] [RFC v2 1/2] " Honnappa Nagarahalli
2018-12-22 2:14 ` [dpdk-dev] [RFC v2 2/2] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-02-22 7:04 ` Honnappa Nagarahalli
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 1/5] " Honnappa Nagarahalli
` (4 more replies)
2 siblings, 5 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-02-22 7:04 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, dev, honnappa.nagarahalli
Cc: gavin.hu, dharmik.thakkar, malvika.gupta, nd
Lock-less data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
(for ex: real-time applications).
In the following paragraphs, the term 'memory' refers to memory allocated
by typical APIs like malloc or anything that is representative of
memory, for ex: an index of a free element array.
Since these data structures are lock-less, the writers and readers
access the data structures simultaneously. Hence, while removing
an element from a data structure, the writers cannot return the memory
to the allocator without knowing that the readers are no longer
referencing that element/memory. Hence, the operation of removing
an element has to be separated into 2 steps:
Delete: in this step, the writer removes the reference to the element from
the data structure but does not return the associated memory to the
allocator. This will ensure that new readers will not get a reference to
the removed element. Removing the reference is an atomic operation.
Free(Reclaim): in this step, the writer returns the memory to the
memory allocator, only after knowing that all the readers have stopped
referencing the deleted element.
This library helps the writer determine when it is safe to free the
memory.
This library makes use of thread Quiescent State (QS). QS can be
defined as 'any point in the thread execution where the thread does
not hold a reference to shared memory'. It is up to the application to
determine its quiescent state. Let us consider the following diagram:
Time -------------------------------------------------->
| |
RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
| |
RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
| |
RT3 $++++****D1****+++***|D2***|++++++**D2*****++++$
| |
|<--->|
Del | Free
|
Cannot free memory
during this period
(Grace Period)
RTx - Reader thread
< and > - Start and end of while(1) loop
***Dx*** - Reader thread is accessing the shared data structure Dx.
i.e. critical section.
+++ - Reader thread is not accessing any shared data structure.
i.e. non critical section or quiescent state.
Del - Point in time when the reference to the entry is removed using
atomic operation.
Free - Point in time when the writer can free the entry.
Grace Period - Time duration between Del and Free, during which memory cannot
be freed.
As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
accessing D2, if the writer has to remove an element from D2, the
writer cannot free the memory associated with that element immediately.
The writer can return the memory to the allocator only after the reader
stops referencing D2. In other words, reader thread RT1 has to enter
a quiescent state.
Similarly, since thread RT3 is also accessing D2, the writer has to wait
till RT3 enters a quiescent state as well.
However, the writer does not need to wait for RT2 to enter quiescent state.
Thread RT2 was not accessing D2 when the delete operation happened.
So, RT2 will not get a reference to the deleted entry.
It can be noted that the critical sections for D2 and D3 are quiescent states
for D1. i.e. for a given data structure Dx, any point in the thread execution
that does not reference Dx is a quiescent state.
Since memory is not freed immediately, there might be a need for
provisioning of additional memory, depending on the application requirements.
It is important to make sure that this library keeps the overhead of
identifying the end of the grace period, and the subsequent freeing of memory,
to a minimum. One has to understand how the grace period and critical section
affect this overhead.
The writer has to poll the readers to identify the end of the grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during the grace period. Longer grace periods
exacerbate these conditions.
The length of the critical sections and the number of reader threads
are proportional to the duration of the grace period. Keeping the critical
sections smaller will keep the grace period smaller. However, keeping the
critical sections smaller requires additional CPU cycles (due to additional
reporting) in the readers.
Hence, we need the combined characteristics of a small grace period and
large critical sections. This library addresses this by allowing the writer
to do other work without having to block till the readers report their
quiescent state.
For DPDK applications, the start and end of while(1) loop (where no shared
data structures are getting accessed) act as perfect quiescent states. This
will combine all the shared data structure accesses into a single, large
critical section which helps keep the overhead on the reader side to
a minimum.
DPDK supports pipeline model of packet processing and service cores.
In these use cases, a given data structure may not be used by all the
workers in the application. To provide the required flexibility, this
library has a concept of QS variable. The application can create one
QS variable per data structure to help it track the end of grace
period for each data structure.
The application can initialize a QS variable using the API rte_rcu_qsbr_init.
Each reader thread is assumed to have a unique thread ID. Currently, the
management of the thread ID (for ex: allocation/free) is left to the
application. The thread ID should be in the range of 0 to
RTE_RCU_MAX_THREADS - 1. The application could also use lcore_id as the
thread ID where applicable.
rte_rcu_qsbr_register_thread API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
However, the application must ensure that the reader thread is ready to
report the QS status before the writer checks the QS.
The application can trigger the reader threads to report their QS
status by calling the API rte_rcu_qsbr_start. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
rte_rcu_qsbr_start returns a token to each caller.
The application has to call rte_rcu_qsbr_check API with the token to get the
current QS status. Option to block till all the reader threads enter the
QS is provided. If this API indicates that all the reader threads have entered
the QS, the application can free the deleted entry.
The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock-free. Hence, they
can be called concurrently from multiple writers even while running
as worker threads.
The separation of triggering the reporting from querying the status provides
the writer threads flexibility to do useful work instead of blocking for the
reader threads to enter the QS. This reduces the memory accesses due
to continuous polling for the status.
rte_rcu_qsbr_unregister_thread API will remove a reader thread from reporting
its QS. The rte_rcu_qsbr_check API will not wait for this reader thread to
report the QS status anymore.
The reader threads should call the rte_rcu_qsbr_update API to indicate that
they have entered a quiescent state. This API checks if a writer has triggered
a quiescent state query and updates the state accordingly.
v3:
1) Library changes
a) Rebased to latest master
b) Added new API rte_rcu_qsbr_get_memsize
c) Add support for memory allocation for QSBR variable
d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
2) Testcase changes
a) Separated stress tests into a performance test case file
b) Added performance statistics
v2:
1) Cover letter changes
a) Explain the parameters that affect the overhead of using RCU
and their effect
b) Explain how this library addresses these effects to keep
the overhead to minimum
2) Library changes
a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
b) Simplify the code/remove APIs to keep this library inline with
other synchronisation mechanisms like locks (Konstantin)
c) Change the design to support more than 64 threads (Konstantin)
d) Fixed version map to remove static inline functions
3) Testcase changes
a) Add boundary and additional functional test cases
b) Add stress test cases (Paul E. McKenney)
Next Steps:
1) Update the cover letter to indicate the addition of rte_rcu_qsbr_get_memsize
2) rte_rcu_qsbr_register_thread/rte_rcu_qsbr_unregister_thread can be
optimized to avoid accessing the common bitmap array. This is required
as these are data plane APIs. Plan is to introduce
rte_rcu_qsbr_thread_online/rte_rcu_qsbr_thread_offline which will not
touch the common bitmap array.
3) Add debug logs to enable debugging
4) Documentation
5) Convert to patch
Dharmik Thakkar (1):
test/rcu_qsbr: add API and functional tests
Honnappa Nagarahalli (4):
rcu: add RCU library supporting QSBR mechanism
lib/rcu: add dynamic memory allocation capability
test/rcu_qsbr: modify test cases for dynamic memory allocation
lib/rcu: fix the size of the registered thread ID array
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 89 +++
lib/librte_rcu/rte_rcu_qsbr.h | 353 ++++++++++++
lib/librte_rcu/rte_rcu_version.map | 8 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
test/test/Makefile | 2 +
test/test/autotest_data.py | 12 +
test/test/meson.build | 7 +-
test/test/test_rcu_qsbr.c | 858 +++++++++++++++++++++++++++++
test/test/test_rcu_qsbr_perf.c | 275 +++++++++
14 files changed, 1641 insertions(+), 2 deletions(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
create mode 100644 test/test/test_rcu_qsbr.c
create mode 100644 test/test/test_rcu_qsbr_perf.c
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [RFC v3 1/5] rcu: add RCU library supporting QSBR mechanism
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 0/5] rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
@ 2019-02-22 7:04 ` Honnappa Nagarahalli
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 2/5] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
` (3 subsequent siblings)
4 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-02-22 7:04 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, dev, honnappa.nagarahalli
Cc: gavin.hu, dharmik.thakkar, malvika.gupta, nd
Add RCU library supporting quiescent state based memory reclamation method.
This library helps identify the quiescent state of the reader threads so
that the writers can free the memory associated with the lock less data
structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
---
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 ++
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 63 ++++++
lib/librte_rcu/rte_rcu_qsbr.h | 330 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 8 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
9 files changed, 439 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
diff --git a/config/common_base b/config/common_base
index 7c6da5165..af550e96a 100644
--- a/config/common_base
+++ b/config/common_base
@@ -805,6 +805,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_rcu
+#
+CONFIG_RTE_LIBRTE_RCU=y
+CONFIG_RTE_LIBRTE_RCU_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index d6239d27c..15b67e210 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
+DEPDIRS-librte_rcu := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
new file mode 100644
index 000000000..6aa677bd1
--- /dev/null
+++ b/lib/librte_rcu/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_rcu.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_rcu_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
new file mode 100644
index 000000000..c009ae4b7
--- /dev/null
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..3c2577ee2
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Initialize a quiescent state variable */
+void __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v)
+{
+ memset(v, 0, sizeof(struct rte_rcu_qsbr));
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+void __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t;
+
+ RTE_ASSERT(v != NULL && f != NULL);
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " Registered thread ID mask = 0x");
+ for (i = 0; i < RTE_QSBR_BIT_MAP_ELEMS; i++)
+ fprintf(f, "%lx", __atomic_load_n(&v->reg_thread_id[i],
+ __ATOMIC_ACQUIRE));
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %lu\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < RTE_QSBR_BIT_MAP_ELEMS; i++) {
+ bmap = __atomic_load_n(&v->reg_thread_id[i],
+ __ATOMIC_ACQUIRE);
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %lu\n",
+ (i << RTE_QSBR_THR_INDEX_SHIFT) + t,
+ __atomic_load_n(&v->w[(i << RTE_QSBR_THR_INDEX_SHIFT) + t].cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..53e00488b
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,330 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to shared memory.
+ * A critical section for a data structure can be a quiescent state for
+ * another data structure.
+ *
+ * This library provides the ability to identify quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+
+/**< Maximum number of reader threads supported. */
+#define RTE_RCU_MAX_THREADS 128
+
+#if !RTE_IS_POWER_OF_2(RTE_RCU_MAX_THREADS)
+#error RTE_RCU_MAX_THREADS must be a power of 2
+#endif
+
+/**< Number of array elements required for the bit-map */
+#define RTE_QSBR_BIT_MAP_ELEMS (RTE_RCU_MAX_THREADS/(sizeof(uint64_t) * 8))
+
+/* Thread IDs are stored as a bitmap of 64b element array. Given thread id
+ * needs to be converted to index into the array and the id within
+ * the array element.
+ */
+#define RTE_QSBR_THR_INDEX_SHIFT 6
+#define RTE_QSBR_THR_ID_MASK 0x3f
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt; /**< Quiescent state counter. */
+} __rte_cache_aligned;
+
+/**
+ * RTE thread Quiescent State structure.
+ */
+struct rte_rcu_qsbr {
+ uint64_t reg_thread_id[RTE_QSBR_BIT_MAP_ELEMS] __rte_cache_aligned;
+ /**< Registered reader thread IDs - reader threads reporting
+ * on this QS variable represented in a bit map.
+ */
+
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple simultaneous QS queries */
+
+ struct rte_rcu_qsbr_cnt w[RTE_RCU_MAX_THREADS] __rte_cache_aligned;
+ /**< QS counter for each reader thread, counts upto
+ * current value of token.
+ */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ *
+ */
+void __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a reader thread, to the list of threads reporting their quiescent
+ * state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_update. This can be called
+ * during initialization or as part of the packet processing loop.
+ * Any ongoing QS queries may wait for the status from this registered
+ * thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_register_thread(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id;
+
+ RTE_ASSERT(v != NULL && thread_id < RTE_RCU_MAX_THREADS);
+
+ id = thread_id & RTE_QSBR_THR_ID_MASK;
+ i = thread_id >> RTE_QSBR_THR_INDEX_SHIFT;
+
+ /* Worker thread has to count the quiescent states
+ * only from the current value of token.
+ * __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->w[thread_id].cnt,
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE),
+ __ATOMIC_RELAXED);
+
+ /* Release the store to initial QS count so that readers
+ * can use it immediately after this function returns.
+ */
+ __atomic_fetch_or(&v->reg_thread_id[i], 1UL << id, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread, from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing QS queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_unregister_thread(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id;
+
+ RTE_ASSERT(v != NULL && thread_id < RTE_RCU_MAX_THREADS);
+
+ id = thread_id & RTE_QSBR_THR_ID_MASK;
+ i = thread_id >> RTE_QSBR_THR_INDEX_SHIFT;
+
+ /* Make sure the removal of the thread from the list of
+ * reporting threads is visible before the thread
+ * does anything else.
+ */
+ __atomic_fetch_and(&v->reg_thread_id[i],
+ ~(1UL << id), __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Trigger the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @param n
+ * Expected number of times the quiescent state is entered
+ * @param t
+ * - If successful, this is the token for this call of the API.
+ * This should be passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v, unsigned int n, uint64_t *t)
+{
+ RTE_ASSERT(v != NULL && t != NULL);
+
+ /* This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ *t = __atomic_add_fetch(&v->token, n, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_update(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < RTE_RCU_MAX_THREADS);
+
+ /* Load the token before the reader thread loads any other
+ * (lock-free) data structure. This ensures that updates
+ * to the data structures are visible if the update
+ * to token is visible.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Relaxed load/store on the counter is enough as we are
+ * reporting an already completed quiescent state.
+ * __atomic_load_n(cnt, __ATOMIC_RELAXED) is used as 'cnt' (64b)
+ * is accessed atomically.
+ * Copy the current token value. This will end grace period
+ * of multiple concurrent writers.
+ */
+ if (__atomic_load_n(&v->w[thread_id].cnt, __ATOMIC_RELAXED) != t)
+ __atomic_store_n(&v->w[thread_id].cnt, t, __ATOMIC_RELAXED);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * 'n' number of times. 'n' is provided in rte_rcu_qsbr_start API.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state 'n' number of times
+ * @return
+ * - 0 if all reader threads have NOT passed through specified number
+ * of quiescent states.
+ * - 1 if all reader threads have passed through specified number
+ * of quiescent states.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+
+ RTE_ASSERT(v != NULL);
+
+ i = 0;
+ do {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(&v->reg_thread_id[i], __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THR_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+
+/* printf ("Status check: token = %lu, wait = %d, Bit Map = 0x%x, Thread ID = %d\n", t, wait, bmap, id+j); */
+ /* __atomic_load_n(cnt, __ATOMIC_RELAXED)
+ * is used to ensure 'cnt' (64b) is accessed
+ * atomically.
+ */
+ if (unlikely(__atomic_load_n(&v->w[id + j].cnt,
+ __ATOMIC_RELAXED) < t)) {
+
+/* printf ("Status not in QS: token = %lu, Wait = %d, Thread QS cnt = %lu, Thread ID = %d\n", t, wait, RTE_QSBR_CNT_ARRAY_ELM(v, id + j)->cnt, id+j); */
+ /* This thread is not in QS */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(
+ &v->reg_thread_id[i],
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+
+ i++;
+ } while (i < RTE_QSBR_BIT_MAP_ELEMS);
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ */
+void __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..0df2071be
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,8 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index e8b40f546..a5ffc5dc6 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -21,7 +21,7 @@ libraries = [ 'compat', # just a header, used for versioning
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'vhost', 'rcu',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 8a4f0f4e5..a9944d2f5 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -96,6 +96,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [RFC v3 2/5] test/rcu_qsbr: add API and functional tests
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 0/5] rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 1/5] " Honnappa Nagarahalli
@ 2019-02-22 7:04 ` Honnappa Nagarahalli
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 3/5] lib/rcu: add dynamic memory allocation capability Honnappa Nagarahalli
` (2 subsequent siblings)
4 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-02-22 7:04 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, dev, honnappa.nagarahalli
Cc: gavin.hu, dharmik.thakkar, malvika.gupta, nd
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
---
test/test/Makefile | 2 +
test/test/autotest_data.py | 12 +
test/test/meson.build | 7 +-
test/test/test_rcu_qsbr.c | 831 +++++++++++++++++++++++++++++++++
test/test/test_rcu_qsbr_perf.c | 272 +++++++++++
5 files changed, 1123 insertions(+), 1 deletion(-)
create mode 100644 test/test/test_rcu_qsbr.c
create mode 100644 test/test/test_rcu_qsbr_perf.c
diff --git a/test/test/Makefile b/test/test/Makefile
index 89949c2bb..6b6dfefc2 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -213,6 +213,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/test/test/autotest_data.py b/test/test/autotest_data.py
index 5f87bb94d..c26ec889c 100644
--- a/test/test/autotest_data.py
+++ b/test/test/autotest_data.py
@@ -694,6 +694,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/test/test/meson.build b/test/test/meson.build
index 05e5ddeb0..4df8e337b 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -107,6 +107,8 @@ test_sources = files('commands.c',
'test_timer.c',
'test_timer_perf.c',
'test_timer_racecond.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -132,7 +134,8 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
- 'timer'
+ 'timer',
+ 'rcu'
]
# All test cases in fast_parallel_test_names list are parallel
@@ -171,6 +174,7 @@ fast_parallel_test_names = [
'ring_autotest',
'ring_pmd_autotest',
'rwlock_autotest',
+ 'rcu_qsbr_autotest',
'sched_autotest',
'spinlock_autotest',
'string_autotest',
@@ -236,6 +240,7 @@ perf_test_names = [
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'pmd_perf_autotest',
]
diff --git a/test/test/test_rcu_qsbr.c b/test/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..ae60614af
--- /dev/null
+++ b/test/test/test_rcu_qsbr.c
@@ -0,0 +1,831 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+#define RTE_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[RTE_RCU_MAX_LCORE];
+uint8_t num_cores;
+uint16_t num_1qs = 1; /* Number of quiescent states = 1 */
+uint16_t num_2qs = 2; /* Number of quiescent states = 2 */
+uint16_t num_3qs = 3; /* Number of quiescent states = 3 */
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+uint32_t *hash_data[RTE_RCU_MAX_LCORE][TOTAL_ENTRY];
+uint8_t writer_done;
+
+struct rte_rcu_qsbr t[RTE_RCU_MAX_LCORE];
+struct rte_hash *h[RTE_RCU_MAX_LCORE];
+char hash_name[RTE_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > RTE_RCU_MAX_LCORE) {
+ printf("Number of cores exceed %d\n", RTE_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_register_thread: Add a reader thread, to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_register_thread(void)
+{
+ printf("\nTest rte_rcu_qsbr_register_thread()\n");
+
+ rte_rcu_qsbr_init(&t[0]);
+
+ rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[0]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_unregister_thread: Remove a reader thread, from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_unregister_thread(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, RTE_RCU_MAX_THREADS, 1};
+
+ printf("\nTest rte_rcu_qsbr_unregister_thread()\n");
+
+ rte_rcu_qsbr_init(&t[0]);
+
+ rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[0]);
+
+ /* Find first disabled core */
+ for (i = 0; i < RTE_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ rte_rcu_qsbr_unregister_thread(&t[0], i);
+
+ /* Test with enabled lcore */
+ rte_rcu_qsbr_unregister_thread(&t[0], enabled_core_ids[0]);
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to RTE_RCU_MAX_THREADS
+ * 3 - thread_id = RTE_RCU_MAX_THREADS - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(&t[0]);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_register_thread(&t[0],
+ (j == 2) ? (RTE_RCU_MAX_THREADS - 1) : i);
+
+ rte_rcu_qsbr_start(&t[0], 1, &token);
+ RCU_QSBR_RETURN_IF_ERROR((token != 1), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (RTE_RCU_MAX_THREADS - 10))
+ continue;
+ rte_rcu_qsbr_update(&t[0],
+ (j == 2) ? (RTE_RCU_MAX_THREADS - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(&t[0], token, false);
+ RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_update(&t[0], RTE_RCU_MAX_THREADS - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(&t[0], token, false);
+ RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_unregister_thread(&t[0],
+ (j == 2) ? (RTE_RCU_MAX_THREADS - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(&t[0], token, true);
+ RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Trigger the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(&t[0]);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_start(&t[0], 1, &token);
+ RCU_QSBR_RETURN_IF_ERROR((token != 1), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ temp = &t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_update(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_update(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_unregister_thread(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_update(temp, enabled_core_ids[3]);
+ return 0;
+}
+/*
+ * rte_rcu_qsbr_check: Checks if all the worker threads have entered the queis-
+ * cent state 'n' number of times. 'n' is provided in rte_rcu_qsbr_start API.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(&t[0]);
+
+ rte_rcu_qsbr_start(&t[0], 1, &token);
+ RCU_QSBR_RETURN_IF_ERROR((token != 1), "QSBR Start");
+
+
+ ret = rte_rcu_qsbr_check(&t[0], 0, false);
+ RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(&t[0], token, true);
+ RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(&t[0], token, false);
+ RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ rte_rcu_qsbr_start(&t[0], 1, &token);
+ RCU_QSBR_RETURN_IF_ERROR((token != 2), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(&t[0], token, false);
+ RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_unregister_thread(&t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(&t[0], token, true);
+ RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(&t[0]);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_start(&t[0], 1, &token);
+ RCU_QSBR_RETURN_IF_ERROR((token != 1), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(&t[0], token, true);
+ RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ rte_rcu_qsbr_init(&t[0]);
+ rte_rcu_qsbr_init(&t[1]);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, &t[0]);
+
+ rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_register_thread(&t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, &t[0]);
+ rte_rcu_qsbr_dump(stdout, &t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = &t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_register_thread(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_update(temp, lcore_id);
+ rte_rcu_qsbr_unregister_thread(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = &t[(writer_type/2) % RTE_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % RTE_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /*
+ * Start the quiescent state query process
+ * Note: Expected Quiescent states kept greater than 1 for test only
+ */
+ rte_rcu_qsbr_start(temp, writer_type + 1, &token);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % RTE_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % RTE_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % RTE_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % RTE_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % RTE_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token;
+ int i;
+ int32_t pos;
+ writer_done = 0;
+
+ printf("\nTest: 1 writer, 1 QSBR variable, 1 QSBR Query, "
+ "Blocking QSBR Check\n");
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(&t[0]);
+
+ /* Register worker threads on 4 cores */
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[i]);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ rte_rcu_qsbr_start(&t[0], num_1qs, &token);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(&t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token;
+ int i, ret;
+ int32_t pos;
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, 1 QSBR Query, "
+ "Non-Blocking QSBR check\n");
+
+ rte_rcu_qsbr_init(&t[0]);
+ /* Register worker threads on 4 cores */
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[i]);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ rte_rcu_qsbr_start(&t[0], num_1qs, &token);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(&t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(&t[0]);
+
+ /* Register worker threads on 4 cores */
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[i]);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ rte_rcu_qsbr_start(&t[0], num_1qs, &token[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /*
+ * Start the quiescent state query process
+ * Note: num_2qs kept greater than 1 for test only
+ */
+ rte_rcu_qsbr_start(&t[0], num_2qs, &token[1]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /*
+ * Start the quiescent state query process
+ * Note: num_3qs kept greater than 1 for test only
+ */
+ rte_rcu_qsbr_start(&t[0], num_3qs, &token[2]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(&t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(&t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(&t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Multi writer, Multiple QS variable, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ writer_done = 0;
+ uint8_t test_cores;
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+ printf("Test: %d writers, %d QSBR variable, Simultaneous QSBR queries\n"
+ , test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(&t[i]);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Register worker threads on 2 cores */
+ for (i = 0; i < test_cores / 2; i += 2) {
+ rte_rcu_qsbr_register_thread(&t[i / 2], enabled_core_ids[i]);
+ rte_rcu_qsbr_register_thread(&t[i / 2],
+ enabled_core_ids[i + 1]);
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < RTE_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_register_thread() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_unregister_thread() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ /* Functional test cases */
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/test/test/test_rcu_qsbr_perf.c b/test/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..51af4961d
--- /dev/null
+++ b/test/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,272 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+uint8_t writer_done;
+
+static struct rte_rcu_qsbr t;
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+extern int test_rcu_qsbr_get_memsize(void);
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceed %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t lcore_id = rte_lcore_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_update(&t, lcore_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_update(&t, lcore_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Unregister before exiting to avoid writer from waiting */
+ rte_rcu_qsbr_unregister_thread(&t, lcore_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait) rte_rcu_qsbr_start(&t, 1, &token);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(&t, token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple Readers, Single QS var, Non-Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: 6 Readers/1 Writer('wait' in qsbr_check == true)\n");
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(&t);
+
+ /* Register worker threads on 6 cores */
+ for (i = 0; i < 6; i++)
+ rte_rcu_qsbr_register_thread(&t, enabled_core_ids[i]);
+
+ /* Reader threads are launched */
+ for (i = 0; i < 6; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Single writer, Multiple readers, Single QS variable
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: 8 Readers\n");
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(&t);
+
+ /* Register worker threads on 8 cores */
+ for (i = 0; i < 8; i++)
+ rte_rcu_qsbr_register_thread(&t, enabled_core_ids[i]);
+
+ /* Reader threads are launched */
+ for (i = 0; i < 8; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writer, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf test: 8 Writers ('wait' in qsbr_check == false)\n");
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(&t);
+
+ /* Writer threads are launched */
+ for (i = 0; i < 8; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [RFC v3 3/5] lib/rcu: add dynamic memory allocation capability
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 0/5] rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 1/5] " Honnappa Nagarahalli
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 2/5] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-02-22 7:04 ` Honnappa Nagarahalli
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 4/5] test/rcu_qsbr: modify test cases for dynamic memory allocation Honnappa Nagarahalli
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 5/5] lib/rcu: fix the size of register thread ID array size Honnappa Nagarahalli
4 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-02-22 7:04 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, dev, honnappa.nagarahalli
Cc: gavin.hu, dharmik.thakkar, malvika.gupta, nd
rte_rcu_qsbr_get_memsize API is introduced. This will allow the user
to controll the amount of memory used based on the maximum
number of threads present in the application.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
lib/librte_rcu/rte_rcu_qsbr.c | 51 ++++++++++++---
lib/librte_rcu/rte_rcu_qsbr.h | 118 +++++++++++++++++++++-------------
2 files changed, 118 insertions(+), 51 deletions(-)
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
index 3c2577ee2..02464fdba 100644
--- a/lib/librte_rcu/rte_rcu_qsbr.c
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -21,11 +21,39 @@
#include "rte_rcu_qsbr.h"
+/* Get the memory size of QSBR variable */
+unsigned int __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ int n;
+ ssize_t sz;
+
+ RTE_ASSERT(max_threads == 0);
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of the registered thread ID bitmap array */
+ n = RTE_ALIGN(max_threads, RTE_QSBR_THRID_ARRAY_ELM_SIZE);
+ sz += RTE_QSBR_THRID_ARRAY_SIZE(n);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+}
+
/* Initialize a quiescent state variable */
void __rte_experimental
-rte_rcu_qsbr_init(struct rte_rcu_qsbr *v)
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
{
- memset(v, 0, sizeof(struct rte_rcu_qsbr));
+ RTE_ASSERT(v == NULL);
+
+ memset(v, 0, rte_rcu_qsbr_get_memsize(max_threads));
+ v->m_threads = max_threads;
+ v->ma_threads = RTE_ALIGN(max_threads, RTE_QSBR_THRID_ARRAY_ELM_SIZE);
+
+ v->num_elems = v->ma_threads/RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->thrid_array_size = RTE_QSBR_THRID_ARRAY_SIZE(v->ma_threads);
}
/* Dump the details of a single quiescent state variable to a file. */
@@ -39,9 +67,15 @@ rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
fprintf(f, "\nQuiescent State Variable @%p\n", v);
+ fprintf(f, " QS variable memory size = %u\n",
+ rte_rcu_qsbr_get_memsize(v->m_threads));
+ fprintf(f, " Given # max threads = %u\n", v->m_threads);
+ fprintf(f, " Adjusted # max threads = %u\n", v->ma_threads);
+
fprintf(f, " Registered thread ID mask = 0x");
- for (i = 0; i < RTE_QSBR_BIT_MAP_ELEMS; i++)
- fprintf(f, "%lx", __atomic_load_n(&v->reg_thread_id[i],
+ for (i = 0; i < v->num_elems; i++)
+ fprintf(f, "%lx", __atomic_load_n(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
__ATOMIC_ACQUIRE));
fprintf(f, "\n");
@@ -49,14 +83,15 @@ rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
__atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
fprintf(f, "Quiescent State Counts for readers:\n");
- for (i = 0; i < RTE_QSBR_BIT_MAP_ELEMS; i++) {
- bmap = __atomic_load_n(&v->reg_thread_id[i],
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
__ATOMIC_ACQUIRE);
while (bmap) {
t = __builtin_ctzl(bmap);
fprintf(f, "thread ID = %d, count = %lu\n", t,
- __atomic_load_n(&v->w[i].cnt,
- __ATOMIC_RELAXED));
+ __atomic_load_n(
+ &RTE_QSBR_CNT_ARRAY_ELM(v, i)->cnt,
+ __ATOMIC_RELAXED));
bmap &= ~(1UL << t);
}
}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
index 53e00488b..21fa2c198 100644
--- a/lib/librte_rcu/rte_rcu_qsbr.h
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -29,46 +29,71 @@ extern "C" {
#include <rte_lcore.h>
#include <rte_debug.h>
-/**< Maximum number of reader threads supported. */
-#define RTE_RCU_MAX_THREADS 128
-
-#if !RTE_IS_POWER_OF_2(RTE_RCU_MAX_THREADS)
-#error RTE_RCU_MAX_THREADS must be a power of 2
-#endif
-
-/**< Number of array elements required for the bit-map */
-#define RTE_QSBR_BIT_MAP_ELEMS (RTE_RCU_MAX_THREADS/(sizeof(uint64_t) * 8))
-
-/* Thread IDs are stored as a bitmap of 64b element array. Given thread id
- * needs to be converted to index into the array and the id within
- * the array element.
+/* Registered thread IDs are stored as a bitmap of 64b element array.
+ * Given thread id needs to be converted to index into the array and
+ * the id within the array element.
+ */
+/* Thread ID array size
+ * @param ma_threads
+ * num of threads aligned to 64
*/
-#define RTE_QSBR_THR_INDEX_SHIFT 6
-#define RTE_QSBR_THR_ID_MASK 0x3f
+#define RTE_QSBR_THRID_ARRAY_SIZE(ma_threads) \
+ RTE_ALIGN((ma_threads) >> 3, RTE_CACHE_LINE_SIZE)
+#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *)(v + 1) + i)
+#define RTE_QSBR_THRID_INDEX_SHIFT 6
+#define RTE_QSBR_THRID_MASK 0x3f
/* Worker thread counter */
struct rte_rcu_qsbr_cnt {
uint64_t cnt; /**< Quiescent state counter. */
} __rte_cache_aligned;
+#define RTE_QSBR_CNT_ARRAY_ELM(v, i) ((struct rte_rcu_qsbr_cnt *) \
+ ((uint8_t *)(v + 1) + v->thrid_array_size) + i)
+
/**
* RTE thread Quiescent State structure.
+ * The following data, which is dependent on the maximum number of
+ * threads using this variable, is stored in memory immediately
+ * following this structure.
+ *
+ * 1) registered thread ID bitmap array
+ * This is a uint64_t array enough to hold 'ma_threads' number
+ * of thread IDs.
+ * 2) quiescent state counter array
+ * This is an array of 'struct rte_rcu_qsbr_cnt' with
+ * 'm_threads' number of elements.
*/
struct rte_rcu_qsbr {
- uint64_t reg_thread_id[RTE_QSBR_BIT_MAP_ELEMS] __rte_cache_aligned;
- /**< Registered reader thread IDs - reader threads reporting
- * on this QS variable represented in a bit map.
- */
-
uint64_t token __rte_cache_aligned;
/**< Counter to allow for multiple simultaneous QS queries */
- struct rte_rcu_qsbr_cnt w[RTE_RCU_MAX_THREADS] __rte_cache_aligned;
- /**< QS counter for each reader thread, counts upto
- * current value of token.
- */
+ uint32_t thrid_array_size __rte_cache_aligned;
+ /**< Registered thread ID bitmap array size in bytes */
+ uint32_t num_elems;
+ /**< Number of elements in the thread ID array */
+
+ uint32_t m_threads;
+ /**< Maximum number of threads this RCU variable will use */
+ uint32_t ma_threads;
+ /**< Maximum number of threads aligned up to 64 */
} __rte_cache_aligned;
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State (QS) variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting QS on this variable.
+ * @return
+ * Size of memory in bytes required for this QS variable.
+ */
+unsigned int __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
/**
* @warning
* @b EXPERIMENTAL: this API may change without prior notice
@@ -77,10 +102,12 @@ struct rte_rcu_qsbr {
*
* @param v
* QS variable
+ * @param max_threads
+ * Maximum number of threads reporting QS on this variable.
*
*/
void __rte_experimental
-rte_rcu_qsbr_init(struct rte_rcu_qsbr *v);
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
/**
* @warning
@@ -108,24 +135,25 @@ rte_rcu_qsbr_register_thread(struct rte_rcu_qsbr *v, unsigned int thread_id)
{
unsigned int i, id;
- RTE_ASSERT(v == NULL || thread_id >= RTE_RCU_MAX_THREADS);
+ RTE_ASSERT(v != NULL && thread_id < v->m_threads);
- id = thread_id & RTE_QSBR_THR_ID_MASK;
- i = thread_id >> RTE_QSBR_THR_INDEX_SHIFT;
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
/* Worker thread has to count the quiescent states
* only from the current value of token.
* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
* 'cnt' (64b) is accessed atomically.
*/
- __atomic_store_n(&v->w[thread_id].cnt,
+ __atomic_store_n(&RTE_QSBR_CNT_ARRAY_ELM(v, thread_id)->cnt,
__atomic_load_n(&v->token, __ATOMIC_ACQUIRE),
__ATOMIC_RELAXED);
/* Release the store to initial TQS count so that readers
* can use it immediately after this function returns.
*/
- __atomic_fetch_or(&v->reg_thread_id[i], 1UL << id, __ATOMIC_RELEASE);
+ __atomic_fetch_or(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ 1UL << id, __ATOMIC_RELEASE);
}
/**
@@ -151,16 +179,16 @@ rte_rcu_qsbr_unregister_thread(struct rte_rcu_qsbr *v, unsigned int thread_id)
{
unsigned int i, id;
- RTE_ASSERT(v == NULL || thread_id >= RTE_RCU_MAX_THREADS);
+ RTE_ASSERT(v != NULL && thread_id < v->m_threads);
- id = thread_id & RTE_QSBR_THR_ID_MASK;
- i = thread_id >> RTE_QSBR_THR_INDEX_SHIFT;
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
/* Make sure the removal of the thread from the list of
* reporting threads is visible before the thread
* does anything else.
*/
- __atomic_fetch_and(&v->reg_thread_id[i],
+ __atomic_fetch_and(RTE_QSBR_THRID_ARRAY_ELM(v, i),
~(1UL << id), __ATOMIC_RELEASE);
}
@@ -212,7 +240,7 @@ rte_rcu_qsbr_update(struct rte_rcu_qsbr *v, unsigned int thread_id)
{
uint64_t t;
- RTE_ASSERT(v == NULL || thread_id >= RTE_RCU_MAX_THREADS);
+ RTE_ASSERT(v != NULL && thread_id < v->m_threads);
/* Load the token before the reader thread loads any other
* (lock-free) data structure. This ensures that updates
@@ -228,8 +256,10 @@ rte_rcu_qsbr_update(struct rte_rcu_qsbr *v, unsigned int thread_id)
* Copy the current token value. This will end grace period
* of multiple concurrent writers.
*/
- if (__atomic_load_n(&v->w[thread_id].cnt, __ATOMIC_RELAXED) != t)
- __atomic_store_n(&v->w[thread_id].cnt, t, __ATOMIC_RELAXED);
+ if (__atomic_load_n(&RTE_QSBR_CNT_ARRAY_ELM(v, thread_id)->cnt,
+ __ATOMIC_RELAXED) != t)
+ __atomic_store_n(&RTE_QSBR_CNT_ARRAY_ELM(v, thread_id)->cnt,
+ t, __ATOMIC_RELAXED);
}
/**
@@ -268,18 +298,20 @@ rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
/* Load the current registered thread bit map before
* loading the reader thread quiescent state counters.
*/
- bmap = __atomic_load_n(&v->reg_thread_id[i], __ATOMIC_ACQUIRE);
- id = i << RTE_QSBR_THR_INDEX_SHIFT;
+ bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
while (bmap) {
j = __builtin_ctzl(bmap);
-/* printf ("Status check: token = %lu, wait = %d, Bit Map = 0x%x, Thread ID = %d\n", t, wait, bmap, id+j); */
+/* printf ("Status check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d\n", t, wait, bmap, id+j); */
/* __atomic_load_n(cnt, __ATOMIC_RELAXED)
* is used to ensure 'cnt' (64b) is accessed
* atomically.
*/
- if (unlikely(__atomic_load_n(&v->w[id + j].cnt,
+ if (unlikely(__atomic_load_n(
+ &RTE_QSBR_CNT_ARRAY_ELM(v, id + j)->cnt,
__ATOMIC_RELAXED) < t)) {
/* printf ("Status not in QS: token = %lu, Wait = %d, Thread QS cnt = %lu, Thread ID = %d\n", t, wait, RTE_QSBR_CNT_ARRAY_ELM(v, id + j)->cnt, id+j); */
@@ -292,7 +324,7 @@ rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
* Re-read the bitmap.
*/
bmap = __atomic_load_n(
- &v->reg_thread_id[i],
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
__ATOMIC_ACQUIRE);
continue;
@@ -302,7 +334,7 @@ rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
}
i++;
- } while (i < RTE_QSBR_BIT_MAP_ELEMS);
+ } while (i < v->num_elems);
return 1;
}
--
2.17.1
* [dpdk-dev] [RFC v3 4/5] test/rcu_qsbr: modify test cases for dynamic memory allocation
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 0/5] rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
` (2 preceding siblings ...)
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 3/5] lib/rcu: add dynamic memory allocation capability Honnappa Nagarahalli
@ 2019-02-22 7:04 ` Honnappa Nagarahalli
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 5/5] lib/rcu: fix the size of register thread ID array size Honnappa Nagarahalli
4 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-02-22 7:04 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, dev, honnappa.nagarahalli
Cc: gavin.hu, dharmik.thakkar, malvika.gupta, nd
Modify the test cases to allocate the QSBR variable dynamically.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
test/test/test_rcu_qsbr.c | 225 ++++++++++++++++++---------------
test/test/test_rcu_qsbr_perf.c | 33 ++---
2 files changed, 144 insertions(+), 114 deletions(-)
diff --git a/test/test/test_rcu_qsbr.c b/test/test/test_rcu_qsbr.c
index ae60614af..09744279f 100644
--- a/test/test/test_rcu_qsbr.c
+++ b/test/test/test_rcu_qsbr.c
@@ -15,7 +15,7 @@
#include "test.h"
/* Check condition and return an error if true. */
-#define RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
if (cond) { \
printf("ERROR file %s, line %d: " str "\n", __FILE__, \
__LINE__, ##__VA_ARGS__); \
@@ -23,8 +23,8 @@
} \
} while (0)
-#define RTE_RCU_MAX_LCORE 128
-uint16_t enabled_core_ids[RTE_RCU_MAX_LCORE];
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
uint8_t num_cores;
uint16_t num_1qs = 1; /* Number of quiescent states = 1 */
uint16_t num_2qs = 2; /* Number of quiescent states = 2 */
@@ -33,20 +33,20 @@ uint16_t num_3qs = 3; /* Number of quiescent states = 3 */
static uint32_t *keys;
#define TOTAL_ENTRY (1024 * 8)
#define COUNTER_VALUE 4096
-uint32_t *hash_data[RTE_RCU_MAX_LCORE][TOTAL_ENTRY];
+uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
uint8_t writer_done;
-struct rte_rcu_qsbr t[RTE_RCU_MAX_LCORE];
-struct rte_hash *h[RTE_RCU_MAX_LCORE];
-char hash_name[RTE_RCU_MAX_LCORE][8];
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
static inline int
get_enabled_cores_mask(void)
{
uint16_t core_id;
uint32_t max_cores = rte_lcore_count();
- if (max_cores > RTE_RCU_MAX_LCORE) {
- printf("Number of cores exceed %d\n", RTE_RCU_MAX_LCORE);
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
return -1;
}
@@ -60,6 +60,30 @@ get_enabled_cores_mask(void)
return 0;
}
+/*
+ * rte_rcu_qsbr_get_memsize: Get the memory size required for a QS
+ * variable and allocate the QS variables used by the tests.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
+ printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
/*
* rte_rcu_qsbr_register_thread: Add a reader thread, to the list of threads
* reporting their quiescent state on a QS variable.
@@ -69,9 +93,9 @@ test_rcu_qsbr_register_thread(void)
{
printf("\nTest rte_rcu_qsbr_register_thread()\n");
- rte_rcu_qsbr_init(&t[0]);
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
- rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_register_thread(t[0], enabled_core_ids[0]);
return 0;
}
@@ -84,68 +108,68 @@ test_rcu_qsbr_unregister_thread(void)
{
int i, j, ret;
uint64_t token;
- uint8_t num_threads[3] = {1, RTE_RCU_MAX_THREADS, 1};
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
printf("\nTest rte_rcu_qsbr_unregister_thread()\n");
- rte_rcu_qsbr_init(&t[0]);
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
- rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_register_thread(t[0], enabled_core_ids[0]);
/* Find first disabled core */
- for (i = 0; i < RTE_RCU_MAX_LCORE; i++) {
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
if (enabled_core_ids[i] == 0)
break;
}
/* Test with disabled lcore */
- rte_rcu_qsbr_unregister_thread(&t[0], i);
+ rte_rcu_qsbr_unregister_thread(t[0], i);
/* Test with enabled lcore */
- rte_rcu_qsbr_unregister_thread(&t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_unregister_thread(t[0], enabled_core_ids[0]);
/*
* Test with different thread_ids:
* 1 - thread_id = 0
- * 2 - All possible thread_ids, from 0 to RTE_RCU_MAX_THREADS
- * 3 - thread_id = RTE_RCU_MAX_THREADS - 1
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
*/
for (j = 0; j < 3; j++) {
- rte_rcu_qsbr_init(&t[0]);
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
for (i = 0; i < num_threads[j]; i++)
- rte_rcu_qsbr_register_thread(&t[0],
- (j == 2) ? (RTE_RCU_MAX_THREADS - 1) : i);
+ rte_rcu_qsbr_register_thread(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
- rte_rcu_qsbr_start(&t[0], 1, &token);
- RCU_QSBR_RETURN_IF_ERROR((token != 1), "QSBR Start");
+ rte_rcu_qsbr_start(t[0], 1, &token);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((token != 1), "QSBR Start");
/* Update quiescent state counter */
for (i = 0; i < num_threads[j]; i++) {
/* Skip one update */
- if (i == (RTE_RCU_MAX_THREADS - 10))
+ if (i == (TEST_RCU_MAX_LCORE - 10))
continue;
- rte_rcu_qsbr_update(&t[0],
- (j == 2) ? (RTE_RCU_MAX_THREADS - 1) : i);
+ rte_rcu_qsbr_update(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
}
if (j == 1) {
/* Validate the updates */
- ret = rte_rcu_qsbr_check(&t[0], token, false);
- RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Non-blocking QSBR check");
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Non-blocking QSBR check");
/* Update the previously skipped thread */
- rte_rcu_qsbr_update(&t[0], RTE_RCU_MAX_THREADS - 10);
+ rte_rcu_qsbr_update(t[0], TEST_RCU_MAX_LCORE - 10);
}
/* Validate the updates */
- ret = rte_rcu_qsbr_check(&t[0], token, false);
- RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
for (i = 0; i < num_threads[j]; i++)
- rte_rcu_qsbr_unregister_thread(&t[0],
- (j == 2) ? (RTE_RCU_MAX_THREADS - 1) : i);
+ rte_rcu_qsbr_unregister_thread(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
/* Check with no thread registered */
- ret = rte_rcu_qsbr_check(&t[0], token, true);
- RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
}
return 0;
}
@@ -162,13 +186,13 @@ test_rcu_qsbr_start(void)
printf("\nTest rte_rcu_qsbr_start()\n");
- rte_rcu_qsbr_init(&t[0]);
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
for (i = 0; i < 3; i++)
- rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[i]);
+ rte_rcu_qsbr_register_thread(t[0], enabled_core_ids[i]);
- rte_rcu_qsbr_start(&t[0], 1, &token);
- RCU_QSBR_RETURN_IF_ERROR((token != 1), "QSBR Start");
+ rte_rcu_qsbr_start(t[0], 1, &token);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((token != 1), "QSBR Start");
return 0;
}
@@ -177,7 +201,7 @@ test_rcu_qsbr_check_reader(void *arg)
{
struct rte_rcu_qsbr *temp;
uint8_t read_type = (uint8_t)((uintptr_t)arg);
- temp = &t[read_type];
+ temp = t[read_type];
/* Update quiescent state counter */
rte_rcu_qsbr_update(temp, enabled_core_ids[0]);
@@ -198,50 +222,50 @@ test_rcu_qsbr_check(void)
printf("\nTest rte_rcu_qsbr_check()\n");
- rte_rcu_qsbr_init(&t[0]);
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
- rte_rcu_qsbr_start(&t[0], 1, &token);
- RCU_QSBR_RETURN_IF_ERROR((token != 1), "QSBR Start");
+ rte_rcu_qsbr_start(t[0], 1, &token);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((token != 1), "QSBR Start");
- ret = rte_rcu_qsbr_check(&t[0], 0, false);
- RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
- ret = rte_rcu_qsbr_check(&t[0], token, true);
- RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
for (i = 0; i < 3; i++)
- rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[i]);
+ rte_rcu_qsbr_register_thread(t[0], enabled_core_ids[i]);
- ret = rte_rcu_qsbr_check(&t[0], token, false);
- RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
- rte_rcu_qsbr_start(&t[0], 1, &token);
- RCU_QSBR_RETURN_IF_ERROR((token != 2), "QSBR Start");
+ rte_rcu_qsbr_start(t[0], 1, &token);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((token != 2), "QSBR Start");
- ret = rte_rcu_qsbr_check(&t[0], token, false);
- RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Non-blocking QSBR check");
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Non-blocking QSBR check");
for (i = 0; i < 3; i++)
- rte_rcu_qsbr_unregister_thread(&t[0], enabled_core_ids[i]);
+ rte_rcu_qsbr_unregister_thread(t[0], enabled_core_ids[i]);
- ret = rte_rcu_qsbr_check(&t[0], token, true);
- RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
- rte_rcu_qsbr_init(&t[0]);
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
for (i = 0; i < 4; i++)
- rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[i]);
+ rte_rcu_qsbr_register_thread(t[0], enabled_core_ids[i]);
- rte_rcu_qsbr_start(&t[0], 1, &token);
- RCU_QSBR_RETURN_IF_ERROR((token != 1), "QSBR Start");
+ rte_rcu_qsbr_start(t[0], 1, &token);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((token != 1), "QSBR Start");
rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
enabled_core_ids[0]);
rte_eal_mp_wait_lcore();
- ret = rte_rcu_qsbr_check(&t[0], token, true);
- RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
return 0;
}
@@ -256,19 +280,19 @@ test_rcu_qsbr_dump(void)
printf("\nTest rte_rcu_qsbr_dump()\n");
- rte_rcu_qsbr_init(&t[0]);
- rte_rcu_qsbr_init(&t[1]);
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
/* QS variable with 0 core mask */
- rte_rcu_qsbr_dump(stdout, &t[0]);
+ rte_rcu_qsbr_dump(stdout, t[0]);
- rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_register_thread(t[0], enabled_core_ids[0]);
for (i = 1; i < 3; i++)
- rte_rcu_qsbr_register_thread(&t[1], enabled_core_ids[i]);
+ rte_rcu_qsbr_register_thread(t[1], enabled_core_ids[i]);
- rte_rcu_qsbr_dump(stdout, &t[0]);
- rte_rcu_qsbr_dump(stdout, &t[1]);
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
printf("\n");
return 0;
}
@@ -283,7 +307,7 @@ test_rcu_qsbr_reader(void *arg)
uint8_t read_type = (uint8_t)((uintptr_t)arg);
uint32_t *pdata;
- temp = &t[read_type];
+ temp = t[read_type];
hash = h[read_type];
do {
@@ -313,8 +337,8 @@ test_rcu_qsbr_writer(void *arg)
struct rte_hash *hash = NULL;
uint8_t writer_type = (uint8_t)((uintptr_t)arg);
- temp = &t[(writer_type/2) % RTE_RCU_MAX_LCORE];
- hash = h[(writer_type/2) % RTE_RCU_MAX_LCORE];
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
/* Delete element from the shared data structure */
pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
@@ -330,12 +354,12 @@ test_rcu_qsbr_writer(void *arg)
rte_rcu_qsbr_start(temp, writer_type + 1, &token);
/* Check the quiescent state status */
rte_rcu_qsbr_check(temp, token, true);
- if (*hash_data[(writer_type/2) % RTE_RCU_MAX_LCORE]
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
[writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
- *hash_data[(writer_type/2) % RTE_RCU_MAX_LCORE]
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
[writer_type % TOTAL_ENTRY] != 0) {
printf("Reader did not complete #%d = %d\t", writer_type,
- *hash_data[(writer_type/2) % RTE_RCU_MAX_LCORE]
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
[writer_type % TOTAL_ENTRY]);
return -1;
}
@@ -345,9 +369,9 @@ test_rcu_qsbr_writer(void *arg)
keys[writer_type % TOTAL_ENTRY]);
return -1;
}
- rte_free(hash_data[(writer_type/2) % RTE_RCU_MAX_LCORE]
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
[writer_type % TOTAL_ENTRY]);
- hash_data[(writer_type/2) % RTE_RCU_MAX_LCORE]
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
[writer_type % TOTAL_ENTRY] = NULL;
return 0;
@@ -419,11 +443,11 @@ test_rcu_qsbr_sw_sv_1qs(void)
"Blocking QSBR Check\n");
/* QS variable is initialized */
- rte_rcu_qsbr_init(&t[0]);
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
/* Register worker threads on 4 cores */
for (i = 0; i < 4; i++)
- rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[i]);
+ rte_rcu_qsbr_register_thread(t[0], enabled_core_ids[i]);
/* Shared data structure created */
h[0] = init_hash(0);
@@ -445,10 +469,10 @@ test_rcu_qsbr_sw_sv_1qs(void)
goto error;
}
/* Start the quiescent state query process */
- rte_rcu_qsbr_start(&t[0], num_1qs, &token);
+ rte_rcu_qsbr_start(t[0], num_1qs, &token);
/* Check the quiescent state status */
- rte_rcu_qsbr_check(&t[0], token, true);
+ rte_rcu_qsbr_check(t[0], token, true);
if (*hash_data[0][i] != COUNTER_VALUE &&
*hash_data[0][i] != 0) {
printf("Reader did not complete #%d = %d\n", i,
@@ -504,10 +528,10 @@ test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
printf("Test: 1 writer, 1 QSBR variable, 1 QSBR Query, "
"Non-Blocking QSBR check\n");
- rte_rcu_qsbr_init(&t[0]);
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
/* Register worker threads on 4 cores */
for (i = 0; i < 4; i++)
- rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[i]);
+ rte_rcu_qsbr_register_thread(t[0], enabled_core_ids[i]);
/* Shared data structure created */
h[0] = init_hash(0);
@@ -529,11 +553,11 @@ test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
goto error;
}
/* Start the quiescent state query process */
- rte_rcu_qsbr_start(&t[0], num_1qs, &token);
+ rte_rcu_qsbr_start(t[0], num_1qs, &token);
/* Check the quiescent state status */
do {
- ret = rte_rcu_qsbr_check(&t[0], token, false);
+ ret = rte_rcu_qsbr_check(t[0], token, false);
} while (ret == 0);
if (*hash_data[0][i] != COUNTER_VALUE &&
*hash_data[0][i] != 0) {
@@ -588,11 +612,11 @@ test_rcu_qsbr_sw_sv_3qs(void)
printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
- rte_rcu_qsbr_init(&t[0]);
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
/* Register worker threads on 4 cores */
for (i = 0; i < 4; i++)
- rte_rcu_qsbr_register_thread(&t[0], enabled_core_ids[i]);
+ rte_rcu_qsbr_register_thread(t[0], enabled_core_ids[i]);
/* Shared data structure created */
h[0] = init_hash(0);
@@ -613,7 +637,7 @@ test_rcu_qsbr_sw_sv_3qs(void)
goto error;
}
/* Start the quiescent state query process */
- rte_rcu_qsbr_start(&t[0], num_1qs, &token[0]);
+ rte_rcu_qsbr_start(t[0], num_1qs, &token[0]);
/* Delete element from the shared data structure */
pos[1] = rte_hash_del_key(h[0], keys + 3);
@@ -625,7 +649,7 @@ test_rcu_qsbr_sw_sv_3qs(void)
* Start the quiescent state query process
* Note: num_2qs kept greater than 1 for test only
*/
- rte_rcu_qsbr_start(&t[0], num_2qs, &token[1]);
+ rte_rcu_qsbr_start(t[0], num_2qs, &token[1]);
/* Delete element from the shared data structure */
pos[2] = rte_hash_del_key(h[0], keys + 6);
@@ -637,10 +661,10 @@ test_rcu_qsbr_sw_sv_3qs(void)
* Start the quiescent state query process
* Note: num_3qs kept greater than 1 for test only
*/
- rte_rcu_qsbr_start(&t[0], num_3qs, &token[2]);
+ rte_rcu_qsbr_start(t[0], num_3qs, &token[2]);
/* Check the quiescent state status */
- rte_rcu_qsbr_check(&t[0], token[0], true);
+ rte_rcu_qsbr_check(t[0], token[0], true);
if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
goto error;
@@ -654,7 +678,7 @@ test_rcu_qsbr_sw_sv_3qs(void)
hash_data[0][0] = NULL;
/* Check the quiescent state status */
- rte_rcu_qsbr_check(&t[0], token[1], true);
+ rte_rcu_qsbr_check(t[0], token[1], true);
if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
goto error;
@@ -668,7 +692,7 @@ test_rcu_qsbr_sw_sv_3qs(void)
hash_data[0][3] = NULL;
/* Check the quiescent state status */
- rte_rcu_qsbr_check(&t[0], token[2], true);
+ rte_rcu_qsbr_check(t[0], token[2], true);
if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
goto error;
@@ -722,7 +746,7 @@ test_rcu_qsbr_mw_mv_mqs(void)
, test_cores / 2, test_cores / 4);
for (i = 0; i < num_cores / 4; i++) {
- rte_rcu_qsbr_init(&t[i]);
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
h[i] = init_hash(i);
if (h[i] == NULL) {
printf("Hash init failed\n");
@@ -732,8 +756,8 @@ test_rcu_qsbr_mw_mv_mqs(void)
/* Register worker threads on 2 cores */
for (i = 0; i < test_cores / 2; i += 2) {
- rte_rcu_qsbr_register_thread(&t[i / 2], enabled_core_ids[i]);
- rte_rcu_qsbr_register_thread(&t[i / 2],
+ rte_rcu_qsbr_register_thread(t[i / 2], enabled_core_ids[i]);
+ rte_rcu_qsbr_register_thread(t[i / 2],
enabled_core_ids[i + 1]);
}
@@ -776,7 +800,7 @@ test_rcu_qsbr_mw_mv_mqs(void)
for (i = 0; i < num_cores / 4; i++)
rte_hash_free(h[i]);
rte_free(keys);
- for (j = 0; j < RTE_RCU_MAX_LCORE; j++)
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
for (i = 0; i < TOTAL_ENTRY; i++)
rte_free(hash_data[j][i]);
@@ -788,6 +812,9 @@ test_rcu_qsbr_main(void)
{
if (get_enabled_cores_mask() != 0)
return -1;
+
+	test_rcu_qsbr_get_memsize();
+
/* Error-checking test cases */
if (test_rcu_qsbr_register_thread() < 0)
goto test_fail;
diff --git a/test/test/test_rcu_qsbr_perf.c b/test/test/test_rcu_qsbr_perf.c
index 51af4961d..89c5030bb 100644
--- a/test/test/test_rcu_qsbr_perf.c
+++ b/test/test/test_rcu_qsbr_perf.c
@@ -24,7 +24,7 @@ uint8_t num_cores;
uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
uint8_t writer_done;
-static struct rte_rcu_qsbr t;
+static struct rte_rcu_qsbr *t;
struct rte_hash *h[TEST_RCU_MAX_LCORE];
char hash_name[TEST_RCU_MAX_LCORE][8];
static rte_atomic64_t updates, checks;
@@ -70,13 +70,13 @@ test_rcu_qsbr_reader_perf(void *arg)
if (writer_present) {
while (!writer_done) {
/* Update quiescent state counter */
- rte_rcu_qsbr_update(&t, lcore_id);
+ rte_rcu_qsbr_update(t, lcore_id);
loop_cnt++;
}
} else {
while (loop_cnt < 100000000) {
/* Update quiescent state counter */
- rte_rcu_qsbr_update(&t, lcore_id);
+ rte_rcu_qsbr_update(t, lcore_id);
loop_cnt++;
}
}
@@ -86,7 +86,7 @@ test_rcu_qsbr_reader_perf(void *arg)
rte_atomic64_add(&updates, loop_cnt);
/* Unregister before exiting to avoid writer from waiting */
- rte_rcu_qsbr_unregister_thread(&t, lcore_id);
+ rte_rcu_qsbr_unregister_thread(t, lcore_id);
return 0;
}
@@ -103,10 +103,10 @@ test_rcu_qsbr_writer_perf(void *arg)
do {
/* Start the quiescent state query process */
- if (wait) rte_rcu_qsbr_start(&t, 1, &token);
+ if (wait) rte_rcu_qsbr_start(t, 1, &token);
/* Check quiescent state status */
- rte_rcu_qsbr_check(&t, token, wait);
+ rte_rcu_qsbr_check(t, token, wait);
loop_cnt++;
} while (loop_cnt < 20000000);
@@ -134,11 +134,11 @@ test_rcu_qsbr_perf(void)
printf("\nPerf Test: 6 Readers/1 Writer('wait' in qsbr_check == true)\n");
/* QS variable is initialized */
- rte_rcu_qsbr_init(&t);
+ rte_rcu_qsbr_init(t, TEST_RCU_MAX_LCORE);
/* Register worker threads on 6 cores */
for (i = 0; i < 6; i++)
- rte_rcu_qsbr_register_thread(&t, enabled_core_ids[i]);
+ rte_rcu_qsbr_register_thread(t, enabled_core_ids[i]);
/* Reader threads are launched */
for (i = 0; i < 6; i++)
@@ -179,17 +179,15 @@ test_rcu_qsbr_rperf(void)
rte_atomic64_clear(&updates);
rte_atomic64_clear(&update_cycles);
- rte_atomic64_clear(&checks);
- rte_atomic64_clear(&check_cycles);
printf("\nPerf Test: 8 Readers\n");
/* QS variable is initialized */
- rte_rcu_qsbr_init(&t);
+ rte_rcu_qsbr_init(t, TEST_RCU_MAX_LCORE);
/* Register worker threads on 8 cores */
for (i = 0; i < 8; i++)
- rte_rcu_qsbr_register_thread(&t, enabled_core_ids[i]);
+ rte_rcu_qsbr_register_thread(t, enabled_core_ids[i]);
/* Reader threads are launched */
for (i = 0; i < 8; i++)
@@ -200,6 +198,8 @@ test_rcu_qsbr_rperf(void)
rte_eal_mp_wait_lcore();
printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Total update cycles = %ld\n",
+ rte_atomic64_read(&update_cycles));
printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
rte_atomic64_read(&update_cycles) /
(rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
@@ -216,15 +216,13 @@ test_rcu_qsbr_wperf(void)
{
int i;
- rte_atomic64_clear(&updates);
- rte_atomic64_clear(&update_cycles);
rte_atomic64_clear(&checks);
rte_atomic64_clear(&check_cycles);
printf("\nPerf test: 8 Writers ('wait' in qsbr_check == false)\n");
/* QS variable is initialized */
- rte_rcu_qsbr_init(&t);
+ rte_rcu_qsbr_init(t, TEST_RCU_MAX_LCORE);
/* Writer threads are launched */
for (i = 0; i < 8; i++)
@@ -245,6 +243,8 @@ test_rcu_qsbr_wperf(void)
static int
test_rcu_qsbr_main(void)
{
+ uint32_t sz;
+
rte_atomic64_init(&updates);
rte_atomic64_init(&update_cycles);
rte_atomic64_init(&checks);
@@ -253,6 +253,9 @@ test_rcu_qsbr_main(void)
if (get_enabled_cores_mask() != 0)
return -1;
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ t = (struct rte_rcu_qsbr *)rte_zmalloc("rcu", sz, RTE_CACHE_LINE_SIZE);
+
if (test_rcu_qsbr_perf() < 0)
goto test_fail;
--
2.17.1
* [dpdk-dev] [RFC v3 5/5] lib/rcu: fix the size of register thread ID array size
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 0/5] rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
` (3 preceding siblings ...)
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 4/5] test/rcu_qsbr: modify test cases for dynamic memory allocation Honnappa Nagarahalli
@ 2019-02-22 7:04 ` Honnappa Nagarahalli
4 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-02-22 7:04 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, dev, honnappa.nagarahalli
Cc: gavin.hu, dharmik.thakkar, malvika.gupta, nd
Making the size of the registered thread ID array depend on the
maximum number of threads results in performance drops, due to the
address calculations required at run time. Fixing the size of the
thread ID registration array simplifies the address calculation.
This caps the maximum number of supported threads at 512 (one 64B
cache line of bits). The memory required for the QS counters,
however, still scales with the max threads parameter, so the change
retains that flexibility while addressing the performance issue.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
lib/librte_rcu/rte_rcu_qsbr.c | 13 ++-----------
lib/librte_rcu/rte_rcu_qsbr.h | 29 ++++++++++-------------------
2 files changed, 12 insertions(+), 30 deletions(-)
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
index 02464fdba..3cff82121 100644
--- a/lib/librte_rcu/rte_rcu_qsbr.c
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -25,17 +25,12 @@
unsigned int __rte_experimental
rte_rcu_qsbr_get_memsize(uint32_t max_threads)
{
- int n;
ssize_t sz;
RTE_ASSERT(max_threads != 0);
sz = sizeof(struct rte_rcu_qsbr);
- /* Add the size of the registered thread ID bitmap array */
- n = RTE_ALIGN(max_threads, RTE_QSBR_THRID_ARRAY_ELM_SIZE);
- sz += RTE_QSBR_THRID_ARRAY_SIZE(n);
-
/* Add the size of quiescent state counter array */
sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
@@ -51,9 +46,7 @@ rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
memset(v, 0, rte_rcu_qsbr_get_memsize(max_threads));
v->m_threads = max_threads;
v->ma_threads = RTE_ALIGN(max_threads, RTE_QSBR_THRID_ARRAY_ELM_SIZE);
-
v->num_elems = v->ma_threads/RTE_QSBR_THRID_ARRAY_ELM_SIZE;
- v->thrid_array_size = RTE_QSBR_THRID_ARRAY_SIZE(v->ma_threads);
}
/* Dump the details of a single quiescent state variable to a file. */
@@ -74,8 +67,7 @@ rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
fprintf(f, " Registered thread ID mask = 0x");
for (i = 0; i < v->num_elems; i++)
- fprintf(f, "%lx", __atomic_load_n(
- RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ fprintf(f, "%lx", __atomic_load_n(&v->reg_thread_id[i],
__ATOMIC_ACQUIRE));
fprintf(f, "\n");
@@ -84,8 +76,7 @@ rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
fprintf(f, "Quiescent State Counts for readers:\n");
for (i = 0; i < v->num_elems; i++) {
- bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
- __ATOMIC_ACQUIRE);
+ bmap = __atomic_load_n(&v->reg_thread_id[i], __ATOMIC_ACQUIRE);
while (bmap) {
t = __builtin_ctzl(bmap);
fprintf(f, "thread ID = %d, count = %lu\n", t,
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
index 21fa2c198..1147f11f2 100644
--- a/lib/librte_rcu/rte_rcu_qsbr.h
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -33,14 +33,9 @@ extern "C" {
* Given thread id needs to be converted to index into the array and
* the id within the array element.
*/
-/* Thread ID array size
- * @param ma_threads
- * num of threads aligned to 64
- */
-#define RTE_QSBR_THRID_ARRAY_SIZE(ma_threads) \
- RTE_ALIGN((ma_threads) >> 3, RTE_CACHE_LINE_SIZE)
+#define RTE_RCU_MAX_THREADS 512
+#define RTE_QSBR_THRID_ARRAY_ELEMS (RTE_RCU_MAX_THREADS/(sizeof(uint64_t) * 8))
#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
-#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *)(v + 1) + i)
#define RTE_QSBR_THRID_INDEX_SHIFT 6
#define RTE_QSBR_THRID_MASK 0x3f
@@ -49,8 +44,7 @@ struct rte_rcu_qsbr_cnt {
uint64_t cnt; /**< Quiescent state counter. */
} __rte_cache_aligned;
-#define RTE_QSBR_CNT_ARRAY_ELM(v, i) ((struct rte_rcu_qsbr_cnt *) \
- ((uint8_t *)(v + 1) + v->thrid_array_size) + i)
+#define RTE_QSBR_CNT_ARRAY_ELM(v, i) (((struct rte_rcu_qsbr_cnt *)(v + 1)) + i)
/**
* RTE thread Quiescent State structure.
@@ -69,15 +63,14 @@ struct rte_rcu_qsbr {
uint64_t token __rte_cache_aligned;
/**< Counter to allow for multiple simultaneous QS queries */
- uint32_t thrid_array_size __rte_cache_aligned;
- /**< Registered thread ID bitmap array size in bytes */
- uint32_t num_elems;
+ uint32_t num_elems __rte_cache_aligned;
/**< Number of elements in the thread ID array */
-
uint32_t m_threads;
/**< Maximum number of threads this RCU variable will use */
uint32_t ma_threads;
/**< Maximum number of threads aligned to 32 */
+
+ uint64_t reg_thread_id[RTE_QSBR_THRID_ARRAY_ELEMS] __rte_cache_aligned;
} __rte_cache_aligned;
/**
@@ -152,8 +145,7 @@ rte_rcu_qsbr_register_thread(struct rte_rcu_qsbr *v, unsigned int thread_id)
/* Release the store to initial TQS count so that readers
* can use it immediately after this function returns.
*/
- __atomic_fetch_or(RTE_QSBR_THRID_ARRAY_ELM(v, i),
- 1UL << id, __ATOMIC_RELEASE);
+ __atomic_fetch_or(&v->reg_thread_id[i], 1UL << id, __ATOMIC_RELEASE);
}
/**
@@ -188,7 +180,7 @@ rte_rcu_qsbr_unregister_thread(struct rte_rcu_qsbr *v, unsigned int thread_id)
* reporting threads is visible before the thread
* does anything else.
*/
- __atomic_fetch_and(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __atomic_fetch_and(&v->reg_thread_id[i],
~(1UL << id), __ATOMIC_RELEASE);
}
@@ -298,8 +290,7 @@ rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
/* Load the current registered thread bit map before
* loading the reader thread quiescent state counters.
*/
- bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
- __ATOMIC_ACQUIRE);
+ bmap = __atomic_load_n(&v->reg_thread_id[i], __ATOMIC_ACQUIRE);
id = i << RTE_QSBR_THRID_INDEX_SHIFT;
while (bmap) {
@@ -324,7 +315,7 @@ rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
* Re-read the bitmap.
*/
bmap = __atomic_load_n(
- RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &v->reg_thread_id[i],
__ATOMIC_ACQUIRE);
continue;
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [RFC v2 1/2] rcu: add RCU library supporting QSBR mechanism
2019-01-24 18:05 ` Ananyev, Konstantin
@ 2019-02-22 7:07 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-02-22 7:07 UTC (permalink / raw)
To: Ananyev, Konstantin, dev, stephen, paulmck
Cc: Gavin Hu (Arm Technology China),
Dharmik Thakkar, nd, Honnappa Nagarahalli, nd
> > <snip>
> >
> > > > > > > > +/**
> > > > > > > > + * RTE thread Quiescent State structure.
> > > > > > > > + */
> > > > > > > > +struct rte_rcu_qsbr {
> > > > > > > > + uint64_t reg_thread_id[RTE_QSBR_BIT_MAP_ELEMS]
> > > > > > > __rte_cache_aligned;
> > > > > > > > + /**< Registered reader thread IDs - reader threads reporting
> > > > > > > > + * on this QS variable represented in a bit map.
> > > > > > > > + */
> > > > > > > > +
> > > > > > > > + uint64_t token __rte_cache_aligned;
> > > > > > > > + /**< Counter to allow for multiple simultaneous QS
> > > > > > > > +queries */
> > > > > > > > +
> > > > > > > > + struct rte_rcu_qsbr_cnt w[RTE_RCU_MAX_THREADS]
> > > > > > > __rte_cache_aligned;
> > > > > > > > + /**< QS counter for each reader thread, counts upto
> > > > > > > > + * current value of token.
> > > > > > >
> > > > > > > As I understand you decided to stick with neutral thread_id
> > > > > > > and let user define what exactly thread_id is (lcore, syste,
> > > > > > > thread id, something
> > > > > else)?
> > > > > > Yes, that is correct. I will reply to the other thread to
> > > > > > continue the
> > > discussion.
> > > > > >
> > > > > > > If so, can you probably get rid of RTE_RCU_MAX_THREADS
> limitation?
> > > > > > I am not seeing this as a limitation. The user can change this
> > > > > > if required. May
> > > > > be I should change it as follows:
> > > > > > #ifndef RTE_RCU_MAX_THREADS
> > > > > > #define RTE_RCU_MAX_THREADS 128 #endif
> > > > >
> > > > > Yep, that's better, though it would still require user to
> > > > > rebuild the code if he would like to increase total number of threads
> supported.
> > > > Agree
> > > >
> > > > > Though it seems relatively simply to extend current code to
> > > > > support dynamic max thread num here (2 variable arrays plus
> > > > > shift value plus
> > > mask).
> > > > Agree, supporting dynamic 'max thread num' is simple. But this
> > > > means memory needs to be allocated to the arrays. The API
> > > > 'rte_rcu_qsbr_init' has to take max thread num as the parameter.
> > > > We also
> > > have to introduce another API to free this memory. This will become
> > > very similar to alloc/free APIs I had in the v1.
> > > > I hope I am following you well, please correct me if not.
> > >
> > > I think we can still leave alloc/free tasks to the user.
> > > We probabply just need extra function rte_rcu_qsbr_size(uint32_
> > > max_threads) to help user calculate required size.
> > > rte_rcu_qsbr_init() might take as an additional parameter 'size' to
> > > make checks.
> > The size is returned by an API provided by the library. Why does it
> > need to be validated again? If 'size' is required for rte_rcu_qsbr_init, it
> could calculate it again.
>
> Just as extra-safety check.
> I don't have strong opinion here - if you think it is overkill, let's drop it.
>
>
> >
> > > Thought about something like that:
> > >
> > > size_t sz = rte_rcu_qsbr_size(max_threads); struct rte_rcu_qsbr
> > > *qsbr = alloc_aligned(CACHE_LINE, sz); rte_rcu_qsbr_init(qsbr,
> max_threads, sz); ...
> > >
> > Do you see any advantage for allowing the user to allocate the memory?
> So user can choose where to allocate the memory (eal malloc, normal malloc,
> stack, something else).
> Again user might decide to make rcu part of some complex data structure - in
> that case he probably would like to allocate one big chunk of memory at once
> and then provide part of it for rcu.
> Or some other usage scenario that I can't predict.
>
I made this change and added performance tests similar to liburcu. With the dynamic memory allocation change, the performance of rte_rcu_qsbr_update degrades by 42% - 45% and the cycle cost of rte_rcu_qsbr_check increases by 133% on the Arm platform. On x86 (E5-2660 v4 @ 2.00GHz), the results are mixed: rte_rcu_qsbr_update degrades by 15%, but rte_rcu_qsbr_check improves.
On the Arm platform, the issue seems to be due to the address calculation that needs to happen at run time. If I fix the size of the reg_thread_id array, the performance is recovered or improved on both Arm and x86. What this means is that we will still have a max thread limitation, but it will be high - 512 (1 cache line). We could make this 1024 (2 cache lines). However, the per thread counter data size will still depend on the 'max threads' provided by the user. I think this solution serves your requirement (though with an acceptable constraint that should not affect the near future); please let me know what you think.
These changes and the 3 variants of the implementation are present in RFC v3 [1], in case you want to run these tests.
1/5, 2/5 - same as RFC v2 + 1 bug fixed
3/5 - Addition of rte_rcu_qsbr_get_memsize. Memory size for register thread bitmap array as well as per thread counter data is calculated based on max_threads parameter
4/5 - Test cases are modified to use the new API
5/5 - Size of register thread bitmap array is fixed to hold 512 thread IDs. However, the per thread counter data is calculated based on max_threads parameter.
If you do not want to run the tests, you can just look at 3/5 and 5/5.
[1] http://patchwork.dpdk.org/cover/50431/
> > This approach requires the user to call 3 APIs (including memory
> > allocation). These 3 can be abstracted in a rte_rcu_qsbr_alloc API, user has
> to call just 1 API.
> >
> > > Konstantin
> > >
<snip>
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH 0/3] lib/rcu: add RCU library supporting QSBR mechanism
2018-11-22 3:30 [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library Honnappa Nagarahalli
` (5 preceding siblings ...)
2018-12-22 2:14 ` [dpdk-dev] [RFC v2 0/2] rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
@ 2019-03-19 4:52 ` Honnappa Nagarahalli
2019-03-19 4:52 ` Honnappa Nagarahalli
` (3 more replies)
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
` (7 subsequent siblings)
14 siblings, 4 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-19 4:52 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Lock-less data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
(for ex: real-time applications).
In the following paragraphs, the term 'memory' refers to memory allocated
by typical APIs like malloc, or anything that is representative of
memory, for example an index into a free element array.
Since these data structures are lock-less, the writers and readers
access the data structures concurrently. Hence, while removing
an element from a data structure, the writers cannot return the memory
to the allocator, without knowing that the readers are not
referencing that element/memory anymore. Hence, it is required to
separate the operation of removing an element into 2 steps:
Delete: in this step, the writer removes the reference to the element from
the data structure but does not return the associated memory to the
allocator. This will ensure that new readers will not get a reference to
the removed element. Removing the reference is an atomic operation.
Free(Reclaim): in this step, the writer returns the memory to the
memory allocator, only after knowing that all the readers have stopped
referencing the deleted element.
This library helps the writer determine when it is safe to free the
memory.
This library makes use of thread Quiescent State (QS). QS can be
defined as 'any point in the thread execution where the thread does
not hold a reference to shared memory'. It is up to the application to
determine its quiescent state. Let us consider the following diagram:
Time -------------------------------------------------->
| |
RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
| |
RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
| |
RT3 $++++****D1****+++***|D2***|++++++**D2*****++++$
| |
|<--->|
Del | Free
|
Cannot free memory
during this period
(Grace Period)
RTx - Reader thread
< and > - Start and end of while(1) loop
***Dx*** - Reader thread is accessing the shared data structure Dx.
i.e. critical section.
+++ - Reader thread is not accessing any shared data structure.
i.e. non critical section or quiescent state.
Del - Point in time when the reference to the entry is removed using
atomic operation.
Free - Point in time when the writer can free the entry.
Grace Period - Time duration between Del and Free, during which memory cannot
be freed.
As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
accessing D2, if the writer has to remove an element from D2, the
writer cannot free the memory associated with that element immediately.
The writer can return the memory to the allocator only after the reader
stops referencing D2. In other words, reader thread RT1 has to enter
a quiescent state.
Similarly, since thread RT3 is also accessing D2, writer has to wait till
RT3 enters quiescent state as well.
However, the writer does not need to wait for RT2 to enter quiescent state.
Thread RT2 was not accessing D2 when the delete operation happened.
So, RT2 will not get a reference to the deleted entry.
It can be noted that, the critical sections for D2 and D3 are quiescent states
for D1. i.e. for a given data structure Dx, any point in the thread execution
that does not reference Dx is a quiescent state.
Since memory is not freed immediately, there might be a need for
provisioning of additional memory, depending on the application requirements.
It is important to make sure that this library keeps the overhead of
identifying the end of the grace period and the subsequent freeing of memory
to a minimum. The following paragraphs explain how the grace period and critical
section affect this overhead.
The writer has to poll the readers to identify the end of grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during the grace period. Longer grace periods
exacerbate these conditions.
The length of the critical section and the number of reader threads
are proportional to the duration of the grace period. Keeping the critical
sections smaller will keep the grace period smaller. However, keeping the
critical sections smaller requires additional CPU cycles (due to additional
reporting) in the readers.
Hence, we want both a short grace period and large critical
sections. This library addresses this by allowing the writer to do
other work without having to block till the readers report their quiescent
state.
For DPDK applications, the start and end of while(1) loop (where no
references to shared data structures are kept) act as perfect quiescent
states. This will combine all the shared data structure accesses into a
single, large critical section, which helps keep the overhead on the
reader side to a minimum.
DPDK supports the pipeline model of packet processing and service cores.
In these use cases, a given data structure may not be used by all the
workers in the application. The writer does not have to wait for all
the workers to report their quiescent state. To provide the required
flexibility, this library has a concept of QS variable. The application
can create one QS variable per data structure to help it track the
end of grace period for each data structure. This helps keep the grace
period to a minimum.
The application has to allocate memory and initialize a QS variable.
The application can call rte_rcu_qsbr_get_memsize to calculate the size
of memory to allocate. This API takes as a parameter the maximum number
of reader threads that will use this variable. Currently, a maximum of
1024 threads is supported.
Further, the application can initialize a QS variable using the API
rte_rcu_qsbr_init.
Each reader thread is assumed to have a unique thread ID. Currently, the
management of the thread ID (for ex: allocation/free) is left to the
application. The thread ID should be in the range of 0 to
maximum number of threads provided while creating the QS variable.
The application could also use lcore_id as the thread ID where applicable.
rte_rcu_qsbr_thread_register API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
The reader thread must call rte_rcu_qsbr_thread_online API to start reporting
its quiescent state.
Some of the use cases might require the reader threads to make
blocking API calls (for ex: while using eventdev APIs). The writer thread
should not wait for such reader threads to enter quiescent state.
The reader thread must call rte_rcu_qsbr_thread_offline API, before calling
blocking APIs. It can call rte_rcu_qsbr_thread_online API once the blocking
API call returns.
The writer thread can trigger the reader threads to report their quiescent
state by calling the API rte_rcu_qsbr_start. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
rte_rcu_qsbr_start returns a token to each caller.
The writer thread has to call rte_rcu_qsbr_check API with the token to get the
current quiescent state status. Option to block till all the reader threads
enter the quiescent state is provided. If this API indicates that all the
reader threads have entered the quiescent state, the application can free the
deleted entry.
The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock free. Hence, they
can be called concurrently from multiple writers even while running
as worker threads.
The separation of triggering the reporting from querying the status provides
the writer threads flexibility to do useful work instead of blocking for the
reader threads to enter the quiescent state or go offline. This reduces the
memory accesses due to continuous polling for the status.
rte_rcu_qsbr_synchronize API combines the functionality of rte_rcu_qsbr_start
and blocking rte_rcu_qsbr_check into a single API. This API triggers the reader
threads to report their quiescent state and polls till all the readers enter
the quiescent state or go offline. This API does not allow the writer to
do useful work while waiting and also introduces additional memory accesses
due to continuous polling.
The reader thread must call rte_rcu_qsbr_thread_offline and
rte_rcu_qsbr_thread_unregister APIs to remove itself from reporting its
quiescent state. The rte_rcu_qsbr_check API will not wait for this reader
thread to report the quiescent state status anymore.
The reader threads should call rte_rcu_qsbr_update API to indicate that they
entered a quiescent state. This API checks if a writer has triggered a
quiescent state query and updates the state accordingly.
Patch:
1) Library changes
a) Changed the maximum number of reader threads to 1024
b) Renamed rte_rcu_qsbr_register/unregister_thread to
rte_rcu_qsbr_thread_register/unregister
c) Added rte_rcu_qsbr_thread_online/offline API. These are optimized
version of rte_rcu_qsbr_thread_register/unregister API. These
also provide the flexibility for performance when the requested
maximum number of threads is higher than the current number of
threads.
d) Corrected memory orderings in rte_rcu_qsbr_update
e) Changed the signature of rte_rcu_qsbr_start API to return the token
f) Changed the signature of rte_rcu_qsbr_start API to not take the
expected number of QS states to wait.
g) Added debug logs
h) Added API and programmer guide documentation.
RFC v3:
1) Library changes
a) Rebased to latest master
b) Added new API rte_rcu_qsbr_get_memsize
c) Add support for memory allocation for QSBR variable (Konstantin)
d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
2) Testcase changes
a) Separated stress tests into a performance test case file
b) Added performance statistics
RFC v2:
1) Cover letter changes
a) Explain the parameters that affect the overhead of using RCU
and their effect
b) Explain how this library addresses these effects to keep
the overhead to minimum
2) Library changes
a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
b) Simplify the code/remove APIs to keep this library inline with
other synchronisation mechanisms like locks (Konstantin)
c) Change the design to support more than 64 threads (Konstantin)
d) Fixed version map to remove static inline functions
3) Testcase changes
a) Add boundary and additional functional test cases
b) Add stress test cases (Paul E. McKenney)
Next Steps:
1) Update the cover letter to indicate the addition of rte_rcu_get_memsize
2) rte_rcu_qsbr_register_thread/rte_rcu_qsbr_unregister_thread can be
optimized to avoid accessing the common bitmap array. This is required
as these are data plane APIs. Plan is to introduce
rte_rcu_qsbr_thread_online/rte_rcu_qsbr_thread_offline which will not
touch the common bitmap array.
3) Add debug logs to enable debugging
4) Documentation
5) Convert to patch
Dharmik Thakkar (1):
test/rcu_qsbr: add API and functional tests
Honnappa Nagarahalli (2):
rcu: add RCU library supporting QSBR mechanism
doc/rcu: add lib_rcu documentation
MAINTAINERS | 5 +
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 927 ++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 615 ++++++++++++
config/common_base | 6 +
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 494 ++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 179 ++++
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 99 ++
lib/librte_rcu/rte_rcu_qsbr.h | 511 ++++++++++
lib/librte_rcu/rte_rcu_version.map | 9 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
20 files changed, 2901 insertions(+), 3 deletions(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
create mode 100644 lib/librte_rcu/rte_rcu_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH 1/3] rcu: add RCU library supporting QSBR mechanism
2019-03-19 4:52 ` [dpdk-dev] [PATCH 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-03-19 4:52 ` Honnappa Nagarahalli
@ 2019-03-19 4:52 ` Honnappa Nagarahalli
2019-03-19 4:52 ` Honnappa Nagarahalli
2019-03-22 16:42 ` Ananyev, Konstantin
2019-03-19 4:52 ` [dpdk-dev] [PATCH 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-03-19 4:52 ` [dpdk-dev] [PATCH 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
3 siblings, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-19 4:52 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add RCU library supporting the quiescent-state-based memory reclamation
method. This library helps identify the quiescent state of the reader
threads so that the writers can free the memory associated with
lock-less data structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
---
MAINTAINERS | 5 +
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 ++
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 99 ++++++
lib/librte_rcu/rte_rcu_qsbr.h | 511 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 9 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
10 files changed, 662 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 452b8eb82..5827c1bbe 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1230,6 +1230,11 @@ F: examples/bpf/
F: app/test/test_bpf.c
F: doc/guides/prog_guide/bpf_lib.rst
+RCU - EXPERIMENTAL
+M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+F: lib/librte_rcu/
+F: doc/guides/prog_guide/rcu_lib.rst
+
Test Applications
-----------------
diff --git a/config/common_base b/config/common_base
index 0b09a9348..d3557ff3c 100644
--- a/config/common_base
+++ b/config/common_base
@@ -805,6 +805,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_rcu
+#
+CONFIG_RTE_LIBRTE_RCU=y
+CONFIG_RTE_LIBRTE_RCU_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index a358f1c19..b24a9363f 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -109,6 +109,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
+DEPDIRS-librte_rcu := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
new file mode 100644
index 000000000..6aa677bd1
--- /dev/null
+++ b/lib/librte_rcu/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_rcu.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_rcu_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
new file mode 100644
index 000000000..c009ae4b7
--- /dev/null
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..0fc4515ea
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Get the memory size of QSBR variable */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ size_t sz;
+
+ RTE_ASSERT(max_threads != 0);
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+}
+
+/* Initialize a quiescent state variable */
+void __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
+{
+ RTE_ASSERT(v != NULL);
+
+ memset(v, 0, rte_rcu_qsbr_get_memsize(max_threads));
+ v->m_threads = max_threads;
+ v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->token = RTE_QSBR_CNT_INIT;
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+void __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t;
+
+ RTE_ASSERT(v != NULL && f != NULL);
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %lu\n",
+ rte_rcu_qsbr_get_memsize(v->m_threads));
+ fprintf(f, " Given # max threads = %u\n", v->m_threads);
+
+ fprintf(f, " Registered thread ID mask = 0x");
+ for (i = 0; i < v->num_elems; i++)
+ fprintf(f, "%lx", __atomic_load_n(&v->reg_thread_id[i],
+ __ATOMIC_ACQUIRE));
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %lu\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(&v->reg_thread_id[i], __ATOMIC_ACQUIRE);
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %lu\n", t,
+ __atomic_load_n(
+ &RTE_QSBR_CNT_ARRAY_ELM(v, i)->cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+}
+
+int rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rcu_log_type = rte_log_register("lib.rcu");
+ if (rcu_log_type >= 0)
+ rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..83943f751
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,511 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+
+extern int rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define RCU_DP_LOG(level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored as a bitmap in an array of 64b elements.
+ * A given thread ID needs to be converted to an index into the array and
+ * a bit position within that array element.
+ */
+#define RTE_RCU_MAX_THREADS 1024
+#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define RTE_QSBR_THRID_ARRAY_ELEMS \
+ (RTE_ALIGN_MUL_CEIL(RTE_RCU_MAX_THREADS, \
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) / RTE_QSBR_THRID_ARRAY_ELM_SIZE)
+#define RTE_QSBR_THRID_INDEX_SHIFT 6
+#define RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline */
+} __rte_cache_aligned;
+
+#define RTE_QSBR_CNT_ARRAY_ELM(v, i) (((struct rte_rcu_qsbr_cnt *)(v + 1)) + i)
+#define RTE_QSBR_CNT_THR_OFFLINE 0
+#define RTE_QSBR_CNT_INIT 1
+
+/**
+ * RTE thread Quiescent State structure.
+ * Quiescent state counter array (array of 'struct rte_rcu_qsbr_cnt'),
+ * whose size is dependent on the maximum number of reader threads
+ * (m_threads) using this variable is stored immediately following
+ * this structure.
+ */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple simultaneous QS queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t m_threads;
+ /**< Maximum number of threads this RCU variable will use */
+
+ uint64_t reg_thread_id[RTE_QSBR_THRID_ARRAY_ELEMS] __rte_cache_aligned;
+ /**< Registered thread IDs are stored in a bitmap array */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * Size of memory in bytes required for this QS variable.
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting QS on this variable.
+ *
+ */
+void __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its QS using rte_rcu_qsbr_update.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id;
+
+ RTE_ASSERT(v != NULL && thread_id < v->m_threads);
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Release the new register thread ID to other threads
+ * calling rte_rcu_qsbr_check.
+ */
+ __atomic_fetch_or(&v->reg_thread_id[i], 1UL << id, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing QS queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id;
+
+ RTE_ASSERT(v != NULL && thread_id < v->m_threads);
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure the removal of the thread from the list of
+ * reporting threads is visible before the thread
+ * does anything else.
+ */
+ __atomic_fetch_and(&v->reg_thread_id[i],
+ ~(1UL << id), __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_update. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_online API, after the blocking
+ * function call returns, to ensure that rte_rcu_qsbr_check API
+ * waits for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->m_threads);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that
+ * the following will not move down after the load of any shared
+ * data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&RTE_QSBR_CNT_ARRAY_ELM(v, thread_id)->cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->m_threads);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed. i.e. any load of the
+ * data structure cannot move after this store.
+ */
+
+ __atomic_store_n(&RTE_QSBR_CNT_ARRAY_ELM(v, thread_id)->cnt,
+ RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_update(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->m_threads);
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&RTE_QSBR_CNT_ARRAY_ELM(v, thread_id)->cnt,
+ t, __ATOMIC_RELEASE);
+
+ RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
+ __func__, t, thread_id);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same RCU variable, it must update the QS status, before calling
+ * this API.
+ *
+ * 2) In addition, while calling from multiple threads, more than
+ * one of those threads cannot be reporting the QS status on the
+ * same RCU variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ *   If true, block until all the reader threads have entered the
+ *   quiescent state referenced by token 't'.
+ * @return
+ *   - 0 if all reader threads have NOT entered the quiescent state
+ *     referenced by token 't'.
+ *   - 1 if all reader threads have entered the quiescent state
+ *     referenced by token 't'.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t cnt;
+
+ RTE_ASSERT(v != NULL);
+
+ i = 0;
+ do {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(&v->reg_thread_id[i], __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
+ __func__, t, wait, bmap, id+j);
+ cnt = __atomic_load_n(
+ &RTE_QSBR_CNT_ARRAY_ELM(v, id + j)->cnt,
+ __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait,
+ RTE_QSBR_CNT_ARRAY_ELM(v, id + j)->cnt, id+j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(cnt != RTE_QSBR_CNT_THR_OFFLINE &&
+ cnt < t)) {
+ /* This thread is not in QS */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(
+ &v->reg_thread_id[i],
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+
+ i++;
+ } while (i < v->num_elems);
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, more than one of
+ * those threads cannot be reporting the QS status on the same
+ * RCU variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report QS on
+ * this QS variable (i.e. the calling thread is also part of the
+ * read-side critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread has a read-side critical section,
+ * update its QS status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_update(v, thread_id);
+
+ /* Wait for other readers to enter QS */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ */
+void __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..019560adf
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,9 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 99957ba7d..3feb44b75 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'vhost', 'rcu',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 262132fc6..2de0b5fc6 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -96,6 +96,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH 1/3] rcu: add RCU library supporting QSBR mechanism
2019-03-19 4:52 ` [dpdk-dev] [PATCH 1/3] rcu: " Honnappa Nagarahalli
@ 2019-03-19 4:52 ` Honnappa Nagarahalli
2019-03-22 16:42 ` Ananyev, Konstantin
1 sibling, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-19 4:52 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add RCU library supporting quiescent state based memory reclamation method.
This library helps identify the quiescent state of the reader threads so
that the writers can free the memory associated with the lock less data
structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
---
MAINTAINERS | 5 +
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 ++
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 99 ++++++
lib/librte_rcu/rte_rcu_qsbr.h | 511 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 9 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
10 files changed, 662 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 452b8eb82..5827c1bbe 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1230,6 +1230,11 @@ F: examples/bpf/
F: app/test/test_bpf.c
F: doc/guides/prog_guide/bpf_lib.rst
+RCU - EXPERIMENTAL
+M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+F: lib/librte_rcu/
+F: doc/guides/prog_guide/rcu_lib.rst
+
Test Applications
-----------------
diff --git a/config/common_base b/config/common_base
index 0b09a9348..d3557ff3c 100644
--- a/config/common_base
+++ b/config/common_base
@@ -805,6 +805,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_rcu
+#
+CONFIG_RTE_LIBRTE_RCU=y
+CONFIG_RTE_LIBRTE_RCU_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index a358f1c19..b24a9363f 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -109,6 +109,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
+DEPDIRS-librte_rcu := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
new file mode 100644
index 000000000..6aa677bd1
--- /dev/null
+++ b/lib/librte_rcu/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_rcu.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_rcu_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
new file mode 100644
index 000000000..c009ae4b7
--- /dev/null
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..0fc4515ea
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Get the memory size of QSBR variable */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ size_t sz;
+
+ RTE_ASSERT(max_threads == 0);
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+}
+
+/* Initialize a quiescent state variable */
+void __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
+{
+ RTE_ASSERT(v == NULL);
+
+ memset(v, 0, rte_rcu_qsbr_get_memsize(max_threads));
+ v->m_threads = max_threads;
+ v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->token = RTE_QSBR_CNT_INIT;
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+void __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t;
+
+ RTE_ASSERT(v == NULL || f == NULL);
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %lu\n",
+ rte_rcu_qsbr_get_memsize(v->m_threads));
+ fprintf(f, " Given # max threads = %u\n", v->m_threads);
+
+ fprintf(f, " Registered thread ID mask = 0x");
+ for (i = 0; i < v->num_elems; i++)
+ fprintf(f, "%lx", __atomic_load_n(&v->reg_thread_id[i],
+ __ATOMIC_ACQUIRE));
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %lu\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(&v->reg_thread_id[i], __ATOMIC_ACQUIRE);
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %lu\n", t,
+ __atomic_load_n(
+ &RTE_QSBR_CNT_ARRAY_ELM(v, i)->cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+}
+
+int rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rcu_log_type = rte_log_register("lib.rcu");
+ if (rcu_log_type >= 0)
+ rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..83943f751
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,511 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+
+extern int rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define RCU_DP_LOG(level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored as a bitmap in an array of 64b
+ * elements. A given thread ID needs to be converted to an index into
+ * the array and a bit position within that array element.
+ */
+#define RTE_RCU_MAX_THREADS 1024
+#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define RTE_QSBR_THRID_ARRAY_ELEMS \
+ (RTE_ALIGN_MUL_CEIL(RTE_RCU_MAX_THREADS, \
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) / RTE_QSBR_THRID_ARRAY_ELM_SIZE)
+#define RTE_QSBR_THRID_INDEX_SHIFT 6
+#define RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline */
+} __rte_cache_aligned;
+
+#define RTE_QSBR_CNT_ARRAY_ELM(v, i) (((struct rte_rcu_qsbr_cnt *)(v + 1)) + i)
+#define RTE_QSBR_CNT_THR_OFFLINE 0
+#define RTE_QSBR_CNT_INIT 1
+
+/**
+ * RTE thread Quiescent State structure.
+ * The quiescent state counter array (an array of 'struct rte_rcu_qsbr_cnt'),
+ * whose size depends on the maximum number of reader threads (m_threads)
+ * using this variable, is stored immediately following this structure.
+ */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple simultaneous QS queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t m_threads;
+ /**< Maximum number of threads this RCU variable will use */
+
+ uint64_t reg_thread_id[RTE_QSBR_THRID_ARRAY_ELEMS] __rte_cache_aligned;
+ /**< Registered thread IDs are stored in a bitmap array */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * Size of memory in bytes required for this QS variable.
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting QS on this variable.
+ *
+ */
+void __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its QS using rte_rcu_qsbr_update.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id;
+
+ RTE_ASSERT(v != NULL && thread_id < v->m_threads);
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Release the newly registered thread ID to other threads
+ * calling rte_rcu_qsbr_check.
+ */
+ __atomic_fetch_or(&v->reg_thread_id[i], 1UL << id, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing QS queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id;
+
+ RTE_ASSERT(v != NULL && thread_id < v->m_threads);
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure the removal of the thread from the list of
+ * reporting threads is visible before the thread
+ * does anything else.
+ */
+ __atomic_fetch_and(&v->reg_thread_id[i],
+ ~(1UL << id), __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_update. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_online API, after the blocking
+ * function call returns, to ensure that rte_rcu_qsbr_check API
+ * waits for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->m_threads);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that
+ * the following will not move down after the load of any shared
+ * data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&RTE_QSBR_CNT_ARRAY_ELM(v, thread_id)->cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->m_threads);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed. i.e. any load of the
+ * data structure cannot move after this store.
+ */
+
+ __atomic_store_n(&RTE_QSBR_CNT_ARRAY_ELM(v, thread_id)->cnt,
+ RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_update(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->m_threads);
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&RTE_QSBR_CNT_ARRAY_ELM(v, thread_id)->cnt,
+ t, __ATOMIC_RELEASE);
+
+ RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
+ __func__, t, thread_id);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same RCU variable, it must update the QS status, before calling
+ * this API.
+ *
+ * 2) In addition, when this API is called from multiple threads,
+ *    at most one of those threads may be reporting the QS status
+ *    on the same RCU variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ *  If true, block till all the reader threads have entered the
+ *  quiescent state referenced by token 't'
+ * @return
+ *  - 0 if all reader threads have NOT entered the quiescent state
+ *  referenced by token 't'.
+ *  - 1 if all reader threads have entered the quiescent state
+ *  referenced by token 't'.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t cnt;
+
+ RTE_ASSERT(v != NULL);
+
+ i = 0;
+ do {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(&v->reg_thread_id[i], __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
+ __func__, t, wait, bmap, id+j);
+ cnt = __atomic_load_n(
+ &RTE_QSBR_CNT_ARRAY_ELM(v, id + j)->cnt,
+ __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait,
+ RTE_QSBR_CNT_ARRAY_ELM(v, id + j)->cnt, id+j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(cnt != RTE_QSBR_CNT_THR_OFFLINE &&
+ cnt < t)) {
+ /* This thread is not in QS */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(
+ &v->reg_thread_id[i],
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+
+ i++;
+ } while (i < v->num_elems);
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, at most one of
+ * those threads may be reporting the QS status on the same
+ * RCU variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report QS on
+ * this QS variable (i.e. the calling thread is also part of the
+ * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread has readside critical section,
+ * update its QS status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_update(v, thread_id);
+
+ /* Wait for other readers to enter QS */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ */
+void __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..019560adf
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,9 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 99957ba7d..3feb44b75 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'vhost', 'rcu',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 262132fc6..2de0b5fc6 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -96,6 +96,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH 2/3] test/rcu_qsbr: add API and functional tests
2019-03-19 4:52 ` [dpdk-dev] [PATCH 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-03-19 4:52 ` Honnappa Nagarahalli
2019-03-19 4:52 ` [dpdk-dev] [PATCH 1/3] rcu: " Honnappa Nagarahalli
@ 2019-03-19 4:52 ` Honnappa Nagarahalli
2019-03-19 4:52 ` Honnappa Nagarahalli
2019-03-19 4:52 ` [dpdk-dev] [PATCH 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
3 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-19 4:52 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
---
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 927 ++++++++++++++++++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 615 ++++++++++++++++++++++
5 files changed, 1562 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index 89949c2bb..6b6dfefc2 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -213,6 +213,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 5f87bb94d..c26ec889c 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -694,6 +694,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 05e5ddeb0..4df8e337b 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -107,6 +107,8 @@ test_sources = files('commands.c',
'test_timer.c',
'test_timer_perf.c',
'test_timer_racecond.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -132,7 +134,8 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
- 'timer'
+ 'timer',
+ 'rcu'
]
# All test cases in fast_parallel_test_names list are parallel
@@ -171,6 +174,7 @@ fast_parallel_test_names = [
'ring_autotest',
'ring_pmd_autotest',
'rwlock_autotest',
+ 'rcu_qsbr_autotest',
'sched_autotest',
'spinlock_autotest',
'string_autotest',
@@ -236,6 +240,7 @@ perf_test_names = [
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'pmd_perf_autotest',
]
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..3853934d2
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,927 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_get_memsize: Get the size of the memory occupied by
+ * a QS variable.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
+ printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with cache line size of 64B - 8384
+ * for machines with cache line size of 128 - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+
+ /* Test with enabled lcore */
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_update(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_update(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_update(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_update(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_update(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the reader threads have entered the
+ * quiescent state referenced by the token returned by rte_rcu_qsbr_start.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_update(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait till all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+ /* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread to
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if the online thread, can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_update(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_update(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread from
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_update(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_update(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_update(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_update(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Multiple writers, Multiple QS variables, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ writer_done = 0;
+ uint8_t test_cores;
+ /* Round the enabled core count down to a multiple of 4 */
+ test_cores = (num_cores / 4) * 4;
+
+ printf("Test: %d writers, %d QSBR variables, simultaneous QSBR queries\n"
+ , test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..049f8e371
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,615 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Maximum number of lcores tracked by the QS variables */
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t lcore_id = rte_lcore_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ /* Register the thread to report its quiescent state */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_update(t[0], lcore_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_update(t[0], lcore_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ /* Unregister before exiting so the writer does not wait on this thread */
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple Readers, Single QS var, Non-Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: %d Readers/1 Writer ('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Single writer, Multiple readers, Single QS variable
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writer, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+ /* Wait until all writers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_update(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i;
+ int32_t pos;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, "
+ "Blocking QSBR Check\n", num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("The following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(register/update/unregister): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret;
+ int32_t pos;
+ writer_done = 0;
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, "
+ "Non-Blocking QSBR check\n", num_cores);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("The following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(register/update/unregister): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ free_rcu();
+
+ return 0;
+
+test_fail:
+ free_rcu();
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
* [dpdk-dev] [PATCH 2/3] test/rcu_qsbr: add API and functional tests
2019-03-19 4:52 ` [dpdk-dev] [PATCH 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-03-19 4:52 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-19 4:52 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
---
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 927 ++++++++++++++++++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 615 ++++++++++++++++++++++
5 files changed, 1562 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index 89949c2bb..6b6dfefc2 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -213,6 +213,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 5f87bb94d..c26ec889c 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -694,6 +694,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 05e5ddeb0..4df8e337b 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -107,6 +107,8 @@ test_sources = files('commands.c',
'test_timer.c',
'test_timer_perf.c',
'test_timer_racecond.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -132,7 +134,8 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
- 'timer'
+ 'timer',
+ 'rcu'
]
# All test cases in fast_parallel_test_names list are parallel
@@ -171,6 +174,7 @@ fast_parallel_test_names = [
'ring_autotest',
'ring_pmd_autotest',
'rwlock_autotest',
+ 'rcu_qsbr_autotest',
'sched_autotest',
'spinlock_autotest',
'string_autotest',
@@ -236,6 +240,7 @@ perf_test_names = [
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'pmd_perf_autotest',
]
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..3853934d2
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,927 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_get_memsize: Returns the size of memory, in bytes, required
+ * to store a QS variable for the given maximum number of threads.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
+ printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with cache line size of 64B - 8384
+ * for machines with cache line size of 128B - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+
+ /* Test with enabled lcore */
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_update(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_update(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_update(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_update(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_update(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the worker threads have entered the
+ * quiescent state 'n' number of times. 'n' is the token returned by the
+ * rte_rcu_qsbr_start API.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_update(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait until all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+ /* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread to
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_update(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_update(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread from
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_update(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_update(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_update(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_update(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Multi writer, Multiple QS variable, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ writer_done = 0;
+ uint8_t test_cores;
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+ printf("Test: %d writers, %d QSBR variables, simultaneous QSBR queries\n",
+ test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..049f8e371
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,615 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t lcore_id = rte_lcore_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ /* Register to report QS */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_update(t[0], lcore_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_update(t[0], lcore_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ /* Unregister before exiting to keep the writer from waiting */
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple Readers, Single QS var, Non-Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: %d Readers/1 Writer('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Single writer, Multiple readers, Single QS variable
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writer, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_update(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i;
+ int32_t pos;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, "
+ "Blocking QSBR Check\n", num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(register/update/unregister): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret;
+ int32_t pos;
+ writer_done = 0;
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, "
+ "Non-Blocking QSBR check\n", num_cores);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(register/update/unregister): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ free_rcu();
+
+ return 0;
+
+test_fail:
+ free_rcu();
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
* [dpdk-dev] [PATCH 3/3] doc/rcu: add lib_rcu documentation
2019-03-19 4:52 ` [dpdk-dev] [PATCH 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
` (2 preceding siblings ...)
2019-03-19 4:52 ` [dpdk-dev] [PATCH 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-03-19 4:52 ` Honnappa Nagarahalli
2019-03-19 4:52 ` Honnappa Nagarahalli
2019-03-25 11:34 ` Kovacevic, Marko
3 siblings, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-19 4:52 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 494 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 179 +++++++
5 files changed, 677 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index d95ad566c..5c1f6b477 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a365e669b..0b4c248a2 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..3ae53bdc2
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,494 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#333e48;font-family:Calibri;font-size:2.11672em}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.20955em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st13 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st14 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.16666em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-240);marker-start:url(#mrkr5-238);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-249);marker-start:url(#mrkr5-247);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-240);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-238" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-240" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-247" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-249" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(240.681,-1012.22)">
+ <title>Sheet.3</title>
+ <path d="M0 1155.34 C0 1151.76 2.95 1148.81 6.52 1148.81 L88.45 1148.81 C92.07 1148.81 94.97 1151.76 94.97 1155.34 L94.97
+ 1181.47 C94.97 1185.1 92.07 1188 88.45 1188 L6.52 1188 C2.95 1188 0 1185.1 0 1181.47 L0 1155.34 Z"
+ class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(335.653,-1010.77)">
+ <title>Sheet.4</title>
+ <path d="M0 1155.34 C0 1151.76 2.95 1148.81 6.52 1148.81 L100.77 1148.81 C104.4 1148.81 107.3 1151.76 107.3 1155.34 L107.3
+ 1181.47 C107.3 1185.1 104.4 1188 100.77 1188 L6.52 1188 C2.95 1188 0 1185.1 0 1181.47 L0 1155.34 Z"
+ class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(377.387,-1014.99)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="16.8796" cy="1174.93" width="33.76" height="26.1305"/>
+ <path d="M33.76 1161.87 L0 1161.87 L0 1188 L33.76 1188 L33.76 1161.87" class="st3"/>
+ <text x="4.66" y="1181.47" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(443.674,-1010.77)">
+ <title>Sheet.6</title>
+ <path d="M0 1154.98 C0 1151.58 2.76 1148.81 6.16 1148.81 L30.81 1148.81 C34.26 1148.81 36.97 1151.58 36.97 1154.98 L36.97
+ 1181.83 C36.97 1185.28 34.26 1188 30.81 1188 L6.16 1188 C2.76 1188 0 1185.28 0 1181.83 L0 1154.98 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(480.648,-1011.5)">
+ <title>Sheet.7</title>
+ <path d="M0 1155.34 C0 1151.76 2.95 1148.81 6.52 1148.81 L100.77 1148.81 C104.4 1148.81 107.3 1151.76 107.3 1155.34 L107.3
+ 1181.47 C107.3 1185.1 104.4 1188 100.77 1188 L6.52 1188 C2.95 1188 0 1185.1 0 1181.47 L0 1155.34 Z"
+ class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(522.382,-1015.49)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="16.8796" cy="1174.93" width="33.76" height="26.1305"/>
+ <path d="M33.76 1161.87 L0 1161.87 L0 1188 L33.76 1188 L33.76 1161.87" class="st3"/>
+ <text x="4.66" y="1181.47" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(587.22,-1011.5)">
+ <title>Sheet.9</title>
+ <path d="M0 1154.98 C0 1151.58 2.76 1148.81 6.16 1148.81 L30.81 1148.81 C34.26 1148.81 36.97 1151.58 36.97 1154.98 L36.97
+ 1181.83 C36.97 1185.28 34.26 1188 30.81 1188 L6.16 1188 C2.76 1188 0 1185.28 0 1181.83 L0 1154.98 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-1016.39)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="104.097" cy="1174.93" width="208.2" height="26.1302"/>
+ <path d="M208.19 1161.87 L0 1161.87 L0 1188 L208.19 1188 L208.19 1161.87" class="st3"/>
+ <text x="16.59" y="1181.47" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(277.655,-952.713)">
+ <title>Sheet.11</title>
+ <path d="M0 1155.34 C0 1151.76 2.95 1148.81 6.52 1148.81 L88.45 1148.81 C92.07 1148.81 94.97 1151.76 94.97 1155.34 L94.97
+ 1181.47 C94.97 1185.1 92.07 1188 88.45 1188 L6.52 1188 C2.95 1188 0 1185.1 0 1181.47 L0 1155.34 Z"
+ class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(372.627,-951.261)">
+ <title>Sheet.12</title>
+ <path d="M0 1155.34 C0 1151.76 2.95 1148.81 6.52 1148.81 L100.77 1148.81 C104.4 1148.81 107.3 1151.76 107.3 1155.34 L107.3
+ 1181.47 C107.3 1185.1 104.4 1188 100.77 1188 L6.52 1188 C2.95 1188 0 1185.1 0 1181.47 L0 1155.34 Z"
+ class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(414.386,-955.425)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="16.8796" cy="1174.93" width="33.76" height="26.1305"/>
+ <path d="M33.76 1161.87 L0 1161.87 L0 1188 L33.76 1188 L33.76 1161.87" class="st3"/>
+ <text x="4.66" y="1181.47" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(480.648,-951.261)">
+ <title>Sheet.14</title>
+ <path d="M0 1154.98 C0 1151.58 2.76 1148.81 6.16 1148.81 L30.81 1148.81 C34.26 1148.81 36.97 1151.58 36.97 1154.98 L36.97
+ 1181.83 C36.97 1185.28 34.26 1188 30.81 1188 L6.16 1188 C2.76 1188 0 1185.28 0 1181.83 L0 1154.98 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(517.622,-951.987)">
+ <title>Sheet.15</title>
+ <path d="M0 1155.34 C0 1151.76 2.95 1148.81 6.52 1148.81 L100.77 1148.81 C104.4 1148.81 107.3 1151.76 107.3 1155.34 L107.3
+ 1181.47 C107.3 1185.1 104.4 1188 100.77 1188 L6.52 1188 C2.95 1188 0 1185.1 0 1181.47 L0 1155.34 Z"
+ class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(559.381,-955.926)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="16.8796" cy="1174.93" width="33.76" height="26.1305"/>
+ <path d="M33.76 1161.87 L0 1161.87 L0 1188 L33.76 1188 L33.76 1161.87" class="st3"/>
+ <text x="4.66" y="1181.47" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(624.194,-951.987)">
+ <title>Sheet.17</title>
+ <path d="M0 1154.98 C0 1151.58 2.76 1148.81 6.16 1148.81 L30.81 1148.81 C34.26 1148.81 36.97 1151.58 36.97 1154.98 L36.97
+ 1181.83 C36.97 1185.28 34.26 1188 30.81 1188 L6.16 1188 C2.76 1188 0 1185.28 0 1181.83 L0 1154.98 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(109.808,-959.83)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="19.2022" cy="1174.93" width="38.41" height="26.1302"/>
+ <path d="M38.4 1161.87 L0 1161.87 L0 1188 L38.4 1188 L38.4 1161.87" class="st3"/>
+ <text x="5.52" y="1181.47" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(345.278,-891.751)">
+ <title>Sheet.19</title>
+ <path d="M0 1155.98 C0 1152.44 2.9 1149.54 6.43 1149.54 L88.58 1149.54 C92.12 1149.54 94.97 1152.44 94.97 1155.98 L94.97
+ 1181.6 C94.97 1185.14 92.12 1188 88.58 1188 L6.43 1188 C2.9 1188 0 1185.14 0 1181.6 L0 1155.98 Z"
+ class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(440.975,-890.3)">
+ <title>Sheet.20</title>
+ <path d="M0 1155.34 C0 1151.76 2.95 1148.81 6.52 1148.81 L100.77 1148.81 C104.4 1148.81 107.3 1151.76 107.3 1155.34 L107.3
+ 1181.47 C107.3 1185.1 104.4 1188 100.77 1188 L6.52 1188 C2.95 1188 0 1185.1 0 1181.47 L0 1155.34 Z"
+ class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(482.409,-894.363)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="16.8796" cy="1174.93" width="33.76" height="26.1305"/>
+ <path d="M33.76 1161.87 L0 1161.87 L0 1188 L33.76 1188 L33.76 1161.87" class="st3"/>
+ <text x="4.66" y="1181.47" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(548.996,-890.3)">
+ <title>Sheet.22</title>
+ <path d="M0 1154.98 C0 1151.58 2.76 1148.81 6.16 1148.81 L30.81 1148.81 C34.26 1148.81 36.97 1151.58 36.97 1154.98 L36.97
+ 1181.83 C36.97 1185.28 34.26 1188 30.81 1188 L6.16 1188 C2.76 1188 0 1185.28 0 1181.83 L0 1154.98 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(585.97,-891.025)">
+ <title>Sheet.23</title>
+ <path d="M0 1155.34 C0 1151.76 2.95 1148.81 6.52 1148.81 L100.77 1148.81 C104.4 1148.81 107.3 1151.76 107.3 1155.34 L107.3
+ 1181.47 C107.3 1185.1 104.4 1188 100.77 1188 L6.52 1188 C2.95 1188 0 1185.1 0 1181.47 L0 1155.34 Z"
+ class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(627.404,-894.864)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="16.8796" cy="1174.93" width="33.76" height="26.1305"/>
+ <path d="M33.76 1161.87 L0 1161.87 L0 1188 L33.76 1188 L33.76 1161.87" class="st3"/>
+ <text x="4.66" y="1181.47" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(692.542,-891.025)">
+ <title>Sheet.25</title>
+ <path d="M0 1154.98 C0 1151.58 2.76 1148.81 6.16 1148.81 L30.81 1148.81 C34.26 1148.81 36.97 1151.58 36.97 1154.98 L36.97
+ 1181.83 C36.97 1185.28 34.26 1188 30.81 1188 L6.16 1188 C2.76 1188 0 1185.28 0 1181.83 L0 1154.98 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(109.308,-898.768)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="19.2022" cy="1174.93" width="38.41" height="26.1302"/>
+ <path d="M38.4 1161.87 L0 1161.87 L0 1188 L38.4 1188 L38.4 1161.87" class="st3"/>
+ <text x="5.52" y="1181.47" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(636.118,-747)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="32.7073" cy="1172.76" width="65.42" height="30.4844"/>
+ <path d="M65.41 1157.52 L0 1157.52 L0 1188 L65.41 1188 L65.41 1157.52" class="st3"/>
+ <text x="7.14" y="1180.38" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(306.386,-808.107)">
+ <title>Sheet.29</title>
+ <path d="M0 1157.52 L0 1188 L0 1157.52" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(306.386,-825.66)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L58.86 1187.55 L107.61 1176.66" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(162,-808.107)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="68.8761" cy="1174.8" width="137.76" height="26.4087"/>
+ <path d="M137.75 1161.59 L0 1161.59 L0 1188 L137.75 1188 L137.75 1161.59" class="st3"/>
+ <text x="5.63" y="1170.44" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="75.92" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(414.386,-823.2)">
+ <title>Sheet.33</title>
+ <path d="M0 868.2 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(374.334,-1143)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="43.2194" cy="1172.76" width="86.44" height="30.4844"/>
+ <path d="M86.44 1157.52 L0 1157.52 L0 1188 L86.44 1188 L86.44 1157.52" class="st3"/>
+ <text x="8.51" y="1180.38" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(316.939,-1118.91)">
+ <title>Sheet.35</title>
+ <path d="M0 1164.05 L0 1188 L0 1164.05" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(316.939,-1121.45)">
+ <title>Sheet.36</title>
+ <path d="M0 1176.52 L60.17 1176.07 L97.1 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(158.718,-1119.3)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="76.724" cy="1179.29" width="153.45" height="17.4211"/>
+ <path d="M153.45 1170.58 L0 1170.58 L0 1188 L153.45 1188 L153.45 1170.58" class="st3"/>
+ <text x="0.09" y="1183.64" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(516.172,-819)">
+ <title>Sheet.38</title>
+ <path d="M0 864 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(574.306,-792.296)">
+ <title>Sheet.39</title>
+ <path d="M0 1134.3 L0 1188 L0 1134.3" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(517.939,-819)">
+ <title>Sheet.40</title>
+ <path d="M56.37 1188 L37.52 1187.95 L0 1158.97" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(579.283,-793.278)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="122.717" cy="1162.74" width="245.44" height="50.5152"/>
+ <path d="M245.43 1137.48 L0 1137.48 L0 1188 L245.43 1188 L245.43 1137.48" class="st3"/>
+ <text x="0" y="1149.68" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(492.33,-1143)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="29.9047" cy="1172.76" width="59.81" height="30.4844"/>
+ <path d="M59.81 1157.52 L0 1157.52 L0 1188 L59.81 1188 L59.81 1157.52" class="st3"/>
+ <text x="6.77" y="1180.38" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(585,-1128.34)">
+ <title>Sheet.48</title>
+ <path d="M0 1157.52 L0 1188 L0 1157.52" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(476.536,-1109.79)">
+ <title>Sheet.49</title>
+ <path d="M108.93 1160.69 L80.93 1160.65 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(576,-1128.63)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="76.724" cy="1173.63" width="153.45" height="28.7428"/>
+ <path d="M153.45 1159.26 L0 1159.26 L0 1188 L153.45 1188 L153.45 1159.26" class="st3"/>
+ <text x="12.72" y="1180.81" class="st12" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(434.25,-810)">
+ <title>Sheet.51</title>
+ <path d="M0 864 L0 1188" class="st13"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(338.689,-1087.7)">
+ <title>Sheet.52</title>
+ <path d="M0 1164.05 L0 1188 L0 1164.05" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(338.689,-1090.24)">
+ <title>Sheet.53</title>
+ <path d="M0 1176.52 L60.17 1176.07 L97.1 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(180.467,-1088.46)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="76.724" cy="1179.29" width="153.45" height="17.4211"/>
+ <path d="M153.45 1170.58 L0 1170.58 L0 1188 L153.45 1188 L153.45 1170.58" class="st3"/>
+ <text x="0.09" y="1183.64" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(513.997,-810)">
+ <title>Sheet.56</title>
+ <path d="M0 864 L0 1188" class="st13"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(481.011,-1082.26)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L104.45 1134.58" class="st14"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(448.387,-942.326)">
+ <title>Sheet.58</title>
+ <path d="M307.61 1185.32 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(747,-934.257)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="92.5251" cy="1173.63" width="185.06" height="28.7428"/>
+ <path d="M185.05 1159.26 L0 1159.26 L0 1188 L185.05 1188 L185.05 1159.26" class="st3"/>
+ <text x="14.78" y="1180.81" class="st12" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(450.199,-942.417)">
+ <title>Sheet.60</title>
+ <path d="M0 1177.7 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(594.47,-943.142)">
+ <title>Sheet.61</title>
+ <path d="M0 1177.7 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(254.002,-1002.43)">
+ <title>Sheet.62</title>
+ <path d="M502 1188 L0 1187.59" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(747,-990)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="97.9986" cy="1173.63" width="196" height="28.7428"/>
+ <path d="M196 1159.26 L0 1159.26 L0 1188 L196 1188 L196 1159.26" class="st3"/>
+ <text x="15.49" y="1180.81" class="st12" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(254.455,-1001.93)">
+ <title>Sheet.64</title>
+ <path d="M0 1177.7 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(450.199,-1002.65)">
+ <title>Sheet.65</title>
+ <path d="M0 1177.7 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(617.669,-1003.38)">
+ <title>Sheet.66</title>
+ <path d="M0 1177.7 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(344.304,-876.625)">
+ <title>Sheet.67</title>
+ <path d="M411.7 1188 L0 1187.59" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(344.757,-876.126)">
+ <title>Sheet.68</title>
+ <path d="M0 1177.7 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(731.524,-877.578)">
+ <title>Sheet.69</title>
+ <path d="M0 1177.7 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(762.248,-864)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="68.8761" cy="1173.63" width="137.76" height="28.7428"/>
+ <path d="M137.75 1159.26 L0 1159.26 L0 1188 L137.75 1188 L137.75 1159.26" class="st3"/>
+ <text x="3.14" y="1180.81" class="st12" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(143.026,-693.87)">
+ <title>Sheet.71</title>
+ <path d="M0 1165.98 C0 1163.71 2.76 1161.87 6.16 1161.87 L30.81 1161.87 C34.26 1161.87 36.97 1163.71 36.97 1165.98 L36.97
+ 1183.89 C36.97 1186.19 34.26 1188 30.81 1188 L6.16 1188 C2.76 1188 0 1186.19 0 1183.89 L0 1165.98 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(192.124,-693.591)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="209.938" cy="1174.8" width="419.88" height="26.4087"/>
+ <path d="M419.88 1161.59 L0 1161.59 L0 1188 L419.88 1188 L419.88 1161.59" class="st3"/>
+ <text x="0" y="1170.44" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. non critical section or quiescent state.</tspan></text> </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(144.703,-648.87)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="17.6483" cy="1174.93" width="35.3" height="26.1302"/>
+ <path d="M0 1166.22 C0 1163.84 0.97 1161.87 2.15 1161.87 L33.15 1161.87 C34.34 1161.87 35.3 1163.84 35.3 1166.22 L35.3
+ 1183.64 C35.3 1186.06 34.34 1188 33.15 1188 L2.15 1188 C0.97 1188 0 1186.06 0 1183.64 L0 1166.22 Z"
+ class="st2"/>
+ <text x="10.02" y="1179.14" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(192.124,-648.591)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="209.938" cy="1174.8" width="419.88" height="26.4087"/>
+ <path d="M419.88 1161.59 L0 1161.59 L0 1188 L419.88 1188 L419.88 1161.59" class="st3"/>
+ <text x="0" y="1170.44" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(237.124,-603)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="182.938" cy="1168.41" width="365.88" height="39.1897"/>
+ <path d="M365.88 1148.81 L0 1148.81 L0 1188 L365.88 1188 L365.88 1148.81" class="st3"/>
+ <text x="0" y="1164.05" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is <tspan
+ x="0" dy="1.2em" class="st9">removed using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(135,-612)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="43.2194" cy="1172.76" width="86.44" height="30.4844"/>
+ <path d="M86.44 1157.52 L0 1157.52 L0 1188 L86.44 1188 L86.44 1157.52" class="st3"/>
+ <text x="8.51" y="1180.38" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(236.4,-561.629)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="189.688" cy="1173.63" width="379.38" height="28.7428"/>
+ <path d="M379.38 1159.26 L0 1159.26 L0 1188 L379.38 1188 L379.38 1159.26" class="st3"/>
+ <text x="0" y="1169.27" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted <tspan
+ x="0" dy="1.2em" class="st9">entry.</tspan></text> </g>
+ <g id="shape78-218" v:mID="78" v:groupContext="shape" transform="translate(135,-565.56)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="32.7073" cy="1172.76" width="65.42" height="30.4844"/>
+ <path d="M65.41 1157.52 L0 1157.52 L0 1188 L65.41 1188 L65.41 1157.52" class="st3"/>
+ <text x="9.58" y="1180.38" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-222" v:mID="79" v:groupContext="shape" transform="translate(237.274,-516.629)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="189.688" cy="1173.63" width="379.38" height="28.7428"/>
+ <path d="M379.38 1159.26 L0 1159.26 L0 1188 L379.38 1188 L379.38 1159.26" class="st3"/>
+ <text x="0" y="1169.27" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during <tspan
+ x="0" dy="1.2em" class="st9">which memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-227" v:mID="80" v:groupContext="shape" transform="translate(144.15,-509.516)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="43.2194" cy="1172.76" width="86.44" height="30.4844"/>
+ <path d="M86.44 1157.52 L0 1157.52 L0 1188 L86.44 1188 L86.44 1157.52" class="st3"/>
+ <text x="0" y="1165.14" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-232" v:mID="83" v:groupContext="shape" transform="translate(414.997,-1107)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L91.51 1188" class="st16"/>
+ </g>
+ <g id="shape84-241" v:mID="84" v:groupContext="shape" transform="translate(434.25,-1080)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L71.98 1188" class="st18"/>
+ </g>
+ <g id="shape85-250" v:mID="85" v:groupContext="shape" transform="translate(701.532,-765)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L62.81 1188" class="st20"/>
+ </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..6fb3fb921 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..5155dd35c
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,179 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-less data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for example, real-time applications).
+
+In the following sections, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or anything that is representative of
+memory, for example, an index into a free-element array.
+
+Since these data structures are lock-less, the writers and readers
+access them concurrently. Hence, while removing
+an element from a data structure, the writer cannot return the memory
+to the allocator without knowing that the readers are no longer
+referencing that element/memory. Hence, it is required to
+separate the operation of removing an element into 2 steps:
+
+Delete: in this step, the writer removes the reference to the element from
+the data structure but does not return the associated memory to the
+allocator. This will ensure that new readers will not get a reference to
+the removed element. Removing the reference is an atomic operation.
+
+Free (Reclaim): in this step, the writer returns the memory to the
+memory allocator, only after knowing that all the readers have stopped
+referencing the deleted element.
+
+This library helps the writer determine when it is safe to free the
+memory.
+
+This library makes use of the thread Quiescent State (QS) mechanism.
+
+What is Quiescent State
+-----------------------
+A quiescent state can be defined as 'any point in the thread execution where
+the thread does not hold a reference to shared memory'. It is up to the
+application to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. When it is
+accessing D1, if the writer has to remove an element from D1, the
+writer cannot free the memory associated with that element immediately.
+The writer can return the memory to the allocator only after the reader
+stops referencing D1. In other words, reader thread RT1 has to enter
+a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, the writer has to
+wait until thread 2 enters a quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+a quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened. So, reader thread 3 cannot have a reference to the
+deleted entry.
+
+It can be noted that a critical section for D2 is a quiescent state
+for D1; i.e. for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state for Dx.
+
+Since memory is not freed immediately, there might be a need to
+provision additional memory, depending on the application's requirements.
+
+Factors affecting the RCU mechanism
+-----------------------------------
+
+It is important to make sure that this library keeps the overhead of
+identifying the end of the grace period, and the subsequent freeing of
+memory, to a minimum. The following paragraphs explain how the grace period
+and critical sections affect this overhead.
+
+The writer has to poll the readers to identify the end of the grace period.
+Polling introduces memory accesses and wastes CPU cycles. The memory
+is not available for reuse during the grace period. Longer grace periods
+exacerbate these conditions.
+
+The length of the critical section and the number of reader threads
+are proportional to the duration of the grace period. Keeping the critical
+sections smaller will keep the grace period smaller. However, smaller
+critical sections require additional CPU cycles (due to additional
+reporting) in the readers.
+
+Hence, we need the combination of a small grace period and large critical
+sections. This library addresses this by allowing the writer to do
+other work without having to block until the readers report their quiescent
+state.
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the start and end of the while(1) loop (where no
+references to shared data structures are kept) act as perfect quiescent
+states. This will combine all the shared data structure accesses into a
+single, large critical section, which helps keep the overhead on the
+reader side to a minimum.
+
+DPDK supports the pipeline model of packet processing and service cores.
+In these use cases, a given data structure may not be used by all the
+workers in the application. The writer does not have to wait for all
+the workers to report their quiescent state. To provide the required
+flexibility, this library has the concept of a QS variable. The application
+can create one QS variable per data structure to help it track the
+end of the grace period for each data structure. This helps keep the grace
+period to a minimum.
+
+How to use this library
+-----------------------
+
+The application has to allocate memory and initialize a QS variable.
+
+The application can call **rte_rcu_qsbr_get_memsize** to calculate the size
+of memory to allocate. This API takes the maximum number of reader threads
+using this variable as a parameter. Currently, a maximum of 1024 threads
+is supported.
+
+Further, the application can initialize a QS variable using the API
+**rte_rcu_qsbr_init**.
+
+Each reader thread is assumed to have a unique thread ID. Currently, the
+management of the thread ID (for example, allocation/free) is left to the
+application. The thread ID should be in the range of 0 to the
+maximum number of threads provided while creating the QS variable.
+The application could also use lcore_id as the thread ID where applicable.
+
+The **rte_rcu_qsbr_thread_register** API will register a reader thread
+to report its quiescent state. This can be called from a reader thread.
+A control plane thread can also call this on behalf of a reader thread.
+The reader thread must call the **rte_rcu_qsbr_thread_online** API to start
+reporting its quiescent state.
+
+Some use cases might require the reader threads to make
+blocking API calls (for example, while using eventdev APIs). The writer
+thread should not wait for such reader threads to enter the quiescent state.
+The reader thread must call the **rte_rcu_qsbr_thread_offline** API before
+calling blocking APIs. It can call the **rte_rcu_qsbr_thread_online** API
+once the blocking API call returns.
+
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the API **rte_rcu_qsbr_start**. It is possible for multiple
+writer threads to query the quiescent state status simultaneously. Hence,
+**rte_rcu_qsbr_start** returns a token to each caller.
+
+The writer thread has to call the **rte_rcu_qsbr_check** API with the token
+to get the current quiescent state status. An option to block until all the
+reader threads enter the quiescent state is provided. If this API indicates
+that all the reader threads have entered the quiescent state, the application
+can free the deleted entry.
+
+The APIs **rte_rcu_qsbr_start** and **rte_rcu_qsbr_check** are lock-free.
+Hence, they can be called concurrently from multiple writers, even while
+running as worker threads.
+
+The separation of triggering the reporting from querying the status gives
+the writer threads the flexibility to do useful work instead of blocking
+until the reader threads enter the quiescent state or go offline. This
+reduces the memory accesses caused by continuous polling for the status.
+
+The **rte_rcu_qsbr_synchronize** API combines the functionality of
+**rte_rcu_qsbr_start** and blocking **rte_rcu_qsbr_check** into a single API.
+This API triggers the reader threads to report their quiescent state and
+polls until all the readers enter the quiescent state or go offline. This
+API does not allow the writer to do useful work while waiting, and also
+introduces additional memory accesses due to continuous polling.
+
+The reader thread must call the **rte_rcu_qsbr_thread_offline** and
+**rte_rcu_qsbr_thread_unregister** APIs to stop reporting its
+quiescent state. The **rte_rcu_qsbr_check** API will then no longer wait
+for this reader thread to report its quiescent state.
+
+The reader threads should call the **rte_rcu_qsbr_update** API to indicate
+that they have entered a quiescent state. This API checks if a writer has
+triggered a quiescent state query and updates the state accordingly.
--
2.17.1
* [dpdk-dev] [PATCH 3/3] doc/rcu: add lib_rcu documentation
2019-03-19 4:52 ` [dpdk-dev] [PATCH 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
@ 2019-03-19 4:52 ` Honnappa Nagarahalli
2019-03-25 11:34 ` Kovacevic, Marko
From: Honnappa Nagarahalli @ 2019-03-19 4:52 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 494 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 179 +++++++
5 files changed, 677 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index d95ad566c..5c1f6b477 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a365e669b..0b4c248a2 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..3ae53bdc2
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,494 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#333e48;font-family:Calibri;font-size:2.11672em}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.20955em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st13 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st14 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.16666em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-240);marker-start:url(#mrkr5-238);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-249);marker-start:url(#mrkr5-247);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-240);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-238" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-240" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-247" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-249" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(240.681,-1012.22)">
+ <title>Sheet.3</title>
+ <path d="M0 1155.34 C0 1151.76 2.95 1148.81 6.52 1148.81 L88.45 1148.81 C92.07 1148.81 94.97 1151.76 94.97 1155.34 L94.97
+ 1181.47 C94.97 1185.1 92.07 1188 88.45 1188 L6.52 1188 C2.95 1188 0 1185.1 0 1181.47 L0 1155.34 Z"
+ class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(335.653,-1010.77)">
+ <title>Sheet.4</title>
+ <path d="M0 1155.34 C0 1151.76 2.95 1148.81 6.52 1148.81 L100.77 1148.81 C104.4 1148.81 107.3 1151.76 107.3 1155.34 L107.3
+ 1181.47 C107.3 1185.1 104.4 1188 100.77 1188 L6.52 1188 C2.95 1188 0 1185.1 0 1181.47 L0 1155.34 Z"
+ class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(377.387,-1014.99)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="16.8796" cy="1174.93" width="33.76" height="26.1305"/>
+ <path d="M33.76 1161.87 L0 1161.87 L0 1188 L33.76 1188 L33.76 1161.87" class="st3"/>
+ <text x="4.66" y="1181.47" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(443.674,-1010.77)">
+ <title>Sheet.6</title>
+ <path d="M0 1154.98 C0 1151.58 2.76 1148.81 6.16 1148.81 L30.81 1148.81 C34.26 1148.81 36.97 1151.58 36.97 1154.98 L36.97
+ 1181.83 C36.97 1185.28 34.26 1188 30.81 1188 L6.16 1188 C2.76 1188 0 1185.28 0 1181.83 L0 1154.98 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(480.648,-1011.5)">
+ <title>Sheet.7</title>
+ <path d="M0 1155.34 C0 1151.76 2.95 1148.81 6.52 1148.81 L100.77 1148.81 C104.4 1148.81 107.3 1151.76 107.3 1155.34 L107.3
+ 1181.47 C107.3 1185.1 104.4 1188 100.77 1188 L6.52 1188 C2.95 1188 0 1185.1 0 1181.47 L0 1155.34 Z"
+ class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(522.382,-1015.49)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="16.8796" cy="1174.93" width="33.76" height="26.1305"/>
+ <path d="M33.76 1161.87 L0 1161.87 L0 1188 L33.76 1188 L33.76 1161.87" class="st3"/>
+ <text x="4.66" y="1181.47" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(587.22,-1011.5)">
+ <title>Sheet.9</title>
+ <path d="M0 1154.98 C0 1151.58 2.76 1148.81 6.16 1148.81 L30.81 1148.81 C34.26 1148.81 36.97 1151.58 36.97 1154.98 L36.97
+ 1181.83 C36.97 1185.28 34.26 1188 30.81 1188 L6.16 1188 C2.76 1188 0 1185.28 0 1181.83 L0 1154.98 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-1016.39)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="104.097" cy="1174.93" width="208.2" height="26.1302"/>
+ <path d="M208.19 1161.87 L0 1161.87 L0 1188 L208.19 1188 L208.19 1161.87" class="st3"/>
+ <text x="16.59" y="1181.47" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(277.655,-952.713)">
+ <title>Sheet.11</title>
+ <path d="M0 1155.34 C0 1151.76 2.95 1148.81 6.52 1148.81 L88.45 1148.81 C92.07 1148.81 94.97 1151.76 94.97 1155.34 L94.97
+ 1181.47 C94.97 1185.1 92.07 1188 88.45 1188 L6.52 1188 C2.95 1188 0 1185.1 0 1181.47 L0 1155.34 Z"
+ class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(372.627,-951.261)">
+ <title>Sheet.12</title>
+ <path d="M0 1155.34 C0 1151.76 2.95 1148.81 6.52 1148.81 L100.77 1148.81 C104.4 1148.81 107.3 1151.76 107.3 1155.34 L107.3
+ 1181.47 C107.3 1185.1 104.4 1188 100.77 1188 L6.52 1188 C2.95 1188 0 1185.1 0 1181.47 L0 1155.34 Z"
+ class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(414.386,-955.425)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="16.8796" cy="1174.93" width="33.76" height="26.1305"/>
+ <path d="M33.76 1161.87 L0 1161.87 L0 1188 L33.76 1188 L33.76 1161.87" class="st3"/>
+ <text x="4.66" y="1181.47" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(480.648,-951.261)">
+ <title>Sheet.14</title>
+ <path d="M0 1154.98 C0 1151.58 2.76 1148.81 6.16 1148.81 L30.81 1148.81 C34.26 1148.81 36.97 1151.58 36.97 1154.98 L36.97
+ 1181.83 C36.97 1185.28 34.26 1188 30.81 1188 L6.16 1188 C2.76 1188 0 1185.28 0 1181.83 L0 1154.98 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(517.622,-951.987)">
+ <title>Sheet.15</title>
+ <path d="M0 1155.34 C0 1151.76 2.95 1148.81 6.52 1148.81 L100.77 1148.81 C104.4 1148.81 107.3 1151.76 107.3 1155.34 L107.3
+ 1181.47 C107.3 1185.1 104.4 1188 100.77 1188 L6.52 1188 C2.95 1188 0 1185.1 0 1181.47 L0 1155.34 Z"
+ class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(559.381,-955.926)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="16.8796" cy="1174.93" width="33.76" height="26.1305"/>
+ <path d="M33.76 1161.87 L0 1161.87 L0 1188 L33.76 1188 L33.76 1161.87" class="st3"/>
+ <text x="4.66" y="1181.47" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(624.194,-951.987)">
+ <title>Sheet.17</title>
+ <path d="M0 1154.98 C0 1151.58 2.76 1148.81 6.16 1148.81 L30.81 1148.81 C34.26 1148.81 36.97 1151.58 36.97 1154.98 L36.97
+ 1181.83 C36.97 1185.28 34.26 1188 30.81 1188 L6.16 1188 C2.76 1188 0 1185.28 0 1181.83 L0 1154.98 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(109.808,-959.83)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="19.2022" cy="1174.93" width="38.41" height="26.1302"/>
+ <path d="M38.4 1161.87 L0 1161.87 L0 1188 L38.4 1188 L38.4 1161.87" class="st3"/>
+ <text x="5.52" y="1181.47" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(345.278,-891.751)">
+ <title>Sheet.19</title>
+ <path d="M0 1155.98 C0 1152.44 2.9 1149.54 6.43 1149.54 L88.58 1149.54 C92.12 1149.54 94.97 1152.44 94.97 1155.98 L94.97
+ 1181.6 C94.97 1185.14 92.12 1188 88.58 1188 L6.43 1188 C2.9 1188 0 1185.14 0 1181.6 L0 1155.98 Z"
+ class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(440.975,-890.3)">
+ <title>Sheet.20</title>
+ <path d="M0 1155.34 C0 1151.76 2.95 1148.81 6.52 1148.81 L100.77 1148.81 C104.4 1148.81 107.3 1151.76 107.3 1155.34 L107.3
+ 1181.47 C107.3 1185.1 104.4 1188 100.77 1188 L6.52 1188 C2.95 1188 0 1185.1 0 1181.47 L0 1155.34 Z"
+ class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(482.409,-894.363)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="16.8796" cy="1174.93" width="33.76" height="26.1305"/>
+ <path d="M33.76 1161.87 L0 1161.87 L0 1188 L33.76 1188 L33.76 1161.87" class="st3"/>
+ <text x="4.66" y="1181.47" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(548.996,-890.3)">
+ <title>Sheet.22</title>
+ <path d="M0 1154.98 C0 1151.58 2.76 1148.81 6.16 1148.81 L30.81 1148.81 C34.26 1148.81 36.97 1151.58 36.97 1154.98 L36.97
+ 1181.83 C36.97 1185.28 34.26 1188 30.81 1188 L6.16 1188 C2.76 1188 0 1185.28 0 1181.83 L0 1154.98 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(585.97,-891.025)">
+ <title>Sheet.23</title>
+ <path d="M0 1155.34 C0 1151.76 2.95 1148.81 6.52 1148.81 L100.77 1148.81 C104.4 1148.81 107.3 1151.76 107.3 1155.34 L107.3
+ 1181.47 C107.3 1185.1 104.4 1188 100.77 1188 L6.52 1188 C2.95 1188 0 1185.1 0 1181.47 L0 1155.34 Z"
+ class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(627.404,-894.864)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="16.8796" cy="1174.93" width="33.76" height="26.1305"/>
+ <path d="M33.76 1161.87 L0 1161.87 L0 1188 L33.76 1188 L33.76 1161.87" class="st3"/>
+ <text x="4.66" y="1181.47" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(692.542,-891.025)">
+ <title>Sheet.25</title>
+ <path d="M0 1154.98 C0 1151.58 2.76 1148.81 6.16 1148.81 L30.81 1148.81 C34.26 1148.81 36.97 1151.58 36.97 1154.98 L36.97
+ 1181.83 C36.97 1185.28 34.26 1188 30.81 1188 L6.16 1188 C2.76 1188 0 1185.28 0 1181.83 L0 1154.98 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(109.308,-898.768)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="19.2022" cy="1174.93" width="38.41" height="26.1302"/>
+ <path d="M38.4 1161.87 L0 1161.87 L0 1188 L38.4 1188 L38.4 1161.87" class="st3"/>
+ <text x="5.52" y="1181.47" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(636.118,-747)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="32.7073" cy="1172.76" width="65.42" height="30.4844"/>
+ <path d="M65.41 1157.52 L0 1157.52 L0 1188 L65.41 1188 L65.41 1157.52" class="st3"/>
+ <text x="7.14" y="1180.38" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(306.386,-808.107)">
+ <title>Sheet.29</title>
+ <path d="M0 1157.52 L0 1188 L0 1157.52" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(306.386,-825.66)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L58.86 1187.55 L107.61 1176.66" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(162,-808.107)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="68.8761" cy="1174.8" width="137.76" height="26.4087"/>
+ <path d="M137.75 1161.59 L0 1161.59 L0 1188 L137.75 1188 L137.75 1161.59" class="st3"/>
+ <text x="5.63" y="1170.44" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="75.92" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(414.386,-823.2)">
+ <title>Sheet.33</title>
+ <path d="M0 868.2 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(374.334,-1143)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="43.2194" cy="1172.76" width="86.44" height="30.4844"/>
+ <path d="M86.44 1157.52 L0 1157.52 L0 1188 L86.44 1188 L86.44 1157.52" class="st3"/>
+ <text x="8.51" y="1180.38" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(316.939,-1118.91)">
+ <title>Sheet.35</title>
+ <path d="M0 1164.05 L0 1188 L0 1164.05" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(316.939,-1121.45)">
+ <title>Sheet.36</title>
+ <path d="M0 1176.52 L60.17 1176.07 L97.1 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(158.718,-1119.3)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="76.724" cy="1179.29" width="153.45" height="17.4211"/>
+ <path d="M153.45 1170.58 L0 1170.58 L0 1188 L153.45 1188 L153.45 1170.58" class="st3"/>
+ <text x="0.09" y="1183.64" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(516.172,-819)">
+ <title>Sheet.38</title>
+ <path d="M0 864 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(574.306,-792.296)">
+ <title>Sheet.39</title>
+ <path d="M0 1134.3 L0 1188 L0 1134.3" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(517.939,-819)">
+ <title>Sheet.40</title>
+ <path d="M56.37 1188 L37.52 1187.95 L0 1158.97" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(579.283,-793.278)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="122.717" cy="1162.74" width="245.44" height="50.5152"/>
+ <path d="M245.43 1137.48 L0 1137.48 L0 1188 L245.43 1188 L245.43 1137.48" class="st3"/>
+ <text x="0" y="1149.68" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(492.33,-1143)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="29.9047" cy="1172.76" width="59.81" height="30.4844"/>
+ <path d="M59.81 1157.52 L0 1157.52 L0 1188 L59.81 1188 L59.81 1157.52" class="st3"/>
+ <text x="6.77" y="1180.38" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(585,-1128.34)">
+ <title>Sheet.48</title>
+ <path d="M0 1157.52 L0 1188 L0 1157.52" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(476.536,-1109.79)">
+ <title>Sheet.49</title>
+ <path d="M108.93 1160.69 L80.93 1160.65 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(576,-1128.63)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="76.724" cy="1173.63" width="153.45" height="28.7428"/>
+ <path d="M153.45 1159.26 L0 1159.26 L0 1188 L153.45 1188 L153.45 1159.26" class="st3"/>
+ <text x="12.72" y="1180.81" class="st12" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(434.25,-810)">
+ <title>Sheet.51</title>
+ <path d="M0 864 L0 1188" class="st13"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(338.689,-1087.7)">
+ <title>Sheet.52</title>
+ <path d="M0 1164.05 L0 1188 L0 1164.05" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(338.689,-1090.24)">
+ <title>Sheet.53</title>
+ <path d="M0 1176.52 L60.17 1176.07 L97.1 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(180.467,-1088.46)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="76.724" cy="1179.29" width="153.45" height="17.4211"/>
+ <path d="M153.45 1170.58 L0 1170.58 L0 1188 L153.45 1188 L153.45 1170.58" class="st3"/>
+ <text x="0.09" y="1183.64" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(513.997,-810)">
+ <title>Sheet.56</title>
+ <path d="M0 864 L0 1188" class="st13"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(481.011,-1082.26)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L104.45 1134.58" class="st14"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(448.387,-942.326)">
+ <title>Sheet.58</title>
+ <path d="M307.61 1185.32 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(747,-934.257)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="92.5251" cy="1173.63" width="185.06" height="28.7428"/>
+ <path d="M185.05 1159.26 L0 1159.26 L0 1188 L185.05 1188 L185.05 1159.26" class="st3"/>
+ <text x="14.78" y="1180.81" class="st12" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(450.199,-942.417)">
+ <title>Sheet.60</title>
+ <path d="M0 1177.7 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(594.47,-943.142)">
+ <title>Sheet.61</title>
+ <path d="M0 1177.7 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(254.002,-1002.43)">
+ <title>Sheet.62</title>
+ <path d="M502 1188 L0 1187.59" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(747,-990)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="97.9986" cy="1173.63" width="196" height="28.7428"/>
+ <path d="M196 1159.26 L0 1159.26 L0 1188 L196 1188 L196 1159.26" class="st3"/>
+ <text x="15.49" y="1180.81" class="st12" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(254.455,-1001.93)">
+ <title>Sheet.64</title>
+ <path d="M0 1177.7 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(450.199,-1002.65)">
+ <title>Sheet.65</title>
+ <path d="M0 1177.7 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(617.669,-1003.38)">
+ <title>Sheet.66</title>
+ <path d="M0 1177.7 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(344.304,-876.625)">
+ <title>Sheet.67</title>
+ <path d="M411.7 1188 L0 1187.59" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(344.757,-876.126)">
+ <title>Sheet.68</title>
+ <path d="M0 1177.7 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(731.524,-877.578)">
+ <title>Sheet.69</title>
+ <path d="M0 1177.7 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(762.248,-864)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="68.8761" cy="1173.63" width="137.76" height="28.7428"/>
+ <path d="M137.75 1159.26 L0 1159.26 L0 1188 L137.75 1188 L137.75 1159.26" class="st3"/>
+ <text x="3.14" y="1180.81" class="st12" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(143.026,-693.87)">
+ <title>Sheet.71</title>
+ <path d="M0 1165.98 C0 1163.71 2.76 1161.87 6.16 1161.87 L30.81 1161.87 C34.26 1161.87 36.97 1163.71 36.97 1165.98 L36.97
+ 1183.89 C36.97 1186.19 34.26 1188 30.81 1188 L6.16 1188 C2.76 1188 0 1186.19 0 1183.89 L0 1165.98 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(192.124,-693.591)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="209.938" cy="1174.8" width="419.88" height="26.4087"/>
+ <path d="M419.88 1161.59 L0 1161.59 L0 1188 L419.88 1188 L419.88 1161.59" class="st3"/>
+ <text x="0" y="1170.44" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. non critical section or quiescent state.</tspan></text> </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(144.703,-648.87)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="17.6483" cy="1174.93" width="35.3" height="26.1302"/>
+ <path d="M0 1166.22 C0 1163.84 0.97 1161.87 2.15 1161.87 L33.15 1161.87 C34.34 1161.87 35.3 1163.84 35.3 1166.22 L35.3
+ 1183.64 C35.3 1186.06 34.34 1188 33.15 1188 L2.15 1188 C0.97 1188 0 1186.06 0 1183.64 L0 1166.22 Z"
+ class="st2"/>
+ <text x="10.02" y="1179.14" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(192.124,-648.591)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="209.938" cy="1174.8" width="419.88" height="26.4087"/>
+ <path d="M419.88 1161.59 L0 1161.59 L0 1188 L419.88 1188 L419.88 1161.59" class="st3"/>
+ <text x="0" y="1170.44" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(237.124,-603)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="182.938" cy="1168.41" width="365.88" height="39.1897"/>
+ <path d="M365.88 1148.81 L0 1148.81 L0 1188 L365.88 1188 L365.88 1148.81" class="st3"/>
+ <text x="0" y="1164.05" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is <tspan
+ x="0" dy="1.2em" class="st9">removed using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(135,-612)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="43.2194" cy="1172.76" width="86.44" height="30.4844"/>
+ <path d="M86.44 1157.52 L0 1157.52 L0 1188 L86.44 1188 L86.44 1157.52" class="st3"/>
+ <text x="8.51" y="1180.38" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(236.4,-561.629)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="189.688" cy="1173.63" width="379.38" height="28.7428"/>
+ <path d="M379.38 1159.26 L0 1159.26 L0 1188 L379.38 1188 L379.38 1159.26" class="st3"/>
+ <text x="0" y="1169.27" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted <tspan
+ x="0" dy="1.2em" class="st9">entry.</tspan></text> </g>
+ <g id="shape78-218" v:mID="78" v:groupContext="shape" transform="translate(135,-565.56)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="32.7073" cy="1172.76" width="65.42" height="30.4844"/>
+ <path d="M65.41 1157.52 L0 1157.52 L0 1188 L65.41 1188 L65.41 1157.52" class="st3"/>
+ <text x="9.58" y="1180.38" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-222" v:mID="79" v:groupContext="shape" transform="translate(237.274,-516.629)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="189.688" cy="1173.63" width="379.38" height="28.7428"/>
+ <path d="M379.38 1159.26 L0 1159.26 L0 1188 L379.38 1188 L379.38 1159.26" class="st3"/>
+ <text x="0" y="1169.27" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during <tspan
+ x="0" dy="1.2em" class="st9">which memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-227" v:mID="80" v:groupContext="shape" transform="translate(144.15,-509.516)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="43.2194" cy="1172.76" width="86.44" height="30.4844"/>
+ <path d="M86.44 1157.52 L0 1157.52 L0 1188 L86.44 1188 L86.44 1157.52" class="st3"/>
+ <text x="0" y="1165.14" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-232" v:mID="83" v:groupContext="shape" transform="translate(414.997,-1107)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L91.51 1188" class="st16"/>
+ </g>
+ <g id="shape84-241" v:mID="84" v:groupContext="shape" transform="translate(434.25,-1080)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L71.98 1188" class="st18"/>
+ </g>
+ <g id="shape85-250" v:mID="85" v:groupContext="shape" transform="translate(701.532,-765)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L62.81 1188" class="st20"/>
+ </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..6fb3fb921 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..5155dd35c
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,179 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-less data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for example, real-time applications).
+
+In the following paragraphs, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or anything that represents memory,
+for example an index into a free-element array.
+
+Since these data structures are lock-less, writers and readers access
+them concurrently. Hence, while removing an element from a data
+structure, a writer cannot return the associated memory to the
+allocator without knowing that no reader is still referencing that
+element/memory. Hence, the operation of removing an element must be
+separated into 2 steps:
+
+Delete: in this step, the writer removes the reference to the element from
+the data structure but does not return the associated memory to the
+allocator. This will ensure that new readers will not get a reference to
+the removed element. Removing the reference is an atomic operation.
+
+Free (Reclaim): in this step, the writer returns the memory to the
+memory allocator, only after knowing that all the readers have stopped
+referencing the deleted element.
+
+This library helps the writer determine when it is safe to free the
+memory.
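
The two-step removal described above can be sketched with plain C11 atomics. This is a minimal, hypothetical illustration only; none of these names (`slot`, `delete_entry`, `free_entry`) belong to the library:

```c
#include <stdatomic.h>
#include <stdlib.h>

/* 'slot' stands for a location in a lock-less structure that readers
 * traverse; the names here are illustrative only. */
static _Atomic(int *) slot;

/* Step 1 - Delete: atomically remove the reference so that new readers
 * can no longer reach the entry. The memory is NOT freed yet. */
static int *delete_entry(void)
{
    return atomic_exchange(&slot, NULL);
}

/* Step 2 - Free: return the memory to the allocator, but only after the
 * grace period, i.e. once every reader that might still hold 'removed'
 * has passed through a quiescent state. */
static void free_entry(int *removed)
{
    free(removed);
}
```

The gap between the two calls is exactly the grace period this library helps the writer measure.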
+
+This library makes use of thread Quiescent State (QS).
+
+What is Quiescent State
+-----------------------
+Quiescent State can be defined as 'any point in the thread execution where the
+thread does not hold a reference to shared memory'. It is up to the application
+to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. When it is
+accessing D1, if the writer has to remove an element from D1, the
+writer cannot free the memory associated with that element immediately.
+The writer can return the memory to the allocator only after the reader
+stops referencing D1. In other words, reader thread RT1 has to enter
+a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, the writer has to
+wait until reader thread 2 enters a quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+a quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened. So, reader thread 3 cannot have a reference to the
+deleted entry.
+
+Note that the critical section for D2 is a quiescent state
+for D1, i.e. for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state for Dx.
+
+Since memory is not freed immediately, there might be a need for
+provisioning of additional memory, depending on the application requirements.
+
+Factors affecting the RCU mechanism
+-----------------------------------
+
+It is important to make sure that this library keeps the overhead of
+identifying the end of the grace period, and of the subsequent freeing of
+memory, to a minimum. The following paragraphs explain how the grace period
+and critical sections affect this overhead.
+
+The writer has to poll the readers to identify the end of the grace period.
+Polling introduces memory accesses and wastes CPU cycles. The memory
+is not available for reuse during the grace period. Longer grace periods
+exacerbate these conditions.
+
+The duration of the grace period is proportional to the length of the
+critical sections and the number of reader threads. Keeping the critical
+sections smaller will keep the grace period smaller. However, smaller
+critical sections require additional CPU cycles (due to additional
+reporting) in the readers.
+
+Hence, what is needed is a short grace period combined with large critical
+sections. This library addresses this by allowing the writer to do
+other work without having to block until the readers report their quiescent
+state.
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the start and end of the while(1) loop (where no
+references to shared data structures are kept) act as perfect quiescent
+states. This combines all the shared data structure accesses into a
+single, large critical section, which helps keep the overhead on the
+reader side to a minimum.
+
+DPDK supports the pipeline model of packet processing and service cores.
+In these use cases, a given data structure may not be used by all the
+workers in the application. The writer does not have to wait for all
+the workers to report their quiescent state. To provide the required
+flexibility, this library has a concept of a QS variable. The application
+can create one QS variable per data structure to help it track the
+end of the grace period for each data structure. This helps keep the grace
+period to a minimum.
+
+How to use this library
+-----------------------
+
+The application has to allocate memory and initialize a QS variable.
+
+The application can call **rte_rcu_qsbr_get_memsize** to calculate the size
+of memory to allocate. This API takes, as a parameter, the maximum number
+of reader threads that will use this variable. Currently, a maximum of 1024
+threads is supported.
+
+Further, the application can initialize a QS variable using the API
+**rte_rcu_qsbr_init**.
+
+Each reader thread is assumed to have a unique thread ID. Currently, the
+management of thread IDs (for ex: allocation/free) is left to the
+application. The thread ID should be in the range of 0 to the
+maximum number of threads provided while creating the QS variable.
+The application could also use lcore_id as the thread ID where applicable.
+
+The **rte_rcu_qsbr_thread_register** API registers a reader thread
+to report its quiescent state. It can be called from a reader thread.
+A control plane thread can also call it on behalf of a reader thread.
+The reader thread must call the **rte_rcu_qsbr_thread_online** API to start
+reporting its quiescent state.
+
+Some of the use cases might require the reader threads to make
+blocking API calls (for ex: while using eventdev APIs). The writer thread
+should not wait for such reader threads to enter quiescent state.
+The reader thread must call **rte_rcu_qsbr_thread_offline** API, before calling
+blocking APIs. It can call **rte_rcu_qsbr_thread_online** API once the blocking
+API call returns.
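
The offline/online transition can be pictured with a per-reader counter where the value 0 means 'offline'. This is an illustrative sketch of the idea only, not the library's actual data layout; all names here are hypothetical:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative per-reader state; counter value 0 means "offline", so a
 * writer waiting for a grace period skips this reader entirely. */
static atomic_uint_fast64_t qs_cnt;
static atomic_uint_fast64_t qs_token = 7; /* current writer token (example) */

/* Call before a blocking API: writers stop waiting for this reader. */
static void thread_offline(void)
{
    atomic_store(&qs_cnt, 0);
}

/* Call after the blocking API returns: resume from the current token so
 * this reader does not stall an in-progress grace period. */
static void thread_online(void)
{
    atomic_store(&qs_cnt, atomic_load(&qs_token));
}
```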
+
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the API **rte_rcu_qsbr_start**. It is possible for multiple
+writer threads to query the quiescent state status simultaneously. Hence,
+**rte_rcu_qsbr_start** returns a token to each caller.
+
+The writer thread has to call the **rte_rcu_qsbr_check** API with the token to
+get the current quiescent state status. An option to block until all the
+reader threads enter the quiescent state is provided. If this API indicates
+that all the reader threads have entered the quiescent state, the application
+can free the deleted entry.
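
The token mechanism behind the start/check pair can be sketched as follows. This is a simplified single-file model of the idea; the real library uses per-thread registration and careful memory ordering not shown here, and `writer_start`, `reader_update` and `writer_check` are hypothetical names standing in for the roles of **rte_rcu_qsbr_start**, **rte_rcu_qsbr_update** and **rte_rcu_qsbr_check**:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define N_READERS 2

/* Simplified model: a global token plus one counter per reader.
 * Counter value 0 means the reader is offline. */
static atomic_uint_fast64_t token = 1;
static atomic_uint_fast64_t reader_cnt[N_READERS];

/* Writer: trigger a new quiescent state query; returns the token the
 * readers must catch up to. */
static uint64_t writer_start(void)
{
    return atomic_fetch_add(&token, 1) + 1;
}

/* Reader: report a quiescent state by copying the latest token. */
static void reader_update(unsigned int id)
{
    atomic_store(&reader_cnt[id], atomic_load(&token));
}

/* Writer: non-blocking check - have all online readers caught up? */
static bool writer_check(uint64_t t)
{
    for (unsigned int i = 0; i < N_READERS; i++) {
        uint64_t c = atomic_load(&reader_cnt[i]);
        if (c != 0 && c < t)
            return false; /* an online reader has not quiesced yet */
    }
    return true;
}
```

In this model, a writer that gets `false` back is free to do other useful work and retry later, which is the flexibility the separation of start and check provides.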
+
+The APIs **rte_rcu_qsbr_start** and **rte_rcu_qsbr_check** are lock-free.
+Hence, they can be called concurrently from multiple writers, even while
+running as worker threads.
+
+The separation of triggering the reporting from querying the status gives
+the writer threads the flexibility to do useful work instead of blocking
+until the reader threads enter the quiescent state or go offline. This also
+reduces the memory accesses caused by continuous polling for the status.
+
+**rte_rcu_qsbr_synchronize** API combines the functionality of
+**rte_rcu_qsbr_start** and blocking **rte_rcu_qsbr_check** into a single API.
+This API triggers the reader threads to report their quiescent state and
+polls until all the readers enter the quiescent state or go offline. This
+API does not allow the writer to do useful work while waiting, and it also
+introduces additional memory accesses due to continuous polling.
+
+The reader thread must call **rte_rcu_qsbr_thread_offline** and
+**rte_rcu_qsbr_thread_unregister** APIs to remove itself from reporting its
+quiescent state. The **rte_rcu_qsbr_check** API will not wait for this reader
+thread to report the quiescent state status anymore.
+
+The reader threads should call the **rte_rcu_qsbr_update** API to indicate
+that they entered a quiescent state. This API checks if a writer has
+triggered a quiescent state query and updates the state accordingly.
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH 1/3] rcu: add RCU library supporting QSBR mechanism
2019-03-19 4:52 ` [dpdk-dev] [PATCH 1/3] rcu: " Honnappa Nagarahalli
2019-03-19 4:52 ` Honnappa Nagarahalli
@ 2019-03-22 16:42 ` Ananyev, Konstantin
2019-03-22 16:42 ` Ananyev, Konstantin
2019-03-26 4:35 ` Honnappa Nagarahalli
1 sibling, 2 replies; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-03-22 16:42 UTC (permalink / raw)
To: Honnappa Nagarahalli, stephen, paulmck, dev
Cc: gavin.hu, dharmik.thakkar, malvika.gupta
Hi Honnappa,
> diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
> new file mode 100644
> index 000000000..0fc4515ea
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_qsbr.c
> @@ -0,0 +1,99 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + *
> + * Copyright (c) 2018 Arm Limited
> + */
> +
> +#include <stdio.h>
> +#include <string.h>
> +#include <stdint.h>
> +#include <errno.h>
> +
> +#include <rte_common.h>
> +#include <rte_log.h>
> +#include <rte_memory.h>
> +#include <rte_malloc.h>
> +#include <rte_eal.h>
> +#include <rte_eal_memconfig.h>
> +#include <rte_atomic.h>
> +#include <rte_per_lcore.h>
> +#include <rte_lcore.h>
> +#include <rte_errno.h>
> +
> +#include "rte_rcu_qsbr.h"
> +
> +/* Get the memory size of QSBR variable */
> +size_t __rte_experimental
> +rte_rcu_qsbr_get_memsize(uint32_t max_threads)
> +{
> + size_t sz;
> +
> + RTE_ASSERT(max_threads == 0);
Here and in all similar places:
assert() will abort when its condition evaluates to false.
So it should be max_threads != 0.
Also, it is a public, non-datapath function.
Calling assert() for an invalid input parameter seems way too extreme.
Why not just return an error to the caller?
> +
> + sz = sizeof(struct rte_rcu_qsbr);
> +
> + /* Add the size of quiescent state counter array */
> + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> +
> + return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
> +}
> +
> +/* Initialize a quiescent state variable */
> +void __rte_experimental
> +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
> +{
> + RTE_ASSERT(v == NULL);
> +
> + memset(v, 0, rte_rcu_qsbr_get_memsize(max_threads));
> + v->m_threads = max_threads;
> + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> + v->token = RTE_QSBR_CNT_INIT;
> +}
> +
> +/* Dump the details of a single quiescent state variable to a file. */
> +void __rte_experimental
> +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
> +{
> + uint64_t bmap;
> + uint32_t i, t;
> +
> + RTE_ASSERT(v == NULL || f == NULL);
> +
> + fprintf(f, "\nQuiescent State Variable @%p\n", v);
> +
> + fprintf(f, " QS variable memory size = %lu\n",
> + rte_rcu_qsbr_get_memsize(v->m_threads));
> + fprintf(f, " Given # max threads = %u\n", v->m_threads);
> +
> + fprintf(f, " Registered thread ID mask = 0x");
> + for (i = 0; i < v->num_elems; i++)
> + fprintf(f, "%lx", __atomic_load_n(&v->reg_thread_id[i],
> + __ATOMIC_ACQUIRE));
> + fprintf(f, "\n");
> +
> + fprintf(f, " Token = %lu\n",
> + __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
> +
> + fprintf(f, "Quiescent State Counts for readers:\n");
> + for (i = 0; i < v->num_elems; i++) {
> + bmap = __atomic_load_n(&v->reg_thread_id[i], __ATOMIC_ACQUIRE);
> + while (bmap) {
> + t = __builtin_ctzl(bmap);
> + fprintf(f, "thread ID = %d, count = %lu\n", t,
> + __atomic_load_n(
> + &RTE_QSBR_CNT_ARRAY_ELM(v, i)->cnt,
> + __ATOMIC_RELAXED));
> + bmap &= ~(1UL << t);
> + }
> + }
> +}
> +
> +int rcu_log_type;
> +
> +RTE_INIT(rte_rcu_register)
> +{
> + rcu_log_type = rte_log_register("lib.rcu");
> + if (rcu_log_type >= 0)
> + rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
> +}
> diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
> new file mode 100644
> index 000000000..83943f751
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> @@ -0,0 +1,511 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (c) 2018 Arm Limited
> + */
> +
> +#ifndef _RTE_RCU_QSBR_H_
> +#define _RTE_RCU_QSBR_H_
> +
> +/**
> + * @file
> + * RTE Quiescent State Based Reclamation (QSBR)
> + *
> + * Quiescent State (QS) is any point in the thread execution
> + * where the thread does not hold a reference to a data structure
> + * in shared memory. While using lock-less data structures, the writer
> + * can safely free memory once all the reader threads have entered
> + * quiescent state.
> + *
> + * This library provides the ability for the readers to report quiescent
> + * state and for the writers to identify when all the readers have
> + * entered quiescent state.
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <errno.h>
> +#include <rte_common.h>
> +#include <rte_memory.h>
> +#include <rte_lcore.h>
> +#include <rte_debug.h>
> +
> +extern int rcu_log_type;
> +
> +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
> +#define RCU_DP_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> + "%s(): " fmt "\n", __func__, ## args)
> +#else
> +#define RCU_DP_LOG(level, fmt, args...)
> +#endif
Why do you need that?
Can't you use RTE_LOG_DP() instead?
> +
> +/* Registered thread IDs are stored as a bitmap of 64b element array.
> + * Given thread id needs to be converted to index into the array and
> + * the id within the array element.
> + */
> +#define RTE_RCU_MAX_THREADS 1024
> +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> +#define RTE_QSBR_THRID_ARRAY_ELEMS \
> + (RTE_ALIGN_MUL_CEIL(RTE_RCU_MAX_THREADS, \
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE) / RTE_QSBR_THRID_ARRAY_ELM_SIZE)
> +#define RTE_QSBR_THRID_INDEX_SHIFT 6
> +#define RTE_QSBR_THRID_MASK 0x3f
> +#define RTE_QSBR_THRID_INVALID 0xffffffff
> +
> +/* Worker thread counter */
> +struct rte_rcu_qsbr_cnt {
> + uint64_t cnt;
> + /**< Quiescent state counter. Value 0 indicates the thread is offline */
> +} __rte_cache_aligned;
> +
> +#define RTE_QSBR_CNT_ARRAY_ELM(v, i) (((struct rte_rcu_qsbr_cnt *)(v + 1)) + i)
You can probably add
struct rte_rcu_qsbr_cnt cnt[0];
at the end of struct rte_rcu_qsbr, then wouldn't need macro above.
> +#define RTE_QSBR_CNT_THR_OFFLINE 0
> +#define RTE_QSBR_CNT_INIT 1
> +
> +/**
> + * RTE thread Quiescent State structure.
> + * Quiescent state counter array (array of 'struct rte_rcu_qsbr_cnt'),
> + * whose size is dependent on the maximum number of reader threads
> + * (m_threads) using this variable is stored immediately following
> + * this structure.
> + */
> +struct rte_rcu_qsbr {
> + uint64_t token __rte_cache_aligned;
> + /**< Counter to allow for multiple simultaneous QS queries */
> +
> + uint32_t num_elems __rte_cache_aligned;
> + /**< Number of elements in the thread ID array */
> + uint32_t m_threads;
> + /**< Maximum number of threads this RCU variable will use */
> +
> + uint64_t reg_thread_id[RTE_QSBR_THRID_ARRAY_ELEMS] __rte_cache_aligned;
> + /**< Registered thread IDs are stored in a bitmap array */
As I understand, you ended up with a fixed size array to avoid 2 variable size arrays in this struct?
Is there a big penalty for register/unregister() to either store a pointer to the bitmap, or calculate it based on the num_elems value?
As another thought - do we really need the bitmap at all?
Might it be possible to store the register value for each thread inside its rte_rcu_qsbr_cnt:
struct rte_rcu_qsbr_cnt {uint64_t cnt; uint32_t register;} __rte_cache_aligned;
?
That would cause check() to walk through all elems in the rte_rcu_qsbr_cnt array,
but on the other hand it would help to avoid cache conflicts for register/unregister.
> +} __rte_cache_aligned;
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Return the size of the memory occupied by a Quiescent State variable.
> + *
> + * @param max_threads
> + * Maximum number of threads reporting quiescent state on this variable.
> + * @return
> + * Size of memory in bytes required for this QS variable.
> + */
> +size_t __rte_experimental
> +rte_rcu_qsbr_get_memsize(uint32_t max_threads);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Initialize a Quiescent State (QS) variable.
> + *
> + * @param v
> + * QS variable
> + * @param max_threads
> + * Maximum number of threads reporting QS on this variable.
> + *
> + */
> +void __rte_experimental
> +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Register a reader thread to report its quiescent state
> + * on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + * Any reader thread that wants to report its quiescent state must
> + * call this API. This can be called during initialization or as part
> + * of the packet processing loop.
> + *
> + * Note that rte_rcu_qsbr_thread_online must be called before the
> + * thread updates its QS using rte_rcu_qsbr_update.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will report its quiescent state on
> + * the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id;
> +
> + RTE_ASSERT(v == NULL || thread_id >= v->max_threads);
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Release the new register thread ID to other threads
> + * calling rte_rcu_qsbr_check.
> + */
> + __atomic_fetch_or(&v->reg_thread_id[i], 1UL << id, __ATOMIC_RELEASE);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Remove a reader thread, from the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * This API can be called from the reader threads during shutdown.
> + * Ongoing QS queries will stop waiting for the status from this
> + * unregistered reader thread.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will stop reporting its quiescent
> + * state on the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id;
> +
> + RTE_ASSERT(v == NULL || thread_id >= v->max_threads);
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Make sure the removal of the thread from the list of
> + * reporting threads is visible before the thread
> + * does anything else.
> + */
> + __atomic_fetch_and(&v->reg_thread_id[i],
> + ~(1UL << id), __ATOMIC_RELEASE);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Add a registered reader thread, to the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * Any registered reader thread that wants to report its quiescent state must
> + * call this API before calling rte_rcu_qsbr_update. This can be called
> + * during initialization or as part of the packet processing loop.
> + *
> + * The reader thread must call rte_rcu_thread_offline API, before
> + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> + * API does not wait indefinitely for the reader thread to update its QS.
> + *
> + * The reader thread must call rte_rcu_thread_online API, after the blocking
> + * function call returns, to ensure that rte_rcu_qsbr_check API
> + * waits for the reader thread to update its QS.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will report its quiescent state on
> + * the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v == NULL || thread_id >= v->max_threads);
> +
> + /* Copy the current value of token.
> + * The fence at the end of the function will ensure that
> + * the following will not move down after the load of any shared
> + * data structure.
> + */
> + t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
> +
> + /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> + * 'cnt' (64b) is accessed atomically.
> + */
> + __atomic_store_n(&RTE_QSBR_CNT_ARRAY_ELM(v, thread_id)->cnt,
> + t, __ATOMIC_RELAXED);
> +
> + /* The subsequent load of the data structure should not
> + * move above the store. Hence a store-load barrier
> + * is required.
> + * If the load of the data structure moves above the store,
> + * writer might not see that the reader is online, even though
> + * the reader is referencing the shared data structure.
> + */
> + __atomic_thread_fence(__ATOMIC_SEQ_CST);
If it has to generate a proper memory-barrier here anyway,
could it use rte_smp_mb() here?
At least for IA it would generate more lightweight one.
Konstantin
> +}
> +
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH 1/3] rcu: add RCU library supporting QSBR mechanism
2019-03-22 16:42 ` Ananyev, Konstantin
@ 2019-03-22 16:42 ` Ananyev, Konstantin
2019-03-26 4:35 ` Honnappa Nagarahalli
1 sibling, 0 replies; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-03-22 16:42 UTC (permalink / raw)
To: Honnappa Nagarahalli, stephen, paulmck, dev
Cc: gavin.hu, dharmik.thakkar, malvika.gupta
Hi Honnappa,
> diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
> new file mode 100644
> index 000000000..0fc4515ea
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_qsbr.c
> @@ -0,0 +1,99 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + *
> + * Copyright (c) 2018 Arm Limited
> + */
> +
> +#include <stdio.h>
> +#include <string.h>
> +#include <stdint.h>
> +#include <errno.h>
> +
> +#include <rte_common.h>
> +#include <rte_log.h>
> +#include <rte_memory.h>
> +#include <rte_malloc.h>
> +#include <rte_eal.h>
> +#include <rte_eal_memconfig.h>
> +#include <rte_atomic.h>
> +#include <rte_per_lcore.h>
> +#include <rte_lcore.h>
> +#include <rte_errno.h>
> +
> +#include "rte_rcu_qsbr.h"
> +
> +/* Get the memory size of QSBR variable */
> +size_t __rte_experimental
> +rte_rcu_qsbr_get_memsize(uint32_t max_threads)
> +{
> + size_t sz;
> +
> + RTE_ASSERT(max_threads == 0);
Here and in all similar places:
assert() aborts when its condition evaluates to false,
so the condition should be max_threads != 0.
Also, this is a public, non-datapath function.
Calling assert() on an invalid input parameter seems way too extreme.
Why not just return an error to the caller?
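A minimal, self-contained sketch of the error-returning variant suggested above. The names and struct layouts here are simplified stand-ins for the DPDK types (not the actual `rte_rcu_qsbr` definitions), just to show the shape of the input check:

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for the DPDK types; sizes are illustrative only. */
#define CACHE_LINE_SIZE 64
#define ALIGN_CEIL(sz, align) ((((sz) + (align) - 1) / (align)) * (align))

struct qsbr_cnt { uint64_t cnt; } __attribute__((aligned(CACHE_LINE_SIZE)));
struct qsbr { uint64_t token; uint32_t num_elems; uint32_t m_threads; };

/* Hypothetical error-returning variant of the memsize helper: instead of
 * asserting, report invalid input via errno and a sentinel return value. */
static size_t qsbr_get_memsize(uint32_t max_threads)
{
	size_t sz;

	if (max_threads == 0) {
		errno = EINVAL;
		return 0;	/* 0 is never a valid size here */
	}

	sz = sizeof(struct qsbr);
	sz += sizeof(struct qsbr_cnt) * (size_t)max_threads;
	return ALIGN_CEIL(sz, CACHE_LINE_SIZE);
}
```

The caller can then distinguish bad input from a valid size instead of aborting the process.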
> +
> + sz = sizeof(struct rte_rcu_qsbr);
> +
> + /* Add the size of quiescent state counter array */
> + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> +
> + return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
> +}
> +
> +/* Initialize a quiescent state variable */
> +void __rte_experimental
> +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
> +{
> + RTE_ASSERT(v == NULL);
> +
> + memset(v, 0, rte_rcu_qsbr_get_memsize(max_threads));
> + v->m_threads = max_threads;
> + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> + v->token = RTE_QSBR_CNT_INIT;
> +}
> +
> +/* Dump the details of a single quiescent state variable to a file. */
> +void __rte_experimental
> +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
> +{
> + uint64_t bmap;
> + uint32_t i, t;
> +
> + RTE_ASSERT(v == NULL || f == NULL);
> +
> + fprintf(f, "\nQuiescent State Variable @%p\n", v);
> +
> + fprintf(f, " QS variable memory size = %lu\n",
> + rte_rcu_qsbr_get_memsize(v->m_threads));
> + fprintf(f, " Given # max threads = %u\n", v->m_threads);
> +
> + fprintf(f, " Registered thread ID mask = 0x");
> + for (i = 0; i < v->num_elems; i++)
> + fprintf(f, "%lx", __atomic_load_n(&v->reg_thread_id[i],
> + __ATOMIC_ACQUIRE));
> + fprintf(f, "\n");
> +
> + fprintf(f, " Token = %lu\n",
> + __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
> +
> + fprintf(f, "Quiescent State Counts for readers:\n");
> + for (i = 0; i < v->num_elems; i++) {
> + bmap = __atomic_load_n(&v->reg_thread_id[i], __ATOMIC_ACQUIRE);
> + while (bmap) {
> + t = __builtin_ctzl(bmap);
> + fprintf(f, "thread ID = %d, count = %lu\n", t,
> + __atomic_load_n(
> + &RTE_QSBR_CNT_ARRAY_ELM(v, i)->cnt,
> + __ATOMIC_RELAXED));
> + bmap &= ~(1UL << t);
> + }
> + }
> +}
> +
> +int rcu_log_type;
> +
> +RTE_INIT(rte_rcu_register)
> +{
> + rcu_log_type = rte_log_register("lib.rcu");
> + if (rcu_log_type >= 0)
> + rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
> +}
> diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
> new file mode 100644
> index 000000000..83943f751
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> @@ -0,0 +1,511 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (c) 2018 Arm Limited
> + */
> +
> +#ifndef _RTE_RCU_QSBR_H_
> +#define _RTE_RCU_QSBR_H_
> +
> +/**
> + * @file
> + * RTE Quiescent State Based Reclamation (QSBR)
> + *
> + * Quiescent State (QS) is any point in the thread execution
> + * where the thread does not hold a reference to a data structure
> + * in shared memory. While using lock-less data structures, the writer
> + * can safely free memory once all the reader threads have entered
> + * quiescent state.
> + *
> + * This library provides the ability for the readers to report quiescent
> + * state and for the writers to identify when all the readers have
> + * entered quiescent state.
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <errno.h>
> +#include <rte_common.h>
> +#include <rte_memory.h>
> +#include <rte_lcore.h>
> +#include <rte_debug.h>
> +
> +extern int rcu_log_type;
> +
> +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
> +#define RCU_DP_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> + "%s(): " fmt "\n", __func__, ## args)
> +#else
> +#define RCU_DP_LOG(level, fmt, args...)
> +#endif
Why do you need that?
Can't you use RTE_LOG_DP() instead?
> +
> +/* Registered thread IDs are stored as a bitmap of 64b element array.
> + * Given thread id needs to be converted to index into the array and
> + * the id within the array element.
> + */
> +#define RTE_RCU_MAX_THREADS 1024
> +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> +#define RTE_QSBR_THRID_ARRAY_ELEMS \
> + (RTE_ALIGN_MUL_CEIL(RTE_RCU_MAX_THREADS, \
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE) / RTE_QSBR_THRID_ARRAY_ELM_SIZE)
> +#define RTE_QSBR_THRID_INDEX_SHIFT 6
> +#define RTE_QSBR_THRID_MASK 0x3f
> +#define RTE_QSBR_THRID_INVALID 0xffffffff
> +
> +/* Worker thread counter */
> +struct rte_rcu_qsbr_cnt {
> + uint64_t cnt;
> + /**< Quiescent state counter. Value 0 indicates the thread is offline */
> +} __rte_cache_aligned;
> +
> +#define RTE_QSBR_CNT_ARRAY_ELM(v, i) (((struct rte_rcu_qsbr_cnt *)(v + 1)) + i)
You can probably add
struct rte_rcu_qsbr_cnt cnt[0];
at the end of struct rte_rcu_qsbr; then you wouldn't need the macro above.
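A short sketch of that suggestion, with simplified stand-in struct names (not the actual DPDK definitions). Note `cnt[0]` is the GNU zero-length-array spelling; standard C99 writes the flexible array member as `cnt[]`:

```c
#include <stdint.h>
#include <stdlib.h>

struct qsbr_cnt { uint64_t cnt; };

struct qsbr {
	uint64_t token;
	uint32_t m_threads;
	struct qsbr_cnt cnt[];	/* per-thread counters follow the header */
};

/* Allocate the header plus the trailing counter array in one block. */
static struct qsbr *qsbr_alloc(uint32_t max_threads)
{
	struct qsbr *v;

	v = calloc(1, sizeof(*v) + max_threads * sizeof(struct qsbr_cnt));
	if (v != NULL)
		v->m_threads = max_threads;
	return v;
}
```

With this layout, `RTE_QSBR_CNT_ARRAY_ELM(v, i)` would simply become `&v->cnt[i]`.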
> +#define RTE_QSBR_CNT_THR_OFFLINE 0
> +#define RTE_QSBR_CNT_INIT 1
> +
> +/**
> + * RTE thread Quiescent State structure.
> + * Quiescent state counter array (array of 'struct rte_rcu_qsbr_cnt'),
> + * whose size is dependent on the maximum number of reader threads
> + * (m_threads) using this variable is stored immediately following
> + * this structure.
> + */
> +struct rte_rcu_qsbr {
> + uint64_t token __rte_cache_aligned;
> + /**< Counter to allow for multiple simultaneous QS queries */
> +
> + uint32_t num_elems __rte_cache_aligned;
> + /**< Number of elements in the thread ID array */
> + uint32_t m_threads;
> + /**< Maximum number of threads this RCU variable will use */
> +
> + uint64_t reg_thread_id[RTE_QSBR_THRID_ARRAY_ELEMS] __rte_cache_aligned;
> + /**< Registered thread IDs are stored in a bitmap array */
As I understand it, you ended up with a fixed-size array to avoid two variable-size arrays in this struct?
Would it be a big penalty for register/unregister() to either store a pointer to the bitmap, or calculate it based on the num_elems value?
As another thought: do we really need the bitmap at all?
Might it be possible to store a registration flag for each thread inside its rte_rcu_qsbr_cnt:
struct rte_rcu_qsbr_cnt {uint64_t cnt; uint32_t register;} __rte_cache_aligned;
?
That would cause check() to walk through all elements of the rte_rcu_qsbr_cnt array,
but on the other hand it would help to avoid cache conflicts for register/unregister.
> +} __rte_cache_aligned;
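A self-contained sketch of the alternative layout proposed in the comment above, under the stated assumptions (hypothetical names, a fixed toy capacity, no atomics). The flag field is named `registered` here because `register` is a reserved keyword in C:

```c
#include <stdint.h>

#define MAX_THREADS 4

/* Alternative layout: a registration flag next to each per-thread
 * counter, instead of a shared bitmap word. Register/unregister then
 * touch only the thread's own cache line. */
struct qsbr_cnt {
	uint64_t cnt;		/* quiescent state counter, 0 = offline */
	uint32_t registered;	/* nonzero once the thread registers */
} __attribute__((aligned(64)));

static struct qsbr_cnt threads[MAX_THREADS];

/* The cost of this layout: check() must walk every slot, including
 * unregistered ones, and skip them explicitly. */
static int qsbr_check(uint64_t token)
{
	unsigned int i;

	for (i = 0; i < MAX_THREADS; i++) {
		if (!threads[i].registered || threads[i].cnt == 0)
			continue;	/* unregistered or offline: skip */
		if (threads[i].cnt < token)
			return 0;	/* reader not yet past this token */
	}
	return 1;
}
```

This makes the register/unregister vs. check() trade-off in the comment concrete: the scan grows to max_threads slots, but there is no shared bitmap word for registering threads to contend on.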
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Return the size of the memory occupied by a Quiescent State variable.
> + *
> + * @param max_threads
> + * Maximum number of threads reporting quiescent state on this variable.
> + * @return
> + * Size of memory in bytes required for this QS variable.
> + */
> +size_t __rte_experimental
> +rte_rcu_qsbr_get_memsize(uint32_t max_threads);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Initialize a Quiescent State (QS) variable.
> + *
> + * @param v
> + * QS variable
> + * @param max_threads
> + * Maximum number of threads reporting QS on this variable.
> + *
> + */
> +void __rte_experimental
> +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Register a reader thread to report its quiescent state
> + * on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + * Any reader thread that wants to report its quiescent state must
> + * call this API. This can be called during initialization or as part
> + * of the packet processing loop.
> + *
> + * Note that rte_rcu_qsbr_thread_online must be called before the
> + * thread updates its QS using rte_rcu_qsbr_update.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will report its quiescent state on
> + * the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id;
> +
> + RTE_ASSERT(v == NULL || thread_id >= v->max_threads);
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Release the new register thread ID to other threads
> + * calling rte_rcu_qsbr_check.
> + */
> + __atomic_fetch_or(&v->reg_thread_id[i], 1UL << id, __ATOMIC_RELEASE);
> +}
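The index/mask arithmetic in the quoted register/unregister functions can be illustrated with a self-contained sketch (simplified names, a toy bitmap size, same GCC `__atomic` builtins as the patch):

```c
#include <stdint.h>

#define THRID_INDEX_SHIFT 6	/* 64 thread IDs per uint64_t word */
#define THRID_MASK        0x3f

static uint64_t reg_thread_id[16];	/* bitmap for up to 1024 IDs */

/* Decode thread_id into (word index, bit within word) exactly as the
 * patch does, then set the bit atomically. */
static void bitmap_register(unsigned int thread_id)
{
	unsigned int i = thread_id >> THRID_INDEX_SHIFT;
	unsigned int id = thread_id & THRID_MASK;

	__atomic_fetch_or(&reg_thread_id[i], 1UL << id, __ATOMIC_RELEASE);
}

/* Clear the same bit on unregister. */
static void bitmap_unregister(unsigned int thread_id)
{
	unsigned int i = thread_id >> THRID_INDEX_SHIFT;
	unsigned int id = thread_id & THRID_MASK;

	__atomic_fetch_and(&reg_thread_id[i],
			~(1UL << id), __ATOMIC_RELEASE);
}
```

For example, thread ID 70 decodes to word 1 (70 >> 6), bit 6 (70 & 0x3f).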
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Remove a reader thread, from the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * This API can be called from the reader threads during shutdown.
> + * Ongoing QS queries will stop waiting for the status from this
> + * unregistered reader thread.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will stop reporting its quiescent
> + * state on the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id;
> +
> + RTE_ASSERT(v == NULL || thread_id >= v->max_threads);
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Make sure the removal of the thread from the list of
> + * reporting threads is visible before the thread
> + * does anything else.
> + */
> + __atomic_fetch_and(&v->reg_thread_id[i],
> + ~(1UL << id), __ATOMIC_RELEASE);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Add a registered reader thread, to the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * Any registered reader thread that wants to report its quiescent state must
> + * call this API before calling rte_rcu_qsbr_update. This can be called
> + * during initialization or as part of the packet processing loop.
> + *
> + * The reader thread must call rte_rcu_thread_offline API, before
> + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> + * API does not wait indefinitely for the reader thread to update its QS.
> + *
> + * The reader thread must call rte_rcu_thread_online API, after the blocking
> + * function call returns, to ensure that rte_rcu_qsbr_check API
> + * waits for the reader thread to update its QS.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will report its quiescent state on
> + * the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v == NULL || thread_id >= v->max_threads);
> +
> + /* Copy the current value of token.
> + * The fence at the end of the function will ensure that
> + * the following will not move down after the load of any shared
> + * data structure.
> + */
> + t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
> +
> + /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> + * 'cnt' (64b) is accessed atomically.
> + */
> + __atomic_store_n(&RTE_QSBR_CNT_ARRAY_ELM(v, thread_id)->cnt,
> + t, __ATOMIC_RELAXED);
> +
> + /* The subsequent load of the data structure should not
> + * move above the store. Hence a store-load barrier
> + * is required.
> + * If the load of the data structure moves above the store,
> + * writer might not see that the reader is online, even though
> + * the reader is referencing the shared data structure.
> + */
> + __atomic_thread_fence(__ATOMIC_SEQ_CST);
If it has to generate a proper memory barrier here anyway,
could it use rte_smp_mb() instead?
At least on IA it would generate a more lightweight one.
Konstantin
> +}
> +
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH 3/3] doc/rcu: add lib_rcu documentation
2019-03-19 4:52 ` [dpdk-dev] [PATCH 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
2019-03-19 4:52 ` Honnappa Nagarahalli
@ 2019-03-25 11:34 ` Kovacevic, Marko
2019-03-25 11:34 ` Kovacevic, Marko
2019-03-26 4:43 ` Honnappa Nagarahalli
1 sibling, 2 replies; 260+ messages in thread
From: Kovacevic, Marko @ 2019-03-25 11:34 UTC (permalink / raw)
To: Honnappa Nagarahalli, Ananyev, Konstantin, stephen, paulmck, dev
Cc: gavin.hu, dharmik.thakkar, malvika.gupta
> Subject: [dpdk-dev] [PATCH 3/3] doc/rcu: add lib_rcu documentation
>
> Add lib_rcu QSBR API and programmer guide documentation.
>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
>
> ---
> doc/api/doxy-api-index.md | 3 +-
> doc/api/doxy-api.conf.in | 1 +
> .../prog_guide/img/rcu_general_info.svg | 494 ++++++++++++++++++
> doc/guides/prog_guide/index.rst | 1 +
> doc/guides/prog_guide/rcu_lib.rst | 179 +++++++
> 5 files changed, 677 insertions(+), 1 deletion(-) create mode 100644
> doc/guides/prog_guide/img/rcu_general_info.svg
> create mode 100644 doc/guides/prog_guide/rcu_lib.rst
>
<...>
> diff --git a/doc/guides/prog_guide/index.rst
> b/doc/guides/prog_guide/index.rst index 6726b1e8d..6fb3fb921 100644
> --- a/doc/guides/prog_guide/index.rst
> +++ b/doc/guides/prog_guide/index.rst
> @@ -55,6 +55,7 @@ Programmer's Guide
> metrics_lib
> bpf_lib
> ipsec_lib
> + rcu_lib
> source_org
> dev_kit_build_system
> dev_kit_root_make_help
> diff --git a/doc/guides/prog_guide/rcu_lib.rst
> b/doc/guides/prog_guide/rcu_lib.rst
> new file mode 100644
> index 000000000..5155dd35c
> --- /dev/null
> +++ b/doc/guides/prog_guide/rcu_lib.rst
> @@ -0,0 +1,179 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright(c) 2019 Arm Limited.
> +
> +.. _RCU_Library:
> +
> +RCU Library
> +============
> +
> +Lock-less data structures provide scalability and determinism.
> +They enable use cases where locking may not be allowed (for ex:
> +real-time applications).
> +
> +In the following paras, the term 'memory' refers to memory allocated by
> +typical APIs like malloc or anything that is representative of memory,
> +for ex: an index of a free element array.
> +
> +Since these data structures are lock less, the writers and readers are
> +accessing the data structures concurrently. Hence, while removing an
> +element from a data structure, the writers cannot return the memory to
> +the allocator, without knowing that the readers are not referencing
> +that element/memory anymore. Hence, it is required to separate the
> +operation of removing an element into 2 steps:
> +
> +Delete: in this step, the writer removes the reference to the element
> +from the data structure but does not return the associated memory to
> +the allocator. This will ensure that new readers will not get a
> +reference to the removed element. Removing the reference is an atomic
> operation.
> +
> +Free(Reclaim): in this step, the writer returns the memory to the
> +memory allocator, only after knowing that all the readers have stopped
> +referencing the deleted element.
> +
> +This library helps the writer determine when it is safe to free the
> +memory.
> +
> +This library makes use of thread Quiescent State (QS).
> +
> +What is Quiescent State
> +-----------------------
> +Quiescent State can be defined as 'any point in the thread execution
> +where the thread does not hold a reference to shared memory'. It is
> +upto the application to determine its quiescent state.
> +
> +Let us consider the following diagram:
> +
> +.. figure:: img/rcu_general_info.*
> +
> +
> +As shown, reader thread 1 acesses data structures D1 and D2. When it is
Spelling acesses / accesses
> +accessing D1, if the writer has to remove an element from D1, the
> +writer cannot free the memory associated with that element immediately.
> +The writer can return the memory to the allocator only after the reader
> +stops referencng D1. In other words, reader thread RT1 has to enter a
Spelling referencng / referencing
> +quiescent state.
> +
> +Similarly, since reader thread 2 is also accessing D1, writer has to
> +wait till thread 2 enters quiescent state as well.
> +
> +However, the writer does not need to wait for reader thread 3 to enter
> +quiescent state. Reader thread 3 was not accessing D1 when the delete
> +operation happened. So, reader thread 1 will not have a reference to
> +the deleted entry.
> +
> +It can be noted that, the critical sections for D2 is a quiescent state
> +for D1. i.e. for a given data structure Dx, any point in the thread
> +execution that does not reference Dx is a quiescent state.
> +
> +Since memory is not freed immediately, there might be a need for
> +provisioning of additional memory, depending on the application
> requirements.
> +
> +Factores affecting RCU mechanism
> +---------------------------------
Spelling Factores/ Factors
> +
> +It is important to make sure that this library keeps the over head of
Over head / overhead
> +identifying the end of grace period and subsequent freeing of memory,
> +to a minimum. The following paras explain how grace period and critical
> +section affect this overhead.
> +
> +The writer has to poll the readers to identify the end of grace period.
> +Polling introduces memory accesses and wastes CPU cycles. The memory is
> +not available for reuse during grace period. Longer grace periods
> +exasperate these conditions.
> +
> +The length of the critical section and the number of reader threads is
> +proportional to the duration of the grace period. Keeping the critical
> +sections smaller will keep the grace period smaller. However, keeping
> +the critical sections smaller requires additional CPU cycles(due to
> +additional
> +reporting) in the readers.
> +
> +Hence, we need the characteristics of small grace period and large
> +critical section. This library addresses this by allowing the writer to
> +do other work without having to block till the readers report their
> +quiescent state.
> +
> +RCU in DPDK
> +-----------
> +
> +For DPDK applications, the start and end of while(1) loop (where no
> +references to shared data structures are kept) act as perfect quiescent
> +states. This will combine all the shared data structure accesses into a
> +single, large critical section which helps keep the over head on the
Over head / overhead
> +reader side to a minimum.
> +
> +DPDK supports pipeline model of packet processing and service cores.
> +In these use cases, a given data structure may not be used by all the
> +workers in the application. The writer does not have to wait for all
> +the workers to report their quiescent state. To provide the required
> +flexibility, this library has a concept of QS variable. The application
> +can create one QS variable per data structure to help it track the end
> +of grace period for each data structure. This helps keep the grace
> +period to a minimum.
> +
> +How to use this library
> +-----------------------
> +
> +The application has to allocate memory and initialize a QS variable.
> +
Maybe instead of making the calls below bold, it would be better to use `` ``, i.e.
**rte_rcu_qsbr_get_memsize**
becomes
``rte_rcu_qsbr_get_memsize``
Same for all of them below.
> +Application can call **rte_rcu_qsbr_get_memsize** to calculate the size
> +of memory to allocate. This API takes maximum number of reader threads,
> +using this variable, as a parameter. Currently, a maximum of 1024
> +threads are supported.
> +
> +Further, the application can initialize a QS variable using the API
> +**rte_rcu_qsbr_init**.
> +
> +Each reader thread is assumed to have a unique thread ID. Currently,
> +the management of the thread ID (for ex: allocation/free) is left to
> +the application. The thread ID should be in the range of 0 to maximum
> +number of threads provided while creating the QS variable.
> +The application could also use lcore_id as the thread ID where applicable.
> +
> +**rte_rcu_qsbr_thread_register** API will register a reader thread to
> +report its quiescent state. This can be called from a reader thread.
> +A control plane thread can also call this on behalf of a reader thread.
> +The reader thread must call **rte_rcu_qsbr_thread_online** API to start
> +reporting its quiescent state.
> +
> +Some of the use cases might require the reader threads to make blocking
> +API calls (for ex: while using eventdev APIs). The writer thread should
> +not wait for such reader threads to enter quiescent state.
> +The reader thread must call **rte_rcu_qsbr_thread_offline** API, before
> +calling blocking APIs. It can call **rte_rcu_qsbr_thread_online** API
> +once the blocking API call returns.
> +
> +The writer thread can trigger the reader threads to report their
> +quiescent state by calling the API **rte_rcu_qsbr_start**. It is
> +possible for multiple writer threads to query the quiescent state
> +status simultaneously. Hence,
> +**rte_rcu_qsbr_start** returns a token to each caller.
> +
> +The writer thread has to call **rte_rcu_qsbr_check** API with the token
> +to get the current quiescent state status. Option to block till all the
> +reader threads enter the quiescent state is provided. If this API
> +indicates that all the reader threads have entered the quiescent state,
> +the application can free the deleted entry.
> +
> +The APIs **rte_rcu_qsbr_start** and **rte_rcu_qsbr_check** are lock
> free.
> +Hence, they can be called concurrently from multiple writers even while
> +running as worker threads.
> +
> +The separation of triggering the reporting from querying the status
> +provides the writer threads flexibility to do useful work instead of
> +blocking for the reader threads to enter the quiescent state or go
> +offline. This reduces the memory accesses due to continuous polling for the
> status.
> +
> +**rte_rcu_qsbr_synchronize** API combines the functionality of
> +**rte_rcu_qsbr_start** and blocking **rte_rcu_qsbr_check** into a single
> API.
> +This API triggers the reader threads to report their quiescent state
> +and polls till all the readers enter the quiescent state or go offline.
> +This API does not allow the writer to do useful work while waiting and
> +also introduces additional memory accesses due to continuous polling.
> +
> +The reader thread must call **rte_rcu_qsbr_thread_offline** and
> +**rte_rcu_qsbr_thread_unregister** APIs to remove itself from reporting
> +its quiescent state. The **rte_rcu_qsbr_check** API will not wait for
> +this reader thread to report the quiescent state status anymore.
> +
> +The reader threads should call **rte_rcu_qsbr_update** API to indicate
> +that they entered a quiescent state. This API checks if a writer has
> +triggered a quiescent state query and update the state accordingly.
> --
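The start/check/update flow described in the quoted guide can be sketched as a single-threaded toy model. This is only an illustration of the call ordering under stated assumptions (hypothetical `toy_*` names, no atomics, no offline/online handling); real code would use the `rte_rcu_qsbr_*` APIs with proper atomics:

```c
#include <stdint.h>

#define NUM_READERS 2

static uint64_t token = 1;		/* global grace-period counter */
static uint64_t reader_cnt[NUM_READERS];/* per-reader copy, 0 = offline */

/* Writer: announce a new grace period (models rte_rcu_qsbr_start()). */
static uint64_t toy_start(void)
{
	return ++token;
}

/* Reader: report a quiescent state (models rte_rcu_qsbr_update()). */
static void toy_update(unsigned int id)
{
	reader_cnt[id] = token;
}

/* Writer: poll; returns 1 once every online reader has reported a
 * quiescent state at or after 't' (models a non-blocking
 * rte_rcu_qsbr_check()). */
static int toy_check(uint64_t t)
{
	unsigned int i;

	for (i = 0; i < NUM_READERS; i++)
		if (reader_cnt[i] != 0 && reader_cnt[i] < t)
			return 0;	/* reader still in old grace period */
	return 1;
}
```

Usage mirrors the guide: the writer deletes an element, calls `toy_start()` to get a token, and may do other useful work; `toy_check(token)` keeps returning 0 until every online reader has called `toy_update()`, at which point the deleted element's memory can be freed.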
If it's possible to enlarge the image a bit, it would be good to be able to read the lower text.
I need to zoom to 175% to read it. Maybe I'm just blind, but if it's possible it would be great.
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH 3/3] doc/rcu: add lib_rcu documentation
2019-03-25 11:34 ` Kovacevic, Marko
@ 2019-03-25 11:34 ` Kovacevic, Marko
2019-03-26 4:43 ` Honnappa Nagarahalli
1 sibling, 0 replies; 260+ messages in thread
From: Kovacevic, Marko @ 2019-03-25 11:34 UTC (permalink / raw)
To: Honnappa Nagarahalli, Ananyev, Konstantin, stephen, paulmck, dev
Cc: gavin.hu, dharmik.thakkar, malvika.gupta
> Subject: [dpdk-dev] [PATCH 3/3] doc/rcu: add lib_rcu documentation
>
> Add lib_rcu QSBR API and programmer guide documentation.
>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
>
> ---
> doc/api/doxy-api-index.md | 3 +-
> doc/api/doxy-api.conf.in | 1 +
> .../prog_guide/img/rcu_general_info.svg | 494 ++++++++++++++++++
> doc/guides/prog_guide/index.rst | 1 +
> doc/guides/prog_guide/rcu_lib.rst | 179 +++++++
> 5 files changed, 677 insertions(+), 1 deletion(-) create mode 100644
> doc/guides/prog_guide/img/rcu_general_info.svg
> create mode 100644 doc/guides/prog_guide/rcu_lib.rst
>
<...>
> diff --git a/doc/guides/prog_guide/index.rst
> b/doc/guides/prog_guide/index.rst index 6726b1e8d..6fb3fb921 100644
> --- a/doc/guides/prog_guide/index.rst
> +++ b/doc/guides/prog_guide/index.rst
> @@ -55,6 +55,7 @@ Programmer's Guide
> metrics_lib
> bpf_lib
> ipsec_lib
> + rcu_lib
> source_org
> dev_kit_build_system
> dev_kit_root_make_help
> diff --git a/doc/guides/prog_guide/rcu_lib.rst
> b/doc/guides/prog_guide/rcu_lib.rst
> new file mode 100644
> index 000000000..5155dd35c
> --- /dev/null
> +++ b/doc/guides/prog_guide/rcu_lib.rst
> @@ -0,0 +1,179 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright(c) 2019 Arm Limited.
> +
> +.. _RCU_Library:
> +
> +RCU Library
> +============
> +
> +Lock-less data structures provide scalability and determinism.
> +They enable use cases where locking may not be allowed (for ex:
> +real-time applications).
> +
> +In the following paras, the term 'memory' refers to memory allocated by
> +typical APIs like malloc or anything that is representative of memory,
> +for ex: an index of a free element array.
> +
> +Since these data structures are lock less, the writers and readers are
> +accessing the data structures concurrently. Hence, while removing an
> +element from a data structure, the writers cannot return the memory to
> +the allocator, without knowing that the readers are not referencing
> +that element/memory anymore. Hence, it is required to separate the
> +operation of removing an element into 2 steps:
> +
> +Delete: in this step, the writer removes the reference to the element
> +from the data structure but does not return the associated memory to
> +the allocator. This will ensure that new readers will not get a
> +reference to the removed element. Removing the reference is an atomic
> operation.
> +
> +Free(Reclaim): in this step, the writer returns the memory to the
> +memory allocator, only after knowing that all the readers have stopped
> +referencing the deleted element.
> +
> +This library helps the writer determine when it is safe to free the
> +memory.
> +
> +This library makes use of thread Quiescent State (QS).
> +
> +What is Quiescent State
> +-----------------------
> +Quiescent State can be defined as 'any point in the thread execution
> +where the thread does not hold a reference to shared memory'. It is
> +up to the application to determine its quiescent state.
> +
> +Let us consider the following diagram:
> +
> +.. figure:: img/rcu_general_info.*
> +
> +
> +As shown, reader thread 1 acesses data structures D1 and D2. When it is
Spelling acesses / accesses
> +accessing D1, if the writer has to remove an element from D1, the
> +writer cannot free the memory associated with that element immediately.
> +The writer can return the memory to the allocator only after the reader
> +stops referencng D1. In other words, reader thread RT1 has to enter a
Spelling referencng / referencing
> +quiescent state.
> +
> +Similarly, since reader thread 2 is also accessing D1, writer has to
> +wait till thread 2 enters quiescent state as well.
> +
> +However, the writer does not need to wait for reader thread 3 to enter
> +quiescent state. Reader thread 3 was not accessing D1 when the delete
> +operation happened. So, reader thread 3 will not have a reference to
> +the deleted entry.
> +
> +It can be noted that the critical section for D2 is a quiescent state
> +for D1, i.e. for a given data structure Dx, any point in the thread
> +execution that does not reference Dx is a quiescent state.
> +
> +Since memory is not freed immediately, there might be a need for
> +provisioning of additional memory, depending on the application
> requirements.
> +
> +Factores affecting RCU mechanism
> +---------------------------------
Spelling Factores/ Factors
> +
> +It is important to make sure that this library keeps the over head of
Over head / overhead
> +identifying the end of grace period and subsequent freeing of memory,
> +to a minimum. The following paras explain how grace period and critical
> +section affect this overhead.
> +
> +The writer has to poll the readers to identify the end of grace period.
> +Polling introduces memory accesses and wastes CPU cycles. The memory is
> +not available for reuse during the grace period. Longer grace periods
> +exacerbate these conditions.
> +
> +The duration of the grace period is proportional to the length of the
> +critical section and the number of reader threads. Keeping the critical
> +sections smaller will keep the grace period smaller. However, keeping
> +the critical sections smaller requires additional CPU cycles (due to
> +additional reporting) in the readers.
> +
> +Hence, we need the characteristics of small grace period and large
> +critical section. This library addresses this by allowing the writer to
> +do other work without having to block till the readers report their
> +quiescent state.
> +
> +RCU in DPDK
> +-----------
> +
> +For DPDK applications, the start and end of while(1) loop (where no
> +references to shared data structures are kept) act as perfect quiescent
> +states. This will combine all the shared data structure accesses into a
> +single, large critical section which helps keep the over head on the
Over head / overhead
> +reader side to a minimum.
> +
> +DPDK supports pipeline model of packet processing and service cores.
> +In these use cases, a given data structure may not be used by all the
> +workers in the application. The writer does not have to wait for all
> +the workers to report their quiescent state. To provide the required
> +flexibility, this library has a concept of QS variable. The application
> +can create one QS variable per data structure to help it track the end
> +of grace period for each data structure. This helps keep the grace
> +period to a minimum.
> +
> +How to use this library
> +-----------------------
> +
> +The application has to allocate memory and initialize a QS variable.
> +
Maybe instead of making the calls below bold, using `` `` might be better, i.e.
**rte_rcu_qsbr_get_memsize**  ->  ``rte_rcu_qsbr_get_memsize``
for all of them below.
> +Application can call **rte_rcu_qsbr_get_memsize** to calculate the size
> +of memory to allocate. This API takes maximum number of reader threads,
> +using this variable, as a parameter. Currently, a maximum of 1024
> +threads are supported.
> +
> +Further, the application can initialize a QS variable using the API
> +**rte_rcu_qsbr_init**.
> +
> +Each reader thread is assumed to have a unique thread ID. Currently,
> +the management of the thread ID (for ex: allocation/free) is left to
> +the application. The thread ID should be in the range of 0 to maximum
> +number of threads provided while creating the QS variable.
> +The application could also use lcore_id as the thread ID where applicable.
> +
> +**rte_rcu_qsbr_thread_register** API will register a reader thread to
> +report its quiescent state. This can be called from a reader thread.
> +A control plane thread can also call this on behalf of a reader thread.
> +The reader thread must call **rte_rcu_qsbr_thread_online** API to start
> +reporting its quiescent state.
> +
> +Some of the use cases might require the reader threads to make blocking
> +API calls (for ex: while using eventdev APIs). The writer thread should
> +not wait for such reader threads to enter quiescent state.
> +The reader thread must call **rte_rcu_qsbr_thread_offline** API, before
> +calling blocking APIs. It can call **rte_rcu_qsbr_thread_online** API
> +once the blocking API call returns.
> +
> +The writer thread can trigger the reader threads to report their
> +quiescent state by calling the API **rte_rcu_qsbr_start**. It is
> +possible for multiple writer threads to query the quiescent state
> +status simultaneously. Hence,
> +**rte_rcu_qsbr_start** returns a token to each caller.
> +
> +The writer thread has to call **rte_rcu_qsbr_check** API with the token
> +to get the current quiescent state status. Option to block till all the
> +reader threads enter the quiescent state is provided. If this API
> +indicates that all the reader threads have entered the quiescent state,
> +the application can free the deleted entry.
> +
> +The APIs **rte_rcu_qsbr_start** and **rte_rcu_qsbr_check** are lock
> free.
> +Hence, they can be called concurrently from multiple writers even while
> +running as worker threads.
> +
> +The separation of triggering the reporting from querying the status
> +provides the writer threads flexibility to do useful work instead of
> +blocking for the reader threads to enter the quiescent state or go
> +offline. This reduces the memory accesses due to continuous polling for the
> status.
> +
> +**rte_rcu_qsbr_synchronize** API combines the functionality of
> +**rte_rcu_qsbr_start** and blocking **rte_rcu_qsbr_check** into a single
> API.
> +This API triggers the reader threads to report their quiescent state
> +and polls till all the readers enter the quiescent state or go offline.
> +This API does not allow the writer to do useful work while waiting and
> +also introduces additional memory accesses due to continuous polling.
> +
> +The reader thread must call **rte_rcu_qsbr_thread_offline** and
> +**rte_rcu_qsbr_thread_unregister** APIs to remove itself from reporting
> +its quiescent state. The **rte_rcu_qsbr_check** API will not wait for
> +this reader thread to report the quiescent state status anymore.
> +
> +The reader threads should call **rte_rcu_qsbr_update** API to indicate
> +that they entered a quiescent state. This API checks if a writer has
> +triggered a quiescent state query and updates the state accordingly.
> --
If it's possible to enlarge the image a bit, it would be good to be able to read the lower text.
I need to zoom to 175% to read it; maybe I'm just blind, but if it's possible it would be great.
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH 1/3] rcu: add RCU library supporting QSBR mechanism
2019-03-22 16:42 ` Ananyev, Konstantin
2019-03-22 16:42 ` Ananyev, Konstantin
@ 2019-03-26 4:35 ` Honnappa Nagarahalli
2019-03-26 4:35 ` Honnappa Nagarahalli
2019-03-28 11:15 ` Ananyev, Konstantin
1 sibling, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-26 4:35 UTC (permalink / raw)
To: Ananyev, Konstantin, stephen, paulmck, dev
Cc: Gavin Hu (Arm Technology China), Dharmik Thakkar, Malvika Gupta, nd, nd
> Hi Honnappa,
>
> > diff --git a/lib/librte_rcu/rte_rcu_qsbr.c
> > b/lib/librte_rcu/rte_rcu_qsbr.c new file mode 100644 index
> > 000000000..0fc4515ea
> > --- /dev/null
> > +++ b/lib/librte_rcu/rte_rcu_qsbr.c
> > @@ -0,0 +1,99 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + *
> > + * Copyright (c) 2018 Arm Limited
> > + */
> > +
> > +#include <stdio.h>
> > +#include <string.h>
> > +#include <stdint.h>
> > +#include <errno.h>
> > +
> > +#include <rte_common.h>
> > +#include <rte_log.h>
> > +#include <rte_memory.h>
> > +#include <rte_malloc.h>
> > +#include <rte_eal.h>
> > +#include <rte_eal_memconfig.h>
> > +#include <rte_atomic.h>
> > +#include <rte_per_lcore.h>
> > +#include <rte_lcore.h>
> > +#include <rte_errno.h>
> > +
> > +#include "rte_rcu_qsbr.h"
> > +
> > +/* Get the memory size of QSBR variable */ size_t __rte_experimental
> > +rte_rcu_qsbr_get_memsize(uint32_t max_threads) {
> > + size_t sz;
> > +
> > + RTE_ASSERT(max_threads == 0);
>
> Here and in all similar places:
> assert() will abort when its condition will be evaluated to false.
> So it should be max_threads != 0.
Thanks for this comment. Enabling RTE_ENABLE_ASSERT resulted in more problems. I will fix in the next version.
> Also it a public and non-datapath function.
> Calling assert() for invalid input parameter - seems way too extreme.
> Why not just return error to the caller?
Ok, I will change it.
>
> > +
> > + sz = sizeof(struct rte_rcu_qsbr);
> > +
> > + /* Add the size of quiescent state counter array */
> > + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> > +
> > + return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE); }
> > +
> > +/* Initialize a quiescent state variable */ void __rte_experimental
> > +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads) {
> > + RTE_ASSERT(v == NULL);
> > +
> > + memset(v, 0, rte_rcu_qsbr_get_memsize(max_threads));
> > + v->m_threads = max_threads;
> > + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> > + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> > + v->token = RTE_QSBR_CNT_INIT;
> > +}
> > +
> > +/* Dump the details of a single quiescent state variable to a file.
> > +*/ void __rte_experimental rte_rcu_qsbr_dump(FILE *f, struct
> > +rte_rcu_qsbr *v) {
> > + uint64_t bmap;
> > + uint32_t i, t;
> > +
> > + RTE_ASSERT(v == NULL || f == NULL);
> > +
> > + fprintf(f, "\nQuiescent State Variable @%p\n", v);
> > +
> > + fprintf(f, " QS variable memory size = %lu\n",
> > + rte_rcu_qsbr_get_memsize(v->m_threads));
> > + fprintf(f, " Given # max threads = %u\n", v->m_threads);
> > +
> > + fprintf(f, " Registered thread ID mask = 0x");
> > + for (i = 0; i < v->num_elems; i++)
> > + fprintf(f, "%lx", __atomic_load_n(&v->reg_thread_id[i],
> > + __ATOMIC_ACQUIRE));
> > + fprintf(f, "\n");
> > +
> > + fprintf(f, " Token = %lu\n",
> > + __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
> > +
> > + fprintf(f, "Quiescent State Counts for readers:\n");
> > + for (i = 0; i < v->num_elems; i++) {
> > + bmap = __atomic_load_n(&v->reg_thread_id[i],
> __ATOMIC_ACQUIRE);
> > + while (bmap) {
> > + t = __builtin_ctzl(bmap);
> > + fprintf(f, "thread ID = %d, count = %lu\n", t,
> > + __atomic_load_n(
> > + &RTE_QSBR_CNT_ARRAY_ELM(v, i)-
> >cnt,
> > + __ATOMIC_RELAXED));
> > + bmap &= ~(1UL << t);
> > + }
> > + }
> > +}
> > +
> > +int rcu_log_type;
> > +
> > +RTE_INIT(rte_rcu_register)
> > +{
> > + rcu_log_type = rte_log_register("lib.rcu");
> > + if (rcu_log_type >= 0)
> > + rte_log_set_level(rcu_log_type, RTE_LOG_ERR); }
> > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > b/lib/librte_rcu/rte_rcu_qsbr.h new file mode 100644 index
> > 000000000..83943f751
> > --- /dev/null
> > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > @@ -0,0 +1,511 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright (c) 2018 Arm Limited
> > + */
> > +
> > +#ifndef _RTE_RCU_QSBR_H_
> > +#define _RTE_RCU_QSBR_H_
> > +
> > +/**
> > + * @file
> > + * RTE Quiescent State Based Reclamation (QSBR)
> > + *
> > + * Quiescent State (QS) is any point in the thread execution
> > + * where the thread does not hold a reference to a data structure
> > + * in shared memory. While using lock-less data structures, the
> > +writer
> > + * can safely free memory once all the reader threads have entered
> > + * quiescent state.
> > + *
> > + * This library provides the ability for the readers to report
> > +quiescent
> > + * state and for the writers to identify when all the readers have
> > + * entered quiescent state.
> > + */
> > +
> > +#ifdef __cplusplus
> > +extern "C" {
> > +#endif
> > +
> > +#include <stdio.h>
> > +#include <stdint.h>
> > +#include <errno.h>
> > +#include <rte_common.h>
> > +#include <rte_memory.h>
> > +#include <rte_lcore.h>
> > +#include <rte_debug.h>
> > +
> > +extern int rcu_log_type;
> > +
> > +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG #define RCU_DP_LOG(level,
> fmt,
> > +args...) \
> > + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> > + "%s(): " fmt "\n", __func__, ## args) #else #define
> > +RCU_DP_LOG(level, fmt, args...) #endif
>
> Why do you need that?
> Can't you use RTE_LOG_DP() instead?
RTE_LOG_DP is for static log types such as RTE_LOGTYPE_EAL, RTE_LOGTYPE_MBUF etc. Use of static log type in RCU was rejected earlier. Hence, I am using the dynamic log types.
>
> > +
> > +/* Registered thread IDs are stored as a bitmap of 64b element array.
> > + * Given thread id needs to be converted to index into the array and
> > + * the id within the array element.
> > + */
> > +#define RTE_RCU_MAX_THREADS 1024
> > +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8) #define
> > +RTE_QSBR_THRID_ARRAY_ELEMS \
> > + (RTE_ALIGN_MUL_CEIL(RTE_RCU_MAX_THREADS, \
> > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> RTE_QSBR_THRID_ARRAY_ELM_SIZE)
> > +#define RTE_QSBR_THRID_INDEX_SHIFT 6 #define RTE_QSBR_THRID_MASK
> 0x3f
> > +#define RTE_QSBR_THRID_INVALID 0xffffffff
> > +
> > +/* Worker thread counter */
> > +struct rte_rcu_qsbr_cnt {
> > + uint64_t cnt;
> > + /**< Quiescent state counter. Value 0 indicates the thread is
> > +offline */ } __rte_cache_aligned;
> > +
> > +#define RTE_QSBR_CNT_ARRAY_ELM(v, i) (((struct rte_rcu_qsbr_cnt *)(v
> > ++ 1)) + i)
>
> You can probably add
> struct rte_rcu_qsbr_cnt cnt[0];
> at the end of struct rte_rcu_qsbr, then wouldn't need macro above.
ok
>
> > +#define RTE_QSBR_CNT_THR_OFFLINE 0
> > +#define RTE_QSBR_CNT_INIT 1
> > +
> > +/**
> > + * RTE thread Quiescent State structure.
> > + * Quiescent state counter array (array of 'struct
> > +rte_rcu_qsbr_cnt'),
> > + * whose size is dependent on the maximum number of reader threads
> > + * (m_threads) using this variable is stored immediately following
> > + * this structure.
> > + */
> > +struct rte_rcu_qsbr {
> > + uint64_t token __rte_cache_aligned;
> > + /**< Counter to allow for multiple simultaneous QS queries */
> > +
> > + uint32_t num_elems __rte_cache_aligned;
> > + /**< Number of elements in the thread ID array */
> > + uint32_t m_threads;
> > + /**< Maximum number of threads this RCU variable will use */
> > +
> > + uint64_t reg_thread_id[RTE_QSBR_THRID_ARRAY_ELEMS]
> __rte_cache_aligned;
> > + /**< Registered thread IDs are stored in a bitmap array */
>
>
> As I understand you ended up with fixed size array to avoid 2 variable size
> arrays in this struct?
Yes
> Is that big penalty for register/unregister() to either store a pointer to bitmap,
> or calculate it based on num_elems value?
In the last RFC I sent out [1], I tested the impact of having non-fixed size array. There 'was' a performance degradation in most of the performance tests. The issue was with calculating the address of per thread QSBR counters (not with the address calculation of the bitmap). With the current patch, I do not see the performance difference (the difference between the RFC and this patch are the memory orderings, they are masking any perf gain from having a fixed array). However, I have kept the fixed size array as the generated code does not have additional calculations to get the address of qsbr counter array elements.
[1] http://mails.dpdk.org/archives/dev/2019-February/125029.html
> As another thought - do we really need bitmap at all?
The bit map is helping avoid accessing all the elements in rte_rcu_qsbr_cnt array (as you have mentioned below). This provides the ability to scale the number of threads dynamically. For ex: an application can create a qsbr variable with 48 max threads, but currently only 2 threads are active (due to traffic conditions).
> Might it is possible to sotre register value for each thread inside it's
> rte_rcu_qsbr_cnt:
> struct rte_rcu_qsbr_cnt {uint64_t cnt; uint32_t register;}
> __rte_cache_aligned; ?
> That would cause check() to walk through all elems in rte_rcu_qsbr_cnt array,
> but from other side would help to avoid cache conflicts for register/unregister.
With the addition of rte_rcu_qsbr_thread_online/offline APIs, the register/unregister APIs are not in critical path anymore. Hence, the cache conflicts are fine. The online/offline APIs work on thread specific cache lines and these are in the critical path.
>
> > +} __rte_cache_aligned;
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Return the size of the memory occupied by a Quiescent State variable.
> > + *
> > + * @param max_threads
> > + * Maximum number of threads reporting quiescent state on this variable.
> > + * @return
> > + * Size of memory in bytes required for this QS variable.
> > + */
> > +size_t __rte_experimental
> > +rte_rcu_qsbr_get_memsize(uint32_t max_threads);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Initialize a Quiescent State (QS) variable.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param max_threads
> > + * Maximum number of threads reporting QS on this variable.
> > + *
> > + */
> > +void __rte_experimental
> > +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Register a reader thread to report its quiescent state
> > + * on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe.
> > + * Any reader thread that wants to report its quiescent state must
> > + * call this API. This can be called during initialization or as part
> > + * of the packet processing loop.
> > + *
> > + * Note that rte_rcu_qsbr_thread_online must be called before the
> > + * thread updates its QS using rte_rcu_qsbr_update.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Reader thread with this thread ID will report its quiescent state on
> > + * the QS variable.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int
> > +thread_id) {
> > + unsigned int i, id;
> > +
> > + RTE_ASSERT(v == NULL || thread_id >= v->max_threads);
> > +
> > + id = thread_id & RTE_QSBR_THRID_MASK;
> > + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > +
> > + /* Release the new register thread ID to other threads
> > + * calling rte_rcu_qsbr_check.
> > + */
> > + __atomic_fetch_or(&v->reg_thread_id[i], 1UL << id,
> > +__ATOMIC_RELEASE); }
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Remove a reader thread, from the list of threads reporting their
> > + * quiescent state on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread safe.
> > + * This API can be called from the reader threads during shutdown.
> > + * Ongoing QS queries will stop waiting for the status from this
> > + * unregistered reader thread.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Reader thread with this thread ID will stop reporting its quiescent
> > + * state on the QS variable.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int
> > +thread_id) {
> > + unsigned int i, id;
> > +
> > + RTE_ASSERT(v == NULL || thread_id >= v->max_threads);
> > +
> > + id = thread_id & RTE_QSBR_THRID_MASK;
> > + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > +
> > + /* Make sure the removal of the thread from the list of
> > + * reporting threads is visible before the thread
> > + * does anything else.
> > + */
> > + __atomic_fetch_and(&v->reg_thread_id[i],
> > + ~(1UL << id), __ATOMIC_RELEASE);
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Add a registered reader thread, to the list of threads reporting
> > +their
> > + * quiescent state on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe.
> > + *
> > + * Any registered reader thread that wants to report its quiescent
> > +state must
> > + * call this API before calling rte_rcu_qsbr_update. This can be
> > +called
> > + * during initialization or as part of the packet processing loop.
> > + *
> > + * The reader thread must call rte_rcu_thread_offline API, before
> > + * calling any functions that block, to ensure that
> > +rte_rcu_qsbr_check
> > + * API does not wait indefinitely for the reader thread to update its QS.
> > + *
> > + * The reader thread must call rte_rcu_thread_online API, after the
> > +blocking
> > + * function call returns, to ensure that rte_rcu_qsbr_check API
> > + * waits for the reader thread to update its QS.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Reader thread with this thread ID will report its quiescent state on
> > + * the QS variable.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int
> > +thread_id) {
> > + uint64_t t;
> > +
> > + RTE_ASSERT(v == NULL || thread_id >= v->max_threads);
> > +
> > + /* Copy the current value of token.
> > + * The fence at the end of the function will ensure that
> > + * the following will not move down after the load of any shared
> > + * data structure.
> > + */
> > + t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
> > +
> > + /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> > + * 'cnt' (64b) is accessed atomically.
> > + */
> > + __atomic_store_n(&RTE_QSBR_CNT_ARRAY_ELM(v, thread_id)->cnt,
> > + t, __ATOMIC_RELAXED);
> > +
> > + /* The subsequent load of the data structure should not
> > + * move above the store. Hence a store-load barrier
> > + * is required.
> > + * If the load of the data structure moves above the store,
> > + * writer might not see that the reader is online, even though
> > + * the reader is referencing the shared data structure.
> > + */
> > + __atomic_thread_fence(__ATOMIC_SEQ_CST);
>
> If it has to generate a proper memory-barrier here anyway, could it use
> rte_smp_mb() here?
> At least for IA it would generate more lightweight one.
I have used the C++11 memory model functions and prefer not to mix them with barriers. Does ICC generate lightweight code for the above fence?
Is it ok to add rte_smp_mb for x86 alone?
> Konstantin
>
> > +}
> > +
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH 1/3] rcu: add RCU library supporting QSBR mechanism
2019-03-26 4:35 ` Honnappa Nagarahalli
@ 2019-03-26 4:35 ` Honnappa Nagarahalli
2019-03-28 11:15 ` Ananyev, Konstantin
1 sibling, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-26 4:35 UTC (permalink / raw)
To: Ananyev, Konstantin, stephen, paulmck, dev
Cc: Gavin Hu (Arm Technology China), Dharmik Thakkar, Malvika Gupta, nd, nd
> Hi Honnappa,
>
> > diff --git a/lib/librte_rcu/rte_rcu_qsbr.c
> > b/lib/librte_rcu/rte_rcu_qsbr.c new file mode 100644 index
> > 000000000..0fc4515ea
> > --- /dev/null
> > +++ b/lib/librte_rcu/rte_rcu_qsbr.c
> > @@ -0,0 +1,99 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + *
> > + * Copyright (c) 2018 Arm Limited
> > + */
> > +
> > +#include <stdio.h>
> > +#include <string.h>
> > +#include <stdint.h>
> > +#include <errno.h>
> > +
> > +#include <rte_common.h>
> > +#include <rte_log.h>
> > +#include <rte_memory.h>
> > +#include <rte_malloc.h>
> > +#include <rte_eal.h>
> > +#include <rte_eal_memconfig.h>
> > +#include <rte_atomic.h>
> > +#include <rte_per_lcore.h>
> > +#include <rte_lcore.h>
> > +#include <rte_errno.h>
> > +
> > +#include "rte_rcu_qsbr.h"
> > +
> > +/* Get the memory size of QSBR variable */ size_t __rte_experimental
> > +rte_rcu_qsbr_get_memsize(uint32_t max_threads) {
> > + size_t sz;
> > +
> > + RTE_ASSERT(max_threads == 0);
>
> Here and in all similar places:
> assert() will abort when its condition will be evaluated to false.
> So it should be max_threads != 0.
Thanks for this comment. Enabling RTE_ENABLE_ASSERT resulted in more problems. I will fix in the next version.
> Also it a public and non-datapath function.
> Calling assert() for invalid input parameter - seems way too extreme.
> Why not just return error to the caller?
Ok, I will change it.
>
> > +
> > + sz = sizeof(struct rte_rcu_qsbr);
> > +
> > + /* Add the size of quiescent state counter array */
> > + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> > +
> > + return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE); }
> > +
> > +/* Initialize a quiescent state variable */ void __rte_experimental
> > +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads) {
> > + RTE_ASSERT(v == NULL);
> > +
> > + memset(v, 0, rte_rcu_qsbr_get_memsize(max_threads));
> > + v->m_threads = max_threads;
> > + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> > + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> > + v->token = RTE_QSBR_CNT_INIT;
> > +}
> > +
> > +/* Dump the details of a single quiescent state variable to a file.
> > +*/ void __rte_experimental rte_rcu_qsbr_dump(FILE *f, struct
> > +rte_rcu_qsbr *v) {
> > + uint64_t bmap;
> > + uint32_t i, t;
> > +
> > + RTE_ASSERT(v == NULL || f == NULL);
> > +
> > + fprintf(f, "\nQuiescent State Variable @%p\n", v);
> > +
> > + fprintf(f, " QS variable memory size = %lu\n",
> > + rte_rcu_qsbr_get_memsize(v->m_threads));
> > + fprintf(f, " Given # max threads = %u\n", v->m_threads);
> > +
> > + fprintf(f, " Registered thread ID mask = 0x");
> > + for (i = 0; i < v->num_elems; i++)
> > + fprintf(f, "%lx", __atomic_load_n(&v->reg_thread_id[i],
> > + __ATOMIC_ACQUIRE));
> > + fprintf(f, "\n");
> > +
> > + fprintf(f, " Token = %lu\n",
> > + __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
> > +
> > + fprintf(f, "Quiescent State Counts for readers:\n");
> > + for (i = 0; i < v->num_elems; i++) {
> > + bmap = __atomic_load_n(&v->reg_thread_id[i],
> __ATOMIC_ACQUIRE);
> > + while (bmap) {
> > + t = __builtin_ctzl(bmap);
> > + fprintf(f, "thread ID = %d, count = %lu\n", t,
> > + __atomic_load_n(
> > + &RTE_QSBR_CNT_ARRAY_ELM(v, i)-
> >cnt,
> > + __ATOMIC_RELAXED));
> > + bmap &= ~(1UL << t);
> > + }
> > + }
> > +}
> > +
> > +int rcu_log_type;
> > +
> > +RTE_INIT(rte_rcu_register)
> > +{
> > + rcu_log_type = rte_log_register("lib.rcu");
> > + if (rcu_log_type >= 0)
> > + rte_log_set_level(rcu_log_type, RTE_LOG_ERR); }
> > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > b/lib/librte_rcu/rte_rcu_qsbr.h new file mode 100644 index
> > 000000000..83943f751
> > --- /dev/null
> > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > @@ -0,0 +1,511 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright (c) 2018 Arm Limited
> > + */
> > +
> > +#ifndef _RTE_RCU_QSBR_H_
> > +#define _RTE_RCU_QSBR_H_
> > +
> > +/**
> > + * @file
> > + * RTE Quiescent State Based Reclamation (QSBR)
> > + *
> > + * Quiescent State (QS) is any point in the thread execution
> > + * where the thread does not hold a reference to a data structure
> > + * in shared memory. While using lock-less data structures, the
> > +writer
> > + * can safely free memory once all the reader threads have entered
> > + * quiescent state.
> > + *
> > + * This library provides the ability for the readers to report
> > +quiescent
> > + * state and for the writers to identify when all the readers have
> > + * entered quiescent state.
> > + */
> > +
> > +#ifdef __cplusplus
> > +extern "C" {
> > +#endif
> > +
> > +#include <stdio.h>
> > +#include <stdint.h>
> > +#include <errno.h>
> > +#include <rte_common.h>
> > +#include <rte_memory.h>
> > +#include <rte_lcore.h>
> > +#include <rte_debug.h>
> > +
> > +extern int rcu_log_type;
> > +
> > +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG #define RCU_DP_LOG(level,
> fmt,
> > +args...) \
> > + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> > + "%s(): " fmt "\n", __func__, ## args) #else #define
> > +RCU_DP_LOG(level, fmt, args...) #endif
>
> Why do you need that?
> Can't you use RTE_LOG_DP() instead?
RTE_LOG_DP is for static log types such as RTE_LOGTYPE_EAL, RTE_LOGTYPE_MBUF etc. Use of static log type in RCU was rejected earlier. Hence, I am using the dynamic log types.
>
> > +
> > +/* Registered thread IDs are stored as a bitmap of 64b element array.
> > + * Given thread id needs to be converted to index into the array and
> > + * the id within the array element.
> > + */
> > +#define RTE_RCU_MAX_THREADS 1024
> > +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8) #define
> > +RTE_QSBR_THRID_ARRAY_ELEMS \
> > + (RTE_ALIGN_MUL_CEIL(RTE_RCU_MAX_THREADS, \
> > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> RTE_QSBR_THRID_ARRAY_ELM_SIZE)
> > +#define RTE_QSBR_THRID_INDEX_SHIFT 6 #define RTE_QSBR_THRID_MASK
> 0x3f
> > +#define RTE_QSBR_THRID_INVALID 0xffffffff
> > +
> > +/* Worker thread counter */
> > +struct rte_rcu_qsbr_cnt {
> > + uint64_t cnt;
> > + /**< Quiescent state counter. Value 0 indicates the thread is
> > +offline */ } __rte_cache_aligned;
> > +
> > +#define RTE_QSBR_CNT_ARRAY_ELM(v, i) (((struct rte_rcu_qsbr_cnt *)(v
> > ++ 1)) + i)
>
> You can probably add
> struct rte_rcu_qsbr_cnt cnt[0];
> at the end of struct rte_rcu_qsbr, then wouldn't need macro above.
ok
>
> > +#define RTE_QSBR_CNT_THR_OFFLINE 0
> > +#define RTE_QSBR_CNT_INIT 1
> > +
> > +/**
> > + * RTE thread Quiescent State structure.
> > + * The quiescent state counter array (array of 'struct rte_rcu_qsbr_cnt'),
> > + * whose size depends on the maximum number of reader threads
> > + * (m_threads) using this variable, is stored immediately following
> > + * this structure.
> > + */
> > +struct rte_rcu_qsbr {
> > + uint64_t token __rte_cache_aligned;
> > + /**< Counter to allow for multiple simultaneous QS queries */
> > +
> > + uint32_t num_elems __rte_cache_aligned;
> > + /**< Number of elements in the thread ID array */
> > + uint32_t m_threads;
> > + /**< Maximum number of threads this RCU variable will use */
> > +
> > +	uint64_t reg_thread_id[RTE_QSBR_THRID_ARRAY_ELEMS] __rte_cache_aligned;
> > +	/**< Registered thread IDs are stored in a bitmap array */
>
>
> As I understand you ended up with fixed size array to avoid 2 variable size
> arrays in this struct?
Yes
> Is that big penalty for register/unregister() to either store a pointer to bitmap,
> or calculate it based on num_elems value?
In the last RFC I sent out [1], I tested the impact of having a non-fixed size array. There 'was' a performance degradation in most of the performance tests. The issue was with calculating the address of the per-thread QSBR counters (not with the address calculation of the bitmap). With the current patch, I do not see the performance difference (the main difference between the RFC and this patch is the memory orderings, which are masking any perf gain from having a fixed array). However, I have kept the fixed size array, as the generated code does not have additional calculations to get the address of the qsbr counter array elements.
[1] http://mails.dpdk.org/archives/dev/2019-February/125029.html
> As another thought - do we really need bitmap at all?
The bit map is helping avoid accessing all the elements in rte_rcu_qsbr_cnt array (as you have mentioned below). This provides the ability to scale the number of threads dynamically. For ex: an application can create a qsbr variable with 48 max threads, but currently only 2 threads are active (due to traffic conditions).
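To make the bitmap bookkeeping concrete, here is a self-contained sketch of the index arithmetic the register/unregister code relies on (a shift of 6 and mask of 0x3f, matching 64-bit array elements; the names are illustrative, not the DPDK macros):

```c
#include <assert.h>
#include <stdint.h>

#define THRID_INDEX_SHIFT 6	/* 2^6 = 64 bits per array element */
#define THRID_MASK        0x3f	/* bit position within an element */

/* Set the bit for thread_id in the registration bitmap. */
static void bitmap_register(uint64_t *reg, unsigned int thread_id)
{
	unsigned int i  = thread_id >> THRID_INDEX_SHIFT; /* array index */
	unsigned int id = thread_id & THRID_MASK;         /* bit in element */

	reg[i] |= 1UL << id;
}

/* Clear the bit for thread_id. */
static void bitmap_unregister(uint64_t *reg, unsigned int thread_id)
{
	unsigned int i  = thread_id >> THRID_INDEX_SHIFT;
	unsigned int id = thread_id & THRID_MASK;

	reg[i] &= ~(1UL << id);
}
```

A checker can then skip whole 64-thread chunks whose bitmap element is zero, which is what allows scaling the number of active threads dynamically.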
> Might it be possible to store the register value for each thread inside its
> rte_rcu_qsbr_cnt:
> struct rte_rcu_qsbr_cnt {uint64_t cnt; uint32_t register;}
> __rte_cache_aligned; ?
> That would cause check() to walk through all elems in rte_rcu_qsbr_cnt array,
> but from other side would help to avoid cache conflicts for register/unregister.
With the addition of rte_rcu_qsbr_thread_online/offline APIs, the register/unregister APIs are not in critical path anymore. Hence, the cache conflicts are fine. The online/offline APIs work on thread specific cache lines and these are in the critical path.
>
> > +} __rte_cache_aligned;
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Return the size of the memory occupied by a Quiescent State variable.
> > + *
> > + * @param max_threads
> > + * Maximum number of threads reporting quiescent state on this variable.
> > + * @return
> > + * Size of memory in bytes required for this QS variable.
> > + */
> > +size_t __rte_experimental
> > +rte_rcu_qsbr_get_memsize(uint32_t max_threads);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Initialize a Quiescent State (QS) variable.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param max_threads
> > + * Maximum number of threads reporting QS on this variable.
> > + *
> > + */
> > +void __rte_experimental
> > +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Register a reader thread to report its quiescent state
> > + * on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe.
> > + * Any reader thread that wants to report its quiescent state must
> > + * call this API. This can be called during initialization or as part
> > + * of the packet processing loop.
> > + *
> > + * Note that rte_rcu_qsbr_thread_online must be called before the
> > + * thread updates its QS using rte_rcu_qsbr_update.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Reader thread with this thread ID will report its quiescent state on
> > + * the QS variable.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > +{
> > +	unsigned int i, id;
> > +
> > +	RTE_ASSERT(v != NULL && thread_id < v->m_threads);
> > +
> > +	id = thread_id & RTE_QSBR_THRID_MASK;
> > +	i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > +
> > +	/* Release the new register thread ID to other threads
> > +	 * calling rte_rcu_qsbr_check.
> > +	 */
> > +	__atomic_fetch_or(&v->reg_thread_id[i],
> > +			1UL << id, __ATOMIC_RELEASE);
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Remove a reader thread, from the list of threads reporting their
> > + * quiescent state on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread safe.
> > + * This API can be called from the reader threads during shutdown.
> > + * Ongoing QS queries will stop waiting for the status from this
> > + * unregistered reader thread.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Reader thread with this thread ID will stop reporting its quiescent
> > + * state on the QS variable.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > +{
> > +	unsigned int i, id;
> > +
> > +	RTE_ASSERT(v != NULL && thread_id < v->m_threads);
> > +
> > +	id = thread_id & RTE_QSBR_THRID_MASK;
> > +	i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > +
> > +	/* Make sure the removal of the thread from the list of
> > +	 * reporting threads is visible before the thread
> > +	 * does anything else.
> > +	 */
> > +	__atomic_fetch_and(&v->reg_thread_id[i],
> > +			~(1UL << id), __ATOMIC_RELEASE);
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Add a registered reader thread to the list of threads reporting their
> > + * quiescent state on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe.
> > + *
> > + * Any registered reader thread that wants to report its quiescent
> > + * state must call this API before calling rte_rcu_qsbr_update. This
> > + * can be called during initialization or as part of the packet
> > + * processing loop.
> > + *
> > + * The reader thread must call the rte_rcu_qsbr_thread_offline API,
> > + * before calling any functions that block, to ensure that
> > + * rte_rcu_qsbr_check does not wait indefinitely for the reader
> > + * thread to update its QS.
> > + *
> > + * The reader thread must call the rte_rcu_qsbr_thread_online API,
> > + * after the blocking function call returns, to ensure that
> > + * rte_rcu_qsbr_check waits for the reader thread to update its QS.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Reader thread with this thread ID will report its quiescent state on
> > + * the QS variable.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > +{
> > +	uint64_t t;
> > +
> > +	RTE_ASSERT(v != NULL && thread_id < v->m_threads);
> > +
> > + /* Copy the current value of token.
> > + * The fence at the end of the function will ensure that
> > + * the following will not move down after the load of any shared
> > + * data structure.
> > + */
> > + t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
> > +
> > + /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> > + * 'cnt' (64b) is accessed atomically.
> > + */
> > + __atomic_store_n(&RTE_QSBR_CNT_ARRAY_ELM(v, thread_id)->cnt,
> > + t, __ATOMIC_RELAXED);
> > +
> > + /* The subsequent load of the data structure should not
> > + * move above the store. Hence a store-load barrier
> > + * is required.
> > + * If the load of the data structure moves above the store,
> > + * writer might not see that the reader is online, even though
> > + * the reader is referencing the shared data structure.
> > + */
> > + __atomic_thread_fence(__ATOMIC_SEQ_CST);
>
> If it has to generate a proper memory-barrier here anyway, could it use
> rte_smp_mb() here?
> At least for IA it would generate more lightweight one.
I have used the C++11 memory model functions. I prefer to not mix it with barriers. Does ICC generate lightweight code for the above fence?
Is it ok to add rte_smp_mb for x86 alone?
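For illustration, the store-load ordering pattern under discussion, expressed with C11 atomics in a self-contained model (this only shows the fence placement, not the DPDK implementation; the variable names are stand-ins):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t token;	/* stand-in for v->token */
static _Atomic uint64_t my_cnt;	/* stand-in for this thread's QS counter */

/* Model of the online() sequence: publish the per-thread counter,
 * then issue a full fence so subsequent loads of shared data cannot
 * be reordered before the counter store.
 */
static void model_online(void)
{
	uint64_t t = atomic_load_explicit(&token, memory_order_relaxed);

	atomic_store_explicit(&my_cnt, t, memory_order_relaxed);

	/* Store-load barrier. On x86 this is where rte_smp_mb() would
	 * emit a single mfence (or a locked instruction), which is the
	 * lighter-weight alternative being proposed.
	 */
	atomic_thread_fence(memory_order_seq_cst);
}
```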
> Konstantin
>
> > +}
> > +
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH 3/3] doc/rcu: add lib_rcu documentation
2019-03-25 11:34 ` Kovacevic, Marko
2019-03-25 11:34 ` Kovacevic, Marko
@ 2019-03-26 4:43 ` Honnappa Nagarahalli
2019-03-26 4:43 ` Honnappa Nagarahalli
1 sibling, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-26 4:43 UTC (permalink / raw)
To: Kovacevic, Marko, Ananyev, Konstantin, stephen, paulmck, dev
Cc: Gavin Hu (Arm Technology China), Dharmik Thakkar, Malvika Gupta, nd
Hi Marko,
Thank you for your comments. I will make all the suggested changes in the next version.
<snip>
> > --
>
> If it's possible to enlarge the image a bit it would be good to be able to read
> the lower text I need to enlarge it to 175% maybe I'm just blind but if it's
> possible it would be great
I also think the image size is small. I had tried few things which did not work. I have few more ideas (may be just draw a bigger picture), I will try them out.
>
> Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v2 0/3] lib/rcu: add RCU library supporting QSBR mechanism
2018-11-22 3:30 [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library Honnappa Nagarahalli
` (6 preceding siblings ...)
2019-03-19 4:52 ` [dpdk-dev] [PATCH 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
@ 2019-03-27 5:52 ` Honnappa Nagarahalli
2019-03-27 5:52 ` Honnappa Nagarahalli
` (3 more replies)
2019-04-01 17:10 ` [dpdk-dev] [PATCH v3 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
` (6 subsequent siblings)
14 siblings, 4 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-27 5:52 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Lock-less data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
(for ex: real-time applications).
In the following paragraphs, the term 'memory' refers to memory allocated
by typical APIs like malloc or anything that is representative of
memory, for ex: an index of a free element array.
Since these data structures are lock-less, the writers and readers
are accessing the data structures concurrently. Hence, while removing
an element from a data structure, the writers cannot return the memory
to the allocator, without knowing that the readers are not
referencing that element/memory anymore. Hence, it is required to
separate the operation of removing an element into 2 steps:
Delete: in this step, the writer removes the reference to the element from
the data structure but does not return the associated memory to the
allocator. This will ensure that new readers will not get a reference to
the removed element. Removing the reference is an atomic operation.
Free(Reclaim): in this step, the writer returns the memory to the
memory allocator, only after knowing that all the readers have stopped
referencing the deleted element.
This library helps the writer determine when it is safe to free the
memory.
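A minimal model of the delete/free split described above, using an atomic pointer swap for the delete step (illustrative plain C; the "wait for readers" gap between the two steps is what the library's start/check APIs implement):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

struct elem {
	int value;
};

static struct elem *_Atomic head;	/* stand-in for a lock-less structure */

/* Step 1, Delete: atomically unlink the element so that new readers
 * cannot obtain a reference to it. The memory is NOT freed yet;
 * pre-existing readers may still hold a reference.
 */
static struct elem *delete_elem(void)
{
	return atomic_exchange(&head, NULL);
}

/* Step 2, Free: only after the grace period has elapsed (all readers
 * have reported a quiescent state) may the memory be returned to the
 * allocator.
 */
static void free_elem(struct elem *e)
{
	free(e);
}
```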
This library makes use of thread Quiescent State (QS). QS can be
defined as 'any point in the thread execution where the thread does
not hold a reference to shared memory'. It is up to the application to
determine its quiescent state. Let us consider the following diagram:
Time -------------------------------------------------->
| |
RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
| |
RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
| |
RT3 $++++****D1****+++***|D2***|++++++**D2*****++++$
| |
|<--->|
Del | Free
|
Cannot free memory
during this period
(Grace Period)
RTx - Reader thread
< and > - Start and end of while(1) loop
***Dx*** - Reader thread is accessing the shared data structure Dx.
i.e. critical section.
+++ - Reader thread is not accessing any shared data structure.
i.e. non critical section or quiescent state.
Del - Point in time when the reference to the entry is removed using
atomic operation.
Free - Point in time when the writer can free the entry.
Grace Period - Time duration between Del and Free, during which memory cannot
be freed.
As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
accessing D2, if the writer has to remove an element from D2, the
writer cannot free the memory associated with that element immediately.
The writer can return the memory to the allocator only after the reader
stops referencing D2. In other words, reader thread RT1 has to enter
a quiescent state.
Similarly, since thread RT3 is also accessing D2, writer has to wait till
RT3 enters quiescent state as well.
However, the writer does not need to wait for RT2 to enter quiescent state.
Thread RT2 was not accessing D2 when the delete operation happened.
So, RT2 will not get a reference to the deleted entry.
It can be noted that the critical sections for D2 and D3 are quiescent states
for D1. i.e. for a given data structure Dx, any point in the thread execution
that does not reference Dx is a quiescent state.
Since memory is not freed immediately, there might be a need for
provisioning of additional memory, depending on the application requirements.
It is important to make sure that this library keeps the overhead of
identifying the end of grace period and subsequent freeing of memory,
to a minimum. The following paras explain how grace period and critical
section affect this overhead.
The writer has to poll the readers to identify the end of the grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during the grace period. Longer grace periods
exacerbate these conditions.
The length of the critical section and the number of reader threads
are proportional to the duration of the grace period. Keeping the critical
sections smaller will keep the grace period smaller. However, keeping the
critical sections smaller requires additional CPU cycles (due to additional
reporting) in the readers.
Hence, we need the characteristics of small grace period and large critical
section. This library addresses this by allowing the writer to do
other work without having to block till the readers report their quiescent
state.
For DPDK applications, the start and end of while(1) loop (where no
references to shared data structures are kept) act as perfect quiescent
states. This will combine all the shared data structure accesses into a
single, large critical section which helps keep the overhead on the
reader side to a minimum.
DPDK supports pipeline model of packet processing and service cores.
In these use cases, a given data structure may not be used by all the
workers in the application. The writer does not have to wait for all
the workers to report their quiescent state. To provide the required
flexibility, this library has a concept of QS variable. The application
can create one QS variable per data structure to help it track the
end of grace period for each data structure. This helps keep the grace
period to a minimum.
The application has to allocate memory and initialize a QS variable.
Application can call rte_rcu_qsbr_get_memsize to calculate the size
of memory to allocate. This API takes maximum number of reader threads,
using this variable, as a parameter. Currently, a maximum of 1024 threads
are supported.
Further, the application can initialize a QS variable using the API
rte_rcu_qsbr_init.
Each reader thread is assumed to have a unique thread ID. Currently, the
management of the thread ID (for ex: allocation/free) is left to the
application. The thread ID should be in the range of 0 to
maximum number of threads provided while creating the QS variable.
The application could also use lcore_id as the thread ID where applicable.
rte_rcu_qsbr_thread_register API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
The reader thread must call rte_rcu_qsbr_thread_online API to start reporting
its quiescent state.
Some of the use cases might require the reader threads to make
blocking API calls (for ex: while using eventdev APIs). The writer thread
should not wait for such reader threads to enter quiescent state.
The reader thread must call rte_rcu_qsbr_thread_offline API, before calling
blocking APIs. It can call rte_rcu_qsbr_thread_online API once the blocking
API call returns.
The writer thread can trigger the reader threads to report their quiescent
state by calling the API rte_rcu_qsbr_start. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
rte_rcu_qsbr_start returns a token to each caller.
The writer thread has to call rte_rcu_qsbr_check API with the token to get the
current quiescent state status. Option to block till all the reader threads
enter the quiescent state is provided. If this API indicates that all the
reader threads have entered the quiescent state, the application can free the
deleted entry.
The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock-free. Hence, they
can be called concurrently from multiple writers even while running
as worker threads.
The separation of triggering the reporting from querying the status provides
the writer threads flexibility to do useful work instead of blocking for the
reader threads to enter the quiescent state or go offline. This reduces the
memory accesses due to continuous polling for the status.
rte_rcu_qsbr_synchronize API combines the functionality of rte_rcu_qsbr_start
and blocking rte_rcu_qsbr_check into a single API. This API triggers the reader
threads to report their quiescent state and polls till all the readers enter
the quiescent state or go offline. This API does not allow the writer to
do useful work while waiting and also introduces additional memory accesses
due to continuous polling.
The reader thread must call rte_rcu_qsbr_thread_offline and
rte_rcu_qsbr_thread_unregister APIs to remove itself from reporting its
quiescent state. The rte_rcu_qsbr_check API will not wait for this reader
thread to report the quiescent state status anymore.
The reader threads should call rte_rcu_qsbr_update API to indicate that they
entered a quiescent state. This API checks if a writer has triggered a
quiescent state query and updates the state accordingly.
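The writer/reader interaction described above can be modeled in a few lines of plain C (a single-threaded simulation of the token mechanism; the real rte_rcu_qsbr_start, rte_rcu_qsbr_update and rte_rcu_qsbr_check use atomics and the registration bitmap, both omitted here):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NTHREADS 2

static uint64_t token = 1;			/* global QS counter */
static uint64_t cnt[NTHREADS] = {1, 1};		/* per-reader counters */

/* Writer: trigger a QS query; returns the token to wait on. */
static uint64_t qsbr_start(void)
{
	return ++token;
}

/* Reader: report a quiescent state by copying the latest token. */
static void qsbr_update(unsigned int tid)
{
	cnt[tid] = token;
}

/* Writer: true once every reader has passed through a quiescent
 * state at least once since qsbr_start() returned this token,
 * i.e. the grace period has ended.
 */
static bool qsbr_check(uint64_t t)
{
	for (unsigned int i = 0; i < NTHREADS; i++)
		if (cnt[i] < t)
			return false;
	return true;
}
```

The separation of qsbr_start() from qsbr_check() is what lets the writer do useful work in between instead of blocking.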
Patch v2:
1) Library changes
a) Corrected the RTE_ASSERT checks (Konstantin)
b) Replaced RTE_ASSERT with 'if' checks for non-datapath APIs (Konstantin)
c) Made rte_rcu_qsbr_thread_register/unregister non-datapath critical APIs
d) Renamed rte_rcu_qsbr_update to rte_rcu_qsbr_quiescent (Ola)
e) Used rte_smp_mb() in rte_rcu_qsbr_thread_online API for x86 (Konstantin)
f) Removed the macro to access the thread QS counters (Konstantin)
2) Test cases
a) Added additional test cases for removing RTE_ASSERT
3) Documentation
a) Changed the figure to make it bigger (Marko)
b) Spelling and format corrections (Marko)
Patch v1:
1) Library changes
a) Changed the maximum number of reader threads to 1024
b) Renamed rte_rcu_qsbr_register/unregister_thread to
rte_rcu_qsbr_thread_register/unregister
c) Added rte_rcu_qsbr_thread_online/offline API. These are optimized
version of rte_rcu_qsbr_thread_register/unregister API. These
also provide the flexibility for performance when the requested
maximum number of threads is higher than the current number of
threads.
d) Corrected memory orderings in rte_rcu_qsbr_update
e) Changed the signature of rte_rcu_qsbr_start API to return the token
f) Changed the signature of rte_rcu_qsbr_start API to not take the
expected number of QS states to wait.
g) Added debug logs
h) Added API and programmer guide documentation.
RFC v3:
1) Library changes
a) Rebased to latest master
b) Added new API rte_rcu_qsbr_get_memsize
c) Add support for memory allocation for QSBR variable (Konstantin)
d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
2) Testcase changes
a) Separated stress tests into a performance test case file
b) Added performance statistics
RFC v2:
1) Cover letter changes
a) Explain the parameters that affect the overhead of using RCU
and their effect
b) Explain how this library addresses these effects to keep
the overhead to minimum
2) Library changes
a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
b) Simplify the code/remove APIs to keep this library in line with
other synchronisation mechanisms like locks (Konstantin)
c) Change the design to support more than 64 threads (Konstantin)
d) Fixed version map to remove static inline functions
3) Testcase changes
a) Add boundary and additional functional test cases
b) Add stress test cases (Paul E. McKenney)
Dharmik Thakkar (1):
test/rcu_qsbr: add API and functional tests
Honnappa Nagarahalli (2):
rcu: add RCU library supporting QSBR mechanism
doc/rcu: add lib_rcu documentation
MAINTAINERS | 5 +
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 986 ++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 615 +++++++++++
config/common_base | 6 +
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 +++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 179 ++++
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 184 ++++
lib/librte_rcu/rte_rcu_qsbr.h | 500 +++++++++
lib/librte_rcu/rte_rcu_version.map | 11 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
20 files changed, 3051 insertions(+), 3 deletions(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v2 0/3] lib/rcu: add RCU library supporting QSBR mechanism
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
@ 2019-03-27 5:52 ` Honnappa Nagarahalli
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 1/3] rcu: " Honnappa Nagarahalli
` (2 subsequent siblings)
3 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-27 5:52 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Lock-less data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
(for ex: real-time applications).
In the following paras, the term 'memory' refers to memory allocated
by typical APIs like malloc or anything that is representative of
memory, for ex: an index of a free element array.
Since these data structures are lock less, the writers and readers
are accessing the data structures concurrently. Hence, while removing
an element from a data structure, the writers cannot return the memory
to the allocator, without knowing that the readers are not
referencing that element/memory anymore. Hence, it is required to
separate the operation of removing an element into 2 steps:
Delete: in this step, the writer removes the reference to the element from
the data structure but does not return the associated memory to the
allocator. This will ensure that new readers will not get a reference to
the removed element. Removing the reference is an atomic operation.
Free(Reclaim): in this step, the writer returns the memory to the
memory allocator, only after knowing that all the readers have stopped
referencing the deleted element.
This library helps the writer determine when it is safe to free the
memory.
This library makes use of thread Quiescent State (QS). QS can be
defined as 'any point in the thread execution where the thread does
not hold a reference to shared memory'. It is upto the application to
determine its quiescent state. Let us consider the following diagram:
Time -------------------------------------------------->
| |
RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
| |
RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
| |
RT3 $++++****D1****+++***|D2***|++++++**D2*****++++$
| |
|<--->|
Del | Free
|
Cannot free memory
during this period
(Grace Period)
RTx - Reader thread
< and > - Start and end of while(1) loop
***Dx*** - Reader thread is accessing the shared data structure Dx.
i.e. critical section.
+++ - Reader thread is not accessing any shared data structure.
i.e. non critical section or quiescent state.
Del - Point in time when the reference to the entry is removed using
atomic operation.
Free - Point in time when the writer can free the entry.
Grace Period - Time duration between Del and Free, during which memory cannot
be freed.
As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
accessing D2, if the writer has to remove an element from D2, the
writer cannot free the memory associated with that element immediately.
The writer can return the memory to the allocator only after the reader
stops referencing D2. In other words, reader thread RT1 has to enter
a quiescent state.
Similarly, since thread RT3 is also accessing D2, writer has to wait till
RT3 enters quiescent state as well.
However, the writer does not need to wait for RT2 to enter quiescent state.
Thread RT2 was not accessing D2 when the delete operation happened.
So, RT2 will not get a reference to the deleted entry.
It can be noted that, the critical sections for D2 and D3 are quiescent states
for D1. i.e. for a given data structure Dx, any point in the thread execution
that does not reference Dx is a quiescent state.
Since memory is not freed immediately, there might be a need for
provisioning of additional memory, depending on the application requirements.
It is important to make sure that this library keeps the overhead of
identifying the end of grace period and subsequent freeing of memory,
to a minimum. The following paras explain how grace period and critical
section affect this overhead.
The writer has to poll the readers to identify the end of grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during grace period. Longer grace periods
exasperate these conditions.
The length of the critical section and the number of reader threads
is proportional to the duration of the grace period. Keeping the critical
sections smaller will keep the grace period smaller. However, keeping the
critical sections smaller requires additional CPU cycles(due to additional
reporting) in the readers.
Hence, we need the characteristics of small grace period and large critical
section. This library addresses this by allowing the writer to do
other work without having to block till the readers report their quiescent
state.
For DPDK applications, the start and end of while(1) loop (where no
references to shared data structures are kept) act as perfect quiescent
states. This will combine all the shared data structure accesses into a
single, large critical section which helps keep the overhead on the
reader side to a minimum.
DPDK supports pipeline model of packet processing and service cores.
In these use cases, a given data structure may not be used by all the
workers in the application. The writer does not have to wait for all
the workers to report their quiescent state. To provide the required
flexibility, this library has a concept of QS variable. The application
can create one QS variable per data structure to help it track the
end of grace period for each data structure. This helps keep the grace
period to a minimum.
The application has to allocate memory and initialize a QS variable.
Application can call rte_rcu_qsbr_get_memsize to calculate the size
of memory to allocate. This API takes maximum number of reader threads,
using this variable, as a parameter. Currently, a maximum of 1024 threads
are supported.
Further, the application can initialize a QS variable using the API
rte_rcu_qsbr_init.
Each reader thread is assumed to have a unique thread ID. Currently, the
management of the thread ID (for ex: allocation/free) is left to the
application. The thread ID should be in the range of 0 to
maximum number of threads provided while creating the QS variable.
The application could also use lcore_id as the thread ID where applicable.
rte_rcu_qsbr_thread_register API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
The reader thread must call rte_rcu_qsbr_thread_online API to start reporting
its quiescent state.
Some of the use cases might require the reader threads to make
blocking API calls (for ex: while using eventdev APIs). The writer thread
should not wait for such reader threads to enter quiescent state.
The reader thread must call rte_rcu_qsbr_thread_offline API, before calling
blocking APIs. It can call rte_rcu_qsbr_thread_online API once the blocking
API call returns.
The writer thread can trigger the reader threads to report their quiescent
state by calling the API rte_rcu_qsbr_start. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
rte_rcu_qsbr_start returns a token to each caller.
The writer thread has to call rte_rcu_qsbr_check API with the token to get the
current quiescent state status. Option to block till all the reader threads
enter the quiescent state is provided. If this API indicates that all the
reader threads have entered the quiescent state, the application can free the
deleted entry.
The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock free. Hence, they
can be called concurrently from multiple writers even while running
as worker threads.
The separation of triggering the reporting from querying the status provides
the writer threads flexibility to do useful work instead of blocking for the
reader threads to enter the quiescent state or go offline. This reduces the
memory accesses due to continuous polling for the status.
The rte_rcu_qsbr_synchronize API combines the functionality of
rte_rcu_qsbr_start and the blocking rte_rcu_qsbr_check into a single API.
It triggers the reader threads to report their quiescent state and polls
until all the readers enter the quiescent state or go offline. This API does
not allow the writer to do useful work while waiting, and it also introduces
additional memory accesses due to continuous polling.
The reader thread must call the rte_rcu_qsbr_thread_offline and
rte_rcu_qsbr_thread_unregister APIs to stop reporting its quiescent state.
The rte_rcu_qsbr_check API will no longer wait for this reader thread to
report its quiescent state.
The reader threads should call the rte_rcu_qsbr_quiescent API
(rte_rcu_qsbr_update in earlier revisions) to indicate that they have entered
a quiescent state. This API checks whether a writer has triggered a quiescent
state query and updates the state accordingly.
Patch v2:
1) Library changes
a) Corrected the RTE_ASSERT checks (Konstantin)
b) Replaced RTE_ASSERT with 'if' checks for non-datapath APIs (Konstantin)
c) Made rte_rcu_qsbr_thread_register/unregister non-datapath critical APIs
d) Renamed rte_rcu_qsbr_update to rte_rcu_qsbr_quiescent (Ola)
e) Used rte_smp_mb() in rte_rcu_qsbr_thread_online API for x86 (Konstantin)
f) Removed the macro to access the thread QS counters (Konstantin)
2) Test cases
a) Added additional test cases for removing RTE_ASSERT
3) Documentation
a) Changed the figure to make it bigger (Marko)
b) Spelling and format corrections (Marko)
Patch v1:
1) Library changes
a) Changed the maximum number of reader threads to 1024
b) Renamed rte_rcu_qsbr_register/unregister_thread to
rte_rcu_qsbr_thread_register/unregister
c) Added rte_rcu_qsbr_thread_online/offline APIs. These are optimized
versions of the rte_rcu_qsbr_thread_register/unregister APIs. They
also provide flexibility for better performance when the requested
maximum number of threads is higher than the current number of
threads.
d) Corrected memory orderings in rte_rcu_qsbr_update
e) Changed the signature of rte_rcu_qsbr_start API to return the token
f) Changed the signature of rte_rcu_qsbr_start API to not take the
expected number of QS states to wait.
g) Added debug logs
h) Added API and programmer guide documentation.
RFC v3:
1) Library changes
a) Rebased to latest master
b) Added new API rte_rcu_qsbr_get_memsize
c) Add support for memory allocation for QSBR variable (Konstantin)
d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
2) Testcase changes
a) Separated stress tests into a performance test case file
b) Added performance statistics
RFC v2:
1) Cover letter changes
a) Explain the parameters that affect the overhead of using RCU
and their effect
b) Explain how this library addresses these effects to keep
the overhead to a minimum
2) Library changes
a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
b) Simplify the code/remove APIs to keep this library in line with
other synchronisation mechanisms like locks (Konstantin)
c) Change the design to support more than 64 threads (Konstantin)
d) Fixed version map to remove static inline functions
3) Testcase changes
a) Add boundary and additional functional test cases
b) Add stress test cases (Paul E. McKenney)
Dharmik Thakkar (1):
test/rcu_qsbr: add API and functional tests
Honnappa Nagarahalli (2):
rcu: add RCU library supporting QSBR mechanism
doc/rcu: add lib_rcu documentation
MAINTAINERS | 5 +
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 986 ++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 615 +++++++++++
config/common_base | 6 +
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 +++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 179 ++++
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 184 ++++
lib/librte_rcu/rte_rcu_qsbr.h | 500 +++++++++
lib/librte_rcu/rte_rcu_version.map | 11 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
20 files changed, 3051 insertions(+), 3 deletions(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v2 1/3] rcu: add RCU library supporting QSBR mechanism
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-03-27 5:52 ` Honnappa Nagarahalli
@ 2019-03-27 5:52 ` Honnappa Nagarahalli
2019-03-27 5:52 ` Honnappa Nagarahalli
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
3 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-27 5:52 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add RCU library supporting quiescent state based memory reclamation method.
This library helps identify the quiescent state of the reader threads so
that the writers can free the memory associated with the lock less data
structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
---
MAINTAINERS | 5 +
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 ++
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 184 +++++++++++
lib/librte_rcu/rte_rcu_qsbr.h | 500 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 11 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
10 files changed, 738 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 452b8eb82..5827c1bbe 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1230,6 +1230,11 @@ F: examples/bpf/
F: app/test/test_bpf.c
F: doc/guides/prog_guide/bpf_lib.rst
+RCU - EXPERIMENTAL
+M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+F: lib/librte_rcu/
+F: doc/guides/prog_guide/rcu_lib.rst
+
Test Applications
-----------------
diff --git a/config/common_base b/config/common_base
index 0b09a9348..d3557ff3c 100644
--- a/config/common_base
+++ b/config/common_base
@@ -805,6 +805,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_rcu
+#
+CONFIG_RTE_LIBRTE_RCU=y
+CONFIG_RTE_LIBRTE_RCU_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index a358f1c19..b24a9363f 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -109,6 +109,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
+DEPDIRS-librte_rcu := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
new file mode 100644
index 000000000..6aa677bd1
--- /dev/null
+++ b/lib/librte_rcu/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_rcu.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_rcu_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
new file mode 100644
index 000000000..c009ae4b7
--- /dev/null
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..ee7a99bca
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,184 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Get the memory size of QSBR variable */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ size_t sz;
+
+ if (max_threads == 0 || max_threads > RTE_RCU_MAX_THREADS) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid max_threads %u\n",
+ __func__, max_threads);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+}
+
+/* Initialize a quiescent state variable */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
+{
+ size_t sz;
+
+ if (v == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = rte_rcu_qsbr_get_memsize(max_threads);
+ if (sz == 1)
+ return 1;
+
+ memset(v, 0, sz);
+ v->max_threads = max_threads;
+ v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->token = RTE_QSBR_CNT_INIT;
+
+ return 0;
+}
+
+/* Register a reader thread to report its quiescent state
+ * on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Release the newly registered thread ID to other threads
+ * calling rte_rcu_qsbr_check.
+ */
+ __atomic_fetch_or(&v->reg_thread_id[i], 1UL << id, __ATOMIC_RELEASE);
+
+ return 0;
+}
+
+/* Remove a reader thread, from the list of threads reporting their
+ * quiescent state on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure the removal of the thread from the list of
+ * reporting threads is visible before the thread
+ * does anything else.
+ */
+ __atomic_fetch_and(&v->reg_thread_id[i],
+ ~(1UL << id), __ATOMIC_RELEASE);
+
+ return 0;
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t;
+
+ if (v == NULL || f == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %lu\n",
+ rte_rcu_qsbr_get_memsize(v->max_threads));
+ fprintf(f, " Given # max threads = %u\n", v->max_threads);
+
+ fprintf(f, " Registered thread ID mask = 0x");
+ for (i = 0; i < v->num_elems; i++)
+ fprintf(f, "%lx", __atomic_load_n(&v->reg_thread_id[i],
+ __ATOMIC_ACQUIRE));
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %lu\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(&v->reg_thread_id[i], __ATOMIC_ACQUIRE);
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %lu\n",
+ (i << RTE_QSBR_THRID_INDEX_SHIFT) + t, __atomic_load_n(
+ &v->qsbr_cnt[(i << RTE_QSBR_THRID_INDEX_SHIFT) + t].cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ return 0;
+}
+
+int rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rcu_log_type = rte_log_register("lib.rcu");
+ if (rcu_log_type >= 0)
+ rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..c837c8916
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,500 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+
+extern int rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define RCU_DP_LOG(level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored as a bitmap in an array of 64b
+ * elements. A given thread ID needs to be converted into an array index
+ * and a bit position within that array element.
+ */
+#define RTE_RCU_MAX_THREADS 1024
+#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define RTE_QSBR_THRID_ARRAY_ELEMS \
+ (RTE_ALIGN_MUL_CEIL(RTE_RCU_MAX_THREADS, \
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) / RTE_QSBR_THRID_ARRAY_ELM_SIZE)
+#define RTE_QSBR_THRID_INDEX_SHIFT 6
+#define RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline */
+} __rte_cache_aligned;
+
+#define RTE_QSBR_CNT_THR_OFFLINE 0
+#define RTE_QSBR_CNT_INIT 1
+
+/* RTE Quiescent State variable structure. */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple concurrent quiescent state queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t max_threads;
+ /**< Maximum number of threads using this QS variable */
+
+ uint64_t reg_thread_id[RTE_QSBR_THRID_ARRAY_ELEMS] __rte_cache_aligned;
+ /**< Registered thread IDs are stored in a bitmap array */
+
+ struct rte_rcu_qsbr_cnt qsbr_cnt[0];
+ /**< Quiescent state counter array of 'max_threads' elements */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * On success - size of memory in bytes required for this QS variable.
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is either 0 or greater than RTE_RCU_MAX_THREADS
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is either 0 or greater than RTE_RCU_MAX_THREADS or
+ * 'v' is NULL.
+ *
+ */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable. thread_id is a value between 0 and (max_threads - 1).
+ * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread, from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing quiescent state queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread, to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_quiescent. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_online API, after the blocking
+ * function call returns, to ensure that rte_rcu_qsbr_check API
+ * waits for the reader thread to update its quiescent state.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that
+ * the following will not move down after the load of any shared
+ * data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+#ifdef RTE_ARCH_X86_64
+ /* rte_smp_mb() for x86 is lighter */
+ rte_smp_mb();
+#else
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed, i.e. any load of the
+ * data structure cannot move after this store.
+ */
+
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELEASE);
+
+ RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
+ __func__, t, thread_id);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same QS variable, it must update the quiescent state status, before
+ * calling this API.
+ *
+ * 2) In addition, while calling from multiple threads, only
+ * one of those threads can be reporting the quiescent state status
+ * on a given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state referenced by token 't'.
+ * @return
+ * - 0 if all reader threads have NOT passed through specified number
+ * of quiescent states.
+ * - 1 if all reader threads have passed through specified number
+ * of quiescent states.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t cnt;
+
+ RTE_ASSERT(v != NULL);
+
+ i = 0;
+ do {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(&v->reg_thread_id[i], __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
+ __func__, t, wait, bmap, id+j);
+ cnt = __atomic_load_n(
+ &v->qsbr_cnt[id + j].cnt,
+ __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait,
+ cnt, id + j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(cnt != RTE_QSBR_CNT_THR_OFFLINE &&
+ cnt < t)) {
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(
+ &v->reg_thread_id[i],
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+
+ i++;
+ } while (i < v->num_elems);
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, only one of
+ * those threads can be reporting the quiescent state status on a
+ * given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report quiescent state
+ * on this QS variable (i.e. the calling thread is also part of the
+ * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread has readside critical section,
+ * update its quiescent state status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_quiescent(v, thread_id);
+
+ /* Wait for other readers to enter quiescent state */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variables to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - NULL parameters are passed
+ */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..ad8cb517c
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,11 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_thread_register;
+ rte_rcu_qsbr_thread_unregister;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 99957ba7d..3feb44b75 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'vhost', 'rcu',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 262132fc6..2de0b5fc6 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -96,6 +96,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,184 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Get the memory size of QSBR variable */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ size_t sz;
+
+ if (max_threads == 0 || max_threads > RTE_RCU_MAX_THREADS) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid max_threads %u\n",
+ __func__, max_threads);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+}
+
+/* Initialize a quiescent state variable */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
+{
+ size_t sz;
+
+ if (v == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = rte_rcu_qsbr_get_memsize(max_threads);
+ if (sz == 1)
+ return 1;
+
+ memset(v, 0, sz);
+ v->max_threads = max_threads;
+ v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->token = RTE_QSBR_CNT_INIT;
+
+ return 0;
+}
+
+/* Register a reader thread to report its quiescent state
+ * on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Release the newly registered thread ID to other threads
+ * calling rte_rcu_qsbr_check.
+ */
+ __atomic_fetch_or(&v->reg_thread_id[i], 1UL << id, __ATOMIC_RELEASE);
+
+ return 0;
+}
+
+/* Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure any prior accesses to the shared data structure
+ * by this thread complete before its removal from the list
+ * of reporting threads becomes visible.
+ */
+ __atomic_fetch_and(&v->reg_thread_id[i],
+ ~(1UL << id), __ATOMIC_RELEASE);
+
+ return 0;
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t;
+
+ if (v == NULL || f == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %lu\n",
+ rte_rcu_qsbr_get_memsize(v->max_threads));
+ fprintf(f, " Given # max threads = %u\n", v->max_threads);
+
+ fprintf(f, " Registered thread ID mask = 0x");
+ for (i = 0; i < v->num_elems; i++)
+ fprintf(f, "%lx", __atomic_load_n(&v->reg_thread_id[i],
+ __ATOMIC_ACQUIRE));
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %lu\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(&v->reg_thread_id[i], __ATOMIC_ACQUIRE);
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %lu\n", t,
+ __atomic_load_n(
+ &v->qsbr_cnt[i].cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ return 0;
+}
+
+int rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rcu_log_type = rte_log_register("lib.rcu");
+ if (rcu_log_type >= 0)
+ rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..c837c8916
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,500 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+
+extern int rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define RCU_DP_LOG(level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored as a bitmap in an array of 64-bit
+ * elements. A given thread ID is converted into an index into the
+ * array and a bit position within that array element.
+ */
+#define RTE_RCU_MAX_THREADS 1024
+#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define RTE_QSBR_THRID_ARRAY_ELEMS \
+ (RTE_ALIGN_MUL_CEIL(RTE_RCU_MAX_THREADS, \
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) / RTE_QSBR_THRID_ARRAY_ELM_SIZE)
+#define RTE_QSBR_THRID_INDEX_SHIFT 6
+#define RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline */
+} __rte_cache_aligned;
+
+#define RTE_QSBR_CNT_THR_OFFLINE 0
+#define RTE_QSBR_CNT_INIT 1
+
+/* RTE Quiescent State variable structure. */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple concurrent quiescent state queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t max_threads;
+ /**< Maximum number of threads using this QS variable */
+
+ uint64_t reg_thread_id[RTE_QSBR_THRID_ARRAY_ELEMS] __rte_cache_aligned;
+ /**< Registered thread IDs are stored in a bitmap array */
+
+ struct rte_rcu_qsbr_cnt qsbr_cnt[0];
+ /**< Quiescent state counter array of 'max_threads' elements */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * On success - size of memory in bytes required for this QS variable.
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is either 0 or greater than RTE_RCU_MAX_THREADS
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is either 0 or greater than RTE_RCU_MAX_THREADS or
+ * 'v' is NULL.
+ *
+ */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable. thread_id is a value between 0 and (max_threads - 1).
+ * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing quiescent state queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_quiescent. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_online API, after the
+ * blocking
+ * function call returns, to ensure that rte_rcu_qsbr_check API
+ * waits for the reader thread to update its quiescent state.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that
+ * the following will not move down after the load of any shared
+ * data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+#ifdef RTE_ARCH_X86_64
+ /* rte_smp_mb() for x86 is lighter */
+ rte_smp_mb();
+#else
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed, i.e. any load of the
+ * data structure cannot move after this store.
+ */
+
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELEASE);
+
+ RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
+ __func__, t, thread_id);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same QS variable, it must update the quiescent state status, before
+ * calling this API.
+ *
+ * 2) In addition, while calling from multiple threads, only
+ * one of those threads can be reporting the quiescent state status
+ * on a given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state referenced by token 't'.
+ * @return
+ * - 0 if all reader threads have NOT entered the quiescent state
+ * referenced by token 't'.
+ * - 1 if all reader threads have entered the quiescent state
+ * referenced by token 't'.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t cnt;
+
+ RTE_ASSERT(v != NULL);
+
+ i = 0;
+ do {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(&v->reg_thread_id[i], __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
+ __func__, t, wait, bmap, id+j);
+ cnt = __atomic_load_n(
+ &v->qsbr_cnt[id + j].cnt,
+ __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, cnt, id + j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(cnt != RTE_QSBR_CNT_THR_OFFLINE &&
+ cnt < t)) {
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(
+ &v->reg_thread_id[i],
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+
+ i++;
+ } while (i < v->num_elems);
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, only one of
+ * those threads can be reporting the quiescent state status on a
+ * given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report quiescent state
+ * on this QS variable (i.e. the calling thread is also part of the
+ * read-side critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread is in a read-side critical section,
+ * update its quiescent state status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_quiescent(v, thread_id);
+
+ /* Wait for other readers to enter quiescent state */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - NULL parameters are passed
+ */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..ad8cb517c
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,11 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_thread_register;
+ rte_rcu_qsbr_thread_unregister;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 99957ba7d..3feb44b75 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'vhost', 'rcu',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 262132fc6..2de0b5fc6 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -96,6 +96,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v2 2/3] test/rcu_qsbr: add API and functional tests
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-03-27 5:52 ` Honnappa Nagarahalli
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 1/3] rcu: " Honnappa Nagarahalli
@ 2019-03-27 5:52 ` Honnappa Nagarahalli
2019-03-27 5:52 ` Honnappa Nagarahalli
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
3 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-27 5:52 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
---
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 986 ++++++++++++++++++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 615 +++++++++++++++++++++
5 files changed, 1621 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index 89949c2bb..6b6dfefc2 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -213,6 +213,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 5f87bb94d..c26ec889c 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -694,6 +694,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 05e5ddeb0..4df8e337b 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -107,6 +107,8 @@ test_sources = files('commands.c',
'test_timer.c',
'test_timer_perf.c',
'test_timer_racecond.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -132,7 +134,8 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
- 'timer'
+ 'timer',
+ 'rcu'
]
# All test cases in fast_parallel_test_names list are parallel
@@ -171,6 +174,7 @@ fast_parallel_test_names = [
'ring_autotest',
'ring_pmd_autotest',
'rwlock_autotest',
+ 'rcu_qsbr_autotest',
'sched_autotest',
'spinlock_autotest',
'string_autotest',
@@ -236,6 +240,7 @@ perf_test_names = [
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'pmd_perf_autotest',
]
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..ce3cfcf09
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,986 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > TEST_RCU_MAX_LCORE) {
printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_get_memsize: Return the size of memory, in bytes, required
+ * for a QS variable supporting the given maximum number of threads.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(0);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1), "Get Memsize for 0 threads");
+
+ sz = rte_rcu_qsbr_get_memsize(100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1),
+ "Get Memsize for large number of threads");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with cache line size of 64B - 8384
+ * for machines with cache line size of 128 - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_init: Initialize a QSBR variable.
+ */
+static int
+test_rcu_qsbr_init(void)
+{
+ int r;
+
+ printf("\nTest rte_rcu_qsbr_init()\n");
+
+ r = rte_rcu_qsbr_init(NULL, TEST_RCU_MAX_LCORE);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "NULL variable");
+
+ r = rte_rcu_qsbr_init(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "Large number of threads");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread, to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ int ret;
+
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ ret = rte_rcu_qsbr_thread_register(t[0], 100000);
TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
"Invalid thread id");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread, from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ ret = rte_rcu_qsbr_thread_unregister(t[0], 100000);
TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
"Invalid thread id");
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+
+ /* Test with enabled lcore */
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_quiescent(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_quiescent(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the worker threads have entered the
+ * quiescent state referenced by the token returned from the
+ * rte_rcu_qsbr_start API.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait until all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+ /* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread, to
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if the online thread, can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread, from
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+ /* Check if the online thread, can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ /* Negative tests */
+ rte_rcu_qsbr_dump(NULL, t[0]);
+ rte_rcu_qsbr_dump(stdout, NULL);
+ rte_rcu_qsbr_dump(NULL, NULL);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ snprintf(hash_name[hash_id], sizeof(hash_name[hash_id]), "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Multi writer, Multiple QS variable, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ writer_done = 0;
+ uint8_t test_cores;
+ /* Round the core count down to a multiple of 4 */
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+ printf("Test: %d writers, %d QSBR variables, simultaneous QSBR queries\n"
+ , test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_init() < 0)
+ goto test_fail;
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
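To summarize the protocol the tests above exercise: rte_rcu_qsbr_start() hands the writer a token, readers report quiescence via rte_rcu_qsbr_quiescent(), and rte_rcu_qsbr_check() tells the writer when every registered reader has caught up to that token. As a mental model only, the scheme can be sketched as a single-threaded toy; all toy_* names are hypothetical illustrations, not the rte_rcu_qsbr implementation (which uses per-thread atomic counters and online/offline state):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TOY_MAX_THREADS 8

static uint64_t toy_token;                /* bumped by each start() */
static uint64_t toy_cnt[TOY_MAX_THREADS]; /* last token each reader reported */
static bool toy_reg[TOY_MAX_THREADS];     /* reader registered? */

/* Writer: begin a grace period, get the token to wait on */
static uint64_t toy_start(void) { return ++toy_token; }

/* Reader: report a quiescent state by copying the current token */
static void toy_quiescent(int tid) { toy_cnt[tid] = toy_token; }

/* Writer: non-blocking check - have all registered readers reported
 * a counter at least as new as 'token'?
 */
static bool toy_check(uint64_t token)
{
	int i;

	for (i = 0; i < TOY_MAX_THREADS; i++)
		if (toy_reg[i] && toy_cnt[i] < token)
			return false;
	return true;
}

/* Replay the pattern used by test_rcu_qsbr_thread_unregister() above */
static bool toy_demo(void)
{
	uint64_t token;

	toy_reg[0] = toy_reg[1] = true;
	token = toy_start();
	if (toy_check(token))		/* no reader has reported yet */
		return false;
	toy_quiescent(0);
	if (toy_check(token))		/* reader 1 still outstanding */
		return false;
	toy_quiescent(1);
	if (!toy_check(token))		/* all readers reported: safe to free */
		return false;
	toy_reg[1] = false;		/* unregistered readers are skipped */
	token = toy_start();
	toy_quiescent(0);
	return toy_check(token);	/* only reader 0 is waited on now */
}
```

Only after the check succeeds may the writer free memory it removed before start(); and because unregistering removes a reader from the set the check waits on, a blocking check completes even when no thread remains registered, which is what the tests above validate.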
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..a69f827d8
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,615 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Maximum number of lcores supported by the tests */
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static volatile uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t lcore_id = rte_lcore_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ /* Register for report QS */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ /* Unregister before exiting so the writer does not wait on this thread */
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple Readers, Single QS var, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: %d Readers/1 Writer('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Multiple readers, Single QS variable, no writer
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writers, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+ /* Wait until all writers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ snprintf(hash_name[hash_id], sizeof(hash_name[hash_id]), "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i;
+ int32_t pos;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, "
+ "Blocking QSBR Check\n", num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(register/update/unregister): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret;
+ int32_t pos;
+ writer_done = 0;
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, "
+ "Non-Blocking QSBR check\n", num_cores);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(register/update/unregister): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ free_rcu();
+
+ return 0;
+
+test_fail:
+ free_rcu();
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
* [dpdk-dev] [PATCH v2 2/3] test/rcu_qsbr: add API and functional tests
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-03-27 5:52 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-27 5:52 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
---
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 986 ++++++++++++++++++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 615 +++++++++++++++++++++
5 files changed, 1621 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index 89949c2bb..6b6dfefc2 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -213,6 +213,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 5f87bb94d..c26ec889c 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -694,6 +694,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 05e5ddeb0..4df8e337b 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -107,6 +107,8 @@ test_sources = files('commands.c',
'test_timer.c',
'test_timer_perf.c',
'test_timer_racecond.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -132,7 +134,8 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
- 'timer'
+ 'timer',
+ 'rcu'
]
# All test cases in fast_parallel_test_names list are parallel
@@ -171,6 +174,7 @@ fast_parallel_test_names = [
'ring_autotest',
'ring_pmd_autotest',
'rwlock_autotest',
+ 'rcu_qsbr_autotest',
'sched_autotest',
'spinlock_autotest',
'string_autotest',
@@ -236,6 +240,7 @@ perf_test_names = [
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'pmd_perf_autotest',
]
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..ce3cfcf09
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,986 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_get_memsize: Returns the size of memory, in bytes, required
+ * for a QSBR variable supporting the given maximum number of threads.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
+ printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(0);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1), "Get Memsize for 0 threads");
+
+ sz = rte_rcu_qsbr_get_memsize(100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1),
+ "Get Memsize for large number of threads");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with cache line size of 64B - 8384
+ * for machines with cache line size of 128 - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_init: Initialize a QSBR variable.
+ */
+static int
+test_rcu_qsbr_init(void)
+{
+ int r;
+
+ printf("\nTest rte_rcu_qsbr_init()\n");
+
+ r = rte_rcu_qsbr_init(NULL, TEST_RCU_MAX_LCORE);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "NULL variable");
+
+ r = rte_rcu_qsbr_init(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "Large number of threads");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread, to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ int ret;
+
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ ret = rte_rcu_qsbr_thread_register(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Valid variable, invalid thread id");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread, from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ ret = rte_rcu_qsbr_thread_unregister(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Valid variable, invalid thread id");
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+
+ /* Test with enabled lcore */
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_quiescent(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_quiescent(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the worker threads have entered the
+ * quiescent state 'n' number of times. 'n' is the token returned by the
+ * rte_rcu_qsbr_start API.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait until all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+ /* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread, to
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if the online thread, can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread, from
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+ /* Check if the online thread, can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ /* Negative tests */
+ rte_rcu_qsbr_dump(NULL, t[0]);
+ rte_rcu_qsbr_dump(stdout, NULL);
+ rte_rcu_qsbr_dump(NULL, NULL);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ snprintf(hash_name[hash_id], sizeof(hash_name[hash_id]),
+ "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Functional test:
+ * Multiple writers, Multiple QS variables, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ writer_done = 0;
+ uint8_t test_cores;
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+ printf("Test: %d writers, %d QSBR variables, simultaneous QSBR queries\n"
+ , test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_init() < 0)
+ goto test_fail;
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..a69f827d8
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,615 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Maximum number of lcores supported by the tests */
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static volatile uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+		printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t lcore_id = rte_lcore_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+	/* Register to report quiescent state */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+	/* Unregister before exiting to keep the writer from waiting */
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple readers, Single QS var, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: %d Readers/1 Writer('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Multiple readers, No writer, Single QS variable
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writers, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+	/* Wait until all writers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i;
+ int32_t pos;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, "
+ "Blocking QSBR Check\n", num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+	for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+	printf("Cycles per 1 update (register/update/unregister): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+	printf("Cycles per 1 check (start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret;
+ int32_t pos;
+ writer_done = 0;
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, "
+ "Non-Blocking QSBR check\n", num_cores);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+	printf("Cycles per 1 update (register/update/unregister): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+	printf("Cycles per 1 check (start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ free_rcu();
+
+ return 0;
+
+test_fail:
+ free_rcu();
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
* [dpdk-dev] [PATCH v2 3/3] doc/rcu: add lib_rcu documentation
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
` (2 preceding siblings ...)
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-03-27 5:52 ` Honnappa Nagarahalli
2019-03-27 5:52 ` Honnappa Nagarahalli
3 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-27 5:52 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 179 ++++++
5 files changed, 692 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index d95ad566c..5c1f6b477 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a365e669b..0b4c248a2 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..e7ca1dacb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,509 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em;font-weight:bold}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.75001em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st13 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st14 {fill:#000000;font-family:Century Gothic;font-size:1.99999em}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.99999em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-239);marker-start:url(#mrkr5-237);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-248);marker-start:url(#mrkr5-246);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-239);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-237" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-239" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-246" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-248" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(327.227,-946.908)">
+ <title>Sheet.3</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(460.665,-944.869)">
+ <title>Sheet.4</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(519.302,-950.79)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(612.438,-944.869)">
+ <title>Sheet.6</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(664.388,-945.889)">
+ <title>Sheet.7</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(723.025,-951.494)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(814.123,-945.889)">
+ <title>Sheet.9</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-952.759)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="146.259" cy="1169.64" width="292.52" height="36.7136"/>
+ <path d="M292.52 1151.29 L0 1151.29 L0 1188 L292.52 1188 L292.52 1151.29" class="st3"/>
+ <text x="58.76" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(379.176,-863.295)">
+ <title>Sheet.11</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(512.614,-861.255)">
+ <title>Sheet.12</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(561.284,-867.106)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(664.388,-861.255)">
+ <title>Sheet.14</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(716.337,-862.275)">
+ <title>Sheet.15</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(775.009,-867.81)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(866.073,-862.275)">
+ <title>Sheet.17</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(143.348,-873.294)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(474.188,-777.642)">
+ <title>Sheet.19</title>
+ <path d="M0 1143.01 C0 1138.04 4.07 1133.96 9.04 1133.96 L124.46 1133.96 C129.43 1133.96 133.44 1138.04 133.44 1143.01
+ L133.44 1179.01 C133.44 1183.99 129.43 1188 124.46 1188 L9.04 1188 C4.07 1188 0 1183.99 0 1179.01 L0 1143.01
+ Z" class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(608.645,-775.602)">
+ <title>Sheet.20</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(666.862,-781.311)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(760.418,-775.602)">
+ <title>Sheet.22</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(812.367,-776.622)">
+ <title>Sheet.23</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(870.584,-782.015)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(962.103,-776.622)">
+ <title>Sheet.25</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(142.645,-787.5)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(882.826,-574.263)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="21.32" y="1173.77" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(419.545,-660.119)">
+ <title>Sheet.29</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(419.545,-684.783)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L82.7 1187.36 L151.2 1172.07" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(214.454,-663.095)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(571.287,-681.326)">
+ <title>Sheet.33</title>
+ <path d="M0 738.67 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(515.013,-1130.65)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="60.7243" cy="1166.58" width="121.45" height="42.8314"/>
+ <path d="M121.45 1145.17 L0 1145.17 L0 1188 L121.45 1188 L121.45 1145.17" class="st3"/>
+ <text x="26.02" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(434.372,-1096.8)">
+ <title>Sheet.35</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(434.372,-1100.37)">
+ <title>Sheet.36</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(193.5,-1103.76)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(714.3,-675.425)">
+ <title>Sheet.38</title>
+ <path d="M0 732.77 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(795.979,-637.904)">
+ <title>Sheet.39</title>
+ <path d="M0 1112.54 L0 1188 L0 1112.54" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(716.782,-675.425)">
+ <title>Sheet.40</title>
+ <path d="M79.2 1188 L52.71 1187.94 L0 1147.21" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(803.572,-639.285)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="172.421" cy="1152.51" width="344.85" height="70.9752"/>
+ <path d="M344.84 1117.02 L0 1117.02 L0 1188 L344.84 1188 L344.84 1117.02" class="st3"/>
+ <text x="0" y="1133.61" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(680.801,-1130.65)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="42.0169" cy="1166.58" width="84.04" height="42.8314"/>
+ <path d="M84.03 1145.17 L0 1145.17 L0 1188 L84.03 1188 L84.03 1145.17" class="st3"/>
+ <text x="18.89" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(811.005,-1110.05)">
+ <title>Sheet.48</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(658.61,-1083.99)">
+ <title>Sheet.49</title>
+ <path d="M153.05 1149.63 L113.7 1149.57 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(798.359,-1110.46)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="107.799" cy="1167.81" width="215.6" height="40.3845"/>
+ <path d="M215.6 1147.62 L0 1147.62 L0 1188 L215.6 1188 L215.6 1147.62" class="st3"/>
+ <text x="43.79" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(599.196,-662.779)">
+ <title>Sheet.51</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(464.931,-1052.95)">
+ <title>Sheet.52</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(464.931,-1056.52)">
+ <title>Sheet.53</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(225,-1058.76)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(711.244,-662.779)">
+ <title>Sheet.56</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(664.897,-1045.31)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L146.76 1112.94" class="st13"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(619.059,-848.701)">
+ <title>Sheet.58</title>
+ <path d="M432.2 1184.24 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(1038.62,-837.364)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="130" cy="1167.81" width="260.01" height="40.3845"/>
+ <path d="M260 1147.62 L0 1147.62 L0 1188 L260 1188 L260 1147.62" class="st3"/>
+ <text x="52.25" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(621.606,-848.828)">
+ <title>Sheet.60</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(824.31,-849.848)">
+ <title>Sheet.61</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(345.944,-933.143)">
+ <title>Sheet.62</title>
+ <path d="M705.32 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(1038.62,-915.684)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="137.691" cy="1167.81" width="275.39" height="40.3845"/>
+ <path d="M275.38 1147.62 L0 1147.62 L0 1188 L275.38 1188 L275.38 1147.62" class="st3"/>
+ <text x="55.18" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(346.581,-932.442)">
+ <title>Sheet.64</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(621.606,-933.461)">
+ <title>Sheet.65</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(856.905,-934.481)">
+ <title>Sheet.66</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(472.82,-756.389)">
+ <title>Sheet.67</title>
+ <path d="M578.44 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(473.456,-755.688)">
+ <title>Sheet.68</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(1016.87,-757.728)">
+ <title>Sheet.69</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(1060.04,-738.651)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1167.81" width="193.55" height="40.3845"/>
+ <path d="M193.55 1147.62 L0 1147.62 L0 1188 L193.55 1188 L193.55 1147.62" class="st3"/>
+ <text x="31.03" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(190.02,-464.886)">
+ <title>Sheet.71</title>
+ <path d="M0 1151.91 C0 1148.19 3.88 1145.17 8.66 1145.17 L43.29 1145.17 C48.13 1145.17 51.95 1148.19 51.95 1151.91 L51.95
+ 1181.26 C51.95 1185.03 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1185.03 0 1181.26 L0 1151.91 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(259.003,-466.895)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. non critical section or quiescent state.</tspan></text> </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(190.02,-389.169)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="25.9746" cy="1166.58" width="51.95" height="42.8314"/>
+ <path d="M0 1152.31 C0 1148.39 1.43 1145.17 3.16 1145.17 L48.79 1145.17 C50.55 1145.17 51.95 1148.39 51.95 1152.31 L51.95
+ 1180.86 C51.95 1184.83 50.55 1188 48.79 1188 L3.16 1188 C1.43 1188 0 1184.83 0 1180.86 L0 1152.31 Z"
+ class="st2"/>
+ <text x="12.9" y="1173.78" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(259.003,-388.777)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(289.017,-301.151)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="332.491" cy="1160.47" width="664.99" height="55.0626"/>
+ <path d="M664.98 1132.94 L0 1132.94 L0 1188 L664.98 1188 L664.98 1132.94" class="st3"/>
+ <text x="0" y="1153.27" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is removed <tspan
+ x="0" dy="1.2em" class="st9">using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(177.543,-315.596)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(288,-239.327)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1175.01" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted entry.</text> </g>
+ <g id="shape78-217" v:mID="78" v:groupContext="shape" transform="translate(177.543,-240.744)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="34.3786" cy="1166.58" width="68.76" height="42.8314"/>
+ <path d="M68.76 1145.17 L0 1145.17 L0 1188 L68.76 1188 L68.76 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-221" v:mID="79" v:groupContext="shape" transform="translate(289.228,-163.612)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1160.61" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during which <tspan
+ x="0" dy="1.2em" class="st9">memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-226" v:mID="80" v:groupContext="shape" transform="translate(187.999,-162)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="39.5985" cy="1166.58" width="79.2" height="42.8314"/>
+ <path d="M79.2 1145.17 L0 1145.17 L0 1188 L79.2 1188 L79.2 1145.17" class="st3"/>
+ <text x="0" y="1158.96" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-231" v:mID="83" v:groupContext="shape" transform="translate(572.146,-1080.07)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L132.49 1188" class="st16"/>
+ </g>
+ <g id="shape84-240" v:mID="84" v:groupContext="shape" transform="translate(599.196,-1042.14)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L104.28 1188" class="st18"/>
+ </g>
+ <g id="shape85-249" v:mID="85" v:groupContext="shape" transform="translate(980.637,-595.338)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L92.16 1188" class="st20"/>
+ </g>
+ <g id="shape86-254" v:mID="86" v:groupContext="shape" transform="translate(444.835,-603.428)">
+ <title>Sheet.86</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape87-257" v:mID="87" v:groupContext="shape" transform="translate(444.835,-637.489)">
+ <title>Sheet.87</title>
+ <path d="M0 1188 L84.43 1186.61 L154.36 1153.31" class="st7"/>
+ </g>
+ <g id="shape88-260" v:mID="88" v:groupContext="shape" transform="translate(241.369,-607.028)">
+ <title>Sheet.88</title>
+ <desc>Remove reference to entry2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry2</tspan></text> </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..6fb3fb921 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..dfe45fa62
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,179 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-less data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for ex: real-time applications).
+
+In the following paragraphs, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or anything representative of memory
+(for ex: an index into a free element array).
+
+Since these data structures are lock-less, writers and readers access
+the data structures concurrently. Hence, while removing an element from
+a data structure, a writer cannot return the memory to the allocator
+without knowing that no reader is still referencing that element/memory.
+Hence, the operation of removing an element must be separated into 2 steps:
+
+Delete: in this step, the writer removes the reference to the element from
+the data structure but does not return the associated memory to the
+allocator. This will ensure that new readers will not get a reference to
+the removed element. Removing the reference is an atomic operation.
+
+Free (Reclaim): in this step, the writer returns the memory to the
+memory allocator, only after knowing that all the readers have stopped
+referencing the deleted element.
+
+This library helps the writer determine when it is safe to free the
+memory.
+
+This library makes use of the thread Quiescent State (QS).
+
+What is Quiescent State
+-----------------------
+Quiescent State can be defined as 'any point in the thread execution where the
+thread does not hold a reference to shared memory'. It is up to the application
+to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. When it is
+accessing D1, if the writer has to remove an element from D1, the
+writer cannot free the memory associated with that element immediately.
+The writer can return the memory to the allocator only after the reader
+stops referencing D1. In other words, reader thread 1 has to enter
+a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, the writer has to
+wait until reader thread 2 enters a quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+a quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened. So, reader thread 3 cannot have a reference to the
+deleted entry.
+
+It can be noted that a critical section for D2 is a quiescent state
+for D1; i.e. for a given data structure Dx, any point in the thread
+execution that does not reference Dx is a quiescent state for Dx.
+
+Since memory is not freed immediately, the application might need to
+provision additional memory, depending on its requirements.
+
+Factors affecting RCU mechanism
+---------------------------------
+
+It is important that this library keep the overhead of identifying the
+end of the grace period, and of the subsequent freeing of memory, to a
+minimum. The following paragraphs explain how the grace period and the
+critical sections affect this overhead.
+
+The writer has to poll the readers to identify the end of the grace
+period. Polling introduces memory accesses and wastes CPU cycles. The
+memory is not available for reuse during the grace period. Longer grace
+periods exacerbate these conditions.
+
+The duration of the grace period is proportional to the length of the
+critical sections and the number of reader threads. Keeping the critical
+sections smaller keeps the grace period smaller. However, smaller
+critical sections require additional CPU cycles (due to additional
+reporting) in the readers.
+
+Hence, what is needed is a short grace period combined with large
+critical sections. This library addresses the conflict by allowing the
+writer to do other work without having to block until the readers report
+their quiescent state.
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the start and end of the while(1) loop (where no
+references to shared data structures are kept) act as perfect quiescent
+states. This combines all the shared data structure accesses into a
+single, large critical section, which helps keep the overhead on the
+reader side to a minimum.
+
+DPDK supports the pipeline model of packet processing and service cores.
+In these use cases, a given data structure may not be used by all the
+workers in the application. The writer does not have to wait for all
+the workers to report their quiescent state. To provide the required
+flexibility, this library has a concept of QS variable. The application
+can create one QS variable per data structure to help it track the
+end of grace period for each data structure. This helps keep the grace
+period to a minimum.
+
+How to use this library
+-----------------------
+
+The application must allocate memory and initialize a QS variable.
+
+The application can call ``rte_rcu_qsbr_get_memsize`` to calculate the
+amount of memory to allocate. This API takes, as a parameter, the
+maximum number of reader threads that will use this QS variable.
+Currently, a maximum of 1024 threads is supported.
+
+Further, the application can initialize a QS variable using the API
+``rte_rcu_qsbr_init``.
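As a sketch of these two steps (``rte_malloc`` and the constants below are used here only for illustration, and error handling is omitted):

```c
/* Hypothetical sketch: allocate and initialize a QS variable for up to
 * 128 reader threads. Error handling is omitted for brevity. */
uint32_t max_threads = 128;
size_t sz = rte_rcu_qsbr_get_memsize(max_threads);
struct rte_rcu_qsbr *v = rte_malloc(NULL, sz, RTE_CACHE_LINE_SIZE);
rte_rcu_qsbr_init(v, max_threads);
```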
+
+Each reader thread is assumed to have a unique thread ID. Currently, the
+management of thread IDs (for ex: allocation/free) is left to the
+application. The thread ID should be in the range 0 to (maximum number
+of threads - 1) provided while creating the QS variable.
+The application could also use the lcore_id as the thread ID where applicable.
+
+The ``rte_rcu_qsbr_thread_register`` API registers a reader thread
+to report its quiescent state. This can be called from a reader thread.
+A control plane thread can also call this on behalf of a reader thread.
+The reader thread must call the ``rte_rcu_qsbr_thread_online`` API to
+start reporting its quiescent state.
+
+Some use cases might require the reader threads to make blocking API
+calls (for ex: while using eventdev APIs). The writer thread should not
+wait for such reader threads to enter the quiescent state. The reader
+thread must call the ``rte_rcu_qsbr_thread_offline`` API before calling
+blocking APIs, and can call the ``rte_rcu_qsbr_thread_online`` API once
+the blocking API call returns.
+
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the API ``rte_rcu_qsbr_start``. It is possible for multiple
+writer threads to query the quiescent state status simultaneously. Hence,
+``rte_rcu_qsbr_start`` returns a token to each caller.
+
+The writer thread must call the ``rte_rcu_qsbr_check`` API with the token
+to get the current quiescent state status. An option to block until all
+the reader threads enter the quiescent state is provided. If this API
+indicates that all the reader threads have entered the quiescent state,
+the application can free the deleted entry.
+
+The APIs ``rte_rcu_qsbr_start`` and ``rte_rcu_qsbr_check`` are lock-free.
+Hence, they can be called concurrently from multiple writers, even while
+running as worker threads.
+
+The separation of triggering the reporting from querying the status provides
+the writer threads flexibility to do useful work instead of blocking for the
+reader threads to enter the quiescent state or go offline. This reduces the
+memory accesses due to continuous polling for the status.
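Putting the reader and writer sides together, the flow might look like the following sketch (``v``, ``thread_id``, ``done``, ``remove_entry`` and ``free_entry`` are hypothetical application-level names, not part of the library):

```c
/* Reader thread (sketch). */
rte_rcu_qsbr_thread_register(v, thread_id);
rte_rcu_qsbr_thread_online(v, thread_id);
while (!done) {
	/* ... access shared data structures (critical section) ... */
	rte_rcu_qsbr_quiescent(v, thread_id);  /* report quiescent state */
}
rte_rcu_qsbr_thread_offline(v, thread_id);
rte_rcu_qsbr_thread_unregister(v, thread_id);

/* Writer thread (sketch). */
remove_entry(d1, entry);             /* delete: atomic removal of reference */
uint64_t t = rte_rcu_qsbr_start(v);  /* trigger quiescent state reporting */
/* ... do other useful work ... */
rte_rcu_qsbr_check(v, t, true);      /* block until grace period ends */
free_entry(entry);                   /* free: now safe to reclaim memory */
```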
+
+The ``rte_rcu_qsbr_synchronize`` API combines the functionality of
+``rte_rcu_qsbr_start`` and a blocking ``rte_rcu_qsbr_check`` into a single
+API. This API triggers the reader threads to report their quiescent state
+and polls until all the readers enter the quiescent state or go offline.
+This API does not allow the writer to do useful work while waiting, and it
+introduces additional memory accesses due to the continuous polling.
+
+A reader thread must call the ``rte_rcu_qsbr_thread_offline`` and
+``rte_rcu_qsbr_thread_unregister`` APIs to stop reporting its
+quiescent state. The ``rte_rcu_qsbr_check`` API will no longer wait for
+this reader thread to report its quiescent state.
+
+The reader threads should call the ``rte_rcu_qsbr_quiescent`` API to indicate
+that they have entered a quiescent state. This API checks if a writer has
+triggered a quiescent state query and updates the state accordingly.
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v2 3/3] doc/rcu: add lib_rcu documentation
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
@ 2019-03-27 5:52 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-27 5:52 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 179 ++++++
5 files changed, 692 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index d95ad566c..5c1f6b477 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a365e669b..0b4c248a2 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..e7ca1dacb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,509 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em;font-weight:bold}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.75001em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st13 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st14 {fill:#000000;font-family:Century Gothic;font-size:1.99999em}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.99999em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-239);marker-start:url(#mrkr5-237);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-248);marker-start:url(#mrkr5-246);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-239);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-237" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-239" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-246" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-248" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(327.227,-946.908)">
+ <title>Sheet.3</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(460.665,-944.869)">
+ <title>Sheet.4</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(519.302,-950.79)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(612.438,-944.869)">
+ <title>Sheet.6</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(664.388,-945.889)">
+ <title>Sheet.7</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(723.025,-951.494)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(814.123,-945.889)">
+ <title>Sheet.9</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-952.759)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="146.259" cy="1169.64" width="292.52" height="36.7136"/>
+ <path d="M292.52 1151.29 L0 1151.29 L0 1188 L292.52 1188 L292.52 1151.29" class="st3"/>
+ <text x="58.76" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(379.176,-863.295)">
+ <title>Sheet.11</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(512.614,-861.255)">
+ <title>Sheet.12</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(561.284,-867.106)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(664.388,-861.255)">
+ <title>Sheet.14</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(716.337,-862.275)">
+ <title>Sheet.15</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(775.009,-867.81)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(866.073,-862.275)">
+ <title>Sheet.17</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(143.348,-873.294)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(474.188,-777.642)">
+ <title>Sheet.19</title>
+ <path d="M0 1143.01 C0 1138.04 4.07 1133.96 9.04 1133.96 L124.46 1133.96 C129.43 1133.96 133.44 1138.04 133.44 1143.01
+ L133.44 1179.01 C133.44 1183.99 129.43 1188 124.46 1188 L9.04 1188 C4.07 1188 0 1183.99 0 1179.01 L0 1143.01
+ Z" class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(608.645,-775.602)">
+ <title>Sheet.20</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(666.862,-781.311)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(760.418,-775.602)">
+ <title>Sheet.22</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(812.367,-776.622)">
+ <title>Sheet.23</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(870.584,-782.015)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(962.103,-776.622)">
+ <title>Sheet.25</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(142.645,-787.5)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(882.826,-574.263)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="21.32" y="1173.77" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(419.545,-660.119)">
+ <title>Sheet.29</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(419.545,-684.783)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L82.7 1187.36 L151.2 1172.07" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(214.454,-663.095)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(571.287,-681.326)">
+ <title>Sheet.33</title>
+ <path d="M0 738.67 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(515.013,-1130.65)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="60.7243" cy="1166.58" width="121.45" height="42.8314"/>
+ <path d="M121.45 1145.17 L0 1145.17 L0 1188 L121.45 1188 L121.45 1145.17" class="st3"/>
+ <text x="26.02" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(434.372,-1096.8)">
+ <title>Sheet.35</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(434.372,-1100.37)">
+ <title>Sheet.36</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(193.5,-1103.76)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(714.3,-675.425)">
+ <title>Sheet.38</title>
+ <path d="M0 732.77 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(795.979,-637.904)">
+ <title>Sheet.39</title>
+ <path d="M0 1112.54 L0 1188 L0 1112.54" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(716.782,-675.425)">
+ <title>Sheet.40</title>
+ <path d="M79.2 1188 L52.71 1187.94 L0 1147.21" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(803.572,-639.285)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="172.421" cy="1152.51" width="344.85" height="70.9752"/>
+ <path d="M344.84 1117.02 L0 1117.02 L0 1188 L344.84 1188 L344.84 1117.02" class="st3"/>
+ <text x="0" y="1133.61" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(680.801,-1130.65)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="42.0169" cy="1166.58" width="84.04" height="42.8314"/>
+ <path d="M84.03 1145.17 L0 1145.17 L0 1188 L84.03 1188 L84.03 1145.17" class="st3"/>
+ <text x="18.89" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(811.005,-1110.05)">
+ <title>Sheet.48</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(658.61,-1083.99)">
+ <title>Sheet.49</title>
+ <path d="M153.05 1149.63 L113.7 1149.57 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(798.359,-1110.46)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="107.799" cy="1167.81" width="215.6" height="40.3845"/>
+ <path d="M215.6 1147.62 L0 1147.62 L0 1188 L215.6 1188 L215.6 1147.62" class="st3"/>
+ <text x="43.79" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(599.196,-662.779)">
+ <title>Sheet.51</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(464.931,-1052.95)">
+ <title>Sheet.52</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(464.931,-1056.52)">
+ <title>Sheet.53</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(225,-1058.76)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(711.244,-662.779)">
+ <title>Sheet.56</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(664.897,-1045.31)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L146.76 1112.94" class="st13"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(619.059,-848.701)">
+ <title>Sheet.58</title>
+ <path d="M432.2 1184.24 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(1038.62,-837.364)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="130" cy="1167.81" width="260.01" height="40.3845"/>
+ <path d="M260 1147.62 L0 1147.62 L0 1188 L260 1188 L260 1147.62" class="st3"/>
+ <text x="52.25" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(621.606,-848.828)">
+ <title>Sheet.60</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(824.31,-849.848)">
+ <title>Sheet.61</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(345.944,-933.143)">
+ <title>Sheet.62</title>
+ <path d="M705.32 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(1038.62,-915.684)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="137.691" cy="1167.81" width="275.39" height="40.3845"/>
+ <path d="M275.38 1147.62 L0 1147.62 L0 1188 L275.38 1188 L275.38 1147.62" class="st3"/>
+ <text x="55.18" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(346.581,-932.442)">
+ <title>Sheet.64</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(621.606,-933.461)">
+ <title>Sheet.65</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(856.905,-934.481)">
+ <title>Sheet.66</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(472.82,-756.389)">
+ <title>Sheet.67</title>
+ <path d="M578.44 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(473.456,-755.688)">
+ <title>Sheet.68</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(1016.87,-757.728)">
+ <title>Sheet.69</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(1060.04,-738.651)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1167.81" width="193.55" height="40.3845"/>
+ <path d="M193.55 1147.62 L0 1147.62 L0 1188 L193.55 1188 L193.55 1147.62" class="st3"/>
+ <text x="31.03" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(190.02,-464.886)">
+ <title>Sheet.71</title>
+ <path d="M0 1151.91 C0 1148.19 3.88 1145.17 8.66 1145.17 L43.29 1145.17 C48.13 1145.17 51.95 1148.19 51.95 1151.91 L51.95
+ 1181.26 C51.95 1185.03 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1185.03 0 1181.26 L0 1151.91 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(259.003,-466.895)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. non critical section or quiescent state.</tspan></text> </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(190.02,-389.169)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="25.9746" cy="1166.58" width="51.95" height="42.8314"/>
+ <path d="M0 1152.31 C0 1148.39 1.43 1145.17 3.16 1145.17 L48.79 1145.17 C50.55 1145.17 51.95 1148.39 51.95 1152.31 L51.95
+ 1180.86 C51.95 1184.83 50.55 1188 48.79 1188 L3.16 1188 C1.43 1188 0 1184.83 0 1180.86 L0 1152.31 Z"
+ class="st2"/>
+ <text x="12.9" y="1173.78" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(259.003,-388.777)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(289.017,-301.151)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="332.491" cy="1160.47" width="664.99" height="55.0626"/>
+ <path d="M664.98 1132.94 L0 1132.94 L0 1188 L664.98 1188 L664.98 1132.94" class="st3"/>
+ <text x="0" y="1153.27" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is removed <tspan
+ x="0" dy="1.2em" class="st9">using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(177.543,-315.596)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(288,-239.327)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1175.01" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted entry.</text> </g>
+ <g id="shape78-217" v:mID="78" v:groupContext="shape" transform="translate(177.543,-240.744)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="34.3786" cy="1166.58" width="68.76" height="42.8314"/>
+ <path d="M68.76 1145.17 L0 1145.17 L0 1188 L68.76 1188 L68.76 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-221" v:mID="79" v:groupContext="shape" transform="translate(289.228,-163.612)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1160.61" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during which <tspan
+ x="0" dy="1.2em" class="st9">memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-226" v:mID="80" v:groupContext="shape" transform="translate(187.999,-162)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="39.5985" cy="1166.58" width="79.2" height="42.8314"/>
+ <path d="M79.2 1145.17 L0 1145.17 L0 1188 L79.2 1188 L79.2 1145.17" class="st3"/>
+ <text x="0" y="1158.96" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-231" v:mID="83" v:groupContext="shape" transform="translate(572.146,-1080.07)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L132.49 1188" class="st16"/>
+ </g>
+ <g id="shape84-240" v:mID="84" v:groupContext="shape" transform="translate(599.196,-1042.14)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L104.28 1188" class="st18"/>
+ </g>
+ <g id="shape85-249" v:mID="85" v:groupContext="shape" transform="translate(980.637,-595.338)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L92.16 1188" class="st20"/>
+ </g>
+ <g id="shape86-254" v:mID="86" v:groupContext="shape" transform="translate(444.835,-603.428)">
+ <title>Sheet.86</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape87-257" v:mID="87" v:groupContext="shape" transform="translate(444.835,-637.489)">
+ <title>Sheet.87</title>
+ <path d="M0 1188 L84.43 1186.61 L154.36 1153.31" class="st7"/>
+ </g>
+ <g id="shape88-260" v:mID="88" v:groupContext="shape" transform="translate(241.369,-607.028)">
+ <title>Sheet.88</title>
+ <desc>Remove reference to entry2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry2</tspan></text> </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..6fb3fb921 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..dfe45fa62
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,179 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-less data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for example, real-time applications).
+
+In the following sections, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or anything that is representative of
+memory, for example an index of a free element array.
+
+Since these data structures are lock-less, writers and readers access
+them concurrently. Hence, while removing an element from a data
+structure, the writer cannot return the associated memory to the
+allocator without knowing that no reader is still referencing that
+element/memory. Consequently, the operation of removing an element
+must be separated into two steps:
+
+Delete: in this step, the writer removes the reference to the element from
+the data structure but does not return the associated memory to the
+allocator. This ensures that new readers will not get a reference to
+the removed element. Removing the reference is an atomic operation.
+
+Free (Reclaim): in this step, the writer returns the memory to the
+allocator, but only after it knows that all the readers have stopped
+referencing the deleted element.
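To make the two steps concrete, here is a minimal sketch in plain C. It is illustrative only and not the DPDK implementation: a single atomic pointer (``slot``) stands in for a lock-less data structure, and the ``entry``, ``delete_entry`` and ``free_entry`` names are hypothetical.

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Illustrative sketch only (not the DPDK API): a single atomic
 * pointer stands in for a lock-less data structure. */
struct entry { int value; };

static _Atomic(struct entry *) slot;

/* Step 1 -- Delete: atomically remove the reference so that no new
 * reader can find the entry. The memory is NOT freed yet, because
 * existing readers may still be using it. */
static struct entry *delete_entry(void)
{
    return atomic_exchange(&slot, NULL);
}

/* Step 2 -- Free: return the memory to the allocator. This must run
 * only after all readers have passed through a quiescent state. */
static void free_entry(struct entry *e)
{
    free(e);
}
```

A writer would call ``delete_entry`` first, wait for the grace period to end, and only then call ``free_entry``.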
+
+This library helps the writer determine when it is safe to free the
+memory.
+
+This library makes use of the thread quiescent state (QS) mechanism.
+
+What is Quiescent State
+-----------------------
+A quiescent state can be defined as 'any point in the thread execution where
+the thread does not hold a reference to shared memory'. It is up to the
+application to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. When it is
+accessing D1, if the writer has to remove an element from D1, the
+writer cannot free the memory associated with that element immediately.
+The writer can return the memory to the allocator only after the reader
+stops referencing D1. In other words, reader thread RT1 has to enter
+a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, the writer has to
+wait until reader thread 2 enters a quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+a quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened, so it cannot have a reference to the deleted entry.
+
+Note that a critical section for D2 is a quiescent state for D1;
+i.e. for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state for Dx.
+
+Since memory is not freed immediately, there might be a need to
+provision additional memory, depending on the application's requirements.
+
+Factors affecting RCU mechanism
+---------------------------------
+
+It is important to make sure that this library keeps the overhead of
+identifying the end of the grace period, and of the subsequent freeing of
+memory, to a minimum. The following paragraphs explain how the grace period
+and the critical sections affect this overhead.
+
+The writer has to poll the readers to identify the end of the grace period.
+Polling introduces memory accesses and wastes CPU cycles. In addition, the
+memory is not available for reuse during the grace period. Longer grace
+periods exacerbate these conditions.
+
+The duration of the grace period is proportional to the length of the
+critical sections and the number of reader threads. Keeping the critical
+sections smaller keeps the grace period smaller. However, smaller critical
+sections require additional CPU cycles (due to additional reporting) in the
+readers.
+
+Hence, we need the combined characteristics of a small grace period and
+large critical sections. This library addresses this by allowing the writer
+to do other work without having to block until the readers report their
+quiescent state.
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the start and end of the while(1) loop (where no
+references to shared data structures are kept) act as perfect quiescent
+states. This will combine all the shared data structure accesses into a
+single, large critical section which helps keep the overhead on the
+reader side to a minimum.
+
+DPDK supports the pipeline model of packet processing and service cores.
+In these use cases, a given data structure may not be used by all the
+workers in the application. The writer does not have to wait for all
+the workers to report their quiescent state. To provide the required
+flexibility, this library has a concept of QS variable. The application
+can create one QS variable per data structure to help it track the
+end of grace period for each data structure. This helps keep the grace
+period to a minimum.
+
+How to use this library
+-----------------------
+
+The application must allocate memory and initialize a QS variable.
+
+The application can call ``rte_rcu_qsbr_get_memsize`` to calculate the size
+of memory to allocate. This API takes as a parameter the maximum number of
+reader threads that will use this variable. Currently, a maximum of 1024
+threads is supported.
+
+Further, the application can initialize a QS variable using the API
+``rte_rcu_qsbr_init``.
+
+Each reader thread is assumed to have a unique thread ID. Currently, the
+management of thread IDs (for ex: allocation/free) is left to the
+application. The thread ID should be in the range of 0 to one less than
+the maximum number of threads provided while creating the QS variable.
+The application could also use lcore_id as the thread ID where applicable.
+
+``rte_rcu_qsbr_thread_register`` API will register a reader thread
+to report its quiescent state. This can be called from a reader thread.
+A control plane thread can also call this on behalf of a reader thread.
+The reader thread must call ``rte_rcu_qsbr_thread_online`` API to start
+reporting its quiescent state.
+
+Some of the use cases might require the reader threads to make
+blocking API calls (for ex: while using eventdev APIs). The writer thread
+should not wait for such reader threads to enter quiescent state.
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` API, before calling
+blocking APIs. It can call ``rte_rcu_qsbr_thread_online`` API once the blocking
+API call returns.
+
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the API ``rte_rcu_qsbr_start``. It is possible for multiple
+writer threads to query the quiescent state status simultaneously. Hence,
+``rte_rcu_qsbr_start`` returns a token to each caller.
+
+The writer thread must call ``rte_rcu_qsbr_check`` API with the token to
+get the current quiescent state status. Option to block till all the reader
+threads enter the quiescent state is provided. If this API indicates that
+all the reader threads have entered the quiescent state, the application
+can free the deleted entry.
+
+The APIs ``rte_rcu_qsbr_start`` and ``rte_rcu_qsbr_check`` are lock free.
+Hence, they can be called concurrently from multiple writers even while
+running as worker threads.
+
+The separation of triggering the reporting from querying the status provides
+the writer threads flexibility to do useful work instead of blocking for the
+reader threads to enter the quiescent state or go offline. This reduces the
+memory accesses due to continuous polling for the status.
+
+``rte_rcu_qsbr_synchronize`` API combines the functionality of
+``rte_rcu_qsbr_start`` and blocking ``rte_rcu_qsbr_check`` into a single API.
+This API triggers the reader threads to report their quiescent state and
+polls till all the readers enter the quiescent state or go offline. This
+API does not allow the writer to do useful work while waiting and
+introduces additional memory accesses due to continuous polling.
+
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` and
+``rte_rcu_qsbr_thread_unregister`` APIs to remove itself from reporting its
+quiescent state. The ``rte_rcu_qsbr_check`` API will not wait for this reader
+thread to report the quiescent state status anymore.
+
+The reader threads should call ``rte_rcu_qsbr_quiescent`` API to indicate that
+they entered a quiescent state. This API checks if a writer has triggered a
+quiescent state query and updates the state accordingly.
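Putting these APIs together, the intended split between the writer and reader sides can be sketched in C-like pseudocode (``remove_elem``, ``free_elem``, ``process_packets`` and the surrounding thread bodies are hypothetical placeholders; error handling is omitted):

```c
/* Writer thread (pseudocode) */
remove_elem(ds, elem);                  /* Delete: atomically unlink the element */
token = rte_rcu_qsbr_start(v);          /* trigger quiescent state reporting     */
do_other_work();                        /* no need to block while readers report */
rte_rcu_qsbr_check(v, token, true);     /* block till the grace period ends      */
free_elem(elem);                        /* Free: safe to return the memory now   */

/* Reader thread (pseudocode) */
rte_rcu_qsbr_thread_register(v, thread_id);
rte_rcu_qsbr_thread_online(v, thread_id);
while (1) {
        process_packets(ds);                  /* critical section                */
        rte_rcu_qsbr_quiescent(v, thread_id); /* report quiescent state          */
}
```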
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH 1/3] rcu: add RCU library supporting QSBR mechanism
2019-03-26 4:35 ` Honnappa Nagarahalli
2019-03-26 4:35 ` Honnappa Nagarahalli
@ 2019-03-28 11:15 ` Ananyev, Konstantin
2019-03-28 11:15 ` Ananyev, Konstantin
2019-03-29 5:54 ` Honnappa Nagarahalli
1 sibling, 2 replies; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-03-28 11:15 UTC (permalink / raw)
To: Honnappa Nagarahalli, stephen, paulmck, dev
Cc: Gavin Hu (Arm Technology China), Dharmik Thakkar, Malvika Gupta, nd, nd
> >
> > > +#define RTE_QSBR_CNT_THR_OFFLINE 0
> > > +#define RTE_QSBR_CNT_INIT 1
> > > +
> > > +/**
> > > + * RTE thread Quiescent State structure.
> > > + * Quiescent state counter array (array of 'struct
> > > +rte_rcu_qsbr_cnt'),
> > > + * whose size is dependent on the maximum number of reader threads
> > > + * (m_threads) using this variable is stored immediately following
> > > + * this structure.
> > > + */
> > > +struct rte_rcu_qsbr {
> > > + uint64_t token __rte_cache_aligned;
> > > + /**< Counter to allow for multiple simultaneous QS queries */
> > > +
> > > + uint32_t num_elems __rte_cache_aligned;
> > > + /**< Number of elements in the thread ID array */
> > > + uint32_t m_threads;
> > > + /**< Maximum number of threads this RCU variable will use */
> > > +
> > > + uint64_t reg_thread_id[RTE_QSBR_THRID_ARRAY_ELEMS]
> > __rte_cache_aligned;
> > > + /**< Registered thread IDs are stored in a bitmap array */
> >
> >
> > As I understand you ended up with fixed size array to avoid 2 variable size
> > arrays in this struct?
> Yes
>
> > Is that big penalty for register/unregister() to either store a pointer to bitmap,
> > or calculate it based on num_elems value?
> In the last RFC I sent out [1], I tested the impact of having non-fixed size array. There 'was' a performance degradation in most of the
> performance tests. The issue was with calculating the address of per thread QSBR counters (not with the address calculation of the bitmap).
> With the current patch, I do not see the performance difference (the difference between the RFC and this patch are the memory orderings,
> they are masking any perf gain from having a fixed array). However, I have kept the fixed size array as the generated code does not have
> additional calculations to get the address of qsbr counter array elements.
>
> [1] http://mails.dpdk.org/archives/dev/2019-February/125029.html
Ok I see, but can we then arrange them in a different way:
qsbr_cnt[] will start at the end of struct rte_rcu_qsbr
(same as you have it right now).
While bitmap will be placed after qsbr_cnt[].
As I understand register/unregister is not considered on the critical path,
so some perf-degradation here doesn't matter.
Also check() would need an extra address calculation for the bitmap,
but considering that we have to go through the whole bitmap (and in the worst
case qsbr_cnt[]) anyway, that's probably not a big deal?
>
> > As another thought - do we really need bitmap at all?
> The bit map is helping avoid accessing all the elements in rte_rcu_qsbr_cnt array (as you have mentioned below). This provides the ability to
> scale the number of threads dynamically. For ex: an application can create a qsbr variable with 48 max threads, but currently only 2 threads
> are active (due to traffic conditions).
I understand that the bitmap is supposed to speed up check() for
situations when most threads are unregistered.
My thought was that maybe the check() speedup for such a situation is not that critical.
>
> > Might it be possible to store the register value for each thread inside its
> > rte_rcu_qsbr_cnt:
> > struct rte_rcu_qsbr_cnt {uint64_t cnt; uint32_t register;}
> > __rte_cache_aligned; ?
> > That would cause check() to walk through all elems in rte_rcu_qsbr_cnt array,
> > but from other side would help to avoid cache conflicts for register/unregister.
> With the addition of rte_rcu_qsbr_thread_online/offline APIs, the register/unregister APIs are not in critical path anymore. Hence, the
> cache conflicts are fine. The online/offline APIs work on thread specific cache lines and these are in the critical path.
>
> >
> > > +} __rte_cache_aligned;
> > > +
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH 1/3] rcu: add RCU library supporting QSBR mechanism
2019-03-28 11:15 ` Ananyev, Konstantin
2019-03-28 11:15 ` Ananyev, Konstantin
@ 2019-03-29 5:54 ` Honnappa Nagarahalli
2019-03-29 5:54 ` Honnappa Nagarahalli
1 sibling, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-29 5:54 UTC (permalink / raw)
To: Ananyev, Konstantin, stephen, paulmck, dev
Cc: Gavin Hu (Arm Technology China), Dharmik Thakkar, Malvika Gupta, nd, nd
>
> > >
> > > > +#define RTE_QSBR_CNT_THR_OFFLINE 0 #define RTE_QSBR_CNT_INIT
> 1
> > > > +
> > > > +/**
> > > > + * RTE thread Quiescent State structure.
> > > > + * Quiescent state counter array (array of 'struct
> > > > +rte_rcu_qsbr_cnt'),
> > > > + * whose size is dependent on the maximum number of reader
> > > > +threads
> > > > + * (m_threads) using this variable is stored immediately
> > > > +following
> > > > + * this structure.
> > > > + */
> > > > +struct rte_rcu_qsbr {
> > > > + uint64_t token __rte_cache_aligned;
> > > > + /**< Counter to allow for multiple simultaneous QS queries */
> > > > +
> > > > + uint32_t num_elems __rte_cache_aligned;
> > > > + /**< Number of elements in the thread ID array */
> > > > + uint32_t m_threads;
> > > > + /**< Maximum number of threads this RCU variable will use */
> > > > +
> > > > + uint64_t reg_thread_id[RTE_QSBR_THRID_ARRAY_ELEMS]
> > > __rte_cache_aligned;
> > > > + /**< Registered thread IDs are stored in a bitmap array */
> > >
> > >
> > > As I understand you ended up with fixed size array to avoid 2
> > > variable size arrays in this struct?
> > Yes
> >
> > > Is that big penalty for register/unregister() to either store a
> > > pointer to bitmap, or calculate it based on num_elems value?
> > In the last RFC I sent out [1], I tested the impact of having
> > non-fixed size array. There 'was' a performance degradation in most of the
> performance tests. The issue was with calculating the address of per thread
> QSBR counters (not with the address calculation of the bitmap).
> > With the current patch, I do not see the performance difference (the
> > difference between the RFC and this patch are the memory orderings,
> > they are masking any perf gain from having a fixed array). However, I have
> kept the fixed size array as the generated code does not have additional
> calculations to get the address of qsbr counter array elements.
> >
> > [1] http://mails.dpdk.org/archives/dev/2019-February/125029.html
>
> Ok I see, but can we then arrange them in a different way:
> qsbr_cnt[] will start at the end of struct rte_rcu_qsbr (same as you have it
> right now).
> While bitmap will be placed after qsbr_cnt[].
Yes, that is an option. Though, it would mean we have to calculate the address, similar to the macro 'RTE_QSBR_CNT_ARRAY_ELM'.
> As I understand register/unregister is not considered on the critical path,
> so some perf-degradation here doesn't matter.
Yes
> Also check() would need an extra address calculation for the bitmap, but
> considering that we have to go through the whole bitmap (and in the worst
> case qsbr_cnt[]) anyway, that's probably not a big deal?
I think the address calculation can be made simpler than what I had tried before. I can give it a shot.
>
> >
> > > As another thought - do we really need bitmap at all?
> > The bit map is helping avoid accessing all the elements in
> > rte_rcu_qsbr_cnt array (as you have mentioned below). This provides
> > the ability to scale the number of threads dynamically. For ex: an
> application can create a qsbr variable with 48 max threads, but currently only
> 2 threads are active (due to traffic conditions).
>
> I understand that the bitmap is supposed to speed up check() for situations
> when most threads are unregistered.
> My thought was that maybe the check() speedup for such a situation is not
> that critical.
IMO, there is a need to address both cases, considering the future direction of DPDK. It is possible to introduce a counter for the current number of threads registered. If that is the same as the maximum number of threads, then scanning the registered thread ID array can be skipped.
>
> >
> > > Might it be possible to store the register value for each thread inside
> > > its
> > > rte_rcu_qsbr_cnt:
> > > struct rte_rcu_qsbr_cnt {uint64_t cnt; uint32_t register;}
> > > __rte_cache_aligned; ?
> > > That would cause check() to walk through all elems in
> > > rte_rcu_qsbr_cnt array, but from other side would help to avoid cache
> conflicts for register/unregister.
> > With the addition of rte_rcu_qsbr_thread_online/offline APIs, the
> > register/unregister APIs are not in critical path anymore. Hence, the cache
> conflicts are fine. The online/offline APIs work on thread specific cache lines
> and these are in the critical path.
> >
> > >
> > > > +} __rte_cache_aligned;
> > > > +
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v3 0/3] lib/rcu: add RCU library supporting QSBR mechanism
2018-11-22 3:30 [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library Honnappa Nagarahalli
` (7 preceding siblings ...)
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
@ 2019-04-01 17:10 ` Honnappa Nagarahalli
2019-04-01 17:10 ` Honnappa Nagarahalli
` (3 more replies)
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
` (5 subsequent siblings)
14 siblings, 4 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-01 17:10 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Lock-less data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
(for ex: real-time applications).
In the following paras, the term 'memory' refers to memory allocated
by typical APIs like malloc or anything that is representative of
memory, for ex: an index of a free element array.
Since these data structures are lock less, the writers and readers
are accessing the data structures concurrently. Hence, while removing
an element from a data structure, the writers cannot return the memory
to the allocator, without knowing that the readers are not
referencing that element/memory anymore. Hence, it is required to
separate the operation of removing an element into 2 steps:
Delete: in this step, the writer removes the reference to the element from
the data structure but does not return the associated memory to the
allocator. This will ensure that new readers will not get a reference to
the removed element. Removing the reference is an atomic operation.
Free(Reclaim): in this step, the writer returns the memory to the
memory allocator, only after knowing that all the readers have stopped
referencing the deleted element.
This library helps the writer determine when it is safe to free the
memory.
This library makes use of thread Quiescent State (QS). QS can be
defined as 'any point in the thread execution where the thread does
not hold a reference to shared memory'. It is up to the application to
determine its quiescent state. Let us consider the following diagram:
Time -------------------------------------------------->
| |
RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
| |
RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
| |
RT3 $++++****D1****+++***|D2***|++++++**D3*****++++$
| |
|<--->|
Del | Free
|
Cannot free memory
during this period
(Grace Period)
RTx - Reader thread
< and > - Start and end of while(1) loop
***Dx*** - Reader thread is accessing the shared data structure Dx.
i.e. critical section.
+++ - Reader thread is not accessing any shared data structure.
i.e. non critical section or quiescent state.
Del - Point in time when the reference to the entry is removed using
atomic operation.
Free - Point in time when the writer can free the entry.
Grace Period - Time duration between Del and Free, during which memory cannot
be freed.
As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
accessing D2, if the writer has to remove an element from D2, the
writer cannot free the memory associated with that element immediately.
The writer can return the memory to the allocator only after the reader
stops referencing D2. In other words, reader thread RT1 has to enter
a quiescent state.
Similarly, since thread RT3 is also accessing D2, the writer has to wait
till RT3 enters a quiescent state as well.
However, the writer does not need to wait for RT2 to enter quiescent state.
Thread RT2 was not accessing D2 when the delete operation happened.
So, RT2 will not get a reference to the deleted entry.
It can be noted that the critical sections for D2 and D3 are quiescent states
for D1, i.e. for a given data structure Dx, any point in the thread execution
that does not reference Dx is a quiescent state.
Since memory is not freed immediately, there might be a need for
provisioning of additional memory, depending on the application requirements.
It is important to make sure that this library keeps the overhead of
identifying the end of the grace period and the subsequent freeing of
memory to a minimum. The following paragraphs explain how the grace
period and critical sections affect this overhead.
The writer has to poll the readers to identify the end of grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during the grace period. Longer grace periods
exacerbate these conditions.
The duration of the grace period is proportional to the length of the
critical sections and the number of reader threads. Keeping the critical
sections smaller will keep the grace period smaller. However, smaller
critical sections require additional CPU cycles (due to additional
reporting) in the readers.
Hence, we need the combination of a small grace period and large critical
sections. This library addresses this by allowing the writer to do
other work without having to block till the readers report their quiescent
state.
For DPDK applications, the start and end of the while(1) loop (where no
references to shared data structures are kept) act as perfect quiescent
states. This will combine all the shared data structure accesses into a
single, large critical section which helps keep the overhead on the
reader side to a minimum.
DPDK supports the pipeline model of packet processing and service cores.
In these use cases, a given data structure may not be used by all the
workers in the application. The writer does not have to wait for all
the workers to report their quiescent state. To provide the required
flexibility, this library has a concept of QS variable. The application
can create one QS variable per data structure to help it track the
end of grace period for each data structure. This helps keep the grace
period to a minimum.
The application has to allocate memory and initialize a QS variable.
The application can call rte_rcu_qsbr_get_memsize to calculate the size
of memory to allocate. This API takes as a parameter the maximum number of
reader threads that will use this variable. Currently, a maximum of 1024
threads is supported.
Further, the application can initialize a QS variable using the API
rte_rcu_qsbr_init.
Each reader thread is assumed to have a unique thread ID. Currently, the
management of thread IDs (for ex: allocation/free) is left to the
application. The thread ID should be in the range of 0 to one less than
the maximum number of threads provided while creating the QS variable.
The application could also use lcore_id as the thread ID where applicable.
rte_rcu_qsbr_thread_register API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
The reader thread must call rte_rcu_qsbr_thread_online API to start reporting
its quiescent state.
Some of the use cases might require the reader threads to make
blocking API calls (for ex: while using eventdev APIs). The writer thread
should not wait for such reader threads to enter quiescent state.
The reader thread must call rte_rcu_qsbr_thread_offline API, before calling
blocking APIs. It can call rte_rcu_qsbr_thread_online API once the blocking
API call returns.
The writer thread can trigger the reader threads to report their quiescent
state by calling the API rte_rcu_qsbr_start. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
rte_rcu_qsbr_start returns a token to each caller.
The writer thread has to call rte_rcu_qsbr_check API with the token to get the
current quiescent state status. Option to block till all the reader threads
enter the quiescent state is provided. If this API indicates that all the
reader threads have entered the quiescent state, the application can free the
deleted entry.
The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock free. Hence, they
can be called concurrently from multiple writers even while running
as worker threads.
The separation of triggering the reporting from querying the status provides
the writer threads flexibility to do useful work instead of blocking for the
reader threads to enter the quiescent state or go offline. This reduces the
memory accesses due to continuous polling for the status.
rte_rcu_qsbr_synchronize API combines the functionality of rte_rcu_qsbr_start
and blocking rte_rcu_qsbr_check into a single API. This API triggers the reader
threads to report their quiescent state and polls till all the readers enter
the quiescent state or go offline. This API does not allow the writer to
do useful work while waiting and also introduces additional memory accesses
due to continuous polling.
The reader thread must call rte_rcu_qsbr_thread_offline and
rte_rcu_qsbr_thread_unregister APIs to remove itself from reporting its
quiescent state. The rte_rcu_qsbr_check API will not wait for this reader
thread to report the quiescent state status anymore.
The reader threads should call rte_rcu_qsbr_update API to indicate that they
entered a quiescent state. This API checks if a writer has triggered a
quiescent state query and updates the state accordingly.
Patch v3:
1) Library changes
a) Moved the registered thread ID array to the end of the
structure (Konstantin)
b) Removed the compile time constant RTE_RCU_MAX_THREADS
c) Added code to keep track of registered number of threads
Patch v2:
1) Library changes
a) Corrected the RTE_ASSERT checks (Konstantin)
b) Replaced RTE_ASSERT with 'if' checks for non-datapath APIs (Konstantin)
c) Made rte_rcu_qsbr_thread_register/unregister non-datapath critical APIs
d) Renamed rte_rcu_qsbr_update to rte_rcu_qsbr_quiescent (Ola)
e) Used rte_smp_mb() in rte_rcu_qsbr_thread_online API for x86 (Konstantin)
f) Removed the macro to access the thread QS counters (Konstantin)
2) Test cases
a) Added additional test cases for removing RTE_ASSERT
3) Documentation
a) Changed the figure to make it bigger (Marko)
b) Spelling and format corrections (Marko)
Patch v1:
1) Library changes
a) Changed the maximum number of reader threads to 1024
b) Renamed rte_rcu_qsbr_register/unregister_thread to
rte_rcu_qsbr_thread_register/unregister
c) Added rte_rcu_qsbr_thread_online/offline API. These are optimized
version of rte_rcu_qsbr_thread_register/unregister API. These
also provide the flexibility for performance when the requested
maximum number of threads is higher than the current number of
threads.
d) Corrected memory orderings in rte_rcu_qsbr_update
e) Changed the signature of rte_rcu_qsbr_start API to return the token
f) Changed the signature of rte_rcu_qsbr_start API to not take the
expected number of QS states to wait.
g) Added debug logs
h) Added API and programmer guide documentation.
RFC v3:
1) Library changes
a) Rebased to latest master
b) Added new API rte_rcu_qsbr_get_memsize
c) Add support for memory allocation for QSBR variable (Konstantin)
d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
2) Testcase changes
a) Separated stress tests into a performance test case file
b) Added performance statistics
RFC v2:
1) Cover letter changes
a) Explain the parameters that affect the overhead of using RCU
and their effect
b) Explain how this library addresses these effects to keep
the overhead to a minimum
2) Library changes
a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
b) Simplify the code/remove APIs to keep this library in line with
other synchronisation mechanisms like locks (Konstantin)
c) Change the design to support more than 64 threads (Konstantin)
d) Fixed version map to remove static inline functions
3) Testcase changes
a) Add boundary and additional functional test cases
b) Add stress test cases (Paul E. McKenney)
Dharmik Thakkar (1):
test/rcu_qsbr: add API and functional tests
Honnappa Nagarahalli (2):
rcu: add RCU library supporting QSBR mechanism
doc/rcu: add lib_rcu documentation
MAINTAINERS | 5 +
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 1004 +++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 615 ++++++++++
config/common_base | 6 +
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 +++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 179 +++
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 237 ++++
lib/librte_rcu/rte_rcu_qsbr.h | 553 +++++++++
lib/librte_rcu/rte_rcu_version.map | 11 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
20 files changed, 3175 insertions(+), 3 deletions(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v3 0/3] lib/rcu: add RCU library supporting QSBR mechanism
2019-04-01 17:10 ` [dpdk-dev] [PATCH v3 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
@ 2019-04-01 17:10 ` Honnappa Nagarahalli
2019-04-01 17:11 ` [dpdk-dev] [PATCH v3 1/3] rcu: " Honnappa Nagarahalli
` (2 subsequent siblings)
3 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-01 17:10 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Lock-less data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
(for ex: real-time applications).
In the following paragraphs, the term 'memory' refers to memory allocated
by typical APIs like malloc, or anything that is representative of
memory, for example an index into a free element array.
Since these data structures are lock-less, the writers and readers
access them concurrently. Hence, while removing
an element from a data structure, the writers cannot return the memory
to the allocator without knowing that no reader is still
referencing that element/memory. Hence, the operation of removing an
element must be separated into 2 steps:
Delete: in this step, the writer removes the reference to the element from
the data structure but does not return the associated memory to the
allocator. This will ensure that new readers will not get a reference to
the removed element. Removing the reference is an atomic operation.
Free(Reclaim): in this step, the writer returns the memory to the
memory allocator, only after knowing that all the readers have stopped
referencing the deleted element.
This library helps the writer determine when it is safe to free the
memory.
This library makes use of thread Quiescent State (QS). QS can be
defined as 'any point in the thread execution where the thread does
not hold a reference to shared memory'. It is up to the application to
determine its quiescent state. Let us consider the following diagram:
Time -------------------------------------------------->
| |
RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
| |
RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
| |
RT3 $++++****D1****+++***|D2***|++++++**D3*****++++$
| |
|<--->|
Del | Free
|
Cannot free memory
during this period
(Grace Period)
RTx - Reader thread
< and > - Start and end of while(1) loop
***Dx*** - Reader thread is accessing the shared data structure Dx.
i.e. critical section.
+++ - Reader thread is not accessing any shared data structure.
i.e. non critical section or quiescent state.
Del - Point in time when the reference to the entry is removed using
atomic operation.
Free - Point in time when the writer can free the entry.
Grace Period - Time duration between Del and Free, during which memory cannot
be freed.
As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
accessing D2, if the writer has to remove an element from D2, the
writer cannot free the memory associated with that element immediately.
The writer can return the memory to the allocator only after the reader
stops referencing D2. In other words, reader thread RT1 has to enter
a quiescent state.
Similarly, since thread RT3 is also accessing D2, the writer has to wait until
RT3 enters a quiescent state as well.
However, the writer does not need to wait for RT2 to enter quiescent state.
Thread RT2 was not accessing D2 when the delete operation happened.
So, RT2 will not get a reference to the deleted entry.
It can be noted that the critical sections for D2 and D3 are quiescent states
for D1. i.e. for a given data structure Dx, any point in the thread execution
that does not reference Dx is a quiescent state.
Since memory is not freed immediately, there might be a need for
provisioning of additional memory, depending on the application requirements.
It is important to make sure that this library keeps the overhead of
identifying the end of the grace period and the subsequent freeing of memory
to a minimum. The following paragraphs explain how the grace period and
critical section affect this overhead.
The writer has to poll the readers to identify the end of the grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during the grace period. Longer grace periods
exacerbate these problems.
The duration of the grace period is proportional to the length of the
critical sections and the number of reader threads. Keeping the critical
sections smaller will keep the grace period smaller. However, smaller
critical sections require additional CPU cycles (due to additional
reporting) in the readers.
Hence, we need the combined characteristics of a short grace period and large
critical sections. This library addresses this by allowing the writer to do
other work without having to block until the readers report their quiescent
state.
For DPDK applications, the start and end of while(1) loop (where no
references to shared data structures are kept) act as perfect quiescent
states. This will combine all the shared data structure accesses into a
single, large critical section which helps keep the overhead on the
reader side to a minimum.
DPDK supports pipeline model of packet processing and service cores.
In these use cases, a given data structure may not be used by all the
workers in the application. The writer does not have to wait for all
the workers to report their quiescent state. To provide the required
flexibility, this library has a concept of QS variable. The application
can create one QS variable per data structure to help it track the
end of grace period for each data structure. This helps keep the grace
period to a minimum.
The application has to allocate memory and initialize a QS variable.
The application can call rte_rcu_qsbr_get_memsize to calculate the size
of memory to allocate. This API takes the maximum number of reader threads
that will use this variable as a parameter. Currently, a maximum of 1024
threads is supported.
Further, the application can initialize a QS variable using the API
rte_rcu_qsbr_init.
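As a sketch, the allocate-then-init sequence described above might look as
follows. This assumes the application uses rte_malloc for the allocation and
a maximum of 128 reader threads; app_qsbr_create is an illustrative name, and
the snippet only compiles against a tree carrying this patch:

```c
#include <rte_malloc.h>
#include <rte_rcu_qsbr.h>

#define MAX_READER_THREADS 128 /* application-specific choice, <= 1024 */

static struct rte_rcu_qsbr *
app_qsbr_create(void)
{
	struct rte_rcu_qsbr *v;
	size_t sz;

	/* Memory size depends on the maximum number of reader threads */
	sz = rte_rcu_qsbr_get_memsize(MAX_READER_THREADS);

	v = rte_malloc(NULL, sz, RTE_CACHE_LINE_SIZE);
	if (v == NULL)
		return NULL;

	/* The same max_threads value must be passed to init */
	if (rte_rcu_qsbr_init(v, MAX_READER_THREADS) != 0) {
		rte_free(v);
		return NULL;
	}

	return v;
}
```

One QS variable per shared data structure keeps each grace period as short as
possible, per the discussion above.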
Each reader thread is assumed to have a unique thread ID. Currently, the
management of the thread ID (for ex: allocation/free) is left to the
application. The thread ID should be in the range of 0 to
maximum number of threads provided while creating the QS variable.
The application could also use lcore_id as the thread ID where applicable.
rte_rcu_qsbr_thread_register API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
The reader thread must call rte_rcu_qsbr_thread_online API to start reporting
its quiescent state.
Some of the use cases might require the reader threads to make
blocking API calls (for ex: while using eventdev APIs). The writer thread
should not wait for such reader threads to enter quiescent state.
The reader thread must call rte_rcu_qsbr_thread_offline API, before calling
blocking APIs. It can call rte_rcu_qsbr_thread_online API once the blocking
API call returns.
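Putting the reader-side APIs above together, a reader's packet processing
loop could be sketched as below. This uses lcore_id as the thread ID as
suggested earlier; process_packets, blocking_api_call and the quit/
need_blocking_call flags are illustrative application code, not part of the
library:

```c
unsigned int lcore_id = rte_lcore_id();

rte_rcu_qsbr_thread_register(v, lcore_id);
rte_rcu_qsbr_thread_online(v, lcore_id);

while (!quit) {
	/* Critical section: may hold references to shared entries */
	process_packets();

	/* Quiescent state: no references to shared entries held here */
	rte_rcu_qsbr_quiescent(v, lcore_id);

	if (need_blocking_call) {
		/* Do not make writers wait on us while we block */
		rte_rcu_qsbr_thread_offline(v, lcore_id);
		blocking_api_call(); /* e.g. a blocking eventdev call */
		rte_rcu_qsbr_thread_online(v, lcore_id);
	}
}

rte_rcu_qsbr_thread_offline(v, lcore_id);
rte_rcu_qsbr_thread_unregister(v, lcore_id);
```

Reporting once per loop iteration combines all shared data structure accesses
into one large critical section, keeping the reader-side overhead minimal.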
The writer thread can trigger the reader threads to report their quiescent
state by calling the API rte_rcu_qsbr_start. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
rte_rcu_qsbr_start returns a token to each caller.
The writer thread has to call the rte_rcu_qsbr_check API with the token to get
the current quiescent state status. An option to block until all the reader
threads enter the quiescent state is provided. If this API indicates that all
the reader threads have entered the quiescent state, the application can free
the deleted entry.
The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock free. Hence, they
can be called concurrently from multiple writers even while running
as worker threads.
The separation of triggering the reporting from querying the status provides
the writer threads flexibility to do useful work instead of blocking for the
reader threads to enter the quiescent state or go offline. This reduces the
memory accesses due to continuous polling for the status.
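A writer's delete-then-free sequence built on this start/check pair might be
sketched as below; remove_element, do_other_useful_work, free_entry and the
entry type are application-defined placeholders, not library APIs:

```c
struct app_entry *e;
uint64_t token;

/* Step 1: Delete - atomically remove the reference from the
 * data structure, so new readers cannot see the entry.
 */
e = remove_element(data_structure, key);

/* Trigger the reader threads to report their quiescent state */
token = rte_rcu_qsbr_start(v);

/* Do useful work instead of blocking for the readers */
do_other_useful_work();

/* Step 2: Free - wait (wait flag set to true) until all readers
 * registered on 'v' have entered a quiescent state or gone offline.
 */
rte_rcu_qsbr_check(v, token, true);
free_entry(e);
```

When the writer has no other work to do, the start/blocking-check pair can be
replaced with the single combined call described in the next paragraph.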
rte_rcu_qsbr_synchronize API combines the functionality of rte_rcu_qsbr_start
and blocking rte_rcu_qsbr_check into a single API. This API triggers the reader
threads to report their quiescent state and polls till all the readers enter
the quiescent state or go offline. This API does not allow the writer to
do useful work while waiting and also introduces additional memory accesses
due to continuous polling.
The reader thread must call rte_rcu_qsbr_thread_offline and
rte_rcu_qsbr_thread_unregister APIs to remove itself from reporting its
quiescent state. The rte_rcu_qsbr_check API will not wait for this reader
thread to report the quiescent state status anymore.
The reader threads should call the rte_rcu_qsbr_quiescent API (named
rte_rcu_qsbr_update in earlier revisions) to indicate that they entered a
quiescent state. This API checks if a writer has triggered a quiescent state
query and updates the state accordingly.
Patch v3:
1) Library changes
a) Moved the registered thread ID array to the end of the
structure (Konstantin)
b) Removed the compile time constant RTE_RCU_MAX_THREADS
c) Added code to keep track of registered number of threads
Patch v2:
1) Library changes
a) Corrected the RTE_ASSERT checks (Konstantin)
b) Replaced RTE_ASSERT with 'if' checks for non-datapath APIs (Konstantin)
c) Made rte_rcu_qsbr_thread_register/unregister non-datapath critical APIs
d) Renamed rte_rcu_qsbr_update to rte_rcu_qsbr_quiescent (Ola)
e) Used rte_smp_mb() in rte_rcu_qsbr_thread_online API for x86 (Konstantin)
f) Removed the macro to access the thread QS counters (Konstantin)
2) Test cases
a) Added additional test cases for removing RTE_ASSERT
3) Documentation
a) Changed the figure to make it bigger (Marko)
b) Spelling and format corrections (Marko)
Patch v1:
1) Library changes
a) Changed the maximum number of reader threads to 1024
b) Renamed rte_rcu_qsbr_register/unregister_thread to
rte_rcu_qsbr_thread_register/unregister
c) Added rte_rcu_qsbr_thread_online/offline API. These are optimized
version of rte_rcu_qsbr_thread_register/unregister API. These
also provide the flexibility for performance when the requested
maximum number of threads is higher than the current number of
threads.
d) Corrected memory orderings in rte_rcu_qsbr_update
e) Changed the signature of rte_rcu_qsbr_start API to return the token
f) Changed the signature of rte_rcu_qsbr_start API to not take the
expected number of QS states to wait.
g) Added debug logs
h) Added API and programmer guide documentation.
RFC v3:
1) Library changes
a) Rebased to latest master
b) Added new API rte_rcu_qsbr_get_memsize
c) Add support for memory allocation for QSBR variable (Konstantin)
d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
2) Testcase changes
a) Separated stress tests into a performance test case file
b) Added performance statistics
RFC v2:
1) Cover letter changes
a) Explain the parameters that affect the overhead of using RCU
and their effect
b) Explain how this library addresses these effects to keep
the overhead to a minimum
2) Library changes
a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
b) Simplify the code/remove APIs to keep this library in line with
other synchronisation mechanisms like locks (Konstantin)
c) Change the design to support more than 64 threads (Konstantin)
d) Fixed version map to remove static inline functions
3) Testcase changes
a) Add boundary and additional functional test cases
b) Add stress test cases (Paul E. McKenney)
Dharmik Thakkar (1):
test/rcu_qsbr: add API and functional tests
Honnappa Nagarahalli (2):
rcu: add RCU library supporting QSBR mechanism
doc/rcu: add lib_rcu documentation
MAINTAINERS | 5 +
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 1004 +++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 615 ++++++++++
config/common_base | 6 +
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 +++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 179 +++
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 237 ++++
lib/librte_rcu/rte_rcu_qsbr.h | 553 +++++++++
lib/librte_rcu/rte_rcu_version.map | 11 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
20 files changed, 3175 insertions(+), 3 deletions(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v3 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-01 17:10 ` [dpdk-dev] [PATCH v3 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-04-01 17:10 ` Honnappa Nagarahalli
@ 2019-04-01 17:11 ` Honnappa Nagarahalli
2019-04-01 17:11 ` Honnappa Nagarahalli
2019-04-02 10:22 ` Ananyev, Konstantin
2019-04-01 17:11 ` [dpdk-dev] [PATCH v3 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-04-01 17:11 ` [dpdk-dev] [PATCH v3 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
3 siblings, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-01 17:11 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add RCU library supporting quiescent state based memory reclamation method.
This library helps identify the quiescent state of the reader threads so
that the writers can free the memory associated with the lock less data
structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
---
MAINTAINERS | 5 +
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 ++
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 237 +++++++++++++
lib/librte_rcu/rte_rcu_qsbr.h | 553 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 11 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
10 files changed, 844 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 452b8eb82..5827c1bbe 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1230,6 +1230,11 @@ F: examples/bpf/
F: app/test/test_bpf.c
F: doc/guides/prog_guide/bpf_lib.rst
+RCU - EXPERIMENTAL
+M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+F: lib/librte_rcu/
+F: doc/guides/prog_guide/rcu_lib.rst
+
Test Applications
-----------------
diff --git a/config/common_base b/config/common_base
index 0b09a9348..d3557ff3c 100644
--- a/config/common_base
+++ b/config/common_base
@@ -805,6 +805,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_rcu
+#
+CONFIG_RTE_LIBRTE_RCU=y
+CONFIG_RTE_LIBRTE_RCU_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index a358f1c19..b24a9363f 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -109,6 +109,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
+DEPDIRS-librte_rcu := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
new file mode 100644
index 000000000..6aa677bd1
--- /dev/null
+++ b/lib/librte_rcu/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_rcu.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_rcu_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
new file mode 100644
index 000000000..c009ae4b7
--- /dev/null
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..53d08446a
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,237 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Get the memory size of QSBR variable */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ size_t sz;
+
+ if (max_threads == 0) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid max_threads %u\n",
+ __func__, max_threads);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ /* Add the size of the registered thread ID bitmap array */
+ sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
+
+ return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+}
+
+/* Initialize a quiescent state variable */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
+{
+ size_t sz;
+
+ if (v == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = rte_rcu_qsbr_get_memsize(max_threads);
+ if (sz == 1)
+ return 1;
+
+ /* Set all the threads to offline */
+ memset(v, 0, sz);
+ v->max_threads = max_threads;
+ v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->token = RTE_QSBR_CNT_INIT;
+
+ return 0;
+}
+
+/* Register a reader thread to report its quiescent state
+ * on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already registered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (old_bmap & 1UL << id)
+ return 0;
+
+ do {
+ new_bmap = old_bmap | (1UL << id);
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_add(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (old_bmap & (1UL << id))
+ /* Someone else registered this thread.
+ * Counter should not be incremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already unregistered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (!(old_bmap & (1UL << id)))
+ return 0;
+
+ do {
+ new_bmap = old_bmap & ~(1UL << id);
+ /* Make sure any loads of the shared data structure are
+ * completed before removal of the thread from the list of
+ * reporting threads.
+ */
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_sub(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (!(old_bmap & (1UL << id)))
+ /* Someone else unregistered this thread.
+ * Counter should not be decremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t;
+
+ if (v == NULL || f == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %lu\n",
+ rte_rcu_qsbr_get_memsize(v->max_threads));
+ fprintf(f, " Given # max threads = %u\n", v->max_threads);
+ fprintf(f, " Current # threads = %u\n", v->num_threads);
+
+ fprintf(f, " Registered thread ID mask = 0x");
+ for (i = 0; i < v->num_elems; i++)
+ fprintf(f, "%lx", __atomic_load_n(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE));
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %lu\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %lu\n", t,
+ __atomic_load_n(
+ &v->qsbr_cnt[i].cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ return 0;
+}
+
+int rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rcu_log_type = rte_log_register("lib.rcu");
+ if (rcu_log_type >= 0)
+ rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..3e8cd679e
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,553 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+
+extern int rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define RCU_DP_LOG(level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored as a bitmap of 64b element array.
+ * Given thread id needs to be converted to index into the array and
+ * the id within the array element.
+ */
+#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
+ RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
+#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
+ ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
+#define RTE_QSBR_THRID_INDEX_SHIFT 6
+#define RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline */
+} __rte_cache_aligned;
+
+#define RTE_QSBR_CNT_THR_OFFLINE 0
+#define RTE_QSBR_CNT_INIT 1
+
+/* RTE Quiescent State variable structure.
+ * This structure has two elements that vary in size based on the
+ * 'max_threads' parameter.
+ * 1) Quiescent state counter array
+ * 2) Register thread ID array
+ */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple concurrent quiescent state queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t num_threads;
+ /**< Number of threads currently using this QS variable */
+ uint32_t max_threads;
+ /**< Maximum number of threads using this QS variable */
+
+ struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
+ /**< Quiescent state counter array of 'max_threads' elements */
+
+ /**< Registered thread IDs are stored in a bitmap array,
+ * after the quiescent state counter array.
+ */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * On success - size of memory in bytes required for this QS variable.
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0 or 'v' is NULL.
+ *
+ */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable. thread_id is a value between 0 and (max_threads - 1).
+ * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing quiescent state queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_quiescent. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_online API, after the blocking
+ * function call returns, to ensure that rte_rcu_qsbr_check API
+ * waits for the reader thread to update its quiescent state.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that
+ * the following will not move down after the load of any shared
+ * data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+#ifdef RTE_ARCH_X86_64
+ /* rte_smp_mb() for x86 is lighter */
+ rte_smp_mb();
+#else
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed. i.e. any load of the
+ * data structure cannot move after this store.
+ */
+
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELEASE);
+
+ RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
+ __func__, t, thread_id);
+}
+
+/* Check the quiescent state counter for registered threads only, assuming
+ * that not all threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t c;
+ uint64_t *reg_thread_id;
+
+ for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
+ i < v->num_elems;
+ i++, reg_thread_id++) {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
+ __func__, t, wait, bmap, id + j);
+ c = __atomic_load_n(
+ &v->qsbr_cnt[id + j].cnt,
+ __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, id+j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(reg_thread_id,
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+ }
+
+ return 1;
+}
+
+/* Check the quiescent state counter for all threads, assuming that
+ * all the threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i;
+ struct rte_rcu_qsbr_cnt *cnt;
+ uint64_t c;
+
+ for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Thread ID = %d",
+ __func__, t, wait, i);
+ while (1) {
+ c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, i);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
+ break;
+
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ }
+ }
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same QS variable, it must update the quiescent state status, before
+ * calling this API.
+ *
+ * 2) In addition, while calling from multiple threads, only
+ * one of those threads can be reporting the quiescent state status
+ * on a given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state referenced by token 't'.
+ * @return
+ * - 0 if all reader threads have NOT passed through specified number
+ * of quiescent states.
+ * - 1 if all reader threads have passed through specified number
+ * of quiescent states.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ RTE_ASSERT(v != NULL);
+
+ if (likely(v->num_threads == v->max_threads))
+ return __rcu_qsbr_check_all(v, t, wait);
+ else
+ return __rcu_qsbr_check_selective(v, t, wait);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, only one of
+ * those threads can be reporting the quiescent state status on a
+ * given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report quiescent state
+ * on this QS variable (i.e. the calling thread is also part of the
+ * read-side critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread has a read-side critical section,
+ * update its quiescent state status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_quiescent(v, thread_id);
+
+ /* Wait for other readers to enter quiescent state */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variables to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - NULL parameters are passed
+ */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..ad8cb517c
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,11 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_thread_register;
+ rte_rcu_qsbr_thread_unregister;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 99957ba7d..3feb44b75 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'vhost', 'rcu',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 262132fc6..2de0b5fc6 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -96,6 +96,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v3 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-01 17:11 ` [dpdk-dev] [PATCH v3 1/3] rcu: " Honnappa Nagarahalli
@ 2019-04-01 17:11 ` Honnappa Nagarahalli
2019-04-02 10:22 ` Ananyev, Konstantin
1 sibling, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-01 17:11 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add RCU library supporting quiescent state based memory reclamation method.
This library helps identify the quiescent state of the reader threads so
that the writers can free the memory associated with the lock less data
structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
---
MAINTAINERS | 5 +
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 ++
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 237 +++++++++++++
lib/librte_rcu/rte_rcu_qsbr.h | 553 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 11 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
10 files changed, 844 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 452b8eb82..5827c1bbe 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1230,6 +1230,11 @@ F: examples/bpf/
F: app/test/test_bpf.c
F: doc/guides/prog_guide/bpf_lib.rst
+RCU - EXPERIMENTAL
+M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+F: lib/librte_rcu/
+F: doc/guides/prog_guide/rcu_lib.rst
+
Test Applications
-----------------
diff --git a/config/common_base b/config/common_base
index 0b09a9348..d3557ff3c 100644
--- a/config/common_base
+++ b/config/common_base
@@ -805,6 +805,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_rcu
+#
+CONFIG_RTE_LIBRTE_RCU=y
+CONFIG_RTE_LIBRTE_RCU_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index a358f1c19..b24a9363f 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -109,6 +109,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
+DEPDIRS-librte_rcu := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
new file mode 100644
index 000000000..6aa677bd1
--- /dev/null
+++ b/lib/librte_rcu/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_rcu.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_rcu_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
new file mode 100644
index 000000000..c009ae4b7
--- /dev/null
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..53d08446a
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,237 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Get the memory size of QSBR variable */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ size_t sz;
+
+ if (max_threads == 0) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid max_threads %u\n",
+ __func__, max_threads);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ /* Add the size of the registered thread ID bitmap array */
+ sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
+
+ return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+}
+
+/* Initialize a quiescent state variable */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
+{
+ size_t sz;
+
+ if (v == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = rte_rcu_qsbr_get_memsize(max_threads);
+ if (sz == 1)
+ return 1;
+
+ /* Set all the threads to offline */
+ memset(v, 0, sz);
+ v->max_threads = max_threads;
+ v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->token = RTE_QSBR_CNT_INIT;
+
+ return 0;
+}
+
+/* Register a reader thread to report its quiescent state
+ * on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already registered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (old_bmap & 1UL << id)
+ return 0;
+
+ do {
+ new_bmap = old_bmap | (1UL << id);
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_add(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (old_bmap & (1UL << id))
+ /* Someone else registered this thread.
+ * Counter should not be incremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Remove a reader thread, from the list of threads reporting their
+ * quiescent state on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already unregistered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (!(old_bmap & (1UL << id)))
+ return 0;
+
+ do {
+ new_bmap = old_bmap & ~(1UL << id);
+ /* Make sure any loads of the shared data structure are
+ * completed before removal of the thread from the list of
+ * reporting threads.
+ */
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_sub(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (!(old_bmap & (1UL << id)))
+ /* Someone else unregistered this thread.
+ * Counter should not be decremented.
+ */
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t;
+
+ if (v == NULL || f == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %lu\n",
+ rte_rcu_qsbr_get_memsize(v->max_threads));
+ fprintf(f, " Given # max threads = %u\n", v->max_threads);
+ fprintf(f, " Current # threads = %u\n", v->num_threads);
+
+ fprintf(f, " Registered thread ID mask = 0x");
+ for (i = 0; i < v->num_elems; i++)
+ fprintf(f, "%lx", __atomic_load_n(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE));
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %lu\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %lu\n", t,
+ __atomic_load_n(
+ &v->qsbr_cnt[i].cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ return 0;
+}
+
+int rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rcu_log_type = rte_log_register("lib.rcu");
+ if (rcu_log_type >= 0)
+ rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..3e8cd679e
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,553 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+
+extern int rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define RCU_DP_LOG(level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored as a bitmap of 64b element array.
+ * Given thread id needs to be converted to index into the array and
+ * the id within the array element.
+ */
+#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
+ RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
+#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
+ ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
+#define RTE_QSBR_THRID_INDEX_SHIFT 6
+#define RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline */
+} __rte_cache_aligned;
+
+#define RTE_QSBR_CNT_THR_OFFLINE 0
+#define RTE_QSBR_CNT_INIT 1
+
+/* RTE Quiescent State variable structure.
+ * This structure has two elements that vary in size based on the
+ * 'max_threads' parameter.
+ * 1) Quiescent state counter array
+ * 2) Register thread ID array
+ */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple concurrent quiescent state queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t num_threads;
+ /**< Number of threads currently using this QS variable */
+ uint32_t max_threads;
+ /**< Maximum number of threads using this QS variable */
+
+ struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
+ /**< Quiescent state counter array of 'max_threads' elements */
+
+ /**< Registered thread IDs are stored in a bitmap array,
+ * after the quiescent state counter array.
+ */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * On success - size of memory in bytes required for this QS variable.
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0 or 'v' is NULL.
+ *
+ */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable. thread_id is a value between 0 and (max_threads - 1).
+ * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing quiescent state queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_quiescent. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_online API, after the blocking
+ * function call returns, to ensure that rte_rcu_qsbr_check API
+ * waits for the reader thread to update its quiescent state.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that
+ * the following will not move down after the load of any shared
+ * data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+#ifdef RTE_ARCH_X86_64
+ /* rte_smp_mb() for x86 is lighter */
+ rte_smp_mb();
+#else
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed, i.e. any load of the
+ * data structure cannot move after this store.
+ */
+
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELEASE);
+
+ RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
+ __func__, t, thread_id);
+}
+
+/* Check the quiescent state counter for registered threads only, assuming
+ * that not all threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t c;
+ uint64_t *reg_thread_id;
+
+ for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
+ i < v->num_elems;
+ i++, reg_thread_id++) {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
+ __func__, t, wait, bmap, id + j);
+ c = __atomic_load_n(
+ &v->qsbr_cnt[id + j].cnt,
+ __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, id+j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(reg_thread_id,
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+ }
+
+ return 1;
+}
+
+/* Check the quiescent state counter for all threads, assuming that
+ * all the threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i;
+ struct rte_rcu_qsbr_cnt *cnt;
+ uint64_t c;
+
+ for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Thread ID = %d",
+ __func__, t, wait, i);
+ while (1) {
+ c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, i);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
+ break;
+
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ }
+ }
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same QS variable, it must update the quiescent state status, before
+ * calling this API.
+ *
+ * 2) In addition, while calling from multiple threads, only
+ * one of those threads can be reporting the quiescent state status
+ * on a given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state referenced by token 't'.
+ * @return
+ * - 0 if all reader threads have NOT entered the quiescent state
+ * referenced by token 't'.
+ * - 1 if all reader threads have entered the quiescent state
+ * referenced by token 't'.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ RTE_ASSERT(v != NULL);
+
+ if (likely(v->num_threads == v->max_threads))
+ return __rcu_qsbr_check_all(v, t, wait);
+ else
+ return __rcu_qsbr_check_selective(v, t, wait);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, only one of
+ * those threads can be reporting the quiescent state status on a
+ * given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report quiescent state
+ * on this QS variable (i.e. the calling thread is also part of the
+ * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread has readside critical section,
+ * update its quiescent state status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_quiescent(v, thread_id);
+
+ /* Wait for other readers to enter quiescent state */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - NULL parameters are passed
+ */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..ad8cb517c
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,11 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_thread_register;
+ rte_rcu_qsbr_thread_unregister;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 99957ba7d..3feb44b75 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'vhost', 'rcu',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 262132fc6..2de0b5fc6 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -96,6 +96,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v3 2/3] test/rcu_qsbr: add API and functional tests
2019-04-01 17:10 ` [dpdk-dev] [PATCH v3 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-04-01 17:10 ` Honnappa Nagarahalli
2019-04-01 17:11 ` [dpdk-dev] [PATCH v3 1/3] rcu: " Honnappa Nagarahalli
@ 2019-04-01 17:11 ` Honnappa Nagarahalli
2019-04-01 17:11 ` Honnappa Nagarahalli
2019-04-02 10:55 ` Ananyev, Konstantin
2019-04-01 17:11 ` [dpdk-dev] [PATCH v3 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
3 siblings, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-01 17:11 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
---
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 1004 +++++++++++++++++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 615 ++++++++++++++++++++
5 files changed, 1639 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index 89949c2bb..6b6dfefc2 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -213,6 +213,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 5f87bb94d..c26ec889c 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -694,6 +694,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 05e5ddeb0..4df8e337b 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -107,6 +107,8 @@ test_sources = files('commands.c',
'test_timer.c',
'test_timer_perf.c',
'test_timer_racecond.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -132,7 +134,8 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
- 'timer'
+ 'timer',
+ 'rcu'
]
# All test cases in fast_parallel_test_names list are parallel
@@ -171,6 +174,7 @@ fast_parallel_test_names = [
'ring_autotest',
'ring_pmd_autotest',
'rwlock_autotest',
+ 'rcu_qsbr_autotest',
'sched_autotest',
'spinlock_autotest',
'string_autotest',
@@ -236,6 +240,7 @@ perf_test_names = [
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'pmd_perf_autotest',
]
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..8156aa56a
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,1004 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_get_memsize: Get the memory size required for a QS variable,
+ * given the maximum number of reader threads.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
+ printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(0);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1), "Get Memsize for 0 threads");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with cache line size of 64B - 8384
+ * for machines with cache line size of 128 - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_init: Initialize a QSBR variable.
+ */
+static int
+test_rcu_qsbr_init(void)
+{
+ int r;
+
+ printf("\nTest rte_rcu_qsbr_init()\n");
+
+ r = rte_rcu_qsbr_init(NULL, TEST_RCU_MAX_LCORE);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "NULL variable");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread, to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ int ret;
+
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register valid thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Valid thread id");
+
+ /* Re-registering should not return error */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already registered thread id");
+
+ /* Register valid thread id - max allowed thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], TEST_RCU_MAX_LCORE - 1);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Max thread id");
+
+ ret = rte_rcu_qsbr_thread_register(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread, from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ ret = rte_rcu_qsbr_thread_unregister(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "disabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /* Test with enabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "enabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_quiescent(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_quiescent(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the reader threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the reader threads have entered the
+ * quiescent state referenced by the token returned by rte_rcu_qsbr_start.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait till all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+ /* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread, to
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if the online thread, can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread, from
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+ /* Check if the online thread, can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ /* Negative tests */
+ rte_rcu_qsbr_dump(NULL, t[0]);
+ rte_rcu_qsbr_dump(stdout, NULL);
+ rte_rcu_qsbr_dump(NULL, NULL);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Multiple writers, multiple QS variables, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ writer_done = 0;
+ uint8_t test_cores;
+	/* Round down to a multiple of 4: 2 readers + 2 writers per QS variable */
+	test_cores = num_cores / 4;
+	test_cores = test_cores * 4;
+
+	printf("Test: %d writers, %d QSBR variables, simultaneous QSBR queries\n",
+			test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_init() < 0)
+ goto test_fail;
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..a69f827d8
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,615 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static volatile uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+		printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t lcore_id = rte_lcore_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ /* Register for report QS */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+	/* Unregister before exiting so the writer does not wait on this thread */
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, multiple readers, single QS variable, blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: %d Readers/1 Writer('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Single writer, Multiple readers, Single QS variable
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writers, single QS variable, non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+	/* Wait until all writers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Performance test:
+ * Single writer, single QS variable, single QSBR query, blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i;
+ int32_t pos;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, "
+ "Blocking QSBR Check\n", num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+	for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(register/update/unregister): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Performance test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret;
+ int32_t pos;
+ writer_done = 0;
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, "
+ "Non-Blocking QSBR check\n", num_cores);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(register/update/unregister): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ free_rcu();
+
+ return 0;
+
+test_fail:
+ free_rcu();
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
* [dpdk-dev] [PATCH v3 2/3] test/rcu_qsbr: add API and functional tests
2019-04-01 17:11 ` [dpdk-dev] [PATCH v3 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-04-01 17:11 ` Honnappa Nagarahalli
2019-04-02 10:55 ` Ananyev, Konstantin
1 sibling, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-01 17:11 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
---
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 1004 +++++++++++++++++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 615 ++++++++++++++++++++
5 files changed, 1639 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index 89949c2bb..6b6dfefc2 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -213,6 +213,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 5f87bb94d..c26ec889c 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -694,6 +694,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 05e5ddeb0..4df8e337b 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -107,6 +107,8 @@ test_sources = files('commands.c',
'test_timer.c',
'test_timer_perf.c',
'test_timer_racecond.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -132,7 +134,8 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
- 'timer'
+ 'timer',
+ 'rcu'
]
# All test cases in fast_parallel_test_names list are parallel
@@ -171,6 +174,7 @@ fast_parallel_test_names = [
'ring_autotest',
'ring_pmd_autotest',
'rwlock_autotest',
+ 'rcu_qsbr_autotest',
'sched_autotest',
'spinlock_autotest',
'string_autotest',
@@ -236,6 +240,7 @@ perf_test_names = [
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'pmd_perf_autotest',
]
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..8156aa56a
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,1004 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+		printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_get_memsize: Returns the size of memory occupied by a QSBR
+ * variable.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
+	printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(0);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1), "Get Memsize for 0 threads");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with cache line size of 64B - 8384
+ * for machines with cache line size of 128 - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_init: Initialize a QSBR variable.
+ */
+static int
+test_rcu_qsbr_init(void)
+{
+ int r;
+
+ printf("\nTest rte_rcu_qsbr_init()\n");
+
+ r = rte_rcu_qsbr_init(NULL, TEST_RCU_MAX_LCORE);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "NULL variable");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread, to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ int ret;
+
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register valid thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Valid thread id");
+
+ /* Re-registering should not return error */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already registered thread id");
+
+ /* Register valid thread id - max allowed thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], TEST_RCU_MAX_LCORE - 1);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Max thread id");
+
+ ret = rte_rcu_qsbr_thread_register(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+			"Invalid thread id");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread, from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ ret = rte_rcu_qsbr_thread_unregister(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+			"Invalid thread id");
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "disabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /* Test with enabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "enabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE - 1
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_quiescent(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* One thread skipped its update, the check must not pass */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_quiescent(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the worker threads have entered the
+ * quiescent state referenced by the token provided by the rte_rcu_qsbr_start API.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait until all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+ /* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread to
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread from
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
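The online/offline behaviour exercised above can be sketched as a minimal single-threaded model. The `model_*` names are illustrative only, not the rte_rcu_qsbr API: an offline reader is simply excluded from the writer's check, so a check with only offline readers succeeds without any quiescent state report.

```c
#include <stdint.h>

/* Hypothetical model of QSBR online/offline semantics (not DPDK). */
#define MODEL_READERS 4

static uint64_t model_token;                /* advanced by start()    */
static uint64_t model_cnt[MODEL_READERS];   /* copied by quiescent()  */
static int model_online[MODEL_READERS];     /* toggled online/offline */

/* Writer: open a new grace period, return the token to wait on. */
static uint64_t model_start(void) { return ++model_token; }

/* Reader: report a quiescent state by copying the global token. */
static void model_quiescent(int id) { model_cnt[id] = model_token; }

/* Non-blocking check: only online readers must have caught up. */
static int model_check(uint64_t token)
{
	int i;

	for (i = 0; i < MODEL_READERS; i++)
		if (model_online[i] && model_cnt[i] < token)
			return 0;
	return 1;
}
```

This mirrors the sequence in the test: with all readers offline the check passes immediately, and after a reader goes online the check blocks on (or fails for) that reader until it reports.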
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ /* Negative tests */
+ rte_rcu_qsbr_dump(NULL, t[0]);
+ rte_rcu_qsbr_dump(stdout, NULL);
+ rte_rcu_qsbr_dump(NULL, NULL);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
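The three simultaneous grace periods above work because tokens are just increasing values of one global counter, so a single reader report at the latest counter value satisfies every outstanding token at once. A minimal sketch with a single reader, using hypothetical `gp_*` names (not the rte_rcu_qsbr API):

```c
#include <stdint.h>

/* Hypothetical single-reader model of overlapping grace periods. */
static uint64_t gp_token;      /* global grace-period counter  */
static uint64_t gp_reader_cnt; /* the reader's last QS report  */

/* Writer: each start returns a strictly larger token. */
static uint64_t gp_start(void) { return ++gp_token; }

/* Reader: one report covers all tokens issued so far. */
static void gp_quiescent(void) { gp_reader_cnt = gp_token; }

/* Check: token 't' is complete once the reader reached it. */
static int gp_check(uint64_t t) { return gp_reader_cnt >= t; }
```

This is why the test can issue token[0..2] back to back and then check them oldest first: the readers' periodic reports retire all pending tokens in order.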
+
+/*
+ * Multiple writers, multiple QS variables, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ writer_done = 0;
+ uint8_t test_cores;
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+ printf("Test: %d writers, %d QSBR variables, simultaneous QSBR queries\n",
+ test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_init() < 0)
+ goto test_fail;
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..a69f827d8
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,615 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Maximum number of lcores supported by these tests */
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static volatile uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t lcore_id = rte_lcore_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ /* Register to report QS */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ /* Unregister before exiting so the writer does not wait on this thread */
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple Readers, Single QS var, Non-Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: %d Readers/1 Writer ('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Single writer, Multiple readers, Single QS variable
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writers, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+ /* Wait until all writers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i;
+ int32_t pos;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, "
+ "Blocking QSBR Check\n", num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(register/update/unregister): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret;
+ int32_t pos;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, "
+ "Non-Blocking QSBR check\n", num_cores);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(register/update/unregister): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ free_rcu();
+
+ return 0;
+
+test_fail:
+ free_rcu();
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
* [dpdk-dev] [PATCH v3 3/3] doc/rcu: add lib_rcu documentation
@ 2019-04-01 17:11 ` Honnappa Nagarahalli
From: Honnappa Nagarahalli @ 2019-04-01 17:11 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 179 ++++++
5 files changed, 692 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index d95ad566c..5c1f6b477 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a365e669b..0b4c248a2 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..e7ca1dacb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,509 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em;font-weight:bold}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.75001em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st13 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st14 {fill:#000000;font-family:Century Gothic;font-size:1.99999em}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.99999em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-239);marker-start:url(#mrkr5-237);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-248);marker-start:url(#mrkr5-246);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-239);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-237" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-239" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-246" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-248" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(327.227,-946.908)">
+ <title>Sheet.3</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(460.665,-944.869)">
+ <title>Sheet.4</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(519.302,-950.79)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(612.438,-944.869)">
+ <title>Sheet.6</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(664.388,-945.889)">
+ <title>Sheet.7</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(723.025,-951.494)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(814.123,-945.889)">
+ <title>Sheet.9</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-952.759)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="146.259" cy="1169.64" width="292.52" height="36.7136"/>
+ <path d="M292.52 1151.29 L0 1151.29 L0 1188 L292.52 1188 L292.52 1151.29" class="st3"/>
+ <text x="58.76" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(379.176,-863.295)">
+ <title>Sheet.11</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(512.614,-861.255)">
+ <title>Sheet.12</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(561.284,-867.106)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(664.388,-861.255)">
+ <title>Sheet.14</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(716.337,-862.275)">
+ <title>Sheet.15</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(775.009,-867.81)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(866.073,-862.275)">
+ <title>Sheet.17</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(143.348,-873.294)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(474.188,-777.642)">
+ <title>Sheet.19</title>
+ <path d="M0 1143.01 C0 1138.04 4.07 1133.96 9.04 1133.96 L124.46 1133.96 C129.43 1133.96 133.44 1138.04 133.44 1143.01
+ L133.44 1179.01 C133.44 1183.99 129.43 1188 124.46 1188 L9.04 1188 C4.07 1188 0 1183.99 0 1179.01 L0 1143.01
+ Z" class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(608.645,-775.602)">
+ <title>Sheet.20</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(666.862,-781.311)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(760.418,-775.602)">
+ <title>Sheet.22</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(812.367,-776.622)">
+ <title>Sheet.23</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(870.584,-782.015)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(962.103,-776.622)">
+ <title>Sheet.25</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(142.645,-787.5)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(882.826,-574.263)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="21.32" y="1173.77" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(419.545,-660.119)">
+ <title>Sheet.29</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(419.545,-684.783)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L82.7 1187.36 L151.2 1172.07" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(214.454,-663.095)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(571.287,-681.326)">
+ <title>Sheet.33</title>
+ <path d="M0 738.67 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(515.013,-1130.65)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="60.7243" cy="1166.58" width="121.45" height="42.8314"/>
+ <path d="M121.45 1145.17 L0 1145.17 L0 1188 L121.45 1188 L121.45 1145.17" class="st3"/>
+ <text x="26.02" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(434.372,-1096.8)">
+ <title>Sheet.35</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(434.372,-1100.37)">
+ <title>Sheet.36</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(193.5,-1103.76)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(714.3,-675.425)">
+ <title>Sheet.38</title>
+ <path d="M0 732.77 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(795.979,-637.904)">
+ <title>Sheet.39</title>
+ <path d="M0 1112.54 L0 1188 L0 1112.54" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(716.782,-675.425)">
+ <title>Sheet.40</title>
+ <path d="M79.2 1188 L52.71 1187.94 L0 1147.21" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(803.572,-639.285)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="172.421" cy="1152.51" width="344.85" height="70.9752"/>
+ <path d="M344.84 1117.02 L0 1117.02 L0 1188 L344.84 1188 L344.84 1117.02" class="st3"/>
+ <text x="0" y="1133.61" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(680.801,-1130.65)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="42.0169" cy="1166.58" width="84.04" height="42.8314"/>
+ <path d="M84.03 1145.17 L0 1145.17 L0 1188 L84.03 1188 L84.03 1145.17" class="st3"/>
+ <text x="18.89" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(811.005,-1110.05)">
+ <title>Sheet.48</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(658.61,-1083.99)">
+ <title>Sheet.49</title>
+ <path d="M153.05 1149.63 L113.7 1149.57 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(798.359,-1110.46)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="107.799" cy="1167.81" width="215.6" height="40.3845"/>
+ <path d="M215.6 1147.62 L0 1147.62 L0 1188 L215.6 1188 L215.6 1147.62" class="st3"/>
+ <text x="43.79" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(599.196,-662.779)">
+ <title>Sheet.51</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(464.931,-1052.95)">
+ <title>Sheet.52</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(464.931,-1056.52)">
+ <title>Sheet.53</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(225,-1058.76)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(711.244,-662.779)">
+ <title>Sheet.56</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(664.897,-1045.31)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L146.76 1112.94" class="st13"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(619.059,-848.701)">
+ <title>Sheet.58</title>
+ <path d="M432.2 1184.24 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(1038.62,-837.364)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="130" cy="1167.81" width="260.01" height="40.3845"/>
+ <path d="M260 1147.62 L0 1147.62 L0 1188 L260 1188 L260 1147.62" class="st3"/>
+ <text x="52.25" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(621.606,-848.828)">
+ <title>Sheet.60</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(824.31,-849.848)">
+ <title>Sheet.61</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(345.944,-933.143)">
+ <title>Sheet.62</title>
+ <path d="M705.32 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(1038.62,-915.684)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="137.691" cy="1167.81" width="275.39" height="40.3845"/>
+ <path d="M275.38 1147.62 L0 1147.62 L0 1188 L275.38 1188 L275.38 1147.62" class="st3"/>
+ <text x="55.18" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(346.581,-932.442)">
+ <title>Sheet.64</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(621.606,-933.461)">
+ <title>Sheet.65</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(856.905,-934.481)">
+ <title>Sheet.66</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(472.82,-756.389)">
+ <title>Sheet.67</title>
+ <path d="M578.44 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(473.456,-755.688)">
+ <title>Sheet.68</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(1016.87,-757.728)">
+ <title>Sheet.69</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(1060.04,-738.651)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1167.81" width="193.55" height="40.3845"/>
+ <path d="M193.55 1147.62 L0 1147.62 L0 1188 L193.55 1188 L193.55 1147.62" class="st3"/>
+ <text x="31.03" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(190.02,-464.886)">
+ <title>Sheet.71</title>
+ <path d="M0 1151.91 C0 1148.19 3.88 1145.17 8.66 1145.17 L43.29 1145.17 C48.13 1145.17 51.95 1148.19 51.95 1151.91 L51.95
+ 1181.26 C51.95 1185.03 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1185.03 0 1181.26 L0 1151.91 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(259.003,-466.895)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. non critical section or quiescent state.</tspan></text> </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(190.02,-389.169)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="25.9746" cy="1166.58" width="51.95" height="42.8314"/>
+ <path d="M0 1152.31 C0 1148.39 1.43 1145.17 3.16 1145.17 L48.79 1145.17 C50.55 1145.17 51.95 1148.39 51.95 1152.31 L51.95
+ 1180.86 C51.95 1184.83 50.55 1188 48.79 1188 L3.16 1188 C1.43 1188 0 1184.83 0 1180.86 L0 1152.31 Z"
+ class="st2"/>
+ <text x="12.9" y="1173.78" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(259.003,-388.777)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(289.017,-301.151)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="332.491" cy="1160.47" width="664.99" height="55.0626"/>
+ <path d="M664.98 1132.94 L0 1132.94 L0 1188 L664.98 1188 L664.98 1132.94" class="st3"/>
+ <text x="0" y="1153.27" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is removed <tspan
+ x="0" dy="1.2em" class="st9">using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(177.543,-315.596)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(288,-239.327)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1175.01" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted entry.</text> </g>
+ <g id="shape78-217" v:mID="78" v:groupContext="shape" transform="translate(177.543,-240.744)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="34.3786" cy="1166.58" width="68.76" height="42.8314"/>
+ <path d="M68.76 1145.17 L0 1145.17 L0 1188 L68.76 1188 L68.76 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-221" v:mID="79" v:groupContext="shape" transform="translate(289.228,-163.612)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1160.61" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during which <tspan
+ x="0" dy="1.2em" class="st9">memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-226" v:mID="80" v:groupContext="shape" transform="translate(187.999,-162)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="39.5985" cy="1166.58" width="79.2" height="42.8314"/>
+ <path d="M79.2 1145.17 L0 1145.17 L0 1188 L79.2 1188 L79.2 1145.17" class="st3"/>
+ <text x="0" y="1158.96" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-231" v:mID="83" v:groupContext="shape" transform="translate(572.146,-1080.07)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L132.49 1188" class="st16"/>
+ </g>
+ <g id="shape84-240" v:mID="84" v:groupContext="shape" transform="translate(599.196,-1042.14)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L104.28 1188" class="st18"/>
+ </g>
+ <g id="shape85-249" v:mID="85" v:groupContext="shape" transform="translate(980.637,-595.338)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L92.16 1188" class="st20"/>
+ </g>
+ <g id="shape86-254" v:mID="86" v:groupContext="shape" transform="translate(444.835,-603.428)">
+ <title>Sheet.86</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape87-257" v:mID="87" v:groupContext="shape" transform="translate(444.835,-637.489)">
+ <title>Sheet.87</title>
+ <path d="M0 1188 L84.43 1186.61 L154.36 1153.31" class="st7"/>
+ </g>
+ <g id="shape88-260" v:mID="88" v:groupContext="shape" transform="translate(241.369,-607.028)">
+ <title>Sheet.88</title>
+ <desc>Remove reference to entry2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry2</tspan></text> </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..6fb3fb921 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..dfe45fa62
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,179 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-less data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for example, real-time applications).
+
+In the following sections, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or anything that is representative of
+memory (for example, an index into a free-element array).
+
+Since these data structures are lock-less, writers and readers access
+them concurrently. Hence, while removing an element from a data
+structure, the writer cannot return the memory to the allocator without
+knowing that no reader is still referencing that element/memory. The
+operation of removing an element must therefore be separated into 2 steps:
+
+Delete: in this step, the writer removes the reference to the element from
+the data structure but does not return the associated memory to the
+allocator. This will ensure that new readers will not get a reference to
+the removed element. Removing the reference is an atomic operation.
+
+Free (Reclaim): in this step, the writer returns the memory to the
+memory allocator, only after knowing that all the readers have stopped
+referencing the deleted element.
+
+This library helps the writer determine when it is safe to free the
+memory.
+
+This library makes use of the thread Quiescent State (QS) mechanism.
+
+What is Quiescent State
+-----------------------
+Quiescent State can be defined as 'any point in the thread execution where the
+thread does not hold a reference to shared memory'. It is up to the application
+to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. When it is
+accessing D1, if the writer has to remove an element from D1, the
+writer cannot free the memory associated with that element immediately.
+The writer can return the memory to the allocator only after the reader
+stops referencing D1. In other words, reader thread RT1 has to enter
+a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, the writer has to
+wait till reader thread 2 enters a quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+a quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened. So, reader thread 3 cannot have a reference to the
+deleted entry.
+
+It can be noted that the critical section for D2 is a quiescent state
+for D1. i.e. for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state.
+
+Since memory is not freed immediately, there might be a need for
+provisioning of additional memory, depending on the application requirements.
+
+Factors affecting RCU mechanism
+---------------------------------
+
+It is important to make sure that this library keeps the overhead of
+identifying the end of the grace period, and of the subsequent freeing of
+memory, to a minimum. The following paragraphs explain how the grace period
+and the critical sections affect this overhead.
+
+The writer has to poll the readers to identify the end of the grace period.
+Polling introduces memory accesses and wastes CPU cycles. The memory
+is not available for reuse during the grace period. Longer grace periods
+exacerbate these conditions.
+
+The duration of the grace period is proportional to the length of the
+critical sections and the number of reader threads. Keeping the critical
+sections smaller keeps the grace period smaller. However, smaller
+critical sections require additional CPU cycles (due to additional
+reporting) in the readers.
+
+Hence, the ideal combination is a small grace period with large critical
+sections. This library addresses this by allowing the writer to do
+other work without having to block till the readers report their quiescent
+state.
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the start and end of the while(1) loop (where no
+references to shared data structures are kept) act as perfect quiescent
+states. This combines all the shared data structure accesses into a
+single, large critical section, which helps keep the overhead on the
+reader side to a minimum.
+
+DPDK supports pipeline model of packet processing and service cores.
+In these use cases, a given data structure may not be used by all the
+workers in the application. The writer does not have to wait for all
+the workers to report their quiescent state. To provide the required
+flexibility, this library has a concept of QS variable. The application
+can create one QS variable per data structure to help it track the
+end of grace period for each data structure. This helps keep the grace
+period to a minimum.
+
+How to use this library
+-----------------------
+
+The application must allocate memory and initialize a QS variable.
+
+The application can call ``rte_rcu_qsbr_get_memsize`` to calculate the size
+of memory to allocate. This API takes, as a parameter, the maximum number
+of reader threads that will use this QS variable. Currently, a maximum of
+1024 threads is supported.
+
+Further, the application can initialize a QS variable using the API
+``rte_rcu_qsbr_init``.
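
Assuming the DPDK headers ``rte_rcu_qsbr.h`` and ``rte_malloc.h`` are
available, allocation and initialization might look like the following
sketch (``MAX_READERS`` and ``setup_qs_variable`` are illustrative names,
not part of the library):

```c
#include <rte_rcu_qsbr.h>
#include <rte_malloc.h>

#define MAX_READERS 8  /* illustrative: maximum reader threads for this QS variable */

static struct rte_rcu_qsbr *qsv;

static int
setup_qs_variable(void)
{
	/* Query how much memory a QS variable for MAX_READERS threads needs. */
	ssize_t sz = rte_rcu_qsbr_get_memsize(MAX_READERS);
	if (sz < 0)
		return -1;

	qsv = rte_malloc(NULL, sz, RTE_CACHE_LINE_SIZE);
	if (qsv == NULL)
		return -1;

	/* Initialize the QS variable before any reader registers. */
	return rte_rcu_qsbr_init(qsv, MAX_READERS);
}
```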
+
+Each reader thread is assumed to have a unique thread ID. Currently, the
+management of thread IDs (for example, allocation/free) is left to the
+application. The thread ID should be in the range of 0 to (maximum number
+of threads provided while creating the QS variable - 1).
+The application could also use lcore_id as the thread ID where applicable.
+
+``rte_rcu_qsbr_thread_register`` API will register a reader thread
+to report its quiescent state. This can be called from a reader thread.
+A control plane thread can also call this on behalf of a reader thread.
+The reader thread must call ``rte_rcu_qsbr_thread_online`` API to start
+reporting its quiescent state.
+
+Some of the use cases might require the reader threads to make
+blocking API calls (for ex: while using eventdev APIs). The writer thread
+should not wait for such reader threads to enter quiescent state.
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` API, before calling
+blocking APIs. It can call ``rte_rcu_qsbr_thread_online`` API once the blocking
+API call returns.
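
Putting the registration and online/offline calls together, a reader
thread's lifecycle might be sketched as follows (``qsv``, ``thread_id``
and the blocking call are illustrative placeholders):

```c
	/* Register this reader against the QS variable and start reporting. */
	rte_rcu_qsbr_thread_register(qsv, thread_id);
	rte_rcu_qsbr_thread_online(qsv, thread_id);

	/* ... access shared data structures; report quiescent states ... */

	/* Stop reporting before a blocking call, so the writer need not wait. */
	rte_rcu_qsbr_thread_offline(qsv, thread_id);
	/* ... blocking API call, e.g. an eventdev dequeue with a timeout ... */
	rte_rcu_qsbr_thread_online(qsv, thread_id);
```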
+
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the API ``rte_rcu_qsbr_start``. It is possible for multiple
+writer threads to query the quiescent state status simultaneously. Hence,
+``rte_rcu_qsbr_start`` returns a token to each caller.
+
+The writer thread must call the ``rte_rcu_qsbr_check`` API with the token to
+get the current quiescent state status. An option to block till all the reader
+threads enter the quiescent state is provided. If this API indicates that
+all the reader threads have entered the quiescent state, the application
+can free the deleted entry.
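
A writer's delete-then-free sequence using these two APIs might be
sketched as below (``remove_reference`` and ``free_entry`` are
hypothetical application helpers, not library APIs):

```c
	/* Step 1: Delete - atomically remove the reference to the entry
	 * (application specific; new readers can no longer see it). */
	remove_reference(entry);

	/* Step 2: trigger quiescent state reporting; the token identifies
	 * this particular grace period. */
	uint64_t token = rte_rcu_qsbr_start(qsv);

	/* ... the writer is free to do other useful work here ... */

	/* Step 3: block (wait == true) until all registered readers have
	 * passed through a quiescent state after the token was issued. */
	rte_rcu_qsbr_check(qsv, token, true);

	/* Step 4: Free - no reader can hold a reference any more. */
	free_entry(entry);
```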
+
+The APIs ``rte_rcu_qsbr_start`` and ``rte_rcu_qsbr_check`` are lock-free.
+Hence, they can be called concurrently from multiple writers, even while
+running as worker threads.
+
+The separation of triggering the reporting from querying the status provides
+the writer threads flexibility to do useful work instead of blocking for the
+reader threads to enter the quiescent state or go offline. This reduces the
+memory accesses due to continuous polling for the status.
+
+``rte_rcu_qsbr_synchronize`` API combines the functionality of
+``rte_rcu_qsbr_start`` and blocking ``rte_rcu_qsbr_check`` into a single API.
+This API triggers the reader threads to report their quiescent state and
+polls till all the readers enter the quiescent state or go offline. This
+API does not allow the writer to do useful work while waiting and
+introduces additional memory accesses due to continuous polling.
+
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` and
+``rte_rcu_qsbr_thread_unregister`` APIs to remove itself from reporting its
+quiescent state. The ``rte_rcu_qsbr_check`` API will not wait for this reader
+thread to report the quiescent state status anymore.
+
+The reader threads should call the ``rte_rcu_qsbr_quiescent`` API to indicate
+that they have entered a quiescent state. This API checks whether a writer has
+triggered a quiescent state query and updates the state accordingly.
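
Tying the reader side together, a typical DPDK worker loop might report
one quiescent state per iteration at the loop boundary, as sketched below
(``process_packets`` and the burst parameters are illustrative):

```c
	while (1) {
		nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SIZE);

		/* Critical section: packet processing may reference shared
		 * data structures protected by this QS variable. */
		process_packets(pkts, nb_rx);

		/* End of the loop: no shared references held, report it. */
		rte_rcu_qsbr_quiescent(qsv, thread_id);
	}
```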
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v3 3/3] doc/rcu: add lib_rcu documentation
2019-04-01 17:11 ` [dpdk-dev] [PATCH v3 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
@ 2019-04-01 17:11 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-01 17:11 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 179 ++++++
5 files changed, 692 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index d95ad566c..5c1f6b477 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a365e669b..0b4c248a2 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..e7ca1dacb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,509 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em;font-weight:bold}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.75001em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st13 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st14 {fill:#000000;font-family:Century Gothic;font-size:1.99999em}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.99999em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-239);marker-start:url(#mrkr5-237);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-248);marker-start:url(#mrkr5-246);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-239);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-237" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-239" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-246" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-248" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(327.227,-946.908)">
+ <title>Sheet.3</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(460.665,-944.869)">
+ <title>Sheet.4</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(519.302,-950.79)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(612.438,-944.869)">
+ <title>Sheet.6</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(664.388,-945.889)">
+ <title>Sheet.7</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(723.025,-951.494)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(814.123,-945.889)">
+ <title>Sheet.9</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-952.759)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="146.259" cy="1169.64" width="292.52" height="36.7136"/>
+ <path d="M292.52 1151.29 L0 1151.29 L0 1188 L292.52 1188 L292.52 1151.29" class="st3"/>
+ <text x="58.76" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(379.176,-863.295)">
+ <title>Sheet.11</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(512.614,-861.255)">
+ <title>Sheet.12</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(561.284,-867.106)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(664.388,-861.255)">
+ <title>Sheet.14</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(716.337,-862.275)">
+ <title>Sheet.15</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(775.009,-867.81)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(866.073,-862.275)">
+ <title>Sheet.17</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(143.348,-873.294)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(474.188,-777.642)">
+ <title>Sheet.19</title>
+ <path d="M0 1143.01 C0 1138.04 4.07 1133.96 9.04 1133.96 L124.46 1133.96 C129.43 1133.96 133.44 1138.04 133.44 1143.01
+ L133.44 1179.01 C133.44 1183.99 129.43 1188 124.46 1188 L9.04 1188 C4.07 1188 0 1183.99 0 1179.01 L0 1143.01
+ Z" class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(608.645,-775.602)">
+ <title>Sheet.20</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(666.862,-781.311)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(760.418,-775.602)">
+ <title>Sheet.22</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(812.367,-776.622)">
+ <title>Sheet.23</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(870.584,-782.015)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(962.103,-776.622)">
+ <title>Sheet.25</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(142.645,-787.5)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(882.826,-574.263)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="21.32" y="1173.77" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(419.545,-660.119)">
+ <title>Sheet.29</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(419.545,-684.783)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L82.7 1187.36 L151.2 1172.07" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(214.454,-663.095)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(571.287,-681.326)">
+ <title>Sheet.33</title>
+ <path d="M0 738.67 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(515.013,-1130.65)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="60.7243" cy="1166.58" width="121.45" height="42.8314"/>
+ <path d="M121.45 1145.17 L0 1145.17 L0 1188 L121.45 1188 L121.45 1145.17" class="st3"/>
+ <text x="26.02" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(434.372,-1096.8)">
+ <title>Sheet.35</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(434.372,-1100.37)">
+ <title>Sheet.36</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(193.5,-1103.76)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(714.3,-675.425)">
+ <title>Sheet.38</title>
+ <path d="M0 732.77 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(795.979,-637.904)">
+ <title>Sheet.39</title>
+ <path d="M0 1112.54 L0 1188 L0 1112.54" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(716.782,-675.425)">
+ <title>Sheet.40</title>
+ <path d="M79.2 1188 L52.71 1187.94 L0 1147.21" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(803.572,-639.285)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="172.421" cy="1152.51" width="344.85" height="70.9752"/>
+ <path d="M344.84 1117.02 L0 1117.02 L0 1188 L344.84 1188 L344.84 1117.02" class="st3"/>
+ <text x="0" y="1133.61" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(680.801,-1130.65)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="42.0169" cy="1166.58" width="84.04" height="42.8314"/>
+ <path d="M84.03 1145.17 L0 1145.17 L0 1188 L84.03 1188 L84.03 1145.17" class="st3"/>
+ <text x="18.89" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(811.005,-1110.05)">
+ <title>Sheet.48</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(658.61,-1083.99)">
+ <title>Sheet.49</title>
+ <path d="M153.05 1149.63 L113.7 1149.57 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(798.359,-1110.46)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="107.799" cy="1167.81" width="215.6" height="40.3845"/>
+ <path d="M215.6 1147.62 L0 1147.62 L0 1188 L215.6 1188 L215.6 1147.62" class="st3"/>
+ <text x="43.79" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(599.196,-662.779)">
+ <title>Sheet.51</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(464.931,-1052.95)">
+ <title>Sheet.52</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(464.931,-1056.52)">
+ <title>Sheet.53</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(225,-1058.76)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(711.244,-662.779)">
+ <title>Sheet.56</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(664.897,-1045.31)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L146.76 1112.94" class="st13"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(619.059,-848.701)">
+ <title>Sheet.58</title>
+ <path d="M432.2 1184.24 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(1038.62,-837.364)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="130" cy="1167.81" width="260.01" height="40.3845"/>
+ <path d="M260 1147.62 L0 1147.62 L0 1188 L260 1188 L260 1147.62" class="st3"/>
+ <text x="52.25" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(621.606,-848.828)">
+ <title>Sheet.60</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(824.31,-849.848)">
+ <title>Sheet.61</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(345.944,-933.143)">
+ <title>Sheet.62</title>
+ <path d="M705.32 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(1038.62,-915.684)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="137.691" cy="1167.81" width="275.39" height="40.3845"/>
+ <path d="M275.38 1147.62 L0 1147.62 L0 1188 L275.38 1188 L275.38 1147.62" class="st3"/>
+ <text x="55.18" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(346.581,-932.442)">
+ <title>Sheet.64</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(621.606,-933.461)">
+ <title>Sheet.65</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(856.905,-934.481)">
+ <title>Sheet.66</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(472.82,-756.389)">
+ <title>Sheet.67</title>
+ <path d="M578.44 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(473.456,-755.688)">
+ <title>Sheet.68</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(1016.87,-757.728)">
+ <title>Sheet.69</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(1060.04,-738.651)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1167.81" width="193.55" height="40.3845"/>
+ <path d="M193.55 1147.62 L0 1147.62 L0 1188 L193.55 1188 L193.55 1147.62" class="st3"/>
+ <text x="31.03" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(190.02,-464.886)">
+ <title>Sheet.71</title>
+ <path d="M0 1151.91 C0 1148.19 3.88 1145.17 8.66 1145.17 L43.29 1145.17 C48.13 1145.17 51.95 1148.19 51.95 1151.91 L51.95
+ 1181.26 C51.95 1185.03 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1185.03 0 1181.26 L0 1151.91 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(259.003,-466.895)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. non critical section or quiescent state.</tspan></text> </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(190.02,-389.169)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="25.9746" cy="1166.58" width="51.95" height="42.8314"/>
+ <path d="M0 1152.31 C0 1148.39 1.43 1145.17 3.16 1145.17 L48.79 1145.17 C50.55 1145.17 51.95 1148.39 51.95 1152.31 L51.95
+ 1180.86 C51.95 1184.83 50.55 1188 48.79 1188 L3.16 1188 C1.43 1188 0 1184.83 0 1180.86 L0 1152.31 Z"
+ class="st2"/>
+ <text x="12.9" y="1173.78" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(259.003,-388.777)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(289.017,-301.151)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="332.491" cy="1160.47" width="664.99" height="55.0626"/>
+ <path d="M664.98 1132.94 L0 1132.94 L0 1188 L664.98 1188 L664.98 1132.94" class="st3"/>
+ <text x="0" y="1153.27" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is removed <tspan
+ x="0" dy="1.2em" class="st9">using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(177.543,-315.596)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(288,-239.327)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1175.01" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted entry.</text> </g>
+ <g id="shape78-217" v:mID="78" v:groupContext="shape" transform="translate(177.543,-240.744)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="34.3786" cy="1166.58" width="68.76" height="42.8314"/>
+ <path d="M68.76 1145.17 L0 1145.17 L0 1188 L68.76 1188 L68.76 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-221" v:mID="79" v:groupContext="shape" transform="translate(289.228,-163.612)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1160.61" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during which <tspan
+ x="0" dy="1.2em" class="st9">memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-226" v:mID="80" v:groupContext="shape" transform="translate(187.999,-162)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="39.5985" cy="1166.58" width="79.2" height="42.8314"/>
+ <path d="M79.2 1145.17 L0 1145.17 L0 1188 L79.2 1188 L79.2 1145.17" class="st3"/>
+ <text x="0" y="1158.96" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-231" v:mID="83" v:groupContext="shape" transform="translate(572.146,-1080.07)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L132.49 1188" class="st16"/>
+ </g>
+ <g id="shape84-240" v:mID="84" v:groupContext="shape" transform="translate(599.196,-1042.14)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L104.28 1188" class="st18"/>
+ </g>
+ <g id="shape85-249" v:mID="85" v:groupContext="shape" transform="translate(980.637,-595.338)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L92.16 1188" class="st20"/>
+ </g>
+ <g id="shape86-254" v:mID="86" v:groupContext="shape" transform="translate(444.835,-603.428)">
+ <title>Sheet.86</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape87-257" v:mID="87" v:groupContext="shape" transform="translate(444.835,-637.489)">
+ <title>Sheet.87</title>
+ <path d="M0 1188 L84.43 1186.61 L154.36 1153.31" class="st7"/>
+ </g>
+ <g id="shape88-260" v:mID="88" v:groupContext="shape" transform="translate(241.369,-607.028)">
+ <title>Sheet.88</title>
+ <desc>Remove reference to entry2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry2</tspan></text> </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..6fb3fb921 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..dfe45fa62
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,179 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-less data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for ex: real-time applications).
+
+In the following sections, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or anything that is representative of
+memory, for example an index of a free element array.
+
+Since these data structures are lock-less, the writers and readers
+access the data structures concurrently. Hence, while removing an
+element from a data structure, the writers cannot return the memory
+to the allocator without knowing that the readers are no longer
+referencing that element/memory. Hence, the operation of removing an
+element must be separated into 2 steps:
+
+Delete: in this step, the writer removes the reference to the element from
+the data structure but does not return the associated memory to the
+allocator. This will ensure that new readers will not get a reference to
+the removed element. Removing the reference is an atomic operation.
+
+Free (Reclaim): in this step, the writer returns the memory to the
+memory allocator, only after knowing that all the readers have stopped
+referencing the deleted element.
+
+This library helps the writer determine when it is safe to free the
+memory.
+
+This library makes use of the thread Quiescent State (QS) mechanism.
+
+What is Quiescent State
+-----------------------
+Quiescent State can be defined as 'any point in the thread execution where the
+thread does not hold a reference to shared memory'. It is up to the application
+to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. When it is
+accessing D1, if the writer has to remove an element from D1, the
+writer cannot free the memory associated with that element immediately.
+The writer can return the memory to the allocator only after the reader
+stops referencing D1. In other words, reader thread RT1 has to enter
+a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, the writer has to
+wait until thread 2 enters a quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+a quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened. So, reader thread 3 will not have a reference to the
+deleted entry.
+
+Note that the critical sections for D2 are quiescent states for D1.
+That is, for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state for Dx.
+
+Since memory is not freed immediately, there might be a need for
+provisioning of additional memory, depending on the application requirements.
+
+Factors affecting RCU mechanism
+---------------------------------
+
+It is important to make sure that this library keeps the overhead of
+identifying the end of the grace period, and the subsequent freeing of
+memory, to a minimum. The following paragraphs explain how the grace
+period and critical sections affect this overhead.
+
+The writer has to poll the readers to identify the end of the grace
+period. Polling introduces memory accesses and wastes CPU cycles. The
+memory is not available for reuse during the grace period. Longer grace
+periods exacerbate these conditions.
+
+The duration of the grace period is proportional to the length of the
+critical sections and the number of reader threads. Keeping the critical
+sections shorter keeps the grace period shorter. However, shorter
+critical sections require additional CPU cycles (due to more frequent
+reporting) in the readers.
+
+Hence, the ideal combination is a short grace period together with large
+critical sections. This library addresses this by allowing the writer to
+do other work without having to block until the readers report their
+quiescent state.
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the start and end of the while(1) loop (where no
+references to shared data structures are kept) act as perfect quiescent
+states. This will combine all the shared data structure accesses into a
+single, large critical section which helps keep the overhead on the
+reader side to a minimum.
+
+DPDK supports the pipeline model of packet processing and service cores.
+In these use cases, a given data structure may not be used by all the
+workers in the application. The writer does not have to wait for all
+the workers to report their quiescent state. To provide the required
+flexibility, this library has a concept of QS variable. The application
+can create one QS variable per data structure to help it track the
+end of grace period for each data structure. This helps keep the grace
+period to a minimum.
+
+How to use this library
+-----------------------
+
+The application must allocate memory and initialize a QS variable.
+
+The application can call ``rte_rcu_qsbr_get_memsize`` to calculate the
+size of memory to allocate. This API takes the maximum number of reader
+threads, using this variable, as a parameter. Currently, a maximum of
+1024 threads is supported.
+
+Further, the application can initialize a QS variable using the API
+``rte_rcu_qsbr_init``.
+
+Each reader thread is assumed to have a unique thread ID. Currently, the
+management of the thread ID (for ex: allocation/free) is left to the
+application. The thread ID should be in the range of 0 to (maximum
+number of threads provided while creating the QS variable - 1).
+The application could also use lcore_id as the thread ID where applicable.
+
+``rte_rcu_qsbr_thread_register`` API will register a reader thread
+to report its quiescent state. This can be called from a reader thread.
+A control plane thread can also call this on behalf of a reader thread.
+The reader thread must call ``rte_rcu_qsbr_thread_online`` API to start
+reporting its quiescent state.
+
+Some of the use cases might require the reader threads to make
+blocking API calls (for ex: while using eventdev APIs). The writer thread
+should not wait for such reader threads to enter quiescent state.
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` API, before calling
+blocking APIs. It can call ``rte_rcu_qsbr_thread_online`` API once the blocking
+API call returns.
+
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the API ``rte_rcu_qsbr_start``. It is possible for multiple
+writer threads to query the quiescent state status simultaneously. Hence,
+``rte_rcu_qsbr_start`` returns a token to each caller.
+
+The writer thread must call ``rte_rcu_qsbr_check`` API with the token to
+get the current quiescent state status. An option to block until all the
+reader threads enter the quiescent state is provided. If this API indicates
+that
+all the reader threads have entered the quiescent state, the application
+can free the deleted entry.
+
+The APIs ``rte_rcu_qsbr_start`` and ``rte_rcu_qsbr_check`` are lock free.
+Hence, they can be called concurrently from multiple writers even while
+running as worker threads.
+
+The separation of triggering the reporting from querying the status provides
+the writer threads flexibility to do useful work instead of blocking for the
+reader threads to enter the quiescent state or go offline. This reduces the
+memory accesses due to continuous polling for the status.
+
+``rte_rcu_qsbr_synchronize`` API combines the functionality of
+``rte_rcu_qsbr_start`` and blocking ``rte_rcu_qsbr_check`` into a single API.
+This API triggers the reader threads to report their quiescent state and
+polls until all the readers enter the quiescent state or go offline. This
+API does not allow the writer to do useful work while waiting and
+introduces additional memory accesses due to continuous polling.
+
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` and
+``rte_rcu_qsbr_thread_unregister`` APIs to remove itself from reporting its
+quiescent state. The ``rte_rcu_qsbr_check`` API will not wait for this reader
+thread to report the quiescent state status anymore.
+
+The reader threads should call ``rte_rcu_qsbr_quiescent`` API to indicate that
+they entered a quiescent state. This API checks if a writer has triggered a
+quiescent state query and updates the state accordingly.
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-01 17:11 ` [dpdk-dev] [PATCH v3 1/3] rcu: " Honnappa Nagarahalli
2019-04-01 17:11 ` Honnappa Nagarahalli
@ 2019-04-02 10:22 ` Ananyev, Konstantin
2019-04-02 10:22 ` Ananyev, Konstantin
2019-04-02 10:53 ` Ananyev, Konstantin
1 sibling, 2 replies; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-04-02 10:22 UTC (permalink / raw)
To: Honnappa Nagarahalli, stephen, paulmck, Kovacevic, Marko, dev
Cc: gavin.hu, dharmik.thakkar, malvika.gupta
> Add RCU library supporting quiescent state based memory reclamation method.
> This library helps identify the quiescent state of the reader threads so
> that the writers can free the memory associated with the lock less data
> structures.
>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Steve Capper <steve.capper@arm.com>
> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> ---
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-02 10:22 ` Ananyev, Konstantin
2019-04-02 10:22 ` Ananyev, Konstantin
@ 2019-04-02 10:53 ` Ananyev, Konstantin
2019-04-02 10:53 ` Ananyev, Konstantin
1 sibling, 1 reply; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-04-02 10:53 UTC (permalink / raw)
To: Ananyev, Konstantin, Honnappa Nagarahalli, stephen, paulmck,
Kovacevic, Marko, dev
Cc: gavin.hu, dharmik.thakkar, malvika.gupta
>
> > Add RCU library supporting quiescent state based memory reclamation method.
> > This library helps identify the quiescent state of the reader threads so
> > that the writers can free the memory associated with the lock less data
> > structures.
> >
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > ---
> > --
>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>
> > 2.17.1
Actually one small thing:
while doing make all seeing the following error after patch #2:
In file included from /local/kananye1/dpdk.rcu1/app/test/test_rcu_qsbr_perf.c:8:0:
/local/kananye1/dpdk.rcu1/x86_64-native-linuxapp-gcc-aesmb/include/rte_rcu_qsbr.h: In function 'rte_rcu_qsbr_thread_online':
/local/kananye1/dpdk.rcu1/x86_64-native-linuxapp-gcc-aesmb/include/rte_rcu_qsbr.h:235:2: error: implicit declaration of function 'rte_smp_mb' [-Werror=implicit-function-declaration]
rte_smp_mb();
^~~~~~~~~~
Fixed by
--- a/lib/librte_rcu/rte_rcu_qsbr.h
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -31,6 +31,7 @@ extern "C" {
#include <rte_memory.h>
#include <rte_lcore.h>
#include <rte_debug.h>
+#include <rte_atomic.h>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v3 2/3] test/rcu_qsbr: add API and functional tests
2019-04-01 17:11 ` [dpdk-dev] [PATCH v3 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-04-01 17:11 ` Honnappa Nagarahalli
@ 2019-04-02 10:55 ` Ananyev, Konstantin
2019-04-02 10:55 ` Ananyev, Konstantin
1 sibling, 1 reply; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-04-02 10:55 UTC (permalink / raw)
To: Honnappa Nagarahalli, stephen, paulmck, Kovacevic, Marko, dev
Cc: gavin.hu, dharmik.thakkar, malvika.gupta
> -----Original Message-----
> From: Honnappa Nagarahalli [mailto:honnappa.nagarahalli@arm.com]
> Sent: Monday, April 1, 2019 6:11 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; stephen@networkplumber.org; paulmck@linux.ibm.com; Kovacevic, Marko
> <marko.kovacevic@intel.com>; dev@dpdk.org
> Cc: honnappa.nagarahalli@arm.com; gavin.hu@arm.com; dharmik.thakkar@arm.com; malvika.gupta@arm.com
> Subject: [PATCH v3 2/3] test/rcu_qsbr: add API and functional tests
>
> From: Dharmik Thakkar <dharmik.thakkar@arm.com>
>
> Add API positive/negative test cases, functional tests and
> performance tests.
>
> Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
> Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> ---
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> --
> 2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v4 0/3] lib/rcu: add RCU library supporting QSBR mechanism
2018-11-22 3:30 [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library Honnappa Nagarahalli
` (8 preceding siblings ...)
2019-04-01 17:10 ` [dpdk-dev] [PATCH v3 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
@ 2019-04-10 11:20 ` Honnappa Nagarahalli
2019-04-10 11:20 ` Honnappa Nagarahalli
` (3 more replies)
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
` (4 subsequent siblings)
14 siblings, 4 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-10 11:20 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Lock-less data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
(for ex: real-time applications).
In the following paragraphs, the term 'memory' refers to memory allocated
by typical APIs like malloc, or anything representative of memory,
for ex: an index into a free-element array.
Since these data structures are lock-less, writers and readers access
them concurrently. Hence, while removing an element from a data
structure, a writer cannot return the memory to the allocator without
knowing that no reader is still referencing that element/memory.
Hence, it is required to separate the operation of removing an element
into two steps:
Delete: in this step, the writer removes the reference to the element from
the data structure but does not return the associated memory to the
allocator. This will ensure that new readers will not get a reference to
the removed element. Removing the reference is an atomic operation.
Free(Reclaim): in this step, the writer returns the memory to the
memory allocator, only after knowing that all the readers have stopped
referencing the deleted element.
This library helps the writer determine when it is safe to free the
memory.
This library makes use of thread Quiescent State (QS). QS can be
defined as 'any point in the thread execution where the thread does
not hold a reference to shared memory'. It is up to the application to
determine its quiescent state. Let us consider the following diagram:
Time -------------------------------------------------->
| |
RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
| |
RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
| |
RT3 $++++****D1****+++***|D2***|++++++**D2*****++++$
| |
|<--->|
Del | Free
|
Cannot free memory
during this period
(Grace Period)
RTx - Reader thread
< and > - Start and end of while(1) loop
***Dx*** - Reader thread is accessing the shared data structure Dx.
i.e. critical section.
+++ - Reader thread is not accessing any shared data structure.
i.e. non critical section or quiescent state.
Del - Point in time when the reference to the entry is removed using
atomic operation.
Free - Point in time when the writer can free the entry.
Grace Period - Time duration between Del and Free, during which memory cannot
be freed.
As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
accessing D2, if the writer has to remove an element from D2, the
writer cannot free the memory associated with that element immediately.
The writer can return the memory to the allocator only after the reader
stops referencing D2. In other words, reader thread RT1 has to enter
a quiescent state.
Similarly, since thread RT3 is also accessing D2, the writer has to wait
till RT3 enters a quiescent state as well.
However, the writer does not need to wait for RT2 to enter quiescent state.
Thread RT2 was not accessing D2 when the delete operation happened.
So, RT2 will not get a reference to the deleted entry.
Note that the critical sections for D2 and D3 are quiescent states
for D1; i.e. for a given data structure Dx, any point in the thread's
execution that does not reference Dx is a quiescent state.
Since memory is not freed immediately, there might be a need for
provisioning of additional memory, depending on the application requirements.
It is important to make sure that this library keeps the overhead of
identifying the end of the grace period and the subsequent freeing of
memory to a minimum. The following paragraphs explain how the grace period
and critical section affect this overhead.
The writer has to poll the readers to identify the end of the grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during the grace period. Longer grace periods
exacerbate these conditions.
The duration of the grace period is proportional to the length of the
critical sections and the number of reader threads. Keeping the critical
sections smaller will keep the grace period smaller. However, smaller
critical sections require additional CPU cycles (due to additional
reporting) in the readers.
Hence, we want the combination of a small grace period and large critical
sections. This library addresses this by allowing the writer to do
other work without having to block till the readers report their quiescent
state.
For DPDK applications, the start and end of the while(1) loop (where no
references to shared data structures are kept) act as perfect quiescent
states. This combines all the shared data structure accesses into a
single, large critical section, which helps keep the overhead on the
reader side to a minimum.
DPDK supports the pipeline model of packet processing and service cores.
In these use cases, a given data structure may not be used by all the
workers in the application. The writer does not have to wait for all
the workers to report their quiescent state. To provide the required
flexibility, this library has a concept of QS variable. The application
can create one QS variable per data structure to help it track the
end of grace period for each data structure. This helps keep the grace
period to a minimum.
The application has to allocate memory and initialize a QS variable.
The application can call rte_rcu_qsbr_get_memsize to calculate the size
of memory to allocate. This API takes the maximum number of reader threads
that will use this variable as a parameter. Currently, a maximum of 1024
threads is supported.
Further, the application can initialize a QS variable using the API
rte_rcu_qsbr_init.
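The sizing arithmetic behind rte_rcu_qsbr_get_memsize (header, per-thread
quiescent state counters, registered-thread-ID bitmap, cache-line rounding,
as in the patch below) can be sketched self-containedly. The struct layouts
and constants here are illustrative stand-ins, not DPDK's actual definitions:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative stand-ins for DPDK's internal layouts. */
struct qsbr_cnt { uint64_t cnt; };                  /* per-thread QS counter */
struct qsbr     { uint64_t token; uint32_t max_threads; };

#define CACHE_LINE     64
#define THRID_ELM_BITS 64   /* thread IDs tracked per uint64_t bitmap word */

/* Bytes needed for a QS variable tracking max_threads readers:
 * header + per-thread counter array + thread-ID bitmap,
 * rounded up to a cache line (cf. rte_rcu_qsbr_get_memsize). */
static size_t
qsbr_memsize(uint32_t max_threads)
{
	size_t sz = sizeof(struct qsbr);

	/* Quiescent state counter array, one entry per reader thread. */
	sz += sizeof(struct qsbr_cnt) * max_threads;

	/* Registered thread ID bitmap, one bit per reader thread. */
	sz += ((max_threads + THRID_ELM_BITS - 1) / THRID_ELM_BITS)
	      * sizeof(uint64_t);

	return (sz + CACHE_LINE - 1) / CACHE_LINE * CACHE_LINE;
}
```

The returned size would then be passed to an allocator (e.g. rte_zmalloc in
DPDK) before initializing the variable with rte_rcu_qsbr_init.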
Each reader thread is assumed to have a unique thread ID. Currently, the
management of thread IDs (for ex: allocation/free) is left to the
application. The thread ID should be in the range of 0 to (maximum number
of threads - 1) provided while creating the QS variable.
The application could also use lcore_id as the thread ID where applicable.
The rte_rcu_qsbr_thread_register API registers a reader thread
to report its quiescent state. It can be called from a reader thread,
or a control plane thread can call it on behalf of a reader thread.
The reader thread must call rte_rcu_qsbr_thread_online API to start reporting
its quiescent state.
Some of the use cases might require the reader threads to make
blocking API calls (for ex: while using eventdev APIs). The writer thread
should not wait for such reader threads to enter quiescent state.
The reader thread must call rte_rcu_qsbr_thread_offline API, before calling
blocking APIs. It can call rte_rcu_qsbr_thread_online API once the blocking
API call returns.
The writer thread can trigger the reader threads to report their quiescent
state by calling the API rte_rcu_qsbr_start. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
rte_rcu_qsbr_start returns a token to each caller.
The writer thread has to call the rte_rcu_qsbr_check API with the token to
get the current quiescent state status. An option to block till all the
reader threads enter the quiescent state is provided. If this API indicates
that all the reader threads have entered the quiescent state, the
application can free the deleted entry.
The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock free. Hence, they
can be called concurrently from multiple writers even while running
as worker threads.
The separation of triggering the reporting from querying the status gives
the writer threads the flexibility to do useful work instead of blocking
until the reader threads enter the quiescent state or go offline. This
reduces the memory accesses due to continuous polling for the status.
rte_rcu_qsbr_synchronize API combines the functionality of rte_rcu_qsbr_start
and blocking rte_rcu_qsbr_check into a single API. This API triggers the reader
threads to report their quiescent state and polls till all the readers enter
the quiescent state or go offline. This API does not allow the writer to
do useful work while waiting and also introduces additional memory accesses
due to continuous polling.
The reader thread must call rte_rcu_qsbr_thread_offline and
rte_rcu_qsbr_thread_unregister APIs to remove itself from reporting its
quiescent state. The rte_rcu_qsbr_check API will not wait for this reader
thread to report the quiescent state status anymore.
The reader threads should call the rte_rcu_qsbr_update API to indicate that
they entered a quiescent state. This API checks if a writer has triggered a
quiescent state query and updates the state accordingly.
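The token-based protocol described above can be sketched, independently of
the DPDK API, with C11 atomics. All names here are illustrative stand-ins
for the real APIs (rte_rcu_qsbr_start, rte_rcu_qsbr_check,
rte_rcu_qsbr_update); per-thread cache-line alignment, the online/offline
states, and thread registration are omitted for brevity:

```c
#include <stdatomic.h>
#include <stdint.h>

#define MAX_THREADS 2

/* Global token and one quiescent state counter per reader thread. */
static atomic_uint_fast64_t token = 1;
static atomic_uint_fast64_t reader_cnt[MAX_THREADS];

/* Reader: report a quiescent state by copying the current token
 * into its own counter (cf. rte_rcu_qsbr_update). */
static void
reader_quiescent(unsigned int id)
{
	uint64_t t = atomic_load_explicit(&token, memory_order_acquire);
	atomic_store_explicit(&reader_cnt[id], t, memory_order_release);
}

/* Writer: after deleting an element, bump the token and return the
 * value to wait on (cf. rte_rcu_qsbr_start). */
static uint64_t
writer_start(void)
{
	return atomic_fetch_add_explicit(&token, 1,
					 memory_order_release) + 1;
}

/* Writer: non-blocking check whether every reader has reported a
 * quiescent state at or after the token, i.e. whether the grace
 * period is over (cf. rte_rcu_qsbr_check with wait == false). */
static int
writer_check(uint64_t t)
{
	for (unsigned int i = 0; i < MAX_THREADS; i++)
		if (atomic_load_explicit(&reader_cnt[i],
					 memory_order_acquire) < t)
			return 0; /* grace period not over yet */
	return 1; /* safe to free the deleted element */
}
```

A writer would call writer_start after the atomic delete, do other useful
work, and only free the memory once writer_check returns true; readers call
reader_quiescent once per while(1) loop iteration.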
Patch v4:
1) Library changes
a) Fixed the compilation issue on x86 (Konstantin)
b) Rebased with latest master
Patch v3:
1) Library changes
a) Moved the registered thread ID array to the end of the
structure (Konstantin)
b) Removed the compile time constant RTE_RCU_MAX_THREADS
c) Added code to keep track of registered number of threads
Patch v2:
1) Library changes
a) Corrected the RTE_ASSERT checks (Konstantin)
b) Replaced RTE_ASSERT with 'if' checks for non-datapath APIs (Konstantin)
c) Made rte_rcu_qsbr_thread_register/unregister non-datapath critical APIs
d) Renamed rte_rcu_qsbr_update to rte_rcu_qsbr_quiescent (Ola)
e) Used rte_smp_mb() in rte_rcu_qsbr_thread_online API for x86 (Konstantin)
f) Removed the macro to access the thread QS counters (Konstantin)
2) Test cases
a) Added additional test cases for removing RTE_ASSERT
3) Documentation
a) Changed the figure to make it bigger (Marko)
b) Spelling and format corrections (Marko)
Patch v1:
1) Library changes
a) Changed the maximum number of reader threads to 1024
b) Renamed rte_rcu_qsbr_register/unregister_thread to
rte_rcu_qsbr_thread_register/unregister
c) Added rte_rcu_qsbr_thread_online/offline API. These are optimized
version of rte_rcu_qsbr_thread_register/unregister API. These
also provide the flexibility for performance when the requested
maximum number of threads is higher than the current number of
threads.
d) Corrected memory orderings in rte_rcu_qsbr_update
e) Changed the signature of rte_rcu_qsbr_start API to return the token
f) Changed the signature of rte_rcu_qsbr_start API to not take the
expected number of QS states to wait.
g) Added debug logs
h) Added API and programmer guide documentation.
RFC v3:
1) Library changes
a) Rebased to latest master
b) Added new API rte_rcu_qsbr_get_memsize
c) Add support for memory allocation for QSBR variable (Konstantin)
d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
2) Testcase changes
a) Separated stress tests into a performance test case file
b) Added performance statistics
RFC v2:
1) Cover letter changes
a) Explain the parameters that affect the overhead of using RCU
and their effects
b) Explain how this library addresses these effects to keep
the overhead to minimum
2) Library changes
a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
b) Simplify the code/remove APIs to keep this library inline with
other synchronisation mechanisms like locks (Konstantin)
c) Change the design to support more than 64 threads (Konstantin)
d) Fixed version map to remove static inline functions
3) Testcase changes
a) Add boundary and additional functional test cases
b) Add stress test cases (Paul E. McKenney)
Dharmik Thakkar (1):
test/rcu_qsbr: add API and functional tests
Honnappa Nagarahalli (2):
rcu: add RCU library supporting QSBR mechanism
doc/rcu: add lib_rcu documentation
MAINTAINERS | 5 +
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 5 +
app/test/test_rcu_qsbr.c | 1004 +++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 615 ++++++++++
config/common_base | 6 +
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 +++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 179 +++
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 237 ++++
lib/librte_rcu/rte_rcu_qsbr.h | 554 +++++++++
lib/librte_rcu/rte_rcu_version.map | 11 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
20 files changed, 3175 insertions(+), 2 deletions(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v4 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-04-10 11:20 ` Honnappa Nagarahalli
@ 2019-04-10 11:20 ` Honnappa Nagarahalli
2019-04-10 11:20 ` Honnappa Nagarahalli
2019-04-10 18:14 ` Paul E. McKenney
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
3 siblings, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-10 11:20 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add RCU library supporting quiescent state based memory reclamation method.
This library helps identify the quiescent state of the reader threads so
that the writers can free the memory associated with the lock less data
structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
MAINTAINERS | 5 +
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 ++
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 237 ++++++++++++
lib/librte_rcu/rte_rcu_qsbr.h | 554 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 11 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
10 files changed, 845 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 9774344dd..6e9766eed 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1267,6 +1267,11 @@ F: examples/bpf/
F: app/test/test_bpf.c
F: doc/guides/prog_guide/bpf_lib.rst
+RCU - EXPERIMENTAL
+M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+F: lib/librte_rcu/
+F: doc/guides/prog_guide/rcu_lib.rst
+
Test Applications
-----------------
diff --git a/config/common_base b/config/common_base
index 8da08105b..ad70c79e1 100644
--- a/config/common_base
+++ b/config/common_base
@@ -829,6 +829,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_rcu
+#
+CONFIG_RTE_LIBRTE_RCU=y
+CONFIG_RTE_LIBRTE_RCU_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index 26021d0c0..791e0d991 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
+DEPDIRS-librte_rcu := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
new file mode 100644
index 000000000..6aa677bd1
--- /dev/null
+++ b/lib/librte_rcu/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_rcu.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_rcu_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
new file mode 100644
index 000000000..c009ae4b7
--- /dev/null
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..53d08446a
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,237 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Get the memory size of QSBR variable */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ size_t sz;
+
+ if (max_threads == 0) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid max_threads %u\n",
+ __func__, max_threads);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ /* Add the size of the registered thread ID bitmap array */
+ sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
+
+ return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+}
+
+/* Initialize a quiescent state variable */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
+{
+ size_t sz;
+
+ if (v == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = rte_rcu_qsbr_get_memsize(max_threads);
+ if (sz == 1)
+ return 1;
+
+ /* Set all the threads to offline */
+ memset(v, 0, sz);
+ v->max_threads = max_threads;
+ v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->token = RTE_QSBR_CNT_INIT;
+
+ return 0;
+}
+
+/* Register a reader thread to report its quiescent state
+ * on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already registered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (old_bmap & 1UL << id)
+ return 0;
+
+ do {
+ new_bmap = old_bmap | (1UL << id);
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_add(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (old_bmap & (1UL << id))
+ /* Someone else registered this thread.
+ * Counter should not be incremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already unregistered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (!(old_bmap & (1UL << id)))
+ return 0;
+
+ do {
+ new_bmap = old_bmap & ~(1UL << id);
+ /* Make sure any loads of the shared data structure are
+ * completed before removal of the thread from the list of
+ * reporting threads.
+ */
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_sub(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (!(old_bmap & (1UL << id)))
+ /* Someone else unregistered this thread.
+ * Counter should not be decremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t;
+
+ if (v == NULL || f == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %lu\n",
+ rte_rcu_qsbr_get_memsize(v->max_threads));
+ fprintf(f, " Given # max threads = %u\n", v->max_threads);
+ fprintf(f, " Current # threads = %u\n", v->num_threads);
+
+ fprintf(f, " Registered thread ID mask = 0x");
+ for (i = 0; i < v->num_elems; i++)
+ fprintf(f, "%lx", __atomic_load_n(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE));
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %lu\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %lu\n",
+ (i << RTE_QSBR_THRID_INDEX_SHIFT) + t,
+ __atomic_load_n(
+ &v->qsbr_cnt[(i << RTE_QSBR_THRID_INDEX_SHIFT) + t].cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ return 0;
+}
+
+int rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rcu_log_type = rte_log_register("lib.rcu");
+ if (rcu_log_type >= 0)
+ rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..ff696aeab
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,554 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+#include <rte_atomic.h>
+
+extern int rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define RCU_DP_LOG(level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored as a bitmap in an array of 64b
+ * elements. A given thread id has to be converted to an index into
+ * the array and to a bit position within that array element.
+ */
+#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
+ RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
+#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
+ ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
+#define RTE_QSBR_THRID_INDEX_SHIFT 6
+#define RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline */
+} __rte_cache_aligned;
+
+#define RTE_QSBR_CNT_THR_OFFLINE 0
+#define RTE_QSBR_CNT_INIT 1
+
+/* RTE Quiescent State variable structure.
+ * This structure has two elements that vary in size based on the
+ * 'max_threads' parameter.
+ * 1) Quiescent state counter array
+ * 2) Registered thread ID array
+ */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple concurrent quiescent state queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t num_threads;
+ /**< Number of threads currently using this QS variable */
+ uint32_t max_threads;
+ /**< Maximum number of threads using this QS variable */
+
+ struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
+ /**< Quiescent state counter array of 'max_threads' elements */
+
+ /**< Registered thread IDs are stored in a bitmap array,
+ * after the quiescent state counter array.
+ */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * On success - size of memory in bytes required for this QS variable.
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0 or 'v' is NULL.
+ *
+ */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable. thread_id is a value between 0 and (max_threads - 1).
+ * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing quiescent state queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread, to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_quiescent. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_online API, after the blocking
+ * function call returns, to ensure that rte_rcu_qsbr_check API
+ * waits for the reader thread to update its quiescent state.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that
+ * the following will not move down after the load of any shared
+ * data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+#ifdef RTE_ARCH_X86_64
+ /* rte_smp_mb() for x86 is lighter */
+ rte_smp_mb();
+#else
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed, i.e. no load of the
+ * data structure can move after this store.
+ */
+
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELEASE);
+
+ RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
+ __func__, t, thread_id);
+}
+
+/* Check the quiescent state counter for registered threads only, assuming
+ * that not all threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t c;
+ uint64_t *reg_thread_id;
+
+ for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
+ i < v->num_elems;
+ i++, reg_thread_id++) {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
+ __func__, t, wait, bmap, id + j);
+ c = __atomic_load_n(
+ &v->qsbr_cnt[id + j].cnt,
+ __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, id+j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(reg_thread_id,
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+ }
+
+ return 1;
+}
+
+/* Check the quiescent state counter for all threads, assuming that
+ * all the threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i;
+ struct rte_rcu_qsbr_cnt *cnt;
+ uint64_t c;
+
+ for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Thread ID = %d",
+ __func__, t, wait, i);
+ while (1) {
+ c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, i);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
+ break;
+
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ }
+ }
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same QS variable, it must update the quiescent state status, before
+ * calling this API.
+ *
+ * 2) In addition, while calling from multiple threads, only
+ * one of those threads can be reporting the quiescent state status
+ * on a given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state referenced by token 't'.
+ * @return
+ * - 0 if all reader threads have NOT passed through specified number
+ * of quiescent states.
+ * - 1 if all reader threads have passed through specified number
+ * of quiescent states.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ RTE_ASSERT(v != NULL);
+
+ if (likely(v->num_threads == v->max_threads))
+ return __rcu_qsbr_check_all(v, t, wait);
+ else
+ return __rcu_qsbr_check_selective(v, t, wait);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, only one of
+ * those threads can be reporting the quiescent state status on a
+ * given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report quiescent state
+ * on this QS variable (i.e. the calling thread is also part of the
+ * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread has readside critical section,
+ * update its quiescent state status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_quiescent(v, thread_id);
+
+ /* Wait for other readers to enter quiescent state */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - NULL parameters are passed
+ */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..ad8cb517c
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,11 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_thread_register;
+ rte_rcu_qsbr_thread_unregister;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 595314d7d..67be10659 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'stack', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 7d994bece..e93cc366d 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
* [dpdk-dev] [PATCH v4 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 1/3] rcu: " Honnappa Nagarahalli
@ 2019-04-10 11:20 ` Honnappa Nagarahalli
2019-04-10 18:14 ` Paul E. McKenney
1 sibling, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-10 11:20 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add an RCU library supporting the quiescent state based memory reclamation
method. This library helps identify the quiescent state of the reader
threads so that the writers can free the memory associated with lock-less
data structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
MAINTAINERS | 5 +
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 ++
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 237 ++++++++++++
lib/librte_rcu/rte_rcu_qsbr.h | 554 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 11 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
10 files changed, 845 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 9774344dd..6e9766eed 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1267,6 +1267,11 @@ F: examples/bpf/
F: app/test/test_bpf.c
F: doc/guides/prog_guide/bpf_lib.rst
+RCU - EXPERIMENTAL
+M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+F: lib/librte_rcu/
+F: doc/guides/prog_guide/rcu_lib.rst
+
Test Applications
-----------------
diff --git a/config/common_base b/config/common_base
index 8da08105b..ad70c79e1 100644
--- a/config/common_base
+++ b/config/common_base
@@ -829,6 +829,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_rcu
+#
+CONFIG_RTE_LIBRTE_RCU=y
+CONFIG_RTE_LIBRTE_RCU_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index 26021d0c0..791e0d991 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
+DEPDIRS-librte_rcu := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
new file mode 100644
index 000000000..6aa677bd1
--- /dev/null
+++ b/lib/librte_rcu/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_rcu.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_rcu_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
new file mode 100644
index 000000000..c009ae4b7
--- /dev/null
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..53d08446a
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,237 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Get the memory size of QSBR variable */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ size_t sz;
+
+ if (max_threads == 0) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid max_threads %u\n",
+ __func__, max_threads);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ /* Add the size of the registered thread ID bitmap array */
+ sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
+
+ return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+}
+
+/* Initialize a quiescent state variable */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
+{
+ size_t sz;
+
+ if (v == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = rte_rcu_qsbr_get_memsize(max_threads);
+ if (sz == 1)
+ return 1;
+
+ /* Set all the threads to offline */
+ memset(v, 0, sz);
+ v->max_threads = max_threads;
+ v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->token = RTE_QSBR_CNT_INIT;
+
+ return 0;
+}
+
+/* Register a reader thread to report its quiescent state
+ * on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already registered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (old_bmap & 1UL << id)
+ return 0;
+
+ do {
+ new_bmap = old_bmap | (1UL << id);
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_add(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (old_bmap & (1UL << id))
+ /* Someone else registered this thread.
+ * Counter should not be incremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already unregistered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (!(old_bmap & (1UL << id)))
+ return 0;
+
+ do {
+ new_bmap = old_bmap & ~(1UL << id);
+ /* Make sure any loads of the shared data structure are
+ * completed before removal of the thread from the list of
+ * reporting threads.
+ */
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_sub(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (!(old_bmap & (1UL << id)))
+ /* Someone else unregistered this thread.
+ * Counter should not be decremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t;
+
+ if (v == NULL || f == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %lu\n",
+ rte_rcu_qsbr_get_memsize(v->max_threads));
+ fprintf(f, " Given # max threads = %u\n", v->max_threads);
+ fprintf(f, " Current # threads = %u\n", v->num_threads);
+
+ fprintf(f, " Registered thread ID mask = 0x");
+ for (i = 0; i < v->num_elems; i++)
+ fprintf(f, "%lx", __atomic_load_n(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE));
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %lu\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %lu\n", t,
+ __atomic_load_n(
+ &v->qsbr_cnt[i].cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ return 0;
+}
+
+int rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rcu_log_type = rte_log_register("lib.rcu");
+ if (rcu_log_type >= 0)
+ rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..ff696aeab
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,554 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+#include <rte_atomic.h>
+
+extern int rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define RCU_DP_LOG(level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored as a bitmap in an array of 64b elements.
+ * A given thread ID is converted to an index into this array and a bit
+ * position within the indexed array element.
+ */
+#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
+ RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
+#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
+ ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
+#define RTE_QSBR_THRID_INDEX_SHIFT 6
+#define RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline */
+} __rte_cache_aligned;
+
+#define RTE_QSBR_CNT_THR_OFFLINE 0
+#define RTE_QSBR_CNT_INIT 1
+
+/* RTE Quiescent State variable structure.
+ * This structure has two elements that vary in size based on the
+ * 'max_threads' parameter.
+ * 1) Quiescent state counter array
+ * 2) Register thread ID array
+ */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple concurrent quiescent state queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t num_threads;
+ /**< Number of threads currently using this QS variable */
+ uint32_t max_threads;
+ /**< Maximum number of threads using this QS variable */
+
+ struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
+ /**< Quiescent state counter array of 'max_threads' elements */
+
+ /**< Registered thread IDs are stored in a bitmap array,
+ * after the quiescent state counter array.
+ */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * On success - size of memory in bytes required for this QS variable.
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0 or 'v' is NULL.
+ *
+ */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable. thread_id is a value between 0 and (max_threads - 1).
+ * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread, from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing quiescent state queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread, to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_quiescent. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_online API, after the blocking
+ * function call returns, to ensure that rte_rcu_qsbr_check API
+ * waits for the reader thread to update its quiescent state.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that this load,
+ * and the counter store below, do not move past subsequent loads
+ * of any shared data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+#ifdef RTE_ARCH_X86_64
+ /* rte_smp_mb() for x86 is lighter */
+ rte_smp_mb();
+#else
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed, i.e. no load of the
+ * data structure can move after this store.
+ */
+
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELEASE);
+
+ RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
+ __func__, t, thread_id);
+}
+
+/* Check the quiescent state counter for registered threads only, assuming
+ * that not all threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t c;
+ uint64_t *reg_thread_id;
+
+ for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
+ i < v->num_elems;
+ i++, reg_thread_id++) {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
+ __func__, t, wait, bmap, id + j);
+ c = __atomic_load_n(
+ &v->qsbr_cnt[id + j].cnt,
+ __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, id+j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(reg_thread_id,
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+ }
+
+ return 1;
+}
+
+/* Check the quiescent state counter for all threads, assuming that
+ * all the threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i;
+ struct rte_rcu_qsbr_cnt *cnt;
+ uint64_t c;
+
+ for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Thread ID = %d",
+ __func__, t, wait, i);
+ while (1) {
+ c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, i);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
+ break;
+
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ }
+ }
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same QS variable, it must update the quiescent state status, before
+ * calling this API.
+ *
+ * 2) In addition, while calling from multiple threads, only
+ * one of those threads can be reporting the quiescent state status
+ * on a given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state referenced by token 't'.
+ * @return
+ * - 0 if all reader threads have NOT passed through specified number
+ * of quiescent states.
+ * - 1 if all reader threads have passed through specified number
+ * of quiescent states.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ RTE_ASSERT(v != NULL);
+
+ if (likely(v->num_threads == v->max_threads))
+ return __rcu_qsbr_check_all(v, t, wait);
+ else
+ return __rcu_qsbr_check_selective(v, t, wait);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, only one of
+ * those threads can be reporting the quiescent state status on a
+ * given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report quiescent state
+ * on this QS variable (i.e. the calling thread is also part of the
+ * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread has readside critical section,
+ * update its quiescent state status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_quiescent(v, thread_id);
+
+ /* Wait for other readers to enter quiescent state */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - NULL parameters are passed
+ */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..ad8cb517c
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,11 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_thread_register;
+ rte_rcu_qsbr_thread_unregister;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 595314d7d..67be10659 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'stack', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 7d994bece..e93cc366d 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
* [dpdk-dev] [PATCH v4 2/3] test/rcu_qsbr: add API and functional tests
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-04-10 11:20 ` Honnappa Nagarahalli
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 1/3] rcu: " Honnappa Nagarahalli
@ 2019-04-10 11:20 ` Honnappa Nagarahalli
2019-04-10 11:20 ` Honnappa Nagarahalli
2019-04-10 15:26 ` Stephen Hemminger
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
3 siblings, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-10 11:20 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 5 +
app/test/test_rcu_qsbr.c | 1004 +++++++++++++++++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 615 ++++++++++++++++++++
5 files changed, 1638 insertions(+)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index b28bed2d4..10f551ecb 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -217,6 +217,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index db2527489..5f259e838 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -700,6 +700,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 867cc5863..1a2ee18a5 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -110,6 +110,8 @@ test_sources = files('commands.c',
'test_timer_perf.c',
'test_timer_racecond.c',
'test_ticketlock.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -137,6 +139,7 @@ test_deps = ['acl',
'ring',
'stack',
- 'timer'
+ 'timer',
+ 'rcu'
]
# All test cases in fast_parallel_test_names list are parallel
@@ -175,6 +178,7 @@ fast_parallel_test_names = [
'ring_autotest',
'ring_pmd_autotest',
'rwlock_autotest',
+ 'rcu_qsbr_autotest',
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
@@ -242,6 +246,7 @@ perf_test_names = [
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
'stack_nb_perf_autotest',
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..8156aa56a
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,1004 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_get_memsize: Returns the size of memory, in bytes, required
+ * for a QS variable supporting the given maximum number of threads.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
+ printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(0);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1), "Get Memsize for 0 threads");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with cache line size of 64B - 8384
+ * for machines with cache line size of 128 - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_init: Initialize a QSBR variable.
+ */
+static int
+test_rcu_qsbr_init(void)
+{
+ int r;
+
+ printf("\nTest rte_rcu_qsbr_init()\n");
+
+ r = rte_rcu_qsbr_init(NULL, TEST_RCU_MAX_LCORE);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "NULL variable");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread, to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ int ret;
+
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register valid thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Valid thread id");
+
+ /* Re-registering should not return error */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already registered thread id");
+
+ /* Register valid thread id - max allowed thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], TEST_RCU_MAX_LCORE - 1);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Max thread id");
+
+ ret = rte_rcu_qsbr_thread_register(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread, from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ ret = rte_rcu_qsbr_thread_unregister(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "disabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /* Test with enabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "enabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_quiescent(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_quiescent(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the worker threads have entered the
+ * quiescent state since the token was returned by the rte_rcu_qsbr_start API.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
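+
+/*
+ * Illustrative writer-side pattern (a sketch, not part of the test): after
+ * deleting an element from a lock-free structure, the writer obtains a
+ * token and waits for the readers to quiesce before freeing the memory:
+ *
+ *	remove_element(ds, elem);		// delete: new readers can no
+ *						// longer find the element
+ *	token = rte_rcu_qsbr_start(v);
+ *	rte_rcu_qsbr_check(v, token, true);	// blocks until all registered
+ *						// readers report a QS
+ *	free_element(elem);			// free: now safe
+ *
+ * 'remove_element' and 'free_element' are hypothetical application helpers.
+ */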
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait until all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+	/* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
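+
+/*
+ * Note (a sketch, not part of the test): rte_rcu_qsbr_synchronize() is
+ * effectively a blocking convenience wrapper around the two-step API:
+ *
+ *	token = rte_rcu_qsbr_start(v);
+ *	rte_rcu_qsbr_check(v, token, true);
+ *
+ * additionally reporting the caller's own quiescent state when thread_id
+ * is not RTE_QSBR_THRID_INVALID, so that the caller does not wait on itself.
+ */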
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread to
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+	/* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread from
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+	/* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
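+
+/*
+ * Illustrative use of online/offline (a sketch, not part of the test):
+ * a registered reader that is about to block for a long time should go
+ * offline first, so that writers calling rte_rcu_qsbr_check() do not
+ * wait on it:
+ *
+ *	rte_rcu_qsbr_thread_offline(v, thread_id);
+ *	wait_for_work();			// hypothetical blocking call
+ *	rte_rcu_qsbr_thread_online(v, thread_id);
+ *	// ... safe to reference the shared data structure again ...
+ */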
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ /* Negative tests */
+ rte_rcu_qsbr_dump(NULL, t[0]);
+ rte_rcu_qsbr_dump(stdout, NULL);
+ rte_rcu_qsbr_dump(NULL, NULL);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Multi writer, Multiple QS variable, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ writer_done = 0;
+ uint8_t test_cores;
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+	printf("Test: %d writers, %d QSBR variables, simultaneous QSBR queries\n",
+			test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_init() < 0)
+ goto test_fail;
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..a69f827d8
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,615 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static volatile uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+		printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t lcore_id = rte_lcore_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ /* Register for report QS */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+	/* Unregister before exiting to keep the writer from waiting */
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple Readers, Single QS var, Non-Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+	printf("\nPerf Test: %d Readers/1 Writer ('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Single writer, Multiple readers, Single QS variable
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writer, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i;
+ int32_t pos;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, "
+ "Blocking QSBR Check\n", num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+	printf("The following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(register/update/unregister): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret;
+ int32_t pos;
+ writer_done = 0;
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, "
+ "Non-Blocking QSBR check\n", num_cores);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(register/update/unregister): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ free_rcu();
+
+ return 0;
+
+test_fail:
+ free_rcu();
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
* [dpdk-dev] [PATCH v4 2/3] test/rcu_qsbr: add API and functional tests
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-04-10 11:20 ` Honnappa Nagarahalli
2019-04-10 15:26 ` Stephen Hemminger
1 sibling, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-10 11:20 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 5 +
app/test/test_rcu_qsbr.c | 1004 +++++++++++++++++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 615 ++++++++++++++++++++
5 files changed, 1638 insertions(+)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index b28bed2d4..10f551ecb 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -217,6 +217,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index db2527489..5f259e838 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -700,6 +700,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 867cc5863..1a2ee18a5 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -110,6 +110,8 @@ test_sources = files('commands.c',
'test_timer_perf.c',
'test_timer_racecond.c',
'test_ticketlock.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -137,6 +139,7 @@ test_deps = ['acl',
'ring',
'stack',
-	'timer'
+	'timer',
+	'rcu'
]
# All test cases in fast_parallel_test_names list are parallel
@@ -175,6 +178,7 @@ fast_parallel_test_names = [
'ring_autotest',
'ring_pmd_autotest',
'rwlock_autotest',
+ 'rcu_qsbr_autotest',
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
@@ -242,6 +246,7 @@ perf_test_names = [
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
'stack_nb_perf_autotest',
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..8156aa56a
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,1004 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+		printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_get_memsize: Returns the size of memory, in bytes, required
+ * for a QSBR variable supporting the given number of threads.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+	uint32_t sz;
+
+	printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(0);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1), "Get Memsize for 0 threads");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with cache line size of 64B - 8384
+ * for machines with cache line size of 128 - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_init: Initialize a QSBR variable.
+ */
+static int
+test_rcu_qsbr_init(void)
+{
+ int r;
+
+ printf("\nTest rte_rcu_qsbr_init()\n");
+
+ r = rte_rcu_qsbr_init(NULL, TEST_RCU_MAX_LCORE);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "NULL variable");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread, to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ int ret;
+
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register valid thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Valid thread id");
+
+	/* Re-registering should not return an error */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already registered thread id");
+
+ /* Register valid thread id - max allowed thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], TEST_RCU_MAX_LCORE - 1);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Max thread id");
+
+	ret = rte_rcu_qsbr_thread_register(t[0], 100000);
+	TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+				"Invalid thread id");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread, from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+	ret = rte_rcu_qsbr_thread_unregister(t[0], 100000);
+	TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+				"Invalid thread id");
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "disabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /* Test with enabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "enabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_quiescent(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_quiescent(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the worker threads have entered the
+ * quiescent state 'n' number of times. 'n' is provided in the
+ * rte_rcu_qsbr_start API.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait until all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+	/* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread, to
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+	/* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread, from
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+	/* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ /* Negative tests */
+ rte_rcu_qsbr_dump(NULL, t[0]);
+ rte_rcu_qsbr_dump(stdout, NULL);
+ rte_rcu_qsbr_dump(NULL, NULL);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Functional test:
+ * Multiple writers, multiple QS variables, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ writer_done = 0;
+ uint8_t test_cores;
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+	printf("Test: %d writers, %d QSBR variables, simultaneous QSBR queries\n",
+			test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_init() < 0)
+ goto test_fail;
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..a69f827d8
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,615 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Maximum number of lcores supported by the tests */
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static volatile uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceed %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t lcore_id = rte_lcore_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ /* Register for report QS */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ /* Unregister before exiting to prevent the writer from waiting on this thread */
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple readers, Single QS variable, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: %d Readers/1 Writer('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Single writer, Multiple readers, Single QS variable
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writer, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+ /* Wait until all writer threads have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i;
+ int32_t pos;
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, "
+ "Blocking QSBR Check\n", num_cores);
+
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(register/update/unregister): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret;
+ int32_t pos;
+ writer_done = 0;
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, "
+ "Non-Blocking QSBR check\n", num_cores);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(register/update/unregister): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ free_rcu();
+
+ return 0;
+
+test_fail:
+ free_rcu();
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
* [dpdk-dev] [PATCH v4 3/3] doc/rcu: add lib_rcu documentation
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
` (2 preceding siblings ...)
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-04-10 11:20 ` Honnappa Nagarahalli
2019-04-10 11:20 ` Honnappa Nagarahalli
3 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-10 11:20 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 179 ++++++
5 files changed, 692 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index de1e215dd..8f0e84de6 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 7722fc3e9..b9896cb63 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..e7ca1dacb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,509 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em;font-weight:bold}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.75001em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st13 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st14 {fill:#000000;font-family:Century Gothic;font-size:1.99999em}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.99999em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-239);marker-start:url(#mrkr5-237);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-248);marker-start:url(#mrkr5-246);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-239);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-237" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-239" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-246" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-248" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(327.227,-946.908)">
+ <title>Sheet.3</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(460.665,-944.869)">
+ <title>Sheet.4</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(519.302,-950.79)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(612.438,-944.869)">
+ <title>Sheet.6</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(664.388,-945.889)">
+ <title>Sheet.7</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(723.025,-951.494)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(814.123,-945.889)">
+ <title>Sheet.9</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-952.759)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="146.259" cy="1169.64" width="292.52" height="36.7136"/>
+ <path d="M292.52 1151.29 L0 1151.29 L0 1188 L292.52 1188 L292.52 1151.29" class="st3"/>
+ <text x="58.76" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(379.176,-863.295)">
+ <title>Sheet.11</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(512.614,-861.255)">
+ <title>Sheet.12</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(561.284,-867.106)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(664.388,-861.255)">
+ <title>Sheet.14</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(716.337,-862.275)">
+ <title>Sheet.15</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(775.009,-867.81)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(866.073,-862.275)">
+ <title>Sheet.17</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(143.348,-873.294)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(474.188,-777.642)">
+ <title>Sheet.19</title>
+ <path d="M0 1143.01 C0 1138.04 4.07 1133.96 9.04 1133.96 L124.46 1133.96 C129.43 1133.96 133.44 1138.04 133.44 1143.01
+ L133.44 1179.01 C133.44 1183.99 129.43 1188 124.46 1188 L9.04 1188 C4.07 1188 0 1183.99 0 1179.01 L0 1143.01
+ Z" class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(608.645,-775.602)">
+ <title>Sheet.20</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(666.862,-781.311)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(760.418,-775.602)">
+ <title>Sheet.22</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(812.367,-776.622)">
+ <title>Sheet.23</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(870.584,-782.015)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(962.103,-776.622)">
+ <title>Sheet.25</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(142.645,-787.5)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(882.826,-574.263)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="21.32" y="1173.77" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(419.545,-660.119)">
+ <title>Sheet.29</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(419.545,-684.783)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L82.7 1187.36 L151.2 1172.07" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(214.454,-663.095)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(571.287,-681.326)">
+ <title>Sheet.33</title>
+ <path d="M0 738.67 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(515.013,-1130.65)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="60.7243" cy="1166.58" width="121.45" height="42.8314"/>
+ <path d="M121.45 1145.17 L0 1145.17 L0 1188 L121.45 1188 L121.45 1145.17" class="st3"/>
+ <text x="26.02" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(434.372,-1096.8)">
+ <title>Sheet.35</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(434.372,-1100.37)">
+ <title>Sheet.36</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(193.5,-1103.76)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(714.3,-675.425)">
+ <title>Sheet.38</title>
+ <path d="M0 732.77 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(795.979,-637.904)">
+ <title>Sheet.39</title>
+ <path d="M0 1112.54 L0 1188 L0 1112.54" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(716.782,-675.425)">
+ <title>Sheet.40</title>
+ <path d="M79.2 1188 L52.71 1187.94 L0 1147.21" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(803.572,-639.285)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="172.421" cy="1152.51" width="344.85" height="70.9752"/>
+ <path d="M344.84 1117.02 L0 1117.02 L0 1188 L344.84 1188 L344.84 1117.02" class="st3"/>
+ <text x="0" y="1133.61" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(680.801,-1130.65)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="42.0169" cy="1166.58" width="84.04" height="42.8314"/>
+ <path d="M84.03 1145.17 L0 1145.17 L0 1188 L84.03 1188 L84.03 1145.17" class="st3"/>
+ <text x="18.89" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(811.005,-1110.05)">
+ <title>Sheet.48</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(658.61,-1083.99)">
+ <title>Sheet.49</title>
+ <path d="M153.05 1149.63 L113.7 1149.57 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(798.359,-1110.46)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="107.799" cy="1167.81" width="215.6" height="40.3845"/>
+ <path d="M215.6 1147.62 L0 1147.62 L0 1188 L215.6 1188 L215.6 1147.62" class="st3"/>
+ <text x="43.79" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(599.196,-662.779)">
+ <title>Sheet.51</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(464.931,-1052.95)">
+ <title>Sheet.52</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(464.931,-1056.52)">
+ <title>Sheet.53</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(225,-1058.76)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(711.244,-662.779)">
+ <title>Sheet.56</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(664.897,-1045.31)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L146.76 1112.94" class="st13"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(619.059,-848.701)">
+ <title>Sheet.58</title>
+ <path d="M432.2 1184.24 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(1038.62,-837.364)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="130" cy="1167.81" width="260.01" height="40.3845"/>
+ <path d="M260 1147.62 L0 1147.62 L0 1188 L260 1188 L260 1147.62" class="st3"/>
+ <text x="52.25" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(621.606,-848.828)">
+ <title>Sheet.60</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(824.31,-849.848)">
+ <title>Sheet.61</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(345.944,-933.143)">
+ <title>Sheet.62</title>
+ <path d="M705.32 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(1038.62,-915.684)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="137.691" cy="1167.81" width="275.39" height="40.3845"/>
+ <path d="M275.38 1147.62 L0 1147.62 L0 1188 L275.38 1188 L275.38 1147.62" class="st3"/>
+ <text x="55.18" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(346.581,-932.442)">
+ <title>Sheet.64</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(621.606,-933.461)">
+ <title>Sheet.65</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(856.905,-934.481)">
+ <title>Sheet.66</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(472.82,-756.389)">
+ <title>Sheet.67</title>
+ <path d="M578.44 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(473.456,-755.688)">
+ <title>Sheet.68</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(1016.87,-757.728)">
+ <title>Sheet.69</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(1060.04,-738.651)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1167.81" width="193.55" height="40.3845"/>
+ <path d="M193.55 1147.62 L0 1147.62 L0 1188 L193.55 1188 L193.55 1147.62" class="st3"/>
+ <text x="31.03" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(190.02,-464.886)">
+ <title>Sheet.71</title>
+ <path d="M0 1151.91 C0 1148.19 3.88 1145.17 8.66 1145.17 L43.29 1145.17 C48.13 1145.17 51.95 1148.19 51.95 1151.91 L51.95
+ 1181.26 C51.95 1185.03 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1185.03 0 1181.26 L0 1151.91 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(259.003,-466.895)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. non critical section or quiescent state.</tspan></text> </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(190.02,-389.169)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="25.9746" cy="1166.58" width="51.95" height="42.8314"/>
+ <path d="M0 1152.31 C0 1148.39 1.43 1145.17 3.16 1145.17 L48.79 1145.17 C50.55 1145.17 51.95 1148.39 51.95 1152.31 L51.95
+ 1180.86 C51.95 1184.83 50.55 1188 48.79 1188 L3.16 1188 C1.43 1188 0 1184.83 0 1180.86 L0 1152.31 Z"
+ class="st2"/>
+ <text x="12.9" y="1173.78" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(259.003,-388.777)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(289.017,-301.151)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="332.491" cy="1160.47" width="664.99" height="55.0626"/>
+ <path d="M664.98 1132.94 L0 1132.94 L0 1188 L664.98 1188 L664.98 1132.94" class="st3"/>
+ <text x="0" y="1153.27" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is removed <tspan
+ x="0" dy="1.2em" class="st9">using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(177.543,-315.596)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(288,-239.327)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1175.01" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted entry.</text> </g>
+ <g id="shape78-217" v:mID="78" v:groupContext="shape" transform="translate(177.543,-240.744)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="34.3786" cy="1166.58" width="68.76" height="42.8314"/>
+ <path d="M68.76 1145.17 L0 1145.17 L0 1188 L68.76 1188 L68.76 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-221" v:mID="79" v:groupContext="shape" transform="translate(289.228,-163.612)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1160.61" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during which <tspan
+ x="0" dy="1.2em" class="st9">memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-226" v:mID="80" v:groupContext="shape" transform="translate(187.999,-162)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="39.5985" cy="1166.58" width="79.2" height="42.8314"/>
+ <path d="M79.2 1145.17 L0 1145.17 L0 1188 L79.2 1188 L79.2 1145.17" class="st3"/>
+ <text x="0" y="1158.96" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-231" v:mID="83" v:groupContext="shape" transform="translate(572.146,-1080.07)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L132.49 1188" class="st16"/>
+ </g>
+ <g id="shape84-240" v:mID="84" v:groupContext="shape" transform="translate(599.196,-1042.14)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L104.28 1188" class="st18"/>
+ </g>
+ <g id="shape85-249" v:mID="85" v:groupContext="shape" transform="translate(980.637,-595.338)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L92.16 1188" class="st20"/>
+ </g>
+ <g id="shape86-254" v:mID="86" v:groupContext="shape" transform="translate(444.835,-603.428)">
+ <title>Sheet.86</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape87-257" v:mID="87" v:groupContext="shape" transform="translate(444.835,-637.489)">
+ <title>Sheet.87</title>
+ <path d="M0 1188 L84.43 1186.61 L154.36 1153.31" class="st7"/>
+ </g>
+ <g id="shape88-260" v:mID="88" v:groupContext="shape" transform="translate(241.369,-607.028)">
+ <title>Sheet.88</title>
+ <desc>Remove reference to entry2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry2</tspan></text> </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 95f5e7964..17df2c563 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -56,6 +56,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..dfe45fa62
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,179 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-less data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for ex: real-time applications).
+
+In the following paragraphs, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or anything that is representative of
+memory, for ex: an index of a free element array.
+
+Since these data structures are lock-less, writers and readers access
+them concurrently. Hence, while removing an element from a data
+structure, the writer cannot return the memory to the allocator
+without knowing that the readers are no longer referencing that
+element/memory. Hence, the operation of removing an element must be
+separated into 2 steps:
+
+Delete: in this step, the writer removes the reference to the element from
+the data structure but does not return the associated memory to the
+allocator. This will ensure that new readers will not get a reference to
+the removed element. Removing the reference is an atomic operation.
+
+Free(Reclaim): in this step, the writer returns the memory to the
+memory allocator, only after knowing that all the readers have stopped
+referencing the deleted element.
+
+This library helps the writer determine when it is safe to free the
+memory.
+
+This library makes use of the thread Quiescent State (QS) mechanism.
+
+What is Quiescent State
+-----------------------
+Quiescent State can be defined as 'any point in the thread execution where the
+thread does not hold a reference to shared memory'. It is up to the application
+to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. When it is
+accessing D1, if the writer has to remove an element from D1, the
+writer cannot free the memory associated with that element immediately.
+The writer can return the memory to the allocator only after the reader
+stops referencing D1. In other words, reader thread RT1 has to enter
+a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, the writer has to
+wait till reader thread 2 enters a quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+a quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened. So, reader thread 3 cannot have a reference to the
+deleted entry.
+
+Note that the critical sections for D2 are a quiescent state for D1,
+i.e. for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state for Dx.
+
+Since memory is not freed immediately, there might be a need for
+provisioning of additional memory, depending on the application requirements.
+
+Factors affecting RCU mechanism
+---------------------------------
+
+It is important to make sure that this library keeps the overhead of
+identifying the end of the grace period, and of the subsequent freeing
+of memory, to a minimum. The following paragraphs explain how the grace
+period and the critical sections affect this overhead.
+
+The writer has to poll the readers to identify the end of the grace
+period. Polling introduces memory accesses and wastes CPU cycles. The
+memory is not available for reuse during the grace period. Longer grace
+periods exacerbate these conditions.
+
+The duration of the grace period is proportional to the length of the
+critical sections and the number of reader threads. Keeping the critical
+sections shorter will keep the grace period shorter. However, shorter
+critical sections require additional CPU cycles (due to additional
+reporting) in the readers.
+
+Hence, we need the combined characteristics of a short grace period and
+long critical sections. This library addresses this by allowing the
+writer to do other work without having to block till the readers report
+their quiescent state.
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the start and end of the while(1) loop (where no
+references to shared data structures are kept) act as perfect quiescent
+states. This combines all the shared data structure accesses into a
+single, large critical section, which helps keep the overhead on the
+reader side to a minimum.
+
+DPDK supports the pipeline model of packet processing, as well as
+service cores. In these use cases, a given data structure may not be
+used by all the workers in the application, so the writer does not have
+to wait for all the workers to report their quiescent state. To provide
+the required flexibility, this library has a concept of a QS variable.
+The application can create one QS variable per data structure to help it
+track the end of the grace period for each data structure. This helps
+keep the grace period to a minimum.
+
+How to use this library
+-----------------------
+
+The application must allocate memory and initialize a QS variable.
+
+The application can call ``rte_rcu_qsbr_get_memsize`` to calculate the
+size of memory to allocate. This API takes, as a parameter, the maximum
+number of reader threads that will use this QS variable. Currently, a
+maximum of 1024 threads is supported.
+
+Further, the application can initialize a QS variable using the API
+``rte_rcu_qsbr_init``.
+
+Each reader thread is assumed to have a unique thread ID. Currently, the
+management of thread IDs (for ex: allocation/free) is left to the
+application. The thread ID should be in the range of 0 to one less than
+the maximum number of threads provided while creating the QS variable.
+The application could also use the lcore_id as the thread ID where
+applicable.
+
+``rte_rcu_qsbr_thread_register`` API will register a reader thread
+to report its quiescent state. This can be called from a reader thread.
+A control plane thread can also call this on behalf of a reader thread.
+The reader thread must call ``rte_rcu_qsbr_thread_online`` API to start
+reporting its quiescent state.
+
+Some of the use cases might require the reader threads to make
+blocking API calls (for ex: while using eventdev APIs). The writer thread
+should not wait for such reader threads to enter quiescent state.
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` API, before calling
+blocking APIs. It can call ``rte_rcu_qsbr_thread_online`` API once the blocking
+API call returns.
+
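Putting the reader-side calls above together, a typical flow looks like the sketch below. Treat it as pseudocode: ``lookup_and_use_shared_structures()``, ``blocking_api_call()`` and ``need_blocking_call`` are placeholders for application logic, error handling is omitted, and the exact signatures are defined in ``rte_rcu_qsbr.h``.

```c
/* One-time setup, typically done by the main/control thread. */
size_t sz = rte_rcu_qsbr_get_memsize(RTE_MAX_LCORE);
struct rte_rcu_qsbr *v = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
rte_rcu_qsbr_init(v, RTE_MAX_LCORE);

/* Per reader thread, using the lcore_id as the thread ID. */
unsigned int id = rte_lcore_id();
rte_rcu_qsbr_thread_register(v, id);
rte_rcu_qsbr_thread_online(v, id);

while (1) {
	lookup_and_use_shared_structures(); /* critical section */
	rte_rcu_qsbr_quiescent(v, id);      /* report quiescent state */

	if (need_blocking_call) {
		rte_rcu_qsbr_thread_offline(v, id); /* stop being polled */
		blocking_api_call();                /* e.g. an eventdev API */
		rte_rcu_qsbr_thread_online(v, id);  /* resume reporting */
	}
}
```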
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the API ``rte_rcu_qsbr_start``. It is possible for multiple
+writer threads to query the quiescent state status simultaneously. Hence,
+``rte_rcu_qsbr_start`` returns a token to each caller.
+
+The writer thread must call the ``rte_rcu_qsbr_check`` API with the
+token to get the current quiescent state status. An option to block till
+all the reader threads enter the quiescent state is provided. If this
+API indicates that all the reader threads have entered the quiescent
+state, the application can free the deleted entry.
+
+The ``rte_rcu_qsbr_start`` and ``rte_rcu_qsbr_check`` APIs are
+lock-free. Hence, they can be called concurrently from multiple writers,
+even while running as worker threads.
+
+Separating the triggering of reporting from the querying of status gives
+the writer threads the flexibility to do useful work instead of blocking
+till the reader threads enter the quiescent state or go offline. This
+reduces the memory accesses due to continuous polling for the status.
+
+``rte_rcu_qsbr_synchronize`` API combines the functionality of
+``rte_rcu_qsbr_start`` and blocking ``rte_rcu_qsbr_check`` into a single API.
+This API triggers the reader threads to report their quiescent state and
+polls till all the readers enter the quiescent state or go offline. This
+API does not allow the writer to do useful work while waiting and
+introduces additional memory accesses due to continuous polling.
+
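The writer-side flow described above can be sketched as follows. Again treat this as pseudocode: ``remove_element()``, ``free_element()`` and ``do_other_work()`` are placeholders for application logic, and ``RTE_QSBR_THRID_INVALID`` is used assuming the writer is not itself a registered reader; see ``rte_rcu_qsbr.h`` for the exact signatures.

```c
/* Step 1: Delete - atomically remove the entry from the data structure. */
remove_element(d1, entry);

/* Step 2: trigger reporting, then overlap useful work with the grace period. */
uint64_t token = rte_rcu_qsbr_start(v);
while (rte_rcu_qsbr_check(v, token, false) == 0)
	do_other_work();

/* Step 3: Free - every reader went through at least one quiescent state. */
free_element(entry);

/* Alternatively, trigger and block in a single call: */
rte_rcu_qsbr_synchronize(v, RTE_QSBR_THRID_INVALID);
```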
+The reader thread must call the ``rte_rcu_qsbr_thread_offline`` and
+``rte_rcu_qsbr_thread_unregister`` APIs to remove itself from quiescent
+state reporting. The ``rte_rcu_qsbr_check`` API will then no longer wait
+for this reader thread to report its quiescent state.
+
+The reader threads should call the ``rte_rcu_qsbr_quiescent`` API to
+indicate that they have entered a quiescent state. This API checks if a
+writer has triggered a quiescent state query and updates the state
+accordingly.
--
2.17.1
* [dpdk-dev] [PATCH v4 3/3] doc/rcu: add lib_rcu documentation
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
@ 2019-04-10 11:20 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-10 11:20 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 179 ++++++
5 files changed, 692 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index de1e215dd..8f0e84de6 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 7722fc3e9..b9896cb63 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..e7ca1dacb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,509 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em;font-weight:bold}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.75001em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st13 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st14 {fill:#000000;font-family:Century Gothic;font-size:1.99999em}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.99999em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-239);marker-start:url(#mrkr5-237);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-248);marker-start:url(#mrkr5-246);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-239);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-237" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-239" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-246" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-248" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(327.227,-946.908)">
+ <title>Sheet.3</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(460.665,-944.869)">
+ <title>Sheet.4</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(519.302,-950.79)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(612.438,-944.869)">
+ <title>Sheet.6</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(664.388,-945.889)">
+ <title>Sheet.7</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(723.025,-951.494)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(814.123,-945.889)">
+ <title>Sheet.9</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-952.759)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="146.259" cy="1169.64" width="292.52" height="36.7136"/>
+ <path d="M292.52 1151.29 L0 1151.29 L0 1188 L292.52 1188 L292.52 1151.29" class="st3"/>
+ <text x="58.76" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(379.176,-863.295)">
+ <title>Sheet.11</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(512.614,-861.255)">
+ <title>Sheet.12</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(561.284,-867.106)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(664.388,-861.255)">
+ <title>Sheet.14</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(716.337,-862.275)">
+ <title>Sheet.15</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(775.009,-867.81)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(866.073,-862.275)">
+ <title>Sheet.17</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(143.348,-873.294)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(474.188,-777.642)">
+ <title>Sheet.19</title>
+ <path d="M0 1143.01 C0 1138.04 4.07 1133.96 9.04 1133.96 L124.46 1133.96 C129.43 1133.96 133.44 1138.04 133.44 1143.01
+ L133.44 1179.01 C133.44 1183.99 129.43 1188 124.46 1188 L9.04 1188 C4.07 1188 0 1183.99 0 1179.01 L0 1143.01
+ Z" class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(608.645,-775.602)">
+ <title>Sheet.20</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(666.862,-781.311)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(760.418,-775.602)">
+ <title>Sheet.22</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(812.367,-776.622)">
+ <title>Sheet.23</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(870.584,-782.015)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(962.103,-776.622)">
+ <title>Sheet.25</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(142.645,-787.5)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(882.826,-574.263)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="21.32" y="1173.77" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(419.545,-660.119)">
+ <title>Sheet.29</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(419.545,-684.783)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L82.7 1187.36 L151.2 1172.07" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(214.454,-663.095)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(571.287,-681.326)">
+ <title>Sheet.33</title>
+ <path d="M0 738.67 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(515.013,-1130.65)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="60.7243" cy="1166.58" width="121.45" height="42.8314"/>
+ <path d="M121.45 1145.17 L0 1145.17 L0 1188 L121.45 1188 L121.45 1145.17" class="st3"/>
+ <text x="26.02" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(434.372,-1096.8)">
+ <title>Sheet.35</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(434.372,-1100.37)">
+ <title>Sheet.36</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(193.5,-1103.76)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(714.3,-675.425)">
+ <title>Sheet.38</title>
+ <path d="M0 732.77 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(795.979,-637.904)">
+ <title>Sheet.39</title>
+ <path d="M0 1112.54 L0 1188 L0 1112.54" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(716.782,-675.425)">
+ <title>Sheet.40</title>
+ <path d="M79.2 1188 L52.71 1187.94 L0 1147.21" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(803.572,-639.285)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="172.421" cy="1152.51" width="344.85" height="70.9752"/>
+ <path d="M344.84 1117.02 L0 1117.02 L0 1188 L344.84 1188 L344.84 1117.02" class="st3"/>
+ <text x="0" y="1133.61" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(680.801,-1130.65)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="42.0169" cy="1166.58" width="84.04" height="42.8314"/>
+ <path d="M84.03 1145.17 L0 1145.17 L0 1188 L84.03 1188 L84.03 1145.17" class="st3"/>
+ <text x="18.89" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(811.005,-1110.05)">
+ <title>Sheet.48</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(658.61,-1083.99)">
+ <title>Sheet.49</title>
+ <path d="M153.05 1149.63 L113.7 1149.57 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(798.359,-1110.46)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="107.799" cy="1167.81" width="215.6" height="40.3845"/>
+ <path d="M215.6 1147.62 L0 1147.62 L0 1188 L215.6 1188 L215.6 1147.62" class="st3"/>
+ <text x="43.79" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(599.196,-662.779)">
+ <title>Sheet.51</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(464.931,-1052.95)">
+ <title>Sheet.52</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(464.931,-1056.52)">
+ <title>Sheet.53</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(225,-1058.76)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(711.244,-662.779)">
+ <title>Sheet.56</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(664.897,-1045.31)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L146.76 1112.94" class="st13"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(619.059,-848.701)">
+ <title>Sheet.58</title>
+ <path d="M432.2 1184.24 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(1038.62,-837.364)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="130" cy="1167.81" width="260.01" height="40.3845"/>
+ <path d="M260 1147.62 L0 1147.62 L0 1188 L260 1188 L260 1147.62" class="st3"/>
+ <text x="52.25" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(621.606,-848.828)">
+ <title>Sheet.60</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(824.31,-849.848)">
+ <title>Sheet.61</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(345.944,-933.143)">
+ <title>Sheet.62</title>
+ <path d="M705.32 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(1038.62,-915.684)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="137.691" cy="1167.81" width="275.39" height="40.3845"/>
+ <path d="M275.38 1147.62 L0 1147.62 L0 1188 L275.38 1188 L275.38 1147.62" class="st3"/>
+ <text x="55.18" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(346.581,-932.442)">
+ <title>Sheet.64</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(621.606,-933.461)">
+ <title>Sheet.65</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(856.905,-934.481)">
+ <title>Sheet.66</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(472.82,-756.389)">
+ <title>Sheet.67</title>
+ <path d="M578.44 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(473.456,-755.688)">
+ <title>Sheet.68</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(1016.87,-757.728)">
+ <title>Sheet.69</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(1060.04,-738.651)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1167.81" width="193.55" height="40.3845"/>
+ <path d="M193.55 1147.62 L0 1147.62 L0 1188 L193.55 1188 L193.55 1147.62" class="st3"/>
+ <text x="31.03" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(190.02,-464.886)">
+ <title>Sheet.71</title>
+ <path d="M0 1151.91 C0 1148.19 3.88 1145.17 8.66 1145.17 L43.29 1145.17 C48.13 1145.17 51.95 1148.19 51.95 1151.91 L51.95
+ 1181.26 C51.95 1185.03 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1185.03 0 1181.26 L0 1151.91 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(259.003,-466.895)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. non critical section or quiescent state.</tspan></text> </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(190.02,-389.169)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="25.9746" cy="1166.58" width="51.95" height="42.8314"/>
+ <path d="M0 1152.31 C0 1148.39 1.43 1145.17 3.16 1145.17 L48.79 1145.17 C50.55 1145.17 51.95 1148.39 51.95 1152.31 L51.95
+ 1180.86 C51.95 1184.83 50.55 1188 48.79 1188 L3.16 1188 C1.43 1188 0 1184.83 0 1180.86 L0 1152.31 Z"
+ class="st2"/>
+ <text x="12.9" y="1173.78" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(259.003,-388.777)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(289.017,-301.151)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="332.491" cy="1160.47" width="664.99" height="55.0626"/>
+ <path d="M664.98 1132.94 L0 1132.94 L0 1188 L664.98 1188 L664.98 1132.94" class="st3"/>
+ <text x="0" y="1153.27" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is removed <tspan
+ x="0" dy="1.2em" class="st9">using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(177.543,-315.596)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(288,-239.327)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1175.01" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted entry.</text> </g>
+ <g id="shape78-217" v:mID="78" v:groupContext="shape" transform="translate(177.543,-240.744)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="34.3786" cy="1166.58" width="68.76" height="42.8314"/>
+ <path d="M68.76 1145.17 L0 1145.17 L0 1188 L68.76 1188 L68.76 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-221" v:mID="79" v:groupContext="shape" transform="translate(289.228,-163.612)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1160.61" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during which <tspan
+ x="0" dy="1.2em" class="st9">memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-226" v:mID="80" v:groupContext="shape" transform="translate(187.999,-162)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="39.5985" cy="1166.58" width="79.2" height="42.8314"/>
+ <path d="M79.2 1145.17 L0 1145.17 L0 1188 L79.2 1188 L79.2 1145.17" class="st3"/>
+ <text x="0" y="1158.96" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-231" v:mID="83" v:groupContext="shape" transform="translate(572.146,-1080.07)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L132.49 1188" class="st16"/>
+ </g>
+ <g id="shape84-240" v:mID="84" v:groupContext="shape" transform="translate(599.196,-1042.14)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L104.28 1188" class="st18"/>
+ </g>
+ <g id="shape85-249" v:mID="85" v:groupContext="shape" transform="translate(980.637,-595.338)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L92.16 1188" class="st20"/>
+ </g>
+ <g id="shape86-254" v:mID="86" v:groupContext="shape" transform="translate(444.835,-603.428)">
+ <title>Sheet.86</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape87-257" v:mID="87" v:groupContext="shape" transform="translate(444.835,-637.489)">
+ <title>Sheet.87</title>
+ <path d="M0 1188 L84.43 1186.61 L154.36 1153.31" class="st7"/>
+ </g>
+ <g id="shape88-260" v:mID="88" v:groupContext="shape" transform="translate(241.369,-607.028)">
+ <title>Sheet.88</title>
+ <desc>Remove reference to entry2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry2</tspan></text> </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 95f5e7964..17df2c563 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -56,6 +56,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..dfe45fa62
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,179 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-less data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for example, real-time applications).
+
+In the following sections, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or anything that is representative of
+memory, for example an index into a free element array.
+
+Since these data structures are lock-less, writers and readers access
+them concurrently. Hence, while removing an element from a data
+structure, the writer cannot return the memory to the allocator without
+knowing that no reader is still referencing that element/memory. Hence,
+the operation of removing an element must be split into two steps:
+
+Delete: in this step, the writer removes the reference to the element from
+the data structure but does not return the associated memory to the
+allocator. This will ensure that new readers will not get a reference to
+the removed element. Removing the reference is an atomic operation.
+
+Free(Reclaim): in this step, the writer returns the memory to the
+memory allocator, only after knowing that all the readers have stopped
+referencing the deleted element.
+
+This library helps the writer determine when it is safe to free the
+memory.
+
+This library makes use of the thread Quiescent State (QS) mechanism.
+
+What is Quiescent State
+-----------------------
+Quiescent State can be defined as 'any point in the thread execution where the
+thread does not hold a reference to shared memory'. It is up to the application
+to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. When it is
+accessing D1, if the writer has to remove an element from D1, the
+writer cannot free the memory associated with that element immediately.
+The writer can return the memory to the allocator only after the reader
+stops referencing D1. In other words, reader thread 1 has to enter
+a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, the writer has to
+wait until reader thread 2 enters a quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+a quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened. So, reader thread 3 cannot have a reference to the
+deleted entry.
+
+It can be noted that the critical section for D2 is a quiescent state
+for D1, i.e. for a given data structure Dx, any point in the thread
+execution that does not reference Dx is a quiescent state for Dx.
+
+Since memory is not freed immediately, there might be a need for
+provisioning of additional memory, depending on the application requirements.
+
+Factors affecting the RCU mechanism
+-----------------------------------
+
+It is important to make sure that this library keeps the overhead of
+identifying the end of grace period and subsequent freeing of memory,
+to a minimum. The following paras explain how grace period and critical
+section affect this overhead.
+
+The writer has to poll the readers to identify the end of the grace
+period. Polling introduces memory accesses and wastes CPU cycles. The
+memory is not available for reuse during the grace period. Longer grace
+periods exacerbate these conditions.
+
+The duration of the grace period is proportional to the length of the
+critical section and the number of reader threads. Keeping the critical
+sections smaller keeps the grace period smaller. However, smaller
+critical sections require additional CPU cycles (due to additional
+reporting) in the readers.
+
+Hence, we need both a small grace period and a large critical section.
+This library addresses this conflict by allowing the writer to do
+other work without having to block until the readers report their
+quiescent state.
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the beginning and end of the while(1) loop (where
+no references to shared data structures are kept) act as perfect
+quiescent states. This combines all the shared data structure accesses
+into a single, large critical section, which helps keep the overhead on
+the reader side to a minimum.
+
+DPDK supports a pipeline model of packet processing and service cores.
+In these use cases, a given data structure may not be used by all the
+workers in the application. The writer does not have to wait for all
+the workers to report their quiescent state. To provide the required
+flexibility, this library has a concept of QS variable. The application
+can create one QS variable per data structure to help it track the
+end of grace period for each data structure. This helps keep the grace
+period to a minimum.
+
+How to use this library
+-----------------------
+
+The application must allocate memory and initialize a QS variable.
+
+The application can call ``rte_rcu_qsbr_get_memsize`` to calculate the size
+of memory to allocate. This API takes, as a parameter, the maximum number
+of reader threads that will use this variable. Currently, a maximum of
+1024 threads is supported.
+
+Further, the application can initialize a QS variable using the API
+``rte_rcu_qsbr_init``.
+
+Each reader thread is assumed to have a unique thread ID. Currently, the
+management of thread IDs (for ex: allocation/free) is left to the
+application. The thread ID should be in the range of 0 to
+(maximum number of threads - 1) provided while creating the QS variable.
+The application could also use the lcore_id as the thread ID where applicable.
+
+``rte_rcu_qsbr_thread_register`` API will register a reader thread
+to report its quiescent state. This can be called from a reader thread.
+A control plane thread can also call this on behalf of a reader thread.
+The reader thread must call ``rte_rcu_qsbr_thread_online`` API to start
+reporting its quiescent state.
+
+Some of the use cases might require the reader threads to make
+blocking API calls (for ex: while using eventdev APIs). The writer thread
+should not wait for such reader threads to enter quiescent state.
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` API, before calling
+blocking APIs. It can call ``rte_rcu_qsbr_thread_online`` API once the blocking
+API call returns.
+
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the API ``rte_rcu_qsbr_start``. It is possible for multiple
+writer threads to query the quiescent state status simultaneously. Hence,
+``rte_rcu_qsbr_start`` returns a token to each caller.
+
+The writer thread must call the ``rte_rcu_qsbr_check`` API with the token to
+get the current quiescent state status. An option to block till all the reader
+threads enter the quiescent state is provided. If this API indicates that
+all the reader threads have entered the quiescent state, the application
+can free the deleted entry.
+
+The APIs ``rte_rcu_qsbr_start`` and ``rte_rcu_qsbr_check`` are lock-free.
+Hence, they can be called concurrently from multiple writers, even while
+running as worker threads.
+
+The separation of triggering the reporting from querying the status gives
+the writer threads the flexibility to do useful work instead of blocking till
+the reader threads enter the quiescent state or go offline. This reduces the
+memory accesses due to continuous polling for the status.
+
+``rte_rcu_qsbr_synchronize`` API combines the functionality of
+``rte_rcu_qsbr_start`` and blocking ``rte_rcu_qsbr_check`` into a single API.
+This API triggers the reader threads to report their quiescent state and
+polls till all the readers enter the quiescent state or go offline. This
+API does not allow the writer to do useful work while waiting and
+introduces additional memory accesses due to continuous polling.
+
+The reader thread must call the ``rte_rcu_qsbr_thread_offline`` and
+``rte_rcu_qsbr_thread_unregister`` APIs to remove itself from reporting its
+quiescent state. The ``rte_rcu_qsbr_check`` API will no longer wait for this
+reader thread to report its quiescent state.
+
+The reader threads should call the ``rte_rcu_qsbr_quiescent`` API to indicate
+that they have entered a quiescent state. This API checks whether a writer has
+triggered a quiescent state query and updates the state accordingly.
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v4 2/3] test/rcu_qsbr: add API and functional tests
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-04-10 11:20 ` Honnappa Nagarahalli
@ 2019-04-10 15:26 ` Stephen Hemminger
2019-04-10 15:26 ` Stephen Hemminger
2019-04-10 16:15 ` Honnappa Nagarahalli
1 sibling, 2 replies; 260+ messages in thread
From: Stephen Hemminger @ 2019-04-10 15:26 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: konstantin.ananyev, paulmck, marko.kovacevic, dev, gavin.hu,
dharmik.thakkar, malvika.gupta
On Wed, 10 Apr 2019 06:20:05 -0500
Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> wrote:
> From: Dharmik Thakkar <dharmik.thakkar@arm.com>
>
> Add API positive/negative test cases, functional tests and
> performance tests.
>
> Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
> Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Could you add (or modify existing) l2/l3 fwd examples to demonstrate how
this would be used. Having just documentation and test code is probably not
enough to spur adoption.
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v4 2/3] test/rcu_qsbr: add API and functional tests
2019-04-10 15:26 ` Stephen Hemminger
2019-04-10 15:26 ` Stephen Hemminger
@ 2019-04-10 16:15 ` Honnappa Nagarahalli
2019-04-10 16:15 ` Honnappa Nagarahalli
1 sibling, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-10 16:15 UTC (permalink / raw)
To: Stephen Hemminger
Cc: konstantin.ananyev, paulmck, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd, nd
> Subject: Re: [PATCH v4 2/3] test/rcu_qsbr: add API and functional tests
>
> On Wed, 10 Apr 2019 06:20:05 -0500
> Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> wrote:
>
> > From: Dharmik Thakkar <dharmik.thakkar@arm.com>
> >
> > Add API positive/negative test cases, functional tests and performance
> > tests.
> >
> > Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
> > Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>
> Could you add (or modify existing) l2/l3 fwd examples to demonstrate
> how this would be used. Having just documentation and test code is
> probably not enough to spur adoption.
The existing examples have a static configuration. They do not delete (or add) any flows dynamically. I can show what the code looks like from the data-plane perspective, but the memory reclamation part cannot be demonstrated.
But, if we are ok to add more code to these applications, dynamic flow add/delete can be done.
Any thoughts on adding a new sample application?
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 1/3] rcu: " Honnappa Nagarahalli
2019-04-10 11:20 ` Honnappa Nagarahalli
@ 2019-04-10 18:14 ` Paul E. McKenney
2019-04-10 18:14 ` Paul E. McKenney
2019-04-11 4:35 ` Honnappa Nagarahalli
1 sibling, 2 replies; 260+ messages in thread
From: Paul E. McKenney @ 2019-04-10 18:14 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: konstantin.ananyev, stephen, marko.kovacevic, dev, gavin.hu,
dharmik.thakkar, malvika.gupta
On Wed, Apr 10, 2019 at 06:20:04AM -0500, Honnappa Nagarahalli wrote:
> Add RCU library supporting quiescent state based memory reclamation method.
> This library helps identify the quiescent state of the reader threads so
> that the writers can free the memory associated with the lock less data
> structures.
I don't see any sign of read-side markers (rcu_read_lock() and
rcu_read_unlock() in the Linux kernel, userspace RCU, etc.).
Yes, strictly speaking, these are not needed for QSBR to operate, but they
make it way easier to maintain and debug code using RCU. For example,
given the read-side markers, you can check for errors like having a call
to rte_rcu_qsbr_quiescent() in the middle of a reader quite easily.
Without those read-side markers, life can be quite hard and you will
really hate yourself for failing to have provided them.
Some additional questions and comments interspersed.
Thanx, Paul
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Steve Capper <steve.capper@arm.com>
> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
> MAINTAINERS | 5 +
> config/common_base | 6 +
> lib/Makefile | 2 +
> lib/librte_rcu/Makefile | 23 ++
> lib/librte_rcu/meson.build | 5 +
> lib/librte_rcu/rte_rcu_qsbr.c | 237 ++++++++++++
> lib/librte_rcu/rte_rcu_qsbr.h | 554 +++++++++++++++++++++++++++++
> lib/librte_rcu/rte_rcu_version.map | 11 +
> lib/meson.build | 2 +-
> mk/rte.app.mk | 1 +
> 10 files changed, 845 insertions(+), 1 deletion(-)
> create mode 100644 lib/librte_rcu/Makefile
> create mode 100644 lib/librte_rcu/meson.build
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
> create mode 100644 lib/librte_rcu/rte_rcu_version.map
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 9774344dd..6e9766eed 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1267,6 +1267,11 @@ F: examples/bpf/
> F: app/test/test_bpf.c
> F: doc/guides/prog_guide/bpf_lib.rst
>
> +RCU - EXPERIMENTAL
> +M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> +F: lib/librte_rcu/
> +F: doc/guides/prog_guide/rcu_lib.rst
> +
>
> Test Applications
> -----------------
> diff --git a/config/common_base b/config/common_base
> index 8da08105b..ad70c79e1 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -829,6 +829,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
> #
> CONFIG_RTE_LIBRTE_TELEMETRY=n
>
> +#
> +# Compile librte_rcu
> +#
> +CONFIG_RTE_LIBRTE_RCU=y
> +CONFIG_RTE_LIBRTE_RCU_DEBUG=n
> +
> #
> # Compile librte_lpm
> #
> diff --git a/lib/Makefile b/lib/Makefile
> index 26021d0c0..791e0d991 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
> DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
> DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
> DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
> +DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
> +DEPDIRS-librte_rcu := librte_eal
>
> ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
> diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
> new file mode 100644
> index 000000000..6aa677bd1
> --- /dev/null
> +++ b/lib/librte_rcu/Makefile
> @@ -0,0 +1,23 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Arm Limited
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +# library name
> +LIB = librte_rcu.a
> +
> +CFLAGS += -DALLOW_EXPERIMENTAL_API
> +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
> +LDLIBS += -lrte_eal
> +
> +EXPORT_MAP := rte_rcu_version.map
> +
> +LIBABIVER := 1
> +
> +# all source are stored in SRCS-y
> +SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
> +
> +# install includes
> +SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
> new file mode 100644
> index 000000000..c009ae4b7
> --- /dev/null
> +++ b/lib/librte_rcu/meson.build
> @@ -0,0 +1,5 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Arm Limited
> +
> +sources = files('rte_rcu_qsbr.c')
> +headers = files('rte_rcu_qsbr.h')
> diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
> new file mode 100644
> index 000000000..53d08446a
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_qsbr.c
> @@ -0,0 +1,237 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + *
> + * Copyright (c) 2018 Arm Limited
> + */
> +
> +#include <stdio.h>
> +#include <string.h>
> +#include <stdint.h>
> +#include <errno.h>
> +
> +#include <rte_common.h>
> +#include <rte_log.h>
> +#include <rte_memory.h>
> +#include <rte_malloc.h>
> +#include <rte_eal.h>
> +#include <rte_eal_memconfig.h>
> +#include <rte_atomic.h>
> +#include <rte_per_lcore.h>
> +#include <rte_lcore.h>
> +#include <rte_errno.h>
> +
> +#include "rte_rcu_qsbr.h"
> +
> +/* Get the memory size of QSBR variable */
> +size_t __rte_experimental
> +rte_rcu_qsbr_get_memsize(uint32_t max_threads)
> +{
> + size_t sz;
> +
> + if (max_threads == 0) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid max_threads %u\n",
> + __func__, max_threads);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + sz = sizeof(struct rte_rcu_qsbr);
> +
> + /* Add the size of quiescent state counter array */
> + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> +
> + /* Add the size of the registered thread ID bitmap array */
> + sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
> +
> + return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
Given that you align here, should you also align in the earlier steps
in the computation of sz?
> +}
> +
> +/* Initialize a quiescent state variable */
> +int __rte_experimental
> +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
> +{
> + size_t sz;
> +
> + if (v == NULL) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + sz = rte_rcu_qsbr_get_memsize(max_threads);
> + if (sz == 1)
> + return 1;
> +
> + /* Set all the threads to offline */
> + memset(v, 0, sz);
We calculate sz here, but it looks like the caller must also calculate it
in order to correctly allocate the memory referenced by the "v" argument
to this function, with bad things happening if the two calculations get
different results. Should "v" instead be allocated within this function
to avoid this sort of problem?
> + v->max_threads = max_threads;
> + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> + v->token = RTE_QSBR_CNT_INIT;
> +
> + return 0;
> +}
> +
> +/* Register a reader thread to report its quiescent state
> + * on a QS variable.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id, success;
> + uint64_t old_bmap, new_bmap;
> +
> + if (v == NULL || thread_id >= v->max_threads) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Make sure that the counter for registered threads does not
> + * go out of sync. Hence, additional checks are required.
> + */
> + /* Check if the thread is already registered */
> + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_RELAXED);
> + if (old_bmap & 1UL << id)
> + return 0;
> +
> + do {
> + new_bmap = old_bmap | (1UL << id);
> + success = __atomic_compare_exchange(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + &old_bmap, &new_bmap, 0,
> + __ATOMIC_RELEASE, __ATOMIC_RELAXED);
> +
> + if (success)
> + __atomic_fetch_add(&v->num_threads,
> + 1, __ATOMIC_RELAXED);
> + else if (old_bmap & (1UL << id))
> + /* Someone else registered this thread.
> + * Counter should not be incremented.
> + */
> + return 0;
> + } while (success == 0);
This would be simpler if threads were required to register themselves.
Maybe you have use cases requiring registration of other threads, but
this capability is adding significant complexity, so it might be worth
some thought.
> + return 0;
> +}
> +
> +/* Remove a reader thread, from the list of threads reporting their
> + * quiescent state on a QS variable.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id, success;
> + uint64_t old_bmap, new_bmap;
> +
> + if (v == NULL || thread_id >= v->max_threads) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Make sure that the counter for registered threads does not
> + * go out of sync. Hence, additional checks are required.
> + */
> + /* Check if the thread is already unregistered */
> + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_RELAXED);
> + if (old_bmap & ~(1UL << id))
> + return 0;
> +
> + do {
> + new_bmap = old_bmap & ~(1UL << id);
> + /* Make sure any loads of the shared data structure are
> + * completed before removal of the thread from the list of
> + * reporting threads.
> + */
> + success = __atomic_compare_exchange(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + &old_bmap, &new_bmap, 0,
> + __ATOMIC_RELEASE, __ATOMIC_RELAXED);
> +
> + if (success)
> + __atomic_fetch_sub(&v->num_threads,
> + 1, __ATOMIC_RELAXED);
> + else if (old_bmap & ~(1UL << id))
> + /* Someone else unregistered this thread.
> + * Counter should not be incremented.
> + */
> + return 0;
> + } while (success == 0);
Ditto!
> + return 0;
> +}
> +
> +/* Dump the details of a single quiescent state variable to a file. */
> +int __rte_experimental
> +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
> +{
> + uint64_t bmap;
> + uint32_t i, t;
> +
> + if (v == NULL || f == NULL) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + fprintf(f, "\nQuiescent State Variable @%p\n", v);
> +
> + fprintf(f, " QS variable memory size = %lu\n",
> + rte_rcu_qsbr_get_memsize(v->max_threads));
> + fprintf(f, " Given # max threads = %u\n", v->max_threads);
> + fprintf(f, " Current # threads = %u\n", v->num_threads);
> +
> + fprintf(f, " Registered thread ID mask = 0x");
> + for (i = 0; i < v->num_elems; i++)
> + fprintf(f, "%lx", __atomic_load_n(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_ACQUIRE));
> + fprintf(f, "\n");
> +
> + fprintf(f, " Token = %lu\n",
> + __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
> +
> + fprintf(f, "Quiescent State Counts for readers:\n");
> + for (i = 0; i < v->num_elems; i++) {
> + bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_ACQUIRE);
> + while (bmap) {
> + t = __builtin_ctzl(bmap);
> + fprintf(f, "thread ID = %d, count = %lu\n", t,
> + __atomic_load_n(
> + &v->qsbr_cnt[i].cnt,
> + __ATOMIC_RELAXED));
> + bmap &= ~(1UL << t);
> + }
> + }
> +
> + return 0;
> +}
> +
> +int rcu_log_type;
> +
> +RTE_INIT(rte_rcu_register)
> +{
> + rcu_log_type = rte_log_register("lib.rcu");
> + if (rcu_log_type >= 0)
> + rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
> +}
> diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
> new file mode 100644
> index 000000000..ff696aeab
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> @@ -0,0 +1,554 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (c) 2018 Arm Limited
> + */
> +
> +#ifndef _RTE_RCU_QSBR_H_
> +#define _RTE_RCU_QSBR_H_
> +
> +/**
> + * @file
> + * RTE Quiescent State Based Reclamation (QSBR)
> + *
> + * Quiescent State (QS) is any point in the thread execution
> + * where the thread does not hold a reference to a data structure
> + * in shared memory. While using lock-less data structures, the writer
> + * can safely free memory once all the reader threads have entered
> + * quiescent state.
> + *
> + * This library provides the ability for the readers to report quiescent
> + * state and for the writers to identify when all the readers have
> + * entered quiescent state.
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <errno.h>
> +#include <rte_common.h>
> +#include <rte_memory.h>
> +#include <rte_lcore.h>
> +#include <rte_debug.h>
> +#include <rte_atomic.h>
> +
> +extern int rcu_log_type;
> +
> +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
> +#define RCU_DP_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> + "%s(): " fmt "\n", __func__, ## args)
> +#else
> +#define RCU_DP_LOG(level, fmt, args...)
> +#endif
> +
> +/* Registered thread IDs are stored as a bitmap of 64b element array.
> + * Given thread id needs to be converted to index into the array and
> + * the id within the array element.
> + */
> +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> +#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
> + RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
> +#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
> + ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
> +#define RTE_QSBR_THRID_INDEX_SHIFT 6
> +#define RTE_QSBR_THRID_MASK 0x3f
> +#define RTE_QSBR_THRID_INVALID 0xffffffff
> +
> +/* Worker thread counter */
> +struct rte_rcu_qsbr_cnt {
> + uint64_t cnt;
> + /**< Quiescent state counter. Value 0 indicates the thread is offline */
> +} __rte_cache_aligned;
> +
> +#define RTE_QSBR_CNT_THR_OFFLINE 0
> +#define RTE_QSBR_CNT_INIT 1
> +
> +/* RTE Quiescent State variable structure.
> + * This structure has two elements that vary in size based on the
> + * 'max_threads' parameter.
> + * 1) Quiescent state counter array
> + * 2) Register thread ID array
> + */
> +struct rte_rcu_qsbr {
> + uint64_t token __rte_cache_aligned;
> + /**< Counter to allow for multiple concurrent quiescent state queries */
> +
> + uint32_t num_elems __rte_cache_aligned;
> + /**< Number of elements in the thread ID array */
> + uint32_t num_threads;
> + /**< Number of threads currently using this QS variable */
> + uint32_t max_threads;
> + /**< Maximum number of threads using this QS variable */
> +
> + struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
> + /**< Quiescent state counter array of 'max_threads' elements */
> +
> + /**< Registered thread IDs are stored in a bitmap array,
> + * after the quiescent state counter array.
> + */
> +} __rte_cache_aligned;
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Return the size of the memory occupied by a Quiescent State variable.
> + *
> + * @param max_threads
> + * Maximum number of threads reporting quiescent state on this variable.
> + * @return
> + * On success - size of memory in bytes required for this QS variable.
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - max_threads is 0
> + */
> +size_t __rte_experimental
> +rte_rcu_qsbr_get_memsize(uint32_t max_threads);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Initialize a Quiescent State (QS) variable.
> + *
> + * @param v
> + * QS variable
> + * @param max_threads
> + * Maximum number of threads reporting quiescent state on this variable.
> + * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
> + * @return
> + * On success - 0
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - max_threads is 0 or 'v' is NULL.
> + *
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Register a reader thread to report its quiescent state
> + * on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + * Any reader thread that wants to report its quiescent state must
> + * call this API. This can be called during initialization or as part
> + * of the packet processing loop.
> + *
> + * Note that rte_rcu_qsbr_thread_online must be called before the
> + * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will report its quiescent state on
> + * the QS variable. thread_id is a value between 0 and (max_threads - 1).
> + * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Remove a reader thread, from the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * This API can be called from the reader threads during shutdown.
> + * Ongoing quiescent state queries will stop waiting for the status from this
> + * unregistered reader thread.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will stop reporting its quiescent
> + * state on the QS variable.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Add a registered reader thread, to the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * Any registered reader thread that wants to report its quiescent state must
> + * call this API before calling rte_rcu_qsbr_quiescent. This can be called
> + * during initialization or as part of the packet processing loop.
> + *
> + * The reader thread must call rte_rcu_thread_offline API, before
> + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> + * API does not wait indefinitely for the reader thread to update its QS.
> + *
> + * The reader thread must call rte_rcu_thread_online API, after the blocking
> + * function call returns, to ensure that rte_rcu_qsbr_check API
> + * waits for the reader thread to update its quiescent state.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will report its quiescent state on
> + * the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
I am not clear on why this function should be inline. Or do you have use
cases where threads go offline and come back online extremely frequently?
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + /* Copy the current value of token.
> + * The fence at the end of the function will ensure that
> + * the following will not move down after the load of any shared
> + * data structure.
> + */
> + t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
> +
> + /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> + * 'cnt' (64b) is accessed atomically.
> + */
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + t, __ATOMIC_RELAXED);
> +
> + /* The subsequent load of the data structure should not
> + * move above the store. Hence a store-load barrier
> + * is required.
> + * If the load of the data structure moves above the store,
> + * writer might not see that the reader is online, even though
> + * the reader is referencing the shared data structure.
> + */
> +#ifdef RTE_ARCH_X86_64
> + /* rte_smp_mb() for x86 is lighter */
> + rte_smp_mb();
> +#else
> + __atomic_thread_fence(__ATOMIC_SEQ_CST);
> +#endif
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Remove a registered reader thread from the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * This can be called during initialization or as part of the packet
> + * processing loop.
> + *
> + * The reader thread must call rte_rcu_thread_offline API, before
> + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> + * API does not wait indefinitely for the reader thread to update its QS.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * rte_rcu_qsbr_check API will not wait for the reader thread with
> + * this thread ID to report its quiescent state on the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
Same here on inlining.
> +{
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + /* The reader can go offline only after the load of the
> + * data structure is completed. i.e. any load of the
> + * data structure cannot move after this store.
> + */
> +
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Ask the reader threads to report the quiescent state
> + * status.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe and can be called from worker threads.
> + *
> + * @param v
> + * QS variable
> + * @return
> + * - This is the token for this call of the API. This should be
> + * passed to rte_rcu_qsbr_check API.
> + */
> +static __rte_always_inline uint64_t __rte_experimental
> +rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL);
> +
> + /* Release the changes to the shared data structure.
> + * This store release will ensure that changes to any data
> + * structure are visible to the workers before the token
> + * update is visible.
> + */
> + t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
> +
> + return t;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Update quiescent state for a reader thread.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * All the reader threads registered to report their quiescent state
> + * on the QS variable must call this API.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Update the quiescent state for the reader with this thread ID.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + /* Acquire the changes to the shared data structure released
> + * by rte_rcu_qsbr_start.
> + * Later loads of the shared data structure should not move
> + * above this load. Hence, use load-acquire.
> + */
> + t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
> +
> + /* Inform the writer that updates are visible to this reader.
> + * Prior loads of the shared data structure should not move
> + * beyond this store. Hence use store-release.
> + */
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + t, __ATOMIC_RELEASE);
> +
> + RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
> + __func__, t, thread_id);
> +}
> +
> +/* Check the quiescent state counter for registered threads only, assuming
> + * that not all threads have registered.
> + */
> +static __rte_always_inline int
> +__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> +{
> + uint32_t i, j, id;
> + uint64_t bmap;
> + uint64_t c;
> + uint64_t *reg_thread_id;
> +
> + for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
> + i < v->num_elems;
> + i++, reg_thread_id++) {
> + /* Load the current registered thread bit map before
> + * loading the reader thread quiescent state counters.
> + */
> + bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
> + id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + while (bmap) {
> + j = __builtin_ctzl(bmap);
> + RCU_DP_LOG(DEBUG,
> + "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
> + __func__, t, wait, bmap, id + j);
> + c = __atomic_load_n(
> + &v->qsbr_cnt[id + j].cnt,
> + __ATOMIC_ACQUIRE);
> + RCU_DP_LOG(DEBUG,
> + "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> + __func__, t, wait, c, id+j);
> + /* Counter is not checked for wrap-around condition
> + * as it is a 64b counter.
> + */
> + if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
This assumes that a 64-bit counter won't overflow, which is close enough
to true given current CPU clock frequencies. ;-)
> + /* This thread is not in quiescent state */
> + if (!wait)
> + return 0;
> +
> + rte_pause();
> + /* This thread might have unregistered.
> + * Re-read the bitmap.
> + */
> + bmap = __atomic_load_n(reg_thread_id,
> + __ATOMIC_ACQUIRE);
> +
> + continue;
> + }
> +
> + bmap &= ~(1UL << j);
> + }
> + }
> +
> + return 1;
> +}
> +
> +/* Check the quiescent state counter for all threads, assuming that
> + * all the threads have registered.
> + */
> +static __rte_always_inline int
> +__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
Does checking the bitmap really take long enough to make this worthwhile
as a separate function? I would think that the bitmap-checking time
would be lost in the noise of cache misses from the ->cnt loads.
Sure, if you invoke __rcu_qsbr_check_selective() in a tight loop in
the absence of readers, you might see __rcu_qsbr_check_all() being a
bit faster. But is that really what DPDK does?
> +{
> + uint32_t i;
> + struct rte_rcu_qsbr_cnt *cnt;
> + uint64_t c;
> +
> + for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
> + RCU_DP_LOG(DEBUG,
> + "%s: check: token = %lu, wait = %d, Thread ID = %d",
> + __func__, t, wait, i);
> + while (1) {
> + c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
> + RCU_DP_LOG(DEBUG,
> + "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> + __func__, t, wait, c, i);
> + /* Counter is not checked for wrap-around condition
> + * as it is a 64b counter.
> + */
> + if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
> + break;
> +
> + /* This thread is not in quiescent state */
> + if (!wait)
> + return 0;
> +
> + rte_pause();
> + }
> + }
> +
> + return 1;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Checks if all the reader threads have entered the quiescent state
> + * referenced by token.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe and can be called from the worker threads as well.
> + *
> + * If this API is called with 'wait' set to true, the following
> + * factors must be considered:
> + *
> + * 1) If the calling thread is also reporting the status on the
> + * same QS variable, it must update the quiescent state status, before
> + * calling this API.
> + *
> + * 2) In addition, while calling from multiple threads, only
> + * one of those threads can be reporting the quiescent state status
> + * on a given QS variable.
> + *
> + * @param v
> + * QS variable
> + * @param t
> + * Token returned by rte_rcu_qsbr_start API
> + * @param wait
> + * If true, block till all the reader threads have completed entering
> + * the quiescent state referenced by token 't'.
> + * @return
> + * - 0 if all reader threads have NOT passed through specified number
> + * of quiescent states.
> + * - 1 if all reader threads have passed through specified number
> + * of quiescent states.
> + */
> +static __rte_always_inline int __rte_experimental
> +rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> +{
> + RTE_ASSERT(v != NULL);
> +
> + if (likely(v->num_threads == v->max_threads))
> + return __rcu_qsbr_check_all(v, t, wait);
> + else
> + return __rcu_qsbr_check_selective(v, t, wait);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Wait till the reader threads have entered quiescent state.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
> + * rte_rcu_qsbr_check APIs.
> + *
> + * If this API is called from multiple threads, only one of
> + * those threads can be reporting the quiescent state status on a
> + * given QS variable.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Thread ID of the caller if it is registered to report quiescent state
> + * on this QS variable (i.e. the calling thread is also part of the
> + * read-side critical section). If not, pass RTE_QSBR_THRID_INVALID.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL);
> +
> + t = rte_rcu_qsbr_start(v);
> +
> + /* If the current thread has a read-side critical section,
> + * update its quiescent state status.
> + */
> + if (thread_id != RTE_QSBR_THRID_INVALID)
> + rte_rcu_qsbr_quiescent(v, thread_id);
> +
> + /* Wait for other readers to enter quiescent state */
> + rte_rcu_qsbr_check(v, t, true);
And you are presumably relying on 64-bit counters to avoid the need to
execute the above code twice in succession. Which again works given
current CPU clock rates combined with system and human lifespans.
Otherwise, there are interesting race conditions that can happen, so
don't try this with a 32-bit counter!!!
(But think of the great^N grandchildren!!!)
More seriously, a comment warning people not to make the counter be 32
bits is in order.
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Dump the details of a single QS variables to a file.
> + *
> + * It is NOT multi-thread safe.
> + *
> + * @param f
> + * A pointer to a file for output
> + * @param v
> + * QS variable
> + * @return
> + * On success - 0
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - NULL parameters are passed
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_RCU_QSBR_H_ */
> diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
> new file mode 100644
> index 000000000..ad8cb517c
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_version.map
> @@ -0,0 +1,11 @@
> +EXPERIMENTAL {
> + global:
> +
> + rte_rcu_qsbr_get_memsize;
> + rte_rcu_qsbr_init;
> + rte_rcu_qsbr_thread_register;
> + rte_rcu_qsbr_thread_unregister;
> + rte_rcu_qsbr_dump;
> +
> + local: *;
> +};
> diff --git a/lib/meson.build b/lib/meson.build
> index 595314d7d..67be10659 100644
> --- a/lib/meson.build
> +++ b/lib/meson.build
> @@ -22,7 +22,7 @@ libraries = [
> 'gro', 'gso', 'ip_frag', 'jobstats',
> 'kni', 'latencystats', 'lpm', 'member',
> 'power', 'pdump', 'rawdev',
> - 'reorder', 'sched', 'security', 'stack', 'vhost',
> + 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
> #ipsec lib depends on crypto and security
> 'ipsec',
> # add pkt framework libs which use other libs from above
> diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> index 7d994bece..e93cc366d 100644
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
> _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
> _LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
> _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
>
> ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> _LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
> --
> 2.17.1
>
* Re: [dpdk-dev] [PATCH v4 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-10 18:14 ` Paul E. McKenney
@ 2019-04-10 18:14 ` Paul E. McKenney
2019-04-11 4:35 ` Honnappa Nagarahalli
1 sibling, 0 replies; 260+ messages in thread
From: Paul E. McKenney @ 2019-04-10 18:14 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: konstantin.ananyev, stephen, marko.kovacevic, dev, gavin.hu,
dharmik.thakkar, malvika.gupta
On Wed, Apr 10, 2019 at 06:20:04AM -0500, Honnappa Nagarahalli wrote:
> Add RCU library supporting quiescent state based memory reclamation method.
> This library helps identify the quiescent state of the reader threads so
> that the writers can free the memory associated with the lock less data
> structures.
I don't see any sign of read-side markers (rcu_read_lock() and
rcu_read_unlock() in the Linux kernel, userspace RCU, etc.).
Yes, strictly speaking, these are not needed for QSBR to operate, but they
make it way easier to maintain and debug code using RCU. For example,
given the read-side markers, you can check for errors like having a call
to rte_rcu_qsbr_quiescent() in the middle of a reader quite easily.
Without those read-side markers, life can be quite hard and you will
really hate yourself for failing to have provided them.
Some additional questions and comments interspersed.
Thanx, Paul
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Steve Capper <steve.capper@arm.com>
> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
> MAINTAINERS | 5 +
> config/common_base | 6 +
> lib/Makefile | 2 +
> lib/librte_rcu/Makefile | 23 ++
> lib/librte_rcu/meson.build | 5 +
> lib/librte_rcu/rte_rcu_qsbr.c | 237 ++++++++++++
> lib/librte_rcu/rte_rcu_qsbr.h | 554 +++++++++++++++++++++++++++++
> lib/librte_rcu/rte_rcu_version.map | 11 +
> lib/meson.build | 2 +-
> mk/rte.app.mk | 1 +
> 10 files changed, 845 insertions(+), 1 deletion(-)
> create mode 100644 lib/librte_rcu/Makefile
> create mode 100644 lib/librte_rcu/meson.build
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
> create mode 100644 lib/librte_rcu/rte_rcu_version.map
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 9774344dd..6e9766eed 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1267,6 +1267,11 @@ F: examples/bpf/
> F: app/test/test_bpf.c
> F: doc/guides/prog_guide/bpf_lib.rst
>
> +RCU - EXPERIMENTAL
> +M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> +F: lib/librte_rcu/
> +F: doc/guides/prog_guide/rcu_lib.rst
> +
>
> Test Applications
> -----------------
> diff --git a/config/common_base b/config/common_base
> index 8da08105b..ad70c79e1 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -829,6 +829,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
> #
> CONFIG_RTE_LIBRTE_TELEMETRY=n
>
> +#
> +# Compile librte_rcu
> +#
> +CONFIG_RTE_LIBRTE_RCU=y
> +CONFIG_RTE_LIBRTE_RCU_DEBUG=n
> +
> #
> # Compile librte_lpm
> #
> diff --git a/lib/Makefile b/lib/Makefile
> index 26021d0c0..791e0d991 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
> DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
> DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
> DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
> +DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
> +DEPDIRS-librte_rcu := librte_eal
>
> ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
> diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
> new file mode 100644
> index 000000000..6aa677bd1
> --- /dev/null
> +++ b/lib/librte_rcu/Makefile
> @@ -0,0 +1,23 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Arm Limited
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +# library name
> +LIB = librte_rcu.a
> +
> +CFLAGS += -DALLOW_EXPERIMENTAL_API
> +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
> +LDLIBS += -lrte_eal
> +
> +EXPORT_MAP := rte_rcu_version.map
> +
> +LIBABIVER := 1
> +
> +# all source are stored in SRCS-y
> +SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
> +
> +# install includes
> +SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
> new file mode 100644
> index 000000000..c009ae4b7
> --- /dev/null
> +++ b/lib/librte_rcu/meson.build
> @@ -0,0 +1,5 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Arm Limited
> +
> +sources = files('rte_rcu_qsbr.c')
> +headers = files('rte_rcu_qsbr.h')
> diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
> new file mode 100644
> index 000000000..53d08446a
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_qsbr.c
> @@ -0,0 +1,237 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + *
> + * Copyright (c) 2018 Arm Limited
> + */
> +
> +#include <stdio.h>
> +#include <string.h>
> +#include <stdint.h>
> +#include <errno.h>
> +
> +#include <rte_common.h>
> +#include <rte_log.h>
> +#include <rte_memory.h>
> +#include <rte_malloc.h>
> +#include <rte_eal.h>
> +#include <rte_eal_memconfig.h>
> +#include <rte_atomic.h>
> +#include <rte_per_lcore.h>
> +#include <rte_lcore.h>
> +#include <rte_errno.h>
> +
> +#include "rte_rcu_qsbr.h"
> +
> +/* Get the memory size of QSBR variable */
> +size_t __rte_experimental
> +rte_rcu_qsbr_get_memsize(uint32_t max_threads)
> +{
> + size_t sz;
> +
> + if (max_threads == 0) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid max_threads %u\n",
> + __func__, max_threads);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + sz = sizeof(struct rte_rcu_qsbr);
> +
> + /* Add the size of quiescent state counter array */
> + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> +
> + /* Add the size of the registered thread ID bitmap array */
> + sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
> +
> + return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
Given that you align here, should you also align in the earlier steps
in the computation of sz?
> +}
> +
> +/* Initialize a quiescent state variable */
> +int __rte_experimental
> +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
> +{
> + size_t sz;
> +
> + if (v == NULL) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + sz = rte_rcu_qsbr_get_memsize(max_threads);
> + if (sz == 1)
> + return 1;
> +
> + /* Set all the threads to offline */
> + memset(v, 0, sz);
We calculate sz here, but it looks like the caller must also calculate it
in order to correctly allocate the memory referenced by the "v" argument
to this function, with bad things happening if the two calculations get
different results. Should "v" instead be allocated within this function
to avoid this sort of problem?
> + v->max_threads = max_threads;
> + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> + v->token = RTE_QSBR_CNT_INIT;
> +
> + return 0;
> +}
> +
> +/* Register a reader thread to report its quiescent state
> + * on a QS variable.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id, success;
> + uint64_t old_bmap, new_bmap;
> +
> + if (v == NULL || thread_id >= v->max_threads) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Make sure that the counter for registered threads does not
> + * go out of sync. Hence, additional checks are required.
> + */
> + /* Check if the thread is already registered */
> + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_RELAXED);
> + if (old_bmap & 1UL << id)
> + return 0;
> +
> + do {
> + new_bmap = old_bmap | (1UL << id);
> + success = __atomic_compare_exchange(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + &old_bmap, &new_bmap, 0,
> + __ATOMIC_RELEASE, __ATOMIC_RELAXED);
> +
> + if (success)
> + __atomic_fetch_add(&v->num_threads,
> + 1, __ATOMIC_RELAXED);
> + else if (old_bmap & (1UL << id))
> + /* Someone else registered this thread.
> + * Counter should not be incremented.
> + */
> + return 0;
> + } while (success == 0);
This would be simpler if threads were required to register themselves.
Maybe you have use cases requiring registration of other threads, but
this capability is adding significant complexity, so it might be worth
some thought.
> + return 0;
> +}
> +
> +/* Remove a reader thread, from the list of threads reporting their
> + * quiescent state on a QS variable.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id, success;
> + uint64_t old_bmap, new_bmap;
> +
> + if (v == NULL || thread_id >= v->max_threads) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Make sure that the counter for registered threads does not
> + * go out of sync. Hence, additional checks are required.
> + */
> + /* Check if the thread is already unregistered */
> + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_RELAXED);
> + if (!(old_bmap & (1UL << id)))
> + return 0;
> +
> + do {
> + new_bmap = old_bmap & ~(1UL << id);
> + /* Make sure any loads of the shared data structure are
> + * completed before removal of the thread from the list of
> + * reporting threads.
> + */
> + success = __atomic_compare_exchange(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + &old_bmap, &new_bmap, 0,
> + __ATOMIC_RELEASE, __ATOMIC_RELAXED);
> +
> + if (success)
> + __atomic_fetch_sub(&v->num_threads,
> + 1, __ATOMIC_RELAXED);
> + else if (!(old_bmap & (1UL << id)))
> + /* Someone else unregistered this thread.
> + * Counter should not be decremented.
> + */
> + return 0;
> + } while (success == 0);
Ditto!
> + return 0;
> +}
> +
> +/* Dump the details of a single quiescent state variable to a file. */
> +int __rte_experimental
> +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
> +{
> + uint64_t bmap;
> + uint32_t i, t;
> +
> + if (v == NULL || f == NULL) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + fprintf(f, "\nQuiescent State Variable @%p\n", v);
> +
> + fprintf(f, " QS variable memory size = %lu\n",
> + rte_rcu_qsbr_get_memsize(v->max_threads));
> + fprintf(f, " Given # max threads = %u\n", v->max_threads);
> + fprintf(f, " Current # threads = %u\n", v->num_threads);
> +
> + fprintf(f, " Registered thread ID mask = 0x");
> + for (i = 0; i < v->num_elems; i++)
> + fprintf(f, "%lx", __atomic_load_n(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_ACQUIRE));
> + fprintf(f, "\n");
> +
> + fprintf(f, " Token = %lu\n",
> + __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
> +
> + fprintf(f, "Quiescent State Counts for readers:\n");
> + for (i = 0; i < v->num_elems; i++) {
> + bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_ACQUIRE);
> + while (bmap) {
> + t = __builtin_ctzl(bmap);
> + fprintf(f, "thread ID = %d, count = %lu\n",
> + (i << RTE_QSBR_THRID_INDEX_SHIFT) + t,
> + __atomic_load_n(
> + &v->qsbr_cnt[(i << RTE_QSBR_THRID_INDEX_SHIFT) + t].cnt,
> + __ATOMIC_RELAXED));
> + bmap &= ~(1UL << t);
> + }
> + }
> +
> + return 0;
> +}
> +
> +int rcu_log_type;
> +
> +RTE_INIT(rte_rcu_register)
> +{
> + rcu_log_type = rte_log_register("lib.rcu");
> + if (rcu_log_type >= 0)
> + rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
> +}
> diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
> new file mode 100644
> index 000000000..ff696aeab
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> @@ -0,0 +1,554 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (c) 2018 Arm Limited
> + */
> +
> +#ifndef _RTE_RCU_QSBR_H_
> +#define _RTE_RCU_QSBR_H_
> +
> +/**
> + * @file
> + * RTE Quiescent State Based Reclamation (QSBR)
> + *
> + * Quiescent State (QS) is any point in the thread execution
> + * where the thread does not hold a reference to a data structure
> + * in shared memory. While using lock-less data structures, the writer
> + * can safely free memory once all the reader threads have entered
> + * quiescent state.
> + *
> + * This library provides the ability for the readers to report quiescent
> + * state and for the writers to identify when all the readers have
> + * entered quiescent state.
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <errno.h>
> +#include <rte_common.h>
> +#include <rte_memory.h>
> +#include <rte_lcore.h>
> +#include <rte_debug.h>
> +#include <rte_atomic.h>
> +
> +extern int rcu_log_type;
> +
> +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
> +#define RCU_DP_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> + "%s(): " fmt "\n", __func__, ## args)
> +#else
> +#define RCU_DP_LOG(level, fmt, args...)
> +#endif
> +
> +/* Registered thread IDs are stored as a bitmap of 64b element array.
> + * Given thread id needs to be converted to index into the array and
> + * the id within the array element.
> + */
> +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> +#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
> + RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
> +#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
> + ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
> +#define RTE_QSBR_THRID_INDEX_SHIFT 6
> +#define RTE_QSBR_THRID_MASK 0x3f
> +#define RTE_QSBR_THRID_INVALID 0xffffffff
> +
> +/* Worker thread counter */
> +struct rte_rcu_qsbr_cnt {
> + uint64_t cnt;
> + /**< Quiescent state counter. Value 0 indicates the thread is offline */
> +} __rte_cache_aligned;
> +
> +#define RTE_QSBR_CNT_THR_OFFLINE 0
> +#define RTE_QSBR_CNT_INIT 1
> +
> +/* RTE Quiescent State variable structure.
> + * This structure has two elements that vary in size based on the
> + * 'max_threads' parameter.
> + * 1) Quiescent state counter array
> + * 2) Register thread ID array
> + */
> +struct rte_rcu_qsbr {
> + uint64_t token __rte_cache_aligned;
> + /**< Counter to allow for multiple concurrent quiescent state queries */
> +
> + uint32_t num_elems __rte_cache_aligned;
> + /**< Number of elements in the thread ID array */
> + uint32_t num_threads;
> + /**< Number of threads currently using this QS variable */
> + uint32_t max_threads;
> + /**< Maximum number of threads using this QS variable */
> +
> + struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
> + /**< Quiescent state counter array of 'max_threads' elements */
> +
> + /**< Registered thread IDs are stored in a bitmap array,
> + * after the quiescent state counter array.
> + */
> +} __rte_cache_aligned;
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Return the size of the memory occupied by a Quiescent State variable.
> + *
> + * @param max_threads
> + * Maximum number of threads reporting quiescent state on this variable.
> + * @return
> + * On success - size of memory in bytes required for this QS variable.
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - max_threads is 0
> + */
> +size_t __rte_experimental
> +rte_rcu_qsbr_get_memsize(uint32_t max_threads);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Initialize a Quiescent State (QS) variable.
> + *
> + * @param v
> + * QS variable
> + * @param max_threads
> + * Maximum number of threads reporting quiescent state on this variable.
> + * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
> + * @return
> + * On success - 0
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - max_threads is 0 or 'v' is NULL.
> + *
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Register a reader thread to report its quiescent state
> + * on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + * Any reader thread that wants to report its quiescent state must
> + * call this API. This can be called during initialization or as part
> + * of the packet processing loop.
> + *
> + * Note that rte_rcu_qsbr_thread_online must be called before the
> + * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will report its quiescent state on
> + * the QS variable. thread_id is a value between 0 and (max_threads - 1).
> + * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Remove a reader thread, from the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * This API can be called from the reader threads during shutdown.
> + * Ongoing quiescent state queries will stop waiting for the status from this
> + * unregistered reader thread.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will stop reporting its quiescent
> + * state on the QS variable.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Add a registered reader thread, to the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * Any registered reader thread that wants to report its quiescent state must
> + * call this API before calling rte_rcu_qsbr_quiescent. This can be called
> + * during initialization or as part of the packet processing loop.
> + *
> + * The reader thread must call rte_rcu_qsbr_thread_offline API, before
> + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> + * API does not wait indefinitely for the reader thread to update its QS.
> + *
> + * The reader thread must call rte_rcu_qsbr_thread_online API, after the blocking
> + * function call returns, to ensure that rte_rcu_qsbr_check API
> + * waits for the reader thread to update its quiescent state.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will report its quiescent state on
> + * the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
I am not clear on why this function should be inline. Or do you have use
cases where threads go offline and come back online extremely frequently?
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + /* Copy the current value of token.
> + * The fence at the end of the function will ensure that
> + * the following will not move down after the load of any shared
> + * data structure.
> + */
> + t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
> +
> + /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> + * 'cnt' (64b) is accessed atomically.
> + */
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + t, __ATOMIC_RELAXED);
> +
> + /* The subsequent load of the data structure should not
> + * move above the store. Hence a store-load barrier
> + * is required.
> + * If the load of the data structure moves above the store,
> + * writer might not see that the reader is online, even though
> + * the reader is referencing the shared data structure.
> + */
> +#ifdef RTE_ARCH_X86_64
> + /* rte_smp_mb() for x86 is lighter */
> + rte_smp_mb();
> +#else
> + __atomic_thread_fence(__ATOMIC_SEQ_CST);
> +#endif
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Remove a registered reader thread from the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * This can be called during initialization or as part of the packet
> + * processing loop.
> + *
> + * The reader thread must call rte_rcu_qsbr_thread_offline API, before
> + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> + * API does not wait indefinitely for the reader thread to update its QS.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * rte_rcu_qsbr_check API will not wait for the reader thread with
> + * this thread ID to report its quiescent state on the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
Same here on inlining.
> +{
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + /* The reader can go offline only after the load of the
> + * data structure is completed. i.e. any load of the
> + * data structure cannot move after this store.
> + */
> +
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Ask the reader threads to report the quiescent state
> + * status.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe and can be called from worker threads.
> + *
> + * @param v
> + * QS variable
> + * @return
> + * - This is the token for this call of the API. This should be
> + * passed to rte_rcu_qsbr_check API.
> + */
> +static __rte_always_inline uint64_t __rte_experimental
> +rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL);
> +
> + /* Release the changes to the shared data structure.
> + * This store release will ensure that changes to any data
> + * structure are visible to the workers before the token
> + * update is visible.
> + */
> + t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
> +
> + return t;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Update quiescent state for a reader thread.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * All the reader threads registered to report their quiescent state
> + * on the QS variable must call this API.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Update the quiescent state for the reader with this thread ID.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + /* Acquire the changes to the shared data structure released
> + * by rte_rcu_qsbr_start.
> + * Later loads of the shared data structure should not move
> + * above this load. Hence, use load-acquire.
> + */
> + t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
> +
> + /* Inform the writer that updates are visible to this reader.
> + * Prior loads of the shared data structure should not move
> + * beyond this store. Hence use store-release.
> + */
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + t, __ATOMIC_RELEASE);
> +
> + RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
> + __func__, t, thread_id);
> +}
> +
> +/* Check the quiescent state counter for registered threads only, assuming
> + * that not all threads have registered.
> + */
> +static __rte_always_inline int
> +__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> +{
> + uint32_t i, j, id;
> + uint64_t bmap;
> + uint64_t c;
> + uint64_t *reg_thread_id;
> +
> + for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
> + i < v->num_elems;
> + i++, reg_thread_id++) {
> + /* Load the current registered thread bit map before
> + * loading the reader thread quiescent state counters.
> + */
> + bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
> + id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + while (bmap) {
> + j = __builtin_ctzl(bmap);
> + RCU_DP_LOG(DEBUG,
> + "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
> + __func__, t, wait, bmap, id + j);
> + c = __atomic_load_n(
> + &v->qsbr_cnt[id + j].cnt,
> + __ATOMIC_ACQUIRE);
> + RCU_DP_LOG(DEBUG,
> + "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> + __func__, t, wait, c, id+j);
> + /* Counter is not checked for wrap-around condition
> + * as it is a 64b counter.
> + */
> + if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
This assumes that a 64-bit counter won't overflow, which is close enough
to true given current CPU clock frequencies. ;-)
> + /* This thread is not in quiescent state */
> + if (!wait)
> + return 0;
> +
> + rte_pause();
> + /* This thread might have unregistered.
> + * Re-read the bitmap.
> + */
> + bmap = __atomic_load_n(reg_thread_id,
> + __ATOMIC_ACQUIRE);
> +
> + continue;
> + }
> +
> + bmap &= ~(1UL << j);
> + }
> + }
> +
> + return 1;
> +}
> +
> +/* Check the quiescent state counter for all threads, assuming that
> + * all the threads have registered.
> + */
> +static __rte_always_inline int
> +__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
Does checking the bitmap really take long enough to make this worthwhile
as a separate function? I would think that the bitmap-checking time
would be lost in the noise of cache misses from the ->cnt loads.
Sure, if you invoke __rcu_qsbr_check_selective() in a tight loop in
the absence of readers, you might see __rcu_qsbr_check_all() being a
bit faster. But is that really what DPDK does?
> +{
> + uint32_t i;
> + struct rte_rcu_qsbr_cnt *cnt;
> + uint64_t c;
> +
> + for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
> + RCU_DP_LOG(DEBUG,
> + "%s: check: token = %lu, wait = %d, Thread ID = %d",
> + __func__, t, wait, i);
> + while (1) {
> + c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
> + RCU_DP_LOG(DEBUG,
> + "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> + __func__, t, wait, c, i);
> + /* Counter is not checked for wrap-around condition
> + * as it is a 64b counter.
> + */
> + if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
> + break;
> +
> + /* This thread is not in quiescent state */
> + if (!wait)
> + return 0;
> +
> + rte_pause();
> + }
> + }
> +
> + return 1;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Checks if all the reader threads have entered the quiescent state
> + * referenced by token.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe and can be called from the worker threads as well.
> + *
> + * If this API is called with 'wait' set to true, the following
> + * factors must be considered:
> + *
> + * 1) If the calling thread is also reporting the status on the
> + * same QS variable, it must update the quiescent state status, before
> + * calling this API.
> + *
> + * 2) In addition, while calling from multiple threads, only
> + * one of those threads can be reporting the quiescent state status
> + * on a given QS variable.
> + *
> + * @param v
> + * QS variable
> + * @param t
> + * Token returned by rte_rcu_qsbr_start API
> + * @param wait
> + * If true, block till all the reader threads have completed entering
> + * the quiescent state referenced by token 't'.
> + * @return
> + * - 0 if all reader threads have NOT passed through specified number
> + * of quiescent states.
> + * - 1 if all reader threads have passed through specified number
> + * of quiescent states.
> + */
> +static __rte_always_inline int __rte_experimental
> +rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> +{
> + RTE_ASSERT(v != NULL);
> +
> + if (likely(v->num_threads == v->max_threads))
> + return __rcu_qsbr_check_all(v, t, wait);
> + else
> + return __rcu_qsbr_check_selective(v, t, wait);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Wait till the reader threads have entered quiescent state.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
> + * rte_rcu_qsbr_check APIs.
> + *
> + * If this API is called from multiple threads, only one of
> + * those threads can be reporting the quiescent state status on a
> + * given QS variable.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Thread ID of the caller if it is registered to report quiescent state
> + * on this QS variable (i.e. the calling thread is also part of the
> + * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL);
> +
> + t = rte_rcu_qsbr_start(v);
> +
> + /* If the current thread has readside critical section,
> + * update its quiescent state status.
> + */
> + if (thread_id != RTE_QSBR_THRID_INVALID)
> + rte_rcu_qsbr_quiescent(v, thread_id);
> +
> + /* Wait for other readers to enter quiescent state */
> + rte_rcu_qsbr_check(v, t, true);
And you are presumably relying on 64-bit counters to avoid the need to
execute the above code twice in succession. Which again works given
current CPU clock rates combined with system and human lifespans.
Otherwise, there are interesting race conditions that can happen, so
don't try this with a 32-bit counter!!!
(But think of the great^N grandchildren!!!)
More seriously, a comment warning people not to make the counter be 32
bits is in order.
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Dump the details of a single QS variables to a file.
> + *
> + * It is NOT multi-thread safe.
> + *
> + * @param f
> + * A pointer to a file for output
> + * @param v
> + * QS variable
> + * @return
> + * On success - 0
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - NULL parameters are passed
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_RCU_QSBR_H_ */
> diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
> new file mode 100644
> index 000000000..ad8cb517c
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_version.map
> @@ -0,0 +1,11 @@
> +EXPERIMENTAL {
> + global:
> +
> + rte_rcu_qsbr_get_memsize;
> + rte_rcu_qsbr_init;
> + rte_rcu_qsbr_thread_register;
> + rte_rcu_qsbr_thread_unregister;
> + rte_rcu_qsbr_dump;
> +
> + local: *;
> +};
> diff --git a/lib/meson.build b/lib/meson.build
> index 595314d7d..67be10659 100644
> --- a/lib/meson.build
> +++ b/lib/meson.build
> @@ -22,7 +22,7 @@ libraries = [
> 'gro', 'gso', 'ip_frag', 'jobstats',
> 'kni', 'latencystats', 'lpm', 'member',
> 'power', 'pdump', 'rawdev',
> - 'reorder', 'sched', 'security', 'stack', 'vhost',
> + 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
> #ipsec lib depends on crypto and security
> 'ipsec',
> # add pkt framework libs which use other libs from above
> diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> index 7d994bece..e93cc366d 100644
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
> _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
> _LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
> _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
>
> ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> _LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
> --
> 2.17.1
>
* Re: [dpdk-dev] [PATCH v4 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-10 18:14 ` Paul E. McKenney
2019-04-10 18:14 ` Paul E. McKenney
@ 2019-04-11 4:35 ` Honnappa Nagarahalli
2019-04-11 4:35 ` Honnappa Nagarahalli
2019-04-11 15:26 ` Paul E. McKenney
1 sibling, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-11 4:35 UTC (permalink / raw)
To: paulmck
Cc: konstantin.ananyev, stephen, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
Hi Paul,
Thank you for your feedback.
> -----Original Message-----
> From: Paul E. McKenney <paulmck@linux.ibm.com>
> Sent: Wednesday, April 10, 2019 1:15 PM
> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Cc: konstantin.ananyev@intel.com; stephen@networkplumber.org;
> marko.kovacevic@intel.com; dev@dpdk.org; Gavin Hu (Arm Technology
> China) <Gavin.Hu@arm.com>; Dharmik Thakkar
> <Dharmik.Thakkar@arm.com>; Malvika Gupta <Malvika.Gupta@arm.com>
> Subject: Re: [PATCH v4 1/3] rcu: add RCU library supporting QSBR
> mechanism
>
> On Wed, Apr 10, 2019 at 06:20:04AM -0500, Honnappa Nagarahalli wrote:
> > Add RCU library supporting quiescent state based memory reclamation
> method.
> > This library helps identify the quiescent state of the reader threads
> > so that the writers can free the memory associated with the lock less
> > data structures.
>
> I don't see any sign of read-side markers (rcu_read_lock() and
> rcu_read_unlock() in the Linux kernel, userspace RCU, etc.).
>
> Yes, strictly speaking, these are not needed for QSBR to operate, but they
These APIs would be empty for QSBR.
> make it way easier to maintain and debug code using RCU. For example,
> given the read-side markers, you can check for errors like having a call to
> rte_rcu_qsbr_quiescent() in the middle of a reader quite easily.
> Without those read-side markers, life can be quite hard and you will really
> hate yourself for failing to have provided them.
I want to make sure I understood this: do you mean the application would mark the code before and after accessing the shared data structure on the reader side?
rte_rcu_qsbr_lock()
<begin access shared data structure>
...
...
<end access shared data structure>
rte_rcu_qsbr_unlock()
If someone is debugging this code, they have to make sure that there is an unlock for every lock and there is no call to rte_rcu_qsbr_quiescent in between.
That sounds good to me. Since these APIs would be empty for QSBR, they would not add any extra cycles either.
Please let me know if my understanding is correct.
>
> Some additional questions and comments interspersed.
>
> Thanx, Paul
>
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> > MAINTAINERS | 5 +
> > config/common_base | 6 +
> > lib/Makefile | 2 +
> > lib/librte_rcu/Makefile | 23 ++
> > lib/librte_rcu/meson.build | 5 +
> > lib/librte_rcu/rte_rcu_qsbr.c | 237 ++++++++++++
> > lib/librte_rcu/rte_rcu_qsbr.h | 554
> +++++++++++++++++++++++++++++
> > lib/librte_rcu/rte_rcu_version.map | 11 +
> > lib/meson.build | 2 +-
> > mk/rte.app.mk | 1 +
> > 10 files changed, 845 insertions(+), 1 deletion(-) create mode
> > 100644 lib/librte_rcu/Makefile create mode 100644
> > lib/librte_rcu/meson.build create mode 100644
> > lib/librte_rcu/rte_rcu_qsbr.c create mode 100644
> > lib/librte_rcu/rte_rcu_qsbr.h create mode 100644
> > lib/librte_rcu/rte_rcu_version.map
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index 9774344dd..6e9766eed 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -1267,6 +1267,11 @@ F: examples/bpf/
> > F: app/test/test_bpf.c
> > F: doc/guides/prog_guide/bpf_lib.rst
> >
> > +RCU - EXPERIMENTAL
> > +M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > +F: lib/librte_rcu/
> > +F: doc/guides/prog_guide/rcu_lib.rst
> > +
> >
> > Test Applications
> > -----------------
> > diff --git a/config/common_base b/config/common_base
> > index 8da08105b..ad70c79e1 100644
> > --- a/config/common_base
> > +++ b/config/common_base
> > @@ -829,6 +829,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
> > #
> > CONFIG_RTE_LIBRTE_TELEMETRY=n
> >
> > +#
> > +# Compile librte_rcu
> > +#
> > +CONFIG_RTE_LIBRTE_RCU=y
> > +CONFIG_RTE_LIBRTE_RCU_DEBUG=n
> > +
> > #
> > # Compile librte_lpm
> > #
> > diff --git a/lib/Makefile b/lib/Makefile
> > index 26021d0c0..791e0d991 100644
> > --- a/lib/Makefile
> > +++ b/lib/Makefile
> > @@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
> > DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
> > DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
> > DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
> > +DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
> > +DEPDIRS-librte_rcu := librte_eal
> >
> > ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> > DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
> > diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
> > new file mode 100644
> > index 000000000..6aa677bd1
> > --- /dev/null
> > +++ b/lib/librte_rcu/Makefile
> > @@ -0,0 +1,23 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright(c) 2018 Arm Limited
> > +
> > +include $(RTE_SDK)/mk/rte.vars.mk
> > +
> > +# library name
> > +LIB = librte_rcu.a
> > +
> > +CFLAGS += -DALLOW_EXPERIMENTAL_API
> > +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
> > +LDLIBS += -lrte_eal
> > +
> > +EXPORT_MAP := rte_rcu_version.map
> > +
> > +LIBABIVER := 1
> > +
> > +# all source are stored in SRCS-y
> > +SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
> > +
> > +# install includes
> > +SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
> > +
> > +include $(RTE_SDK)/mk/rte.lib.mk
> > diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
> > new file mode 100644
> > index 000000000..c009ae4b7
> > --- /dev/null
> > +++ b/lib/librte_rcu/meson.build
> > @@ -0,0 +1,5 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright(c) 2018 Arm Limited
> > +
> > +sources = files('rte_rcu_qsbr.c')
> > +headers = files('rte_rcu_qsbr.h')
> > diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
> > new file mode 100644
> > index 000000000..53d08446a
> > --- /dev/null
> > +++ b/lib/librte_rcu/rte_rcu_qsbr.c
> > @@ -0,0 +1,237 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + *
> > + * Copyright (c) 2018 Arm Limited
> > + */
> > +
> > +#include <stdio.h>
> > +#include <string.h>
> > +#include <stdint.h>
> > +#include <errno.h>
> > +
> > +#include <rte_common.h>
> > +#include <rte_log.h>
> > +#include <rte_memory.h>
> > +#include <rte_malloc.h>
> > +#include <rte_eal.h>
> > +#include <rte_eal_memconfig.h>
> > +#include <rte_atomic.h>
> > +#include <rte_per_lcore.h>
> > +#include <rte_lcore.h>
> > +#include <rte_errno.h>
> > +
> > +#include "rte_rcu_qsbr.h"
> > +
> > +/* Get the memory size of QSBR variable */
> > +size_t __rte_experimental
> > +rte_rcu_qsbr_get_memsize(uint32_t max_threads)
> > +{
> > + size_t sz;
> > +
> > + if (max_threads == 0) {
> > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > + "%s(): Invalid max_threads %u\n",
> > + __func__, max_threads);
> > + rte_errno = EINVAL;
> > +
> > + return 1;
> > + }
> > +
> > + sz = sizeof(struct rte_rcu_qsbr);
> > +
> > + /* Add the size of quiescent state counter array */
> > + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> > +
> > + /* Add the size of the registered thread ID bitmap array */
> > + sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
> > +
> > + return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
>
> Given that you align here, should you also align in the earlier steps in the
> computation of sz?
Agree. I will remove the align here and keep the earlier one as the intent is to align the thread ID array.
>
> > +}
> > +
> > +/* Initialize a quiescent state variable */
> > +int __rte_experimental
> > +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
> > +{
> > + size_t sz;
> > +
> > + if (v == NULL) {
> > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > + "%s(): Invalid input parameter\n", __func__);
> > + rte_errno = EINVAL;
> > +
> > + return 1;
> > + }
> > +
> > + sz = rte_rcu_qsbr_get_memsize(max_threads);
> > + if (sz == 1)
> > + return 1;
> > +
> > + /* Set all the threads to offline */
> > + memset(v, 0, sz);
>
> We calculate sz here, but it looks like the caller must also calculate it in
> order to correctly allocate the memory referenced by the "v" argument to
> this function, with bad things happening if the two calculations get
> different results. Should "v" instead be allocated within this function to
> avoid this sort of problem?
An earlier version allocated the memory within this library. However, it was decided to go with the current implementation as it provides flexibility for the application to manage the memory as it sees fit. For example, it could allocate this as part of another structure in a single allocation. This also falls in line with the approach taken in other libraries.
>
> > + v->max_threads = max_threads;
> > + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> > + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> > + v->token = RTE_QSBR_CNT_INIT;
> > +
> > + return 0;
> > +}
> > +
> > +/* Register a reader thread to report its quiescent state
> > + * on a QS variable.
> > + */
> > +int __rte_experimental
> > +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > +{
> > + unsigned int i, id, success;
> > + uint64_t old_bmap, new_bmap;
> > +
> > + if (v == NULL || thread_id >= v->max_threads) {
> > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > + "%s(): Invalid input parameter\n", __func__);
> > + rte_errno = EINVAL;
> > +
> > + return 1;
> > + }
> > +
> > + id = thread_id & RTE_QSBR_THRID_MASK;
> > + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > +
> > + /* Make sure that the counter for registered threads does not
> > + * go out of sync. Hence, additional checks are required.
> > + */
> > + /* Check if the thread is already registered */
> > + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > + __ATOMIC_RELAXED);
> > + if (old_bmap & 1UL << id)
> > + return 0;
> > +
> > + do {
> > + new_bmap = old_bmap | (1UL << id);
> > + success = __atomic_compare_exchange(
> > + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > + &old_bmap, &new_bmap, 0,
> > + __ATOMIC_RELEASE,
> __ATOMIC_RELAXED);
> > +
> > + if (success)
> > + __atomic_fetch_add(&v->num_threads,
> > + 1, __ATOMIC_RELAXED);
> > + else if (old_bmap & (1UL << id))
> > + /* Someone else registered this thread.
> > + * Counter should not be incremented.
> > + */
> > + return 0;
> > + } while (success == 0);
>
> This would be simpler if threads were required to register themselves.
> Maybe you have use cases requiring registration of other threads, but this
> capability is adding significant complexity, so it might be worth some
> thought.
>
It was simpler earlier (a plain __atomic_fetch_or). The complexity was added because 'num_threads' must not go out of sync with the bitmap.
> > + return 0;
> > +}
> > +
> > +/* Remove a reader thread, from the list of threads reporting their
> > + * quiescent state on a QS variable.
> > + */
> > +int __rte_experimental
> > +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > +{
> > + unsigned int i, id, success;
> > + uint64_t old_bmap, new_bmap;
> > +
> > + if (v == NULL || thread_id >= v->max_threads) {
> > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > + "%s(): Invalid input parameter\n", __func__);
> > + rte_errno = EINVAL;
> > +
> > + return 1;
> > + }
> > +
> > + id = thread_id & RTE_QSBR_THRID_MASK;
> > + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > +
> > + /* Make sure that the counter for registered threads does not
> > + * go out of sync. Hence, additional checks are required.
> > + */
> > + /* Check if the thread is already unregistered */
> > + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > + __ATOMIC_RELAXED);
> > + if (old_bmap & ~(1UL << id))
> > + return 0;
> > +
> > + do {
> > + new_bmap = old_bmap & ~(1UL << id);
> > + /* Make sure any loads of the shared data structure are
> > + * completed before removal of the thread from the list of
> > + * reporting threads.
> > + */
> > + success = __atomic_compare_exchange(
> > + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > + &old_bmap, &new_bmap, 0,
> > + __ATOMIC_RELEASE,
> __ATOMIC_RELAXED);
> > +
> > + if (success)
> > + __atomic_fetch_sub(&v->num_threads,
> > + 1, __ATOMIC_RELAXED);
> > + else if (old_bmap & ~(1UL << id))
> > + /* Someone else unregistered this thread.
> > + * Counter should not be incremented.
> > + */
> > + return 0;
> > + } while (success == 0);
>
> Ditto!
>
> > + return 0;
> > +}
> > +
> > +/* Dump the details of a single quiescent state variable to a file. */
> > +int __rte_experimental
> > +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
> > +{
> > + uint64_t bmap;
> > + uint32_t i, t;
> > +
> > + if (v == NULL || f == NULL) {
> > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > + "%s(): Invalid input parameter\n", __func__);
> > + rte_errno = EINVAL;
> > +
> > + return 1;
> > + }
> > +
> > + fprintf(f, "\nQuiescent State Variable @%p\n", v);
> > +
> > + fprintf(f, " QS variable memory size = %lu\n",
> > + rte_rcu_qsbr_get_memsize(v->max_threads));
> > + fprintf(f, " Given # max threads = %u\n", v->max_threads);
> > + fprintf(f, " Current # threads = %u\n", v->num_threads);
> > +
> > + fprintf(f, " Registered thread ID mask = 0x");
> > + for (i = 0; i < v->num_elems; i++)
> > + fprintf(f, "%lx", __atomic_load_n(
> > + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > + __ATOMIC_ACQUIRE));
> > + fprintf(f, "\n");
> > +
> > + fprintf(f, " Token = %lu\n",
> > + __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
> > +
> > + fprintf(f, "Quiescent State Counts for readers:\n");
> > + for (i = 0; i < v->num_elems; i++) {
> > + bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > + __ATOMIC_ACQUIRE);
> > + while (bmap) {
> > + t = __builtin_ctzl(bmap);
> > + fprintf(f, "thread ID = %d, count = %lu\n", t,
> > + __atomic_load_n(
> > + &v->qsbr_cnt[i].cnt,
> > + __ATOMIC_RELAXED));
> > + bmap &= ~(1UL << t);
> > + }
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +int rcu_log_type;
> > +
> > +RTE_INIT(rte_rcu_register)
> > +{
> > + rcu_log_type = rte_log_register("lib.rcu");
> > + if (rcu_log_type >= 0)
> > + rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
> > +}
> > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
> > new file mode 100644
> > index 000000000..ff696aeab
> > --- /dev/null
> > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > @@ -0,0 +1,554 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright (c) 2018 Arm Limited
> > + */
> > +
> > +#ifndef _RTE_RCU_QSBR_H_
> > +#define _RTE_RCU_QSBR_H_
> > +
> > +/**
> > + * @file
> > + * RTE Quiescent State Based Reclamation (QSBR)
> > + *
> > + * Quiescent State (QS) is any point in the thread execution
> > + * where the thread does not hold a reference to a data structure
> > + * in shared memory. While using lock-less data structures, the writer
> > + * can safely free memory once all the reader threads have entered
> > + * quiescent state.
> > + *
> > + * This library provides the ability for the readers to report quiescent
> > + * state and for the writers to identify when all the readers have
> > + * entered quiescent state.
> > + */
> > +
> > +#ifdef __cplusplus
> > +extern "C" {
> > +#endif
> > +
> > +#include <stdio.h>
> > +#include <stdint.h>
> > +#include <errno.h>
> > +#include <rte_common.h>
> > +#include <rte_memory.h>
> > +#include <rte_lcore.h>
> > +#include <rte_debug.h>
> > +#include <rte_atomic.h>
> > +
> > +extern int rcu_log_type;
> > +
> > +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
> > +#define RCU_DP_LOG(level, fmt, args...) \
> > +	rte_log(RTE_LOG_ ## level, rcu_log_type, \
> > +		"%s(): " fmt "\n", __func__, ## args)
> > +#else
> > +#define RCU_DP_LOG(level, fmt, args...)
> > +#endif
> > +
> > +/* Registered thread IDs are stored as a bitmap of 64b element array.
> > + * Given thread id needs to be converted to index into the array and
> > + * the id within the array element.
> > + */
> > +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> > +#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
> > +	RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
> > +		RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
> > +#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
> > +	((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
> > +#define RTE_QSBR_THRID_INDEX_SHIFT 6
> > +#define RTE_QSBR_THRID_MASK 0x3f
> > +#define RTE_QSBR_THRID_INVALID 0xffffffff
> > +
> > +/* Worker thread counter */
> > +struct rte_rcu_qsbr_cnt {
> > + uint64_t cnt;
> > + /**< Quiescent state counter. Value 0 indicates the thread is offline */
> > +} __rte_cache_aligned;
> > +
> > +#define RTE_QSBR_CNT_THR_OFFLINE 0
> > +#define RTE_QSBR_CNT_INIT 1
> > +
> > +/* RTE Quiescent State variable structure.
> > + * This structure has two elements that vary in size based on the
> > + * 'max_threads' parameter.
> > + * 1) Quiescent state counter array
> > + * 2) Register thread ID array
> > + */
> > +struct rte_rcu_qsbr {
> > + uint64_t token __rte_cache_aligned;
> > + /**< Counter to allow for multiple concurrent quiescent state queries */
> > +
> > + uint32_t num_elems __rte_cache_aligned;
> > + /**< Number of elements in the thread ID array */
> > + uint32_t num_threads;
> > + /**< Number of threads currently using this QS variable */
> > + uint32_t max_threads;
> > + /**< Maximum number of threads using this QS variable */
> > +
> > + struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
> > + /**< Quiescent state counter array of 'max_threads' elements */
> > +
> > + /**< Registered thread IDs are stored in a bitmap array,
> > + * after the quiescent state counter array.
> > + */
> > +} __rte_cache_aligned;
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Return the size of the memory occupied by a Quiescent State variable.
> > + *
> > + * @param max_threads
> > + * Maximum number of threads reporting quiescent state on this variable.
> > + * @return
> > + * On success - size of memory in bytes required for this QS variable.
> > + * On error - 1 with error code set in rte_errno.
> > + * Possible rte_errno codes are:
> > + * - EINVAL - max_threads is 0
> > + */
> > +size_t __rte_experimental
> > +rte_rcu_qsbr_get_memsize(uint32_t max_threads);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Initialize a Quiescent State (QS) variable.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param max_threads
> > + * Maximum number of threads reporting quiescent state on this variable.
> > + * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
> > + * @return
> > + * On success - 0
> > + * On error - 1 with error code set in rte_errno.
> > + * Possible rte_errno codes are:
> > + * - EINVAL - max_threads is 0 or 'v' is NULL.
> > + *
> > + */
> > +int __rte_experimental
> > +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Register a reader thread to report its quiescent state
> > + * on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe.
> > + * Any reader thread that wants to report its quiescent state must
> > + * call this API. This can be called during initialization or as part
> > + * of the packet processing loop.
> > + *
> > + * Note that rte_rcu_qsbr_thread_online must be called before the
> > + * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Reader thread with this thread ID will report its quiescent state on
> > + * the QS variable. thread_id is a value between 0 and (max_threads - 1).
> > + * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
> > + */
> > +int __rte_experimental
> > +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Remove a reader thread, from the list of threads reporting their
> > + * quiescent state on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread safe.
> > + * This API can be called from the reader threads during shutdown.
> > + * Ongoing quiescent state queries will stop waiting for the status
> > + * from this unregistered reader thread.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Reader thread with this thread ID will stop reporting its quiescent
> > + * state on the QS variable.
> > + */
> > +int __rte_experimental
> > +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Add a registered reader thread to the list of threads reporting
> > + * their quiescent state on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe.
> > + *
> > + * Any registered reader thread that wants to report its quiescent
> > + * state must call this API before calling rte_rcu_qsbr_quiescent.
> > + * This can be called during initialization or as part of the packet
> > + * processing loop.
> > + *
> > + * The reader thread must call rte_rcu_thread_offline API, before
> > + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> > + * API does not wait indefinitely for the reader thread to update its QS.
> > + *
> > + * The reader thread must call rte_rcu_thread_online API, after the
> > + * blocking function call returns, to ensure that rte_rcu_qsbr_check
> > + * API waits for the reader thread to update its quiescent state.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Reader thread with this thread ID will report its quiescent state on
> > + * the QS variable.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
>
> I am not clear on why this function should be inline. Or do you have use
> cases where threads go offline and come back online extremely frequently?
Yes, there are use cases where the function call to receive the packets can block.
>
> > +{
> > + uint64_t t;
> > +
> > + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> > +
> > + /* Copy the current value of token.
> > + * The fence at the end of the function will ensure that
> > + * the following will not move down after the load of any shared
> > + * data structure.
> > + */
> > + t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
> > +
> > + /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> > + * 'cnt' (64b) is accessed atomically.
> > + */
> > + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> > + t, __ATOMIC_RELAXED);
> > +
> > + /* The subsequent load of the data structure should not
> > + * move above the store. Hence a store-load barrier
> > + * is required.
> > + * If the load of the data structure moves above the store,
> > + * writer might not see that the reader is online, even though
> > + * the reader is referencing the shared data structure.
> > + */
> > +#ifdef RTE_ARCH_X86_64
> > + /* rte_smp_mb() for x86 is lighter */
> > + rte_smp_mb();
> > +#else
> > + __atomic_thread_fence(__ATOMIC_SEQ_CST);
> > +#endif
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Remove a registered reader thread from the list of threads
> > +reporting their
> > + * quiescent state on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe.
> > + *
> > + * This can be called during initialization or as part of the packet
> > + * processing loop.
> > + *
> > + * The reader thread must call rte_rcu_thread_offline API, before
> > + * calling any functions that block, to ensure that
> > +rte_rcu_qsbr_check
> > + * API does not wait indefinitely for the reader thread to update its QS.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * rte_rcu_qsbr_check API will not wait for the reader thread with
> > + * this thread ID to report its quiescent state on the QS variable.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
>
> Same here on inlining.
>
> > +{
> > + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> > +
> > + /* The reader can go offline only after the load of the
> > +	 * data structure is completed, i.e. any load of the
> > +	 * data structure cannot move after this store.
> > + */
> > +
> > + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> > +		RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Ask the reader threads to report the quiescent state
> > + * status.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe and can be called from worker threads.
> > + *
> > + * @param v
> > + * QS variable
> > + * @return
> > + * - This is the token for this call of the API. This should be
> > + * passed to rte_rcu_qsbr_check API.
> > + */
> > +static __rte_always_inline uint64_t __rte_experimental
> > +rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
> > +{
> > + uint64_t t;
> > +
> > + RTE_ASSERT(v != NULL);
> > +
> > + /* Release the changes to the shared data structure.
> > + * This store release will ensure that changes to any data
> > + * structure are visible to the workers before the token
> > + * update is visible.
> > + */
> > + t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
> > +
> > + return t;
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Update quiescent state for a reader thread.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread safe.
> > + * All the reader threads registered to report their quiescent state
> > + * on the QS variable must call this API.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Update the quiescent state for the reader with this thread ID.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > +{
> > + uint64_t t;
> > +
> > + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> > +
> > + /* Acquire the changes to the shared data structure released
> > + * by rte_rcu_qsbr_start.
> > + * Later loads of the shared data structure should not move
> > + * above this load. Hence, use load-acquire.
> > + */
> > + t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
> > +
> > + /* Inform the writer that updates are visible to this reader.
> > + * Prior loads of the shared data structure should not move
> > + * beyond this store. Hence use store-release.
> > + */
> > + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> > + t, __ATOMIC_RELEASE);
> > +
> > + RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
> > + __func__, t, thread_id);
> > +}
> > +
> > +/* Check the quiescent state counter for registered threads only,
> > + * assuming that not all threads have registered.
> > + */
> > +static __rte_always_inline int
> > +__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> > +{
> > + uint32_t i, j, id;
> > + uint64_t bmap;
> > + uint64_t c;
> > + uint64_t *reg_thread_id;
> > +
> > + for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
> > + i < v->num_elems;
> > + i++, reg_thread_id++) {
> > + /* Load the current registered thread bit map before
> > + * loading the reader thread quiescent state counters.
> > + */
> > +		bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
> > + id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> > +
> > + while (bmap) {
> > + j = __builtin_ctzl(bmap);
> > + RCU_DP_LOG(DEBUG,
> > +				"%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
> > + __func__, t, wait, bmap, id + j);
> > + c = __atomic_load_n(
> > + &v->qsbr_cnt[id + j].cnt,
> > + __ATOMIC_ACQUIRE);
> > + RCU_DP_LOG(DEBUG,
> > +				"%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> > +				__func__, t, wait, c, id + j);
> > +			/* Counter is not checked for wrap-around
> > +			 * condition as it is a 64b counter.
> > +			 */
> > +			if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
>
> This assumes that a 64-bit counter won't overflow, which is close enough
> to true given current CPU clock frequencies. ;-)
>
> > + /* This thread is not in quiescent state */
> > + if (!wait)
> > + return 0;
> > +
> > + rte_pause();
> > + /* This thread might have unregistered.
> > + * Re-read the bitmap.
> > + */
> > + bmap = __atomic_load_n(reg_thread_id,
> > + __ATOMIC_ACQUIRE);
> > +
> > + continue;
> > + }
> > +
> > + bmap &= ~(1UL << j);
> > + }
> > + }
> > +
> > + return 1;
> > +}
> > +
> > +/* Check the quiescent state counter for all threads, assuming that
> > + * all the threads have registered.
> > + */
> > +static __rte_always_inline int
> > +__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
>
> Does checking the bitmap really take long enough to make this worthwhile
> as a separate function? I would think that the bitmap-checking time
> would be lost in the noise of cache misses from the ->cnt loads.
It avoids accessing one cache line. I think this is where the savings are (perhaps only in theory). This is also the most probable use case.
On the other hand, __rcu_qsbr_check_selective() will result in savings (depending on how many threads are currently registered) by avoiding accessing unwanted counters.
>
> Sure, if you invoke __rcu_qsbr_check_selective() in a tight loop in the
> absence of readers, you might see __rcu_qsbr_check_all() being a bit
> faster. But is that really what DPDK does?
I see improvements in the synthetic test case (similar to the one you have described, around 27%). However, in the more practical test cases I do not see any difference.
>
> > +{
> > + uint32_t i;
> > + struct rte_rcu_qsbr_cnt *cnt;
> > + uint64_t c;
> > +
> > + for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
> > + RCU_DP_LOG(DEBUG,
> > + "%s: check: token = %lu, wait = %d, Thread ID = %d",
> > + __func__, t, wait, i);
> > + while (1) {
> > +			c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
> > + RCU_DP_LOG(DEBUG,
> > +				"%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> > + __func__, t, wait, c, i);
> > +			/* Counter is not checked for wrap-around
> > +			 * condition as it is a 64b counter.
> > +			 */
> > +			if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
> > + break;
> > +
> > + /* This thread is not in quiescent state */
> > + if (!wait)
> > + return 0;
> > +
> > + rte_pause();
> > + }
> > + }
> > +
> > + return 1;
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Checks if all the reader threads have entered the quiescent state
> > + * referenced by token.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe and can be called from the worker threads as well.
> > + *
> > + * If this API is called with 'wait' set to true, the following
> > + * factors must be considered:
> > + *
> > + * 1) If the calling thread is also reporting the status on the
> > + * same QS variable, it must update the quiescent state status
> > + * before calling this API.
> > + *
> > + * 2) In addition, while calling from multiple threads, only
> > + * one of those threads can be reporting the quiescent state status
> > + * on a given QS variable.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param t
> > + * Token returned by rte_rcu_qsbr_start API
> > + * @param wait
> > + * If true, block till all the reader threads have completed entering
> > + * the quiescent state referenced by token 't'.
> > + * @return
> > + * - 0 if all reader threads have NOT passed through specified number
> > + * of quiescent states.
> > + * - 1 if all reader threads have passed through specified number
> > + * of quiescent states.
> > + */
> > +static __rte_always_inline int __rte_experimental
> > +rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> > +{
> > + RTE_ASSERT(v != NULL);
> > +
> > + if (likely(v->num_threads == v->max_threads))
> > + return __rcu_qsbr_check_all(v, t, wait);
> > + else
> > +		return __rcu_qsbr_check_selective(v, t, wait);
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Wait till the reader threads have entered quiescent state.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread safe.
> > + * This API can be thought of as a wrapper around rte_rcu_qsbr_start
> > + * and rte_rcu_qsbr_check APIs.
> > + *
> > + * If this API is called from multiple threads, only one of
> > + * those threads can be reporting the quiescent state status on a
> > + * given QS variable.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Thread ID of the caller if it is registered to report quiescent state
> > + * on this QS variable (i.e. the calling thread is also part of the
> > + * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > +{
> > + uint64_t t;
> > +
> > + RTE_ASSERT(v != NULL);
> > +
> > + t = rte_rcu_qsbr_start(v);
> > +
> > + /* If the current thread has readside critical section,
> > + * update its quiescent state status.
> > + */
> > + if (thread_id != RTE_QSBR_THRID_INVALID)
> > + rte_rcu_qsbr_quiescent(v, thread_id);
> > +
> > + /* Wait for other readers to enter quiescent state */
> > + rte_rcu_qsbr_check(v, t, true);
>
> And you are presumably relying on 64-bit counters to avoid the need to
> execute the above code twice in succession. Which again works given
> current CPU clock rates combined with system and human lifespans.
> Otherwise, there are interesting race conditions that can happen, so don't
> try this with a 32-bit counter!!!
Yes. I am relying on 64-bit counters to avoid having to spend cycles (and time).
>
> (But think of the great^N grandchildren!!!)
(It is an interesting thought. I wonder what would happen to all the code we are writing today 😊)
>
> More seriously, a comment warning people not to make the counter be 32
> bits is in order.
Agree, I will add it in the structure definition.
>
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Dump the details of a single QS variables to a file.
> > + *
> > + * It is NOT multi-thread safe.
> > + *
> > + * @param f
> > + * A pointer to a file for output
> > + * @param v
> > + * QS variable
> > + * @return
> > + * On success - 0
> > + * On error - 1 with error code set in rte_errno.
> > + * Possible rte_errno codes are:
> > + * - EINVAL - NULL parameters are passed
> > + */
> > +int __rte_experimental
> > +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
> > +
> > +#ifdef __cplusplus
> > +}
> > +#endif
> > +
> > +#endif /* _RTE_RCU_QSBR_H_ */
> > diff --git a/lib/librte_rcu/rte_rcu_version.map
> > b/lib/librte_rcu/rte_rcu_version.map
> > new file mode 100644
> > index 000000000..ad8cb517c
> > --- /dev/null
> > +++ b/lib/librte_rcu/rte_rcu_version.map
> > @@ -0,0 +1,11 @@
> > +EXPERIMENTAL {
> > + global:
> > +
> > + rte_rcu_qsbr_get_memsize;
> > + rte_rcu_qsbr_init;
> > + rte_rcu_qsbr_thread_register;
> > + rte_rcu_qsbr_thread_unregister;
> > + rte_rcu_qsbr_dump;
> > +
> > + local: *;
> > +};
> > diff --git a/lib/meson.build b/lib/meson.build index
> > 595314d7d..67be10659 100644
> > --- a/lib/meson.build
> > +++ b/lib/meson.build
> > @@ -22,7 +22,7 @@ libraries = [
> > 'gro', 'gso', 'ip_frag', 'jobstats',
> > 'kni', 'latencystats', 'lpm', 'member',
> > 'power', 'pdump', 'rawdev',
> > - 'reorder', 'sched', 'security', 'stack', 'vhost',
> > + 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
> > #ipsec lib depends on crypto and security
> > 'ipsec',
> > # add pkt framework libs which use other libs from above diff --git
> > a/mk/rte.app.mk b/mk/rte.app.mk index 7d994bece..e93cc366d 100644
> > --- a/mk/rte.app.mk
> > +++ b/mk/rte.app.mk
> > @@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -
> lrte_eal
> > _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
> > _LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
> > _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
> > +_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
> >
> > ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> > _LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
> > --
> > 2.17.1
> >
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-11 4:35 ` Honnappa Nagarahalli
@ 2019-04-11 4:35 ` Honnappa Nagarahalli
2019-04-11 15:26 ` Paul E. McKenney
1 sibling, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-11 4:35 UTC (permalink / raw)
To: paulmck
Cc: konstantin.ananyev, stephen, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
Hi Paul,
Thank you for your feedback.
> -----Original Message-----
> From: Paul E. McKenney <paulmck@linux.ibm.com>
> Sent: Wednesday, April 10, 2019 1:15 PM
> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Cc: konstantin.ananyev@intel.com; stephen@networkplumber.org;
> marko.kovacevic@intel.com; dev@dpdk.org; Gavin Hu (Arm Technology
> China) <Gavin.Hu@arm.com>; Dharmik Thakkar
> <Dharmik.Thakkar@arm.com>; Malvika Gupta <Malvika.Gupta@arm.com>
> Subject: Re: [PATCH v4 1/3] rcu: add RCU library supporting QSBR
> mechanism
>
> On Wed, Apr 10, 2019 at 06:20:04AM -0500, Honnappa Nagarahalli wrote:
> > Add RCU library supporting quiescent state based memory reclamation
> method.
> > This library helps identify the quiescent state of the reader threads
> > so that the writers can free the memory associated with the lock less
> > data structures.
>
> I don't see any sign of read-side markers (rcu_read_lock() and
> rcu_read_unlock() in the Linux kernel, userspace RCU, etc.).
>
> Yes, strictly speaking, these are not needed for QSBR to operate, but they
These APIs would be empty for QSBR.
> make it way easier to maintain and debug code using RCU. For example,
> given the read-side markers, you can check for errors like having a call to
> rte_rcu_qsbr_quiescent() in the middle of a reader quite easily.
> Without those read-side markers, life can be quite hard and you will really
> hate yourself for failing to have provided them.
I want to make sure I understood this: do you mean the application would place markers before and after accessing the shared data structure on the reader side?
rte_rcu_qsbr_lock()
<begin access shared data structure>
...
...
<end access shared data structure>
rte_rcu_qsbr_unlock()
If someone is debugging this code, they have to make sure that there is an unlock for every lock and there is no call to rte_rcu_qsbr_quiescent in between.
It sounds good to me. Obviously, they will not add any additional cycles either.
Please let me know if my understanding is correct.
>
> Some additional questions and comments interspersed.
>
> Thanx, Paul
>
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> > MAINTAINERS | 5 +
> > config/common_base | 6 +
> > lib/Makefile | 2 +
> > lib/librte_rcu/Makefile | 23 ++
> > lib/librte_rcu/meson.build | 5 +
> > lib/librte_rcu/rte_rcu_qsbr.c | 237 ++++++++++++
> > lib/librte_rcu/rte_rcu_qsbr.h | 554
> +++++++++++++++++++++++++++++
> > lib/librte_rcu/rte_rcu_version.map | 11 +
> > lib/meson.build | 2 +-
> > mk/rte.app.mk | 1 +
> > 10 files changed, 845 insertions(+), 1 deletion(-) create mode
> > 100644 lib/librte_rcu/Makefile create mode 100644
> > lib/librte_rcu/meson.build create mode 100644
> > lib/librte_rcu/rte_rcu_qsbr.c create mode 100644
> > lib/librte_rcu/rte_rcu_qsbr.h create mode 100644
> > lib/librte_rcu/rte_rcu_version.map
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS index 9774344dd..6e9766eed
> > 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -1267,6 +1267,11 @@ F: examples/bpf/
> > F: app/test/test_bpf.c
> > F: doc/guides/prog_guide/bpf_lib.rst
> >
> > +RCU - EXPERIMENTAL
> > +M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > +F: lib/librte_rcu/
> > +F: doc/guides/prog_guide/rcu_lib.rst
> > +
> >
> > Test Applications
> > -----------------
> > diff --git a/config/common_base b/config/common_base index
> > 8da08105b..ad70c79e1 100644
> > --- a/config/common_base
> > +++ b/config/common_base
> > @@ -829,6 +829,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y #
> > CONFIG_RTE_LIBRTE_TELEMETRY=n
> >
> > +#
> > +# Compile librte_rcu
> > +#
> > +CONFIG_RTE_LIBRTE_RCU=y
> > +CONFIG_RTE_LIBRTE_RCU_DEBUG=n
> > +
> > #
> > # Compile librte_lpm
> > #
> > diff --git a/lib/Makefile b/lib/Makefile index 26021d0c0..791e0d991
> > 100644
> > --- a/lib/Makefile
> > +++ b/lib/Makefile
> > @@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) +=
> librte_ipsec
> > DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev
> > librte_security
> > DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
> > DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
> > +DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
> > +DEPDIRS-librte_rcu := librte_eal
> >
> > ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> > DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni diff --git
> > a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile new file mode
> > 100644 index 000000000..6aa677bd1
> > --- /dev/null
> > +++ b/lib/librte_rcu/Makefile
> > @@ -0,0 +1,23 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright(c) 2018 Arm Limited
> > +
> > +include $(RTE_SDK)/mk/rte.vars.mk
> > +
> > +# library name
> > +LIB = librte_rcu.a
> > +
> > +CFLAGS += -DALLOW_EXPERIMENTAL_API
> > +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
> > +LDLIBS += -lrte_eal
> > +
> > +EXPORT_MAP := rte_rcu_version.map
> > +
> > +LIBABIVER := 1
> > +
> > +# all source are stored in SRCS-y
> > +SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
> > +
> > +# install includes
> > +SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
> > +
> > +include $(RTE_SDK)/mk/rte.lib.mk
> > diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
> > new file mode 100644 index 000000000..c009ae4b7
> > --- /dev/null
> > +++ b/lib/librte_rcu/meson.build
> > @@ -0,0 +1,5 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright(c) 2018 Arm Limited
> > +
> > +sources = files('rte_rcu_qsbr.c')
> > +headers = files('rte_rcu_qsbr.h')
> > diff --git a/lib/librte_rcu/rte_rcu_qsbr.c
> > b/lib/librte_rcu/rte_rcu_qsbr.c new file mode 100644 index
> > 000000000..53d08446a
> > --- /dev/null
> > +++ b/lib/librte_rcu/rte_rcu_qsbr.c
> > @@ -0,0 +1,237 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + *
> > + * Copyright (c) 2018 Arm Limited
> > + */
> > +
> > +#include <stdio.h>
> > +#include <string.h>
> > +#include <stdint.h>
> > +#include <errno.h>
> > +
> > +#include <rte_common.h>
> > +#include <rte_log.h>
> > +#include <rte_memory.h>
> > +#include <rte_malloc.h>
> > +#include <rte_eal.h>
> > +#include <rte_eal_memconfig.h>
> > +#include <rte_atomic.h>
> > +#include <rte_per_lcore.h>
> > +#include <rte_lcore.h>
> > +#include <rte_errno.h>
> > +
> > +#include "rte_rcu_qsbr.h"
> > +
> > +/* Get the memory size of QSBR variable */
> > +size_t __rte_experimental
> > +rte_rcu_qsbr_get_memsize(uint32_t max_threads)
> > +{
> > + size_t sz;
> > +
> > + if (max_threads == 0) {
> > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > + "%s(): Invalid max_threads %u\n",
> > + __func__, max_threads);
> > + rte_errno = EINVAL;
> > +
> > + return 1;
> > + }
> > +
> > + sz = sizeof(struct rte_rcu_qsbr);
> > +
> > + /* Add the size of quiescent state counter array */
> > + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> > +
> > + /* Add the size of the registered thread ID bitmap array */
> > + sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
> > +
> > + return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
>
> Given that you align here, should you also align in the earlier steps in the
> computation of sz?
Agree. I will remove the align here and keep the earlier one as the intent is to align the thread ID array.
>
> > +}
> > +
> > +/* Initialize a quiescent state variable */
> > +int __rte_experimental
> > +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
> > +{
> > + size_t sz;
> > +
> > + if (v == NULL) {
> > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > + "%s(): Invalid input parameter\n", __func__);
> > + rte_errno = EINVAL;
> > +
> > + return 1;
> > + }
> > +
> > + sz = rte_rcu_qsbr_get_memsize(max_threads);
> > + if (sz == 1)
> > + return 1;
> > +
> > + /* Set all the threads to offline */
> > + memset(v, 0, sz);
>
> We calculate sz here, but it looks like the caller must also calculate it in
> order to correctly allocate the memory referenced by the "v" argument to
> this function, with bad things happening if the two calculations get
> different results. Should "v" instead be allocated within this function to
> avoid this sort of problem?
Earlier versions allocated the memory within this library. However, it was decided to go with the current implementation as it provides flexibility for the application to manage the memory as it sees fit. For example, it could allocate the variable as part of another structure in a single allocation. This also falls in line with the approach taken in other libraries.
>
> > + v->max_threads = max_threads;
> > + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> > + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> > + v->token = RTE_QSBR_CNT_INIT;
> > +
> > + return 0;
> > +}
> > +
> > +/* Register a reader thread to report its quiescent state
> > + * on a QS variable.
> > + */
> > +int __rte_experimental
> > +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > +{
> > + unsigned int i, id, success;
> > + uint64_t old_bmap, new_bmap;
> > +
> > + if (v == NULL || thread_id >= v->max_threads) {
> > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > + "%s(): Invalid input parameter\n", __func__);
> > + rte_errno = EINVAL;
> > +
> > + return 1;
> > + }
> > +
> > + id = thread_id & RTE_QSBR_THRID_MASK;
> > + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > +
> > + /* Make sure that the counter for registered threads does not
> > + * go out of sync. Hence, additional checks are required.
> > + */
> > + /* Check if the thread is already registered */
> > + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > + __ATOMIC_RELAXED);
> > + if (old_bmap & 1UL << id)
> > + return 0;
> > +
> > + do {
> > + new_bmap = old_bmap | (1UL << id);
> > + success = __atomic_compare_exchange(
> > + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > + &old_bmap, &new_bmap, 0,
> > +					__ATOMIC_RELEASE, __ATOMIC_RELAXED);
> > +
> > + if (success)
> > + __atomic_fetch_add(&v->num_threads,
> > + 1, __ATOMIC_RELAXED);
> > + else if (old_bmap & (1UL << id))
> > + /* Someone else registered this thread.
> > + * Counter should not be incremented.
> > + */
> > + return 0;
> > + } while (success == 0);
>
> This would be simpler if threads were required to register themselves.
> Maybe you have use cases requiring registration of other threads, but this
> capability is adding significant complexity, so it might be worth some
> thought.
>
It was simple earlier (__atomic_fetch_or). The complexity was added to keep 'num_threads' from going out of sync.
> > + return 0;
> > +}
> > +
> > +/* Remove a reader thread, from the list of threads reporting their
> > + * quiescent state on a QS variable.
> > + */
> > +int __rte_experimental
> > +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > +{
> > + unsigned int i, id, success;
> > + uint64_t old_bmap, new_bmap;
> > +
> > + if (v == NULL || thread_id >= v->max_threads) {
> > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > + "%s(): Invalid input parameter\n", __func__);
> > + rte_errno = EINVAL;
> > +
> > + return 1;
> > + }
> > +
> > + id = thread_id & RTE_QSBR_THRID_MASK;
> > + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > +
> > + /* Make sure that the counter for registered threads does not
> > + * go out of sync. Hence, additional checks are required.
> > + */
> > + /* Check if the thread is already unregistered */
> > + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > + __ATOMIC_RELAXED);
> > +	if (!(old_bmap & (1UL << id)))
> > + return 0;
> > +
> > + do {
> > + new_bmap = old_bmap & ~(1UL << id);
> > + /* Make sure any loads of the shared data structure are
> > + * completed before removal of the thread from the list of
> > + * reporting threads.
> > + */
> > + success = __atomic_compare_exchange(
> > + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > + &old_bmap, &new_bmap, 0,
> > +					__ATOMIC_RELEASE, __ATOMIC_RELAXED);
> > +
> > + if (success)
> > + __atomic_fetch_sub(&v->num_threads,
> > + 1, __ATOMIC_RELAXED);
> > +		else if (!(old_bmap & (1UL << id)))
> > + /* Someone else unregistered this thread.
> > +			 * Counter should not be decremented.
> > + */
> > + return 0;
> > + } while (success == 0);
>
> Ditto!
>
> > + return 0;
> > +}
> > +
> > +/* Dump the details of a single quiescent state variable to a file. */
> > +int __rte_experimental
> > +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
> > +{
> > + uint64_t bmap;
> > + uint32_t i, t;
> > +
> > + if (v == NULL || f == NULL) {
> > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > + "%s(): Invalid input parameter\n", __func__);
> > + rte_errno = EINVAL;
> > +
> > + return 1;
> > + }
> > +
> > + fprintf(f, "\nQuiescent State Variable @%p\n", v);
> > +
> > + fprintf(f, " QS variable memory size = %lu\n",
> > +			rte_rcu_qsbr_get_memsize(v->max_threads));
> > + fprintf(f, " Given # max threads = %u\n", v->max_threads);
> > + fprintf(f, " Current # threads = %u\n", v->num_threads);
> > +
> > + fprintf(f, " Registered thread ID mask = 0x");
> > + for (i = 0; i < v->num_elems; i++)
> > + fprintf(f, "%lx", __atomic_load_n(
> > + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > + __ATOMIC_ACQUIRE));
> > + fprintf(f, "\n");
> > +
> > + fprintf(f, " Token = %lu\n",
> > + __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
> > +
> > + fprintf(f, "Quiescent State Counts for readers:\n");
> > + for (i = 0; i < v->num_elems; i++) {
> > +		bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > +					__ATOMIC_ACQUIRE);
> > + while (bmap) {
> > + t = __builtin_ctzl(bmap);
> > + fprintf(f, "thread ID = %d, count = %lu\n", t,
> > + __atomic_load_n(
> > + &v->qsbr_cnt[i].cnt,
> > + __ATOMIC_RELAXED));
> > + bmap &= ~(1UL << t);
> > + }
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +int rcu_log_type;
> > +
> > +RTE_INIT(rte_rcu_register)
> > +{
> > + rcu_log_type = rte_log_register("lib.rcu");
> > + if (rcu_log_type >= 0)
> > +		rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
> > +}
> > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > b/lib/librte_rcu/rte_rcu_qsbr.h new file mode 100644 index
> > 000000000..ff696aeab
> > --- /dev/null
> > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > @@ -0,0 +1,554 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright (c) 2018 Arm Limited
> > + */
> > +
> > +#ifndef _RTE_RCU_QSBR_H_
> > +#define _RTE_RCU_QSBR_H_
> > +
> > +/**
> > + * @file
> > + * RTE Quiescent State Based Reclamation (QSBR)
> > + *
> > + * Quiescent State (QS) is any point in the thread execution
> > + * where the thread does not hold a reference to a data structure
> > + * in shared memory. While using lock-less data structures, the
> > +writer
> > + * can safely free memory once all the reader threads have entered
> > + * quiescent state.
> > + *
> > + * This library provides the ability for the readers to report
> > +quiescent
> > + * state and for the writers to identify when all the readers have
> > + * entered quiescent state.
> > + */
> > +
> > +#ifdef __cplusplus
> > +extern "C" {
> > +#endif
> > +
> > +#include <stdio.h>
> > +#include <stdint.h>
> > +#include <errno.h>
> > +#include <rte_common.h>
> > +#include <rte_memory.h>
> > +#include <rte_lcore.h>
> > +#include <rte_debug.h>
> > +#include <rte_atomic.h>
> > +
> > +extern int rcu_log_type;
> > +
> > +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
> > +#define RCU_DP_LOG(level, fmt, args...) \
> > +	rte_log(RTE_LOG_ ## level, rcu_log_type, \
> > +		"%s(): " fmt "\n", __func__, ## args)
> > +#else
> > +#define RCU_DP_LOG(level, fmt, args...)
> > +#endif
> > +
> > +/* Registered thread IDs are stored as a bitmap in an array of 64b elements.
> > + * A given thread ID needs to be converted to an index into the array and
> > + * a bit position within that array element.
> > + */
> > +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> > +#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
> > +	RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
> > +		RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
> > +#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
> > +	((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
> > +#define RTE_QSBR_THRID_INDEX_SHIFT 6
> > +#define RTE_QSBR_THRID_MASK 0x3f
> > +#define RTE_QSBR_THRID_INVALID 0xffffffff
> > +
> > +/* Worker thread counter */
> > +struct rte_rcu_qsbr_cnt {
> > +	uint64_t cnt;
> > +	/**< Quiescent state counter. Value 0 indicates the thread is offline */
> > +} __rte_cache_aligned;
> > +
> > +#define RTE_QSBR_CNT_THR_OFFLINE 0
> > +#define RTE_QSBR_CNT_INIT 1
> > +
> > +/* RTE Quiescent State variable structure.
> > + * This structure has two elements that vary in size based on the
> > + * 'max_threads' parameter.
> > + * 1) Quiescent state counter array
> > + * 2) Register thread ID array
> > + */
> > +struct rte_rcu_qsbr {
> > + uint64_t token __rte_cache_aligned;
> > +	/**< Counter to allow for multiple concurrent quiescent state queries */
> > +
> > + uint32_t num_elems __rte_cache_aligned;
> > + /**< Number of elements in the thread ID array */
> > + uint32_t num_threads;
> > + /**< Number of threads currently using this QS variable */
> > + uint32_t max_threads;
> > + /**< Maximum number of threads using this QS variable */
> > +
> > + struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
> > + /**< Quiescent state counter array of 'max_threads' elements */
> > +
> > + /**< Registered thread IDs are stored in a bitmap array,
> > + * after the quiescent state counter array.
> > + */
> > +} __rte_cache_aligned;
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Return the size of the memory occupied by a Quiescent State variable.
> > + *
> > + * @param max_threads
> > + *   Maximum number of threads reporting quiescent state on this variable.
> > + * @return
> > + * On success - size of memory in bytes required for this QS variable.
> > + * On error - 1 with error code set in rte_errno.
> > + * Possible rte_errno codes are:
> > + * - EINVAL - max_threads is 0
> > + */
> > +size_t __rte_experimental
> > +rte_rcu_qsbr_get_memsize(uint32_t max_threads);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Initialize a Quiescent State (QS) variable.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param max_threads
> > + *   Maximum number of threads reporting quiescent state on this variable.
> > + *   This should be the same value as passed to rte_rcu_qsbr_get_memsize.
> > + * @return
> > + * On success - 0
> > + * On error - 1 with error code set in rte_errno.
> > + * Possible rte_errno codes are:
> > + * - EINVAL - max_threads is 0 or 'v' is NULL.
> > + *
> > + */
> > +int __rte_experimental
> > +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Register a reader thread to report its quiescent state
> > + * on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe.
> > + * Any reader thread that wants to report its quiescent state must
> > + * call this API. This can be called during initialization or as part
> > + * of the packet processing loop.
> > + *
> > + * Note that rte_rcu_qsbr_thread_online must be called before the
> > + * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + *   Reader thread with this thread ID will report its quiescent state on
> > + *   the QS variable. thread_id is a value between 0 and (max_threads - 1).
> > + * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
> > + */
> > +int __rte_experimental
> > +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Remove a reader thread, from the list of threads reporting their
> > + * quiescent state on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread safe.
> > + * This API can be called from the reader threads during shutdown.
> > + * Ongoing quiescent state queries will stop waiting for the status from
> > + * this unregistered reader thread.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Reader thread with this thread ID will stop reporting its quiescent
> > + * state on the QS variable.
> > + */
> > +int __rte_experimental
> > +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Add a registered reader thread, to the list of threads reporting
> > +their
> > + * quiescent state on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe.
> > + *
> > + * Any registered reader thread that wants to report its quiescent state
> > + * must call this API before calling rte_rcu_qsbr_quiescent. This can be
> > + * called during initialization or as part of the packet processing loop.
> > + *
> > + * The reader thread must call rte_rcu_thread_offline API, before
> > + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> > + * API does not wait indefinitely for the reader thread to update its QS.
> > + *
> > + * The reader thread must call rte_rcu_thread_online API, after the
> > + * blocking function call returns, to ensure that rte_rcu_qsbr_check API
> > + * waits for the reader thread to update its quiescent state.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Reader thread with this thread ID will report its quiescent state on
> > + * the QS variable.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
>
> I am not clear on why this function should be inline. Or do you have use
> cases where threads go offline and come back online extremely frequently?
Yes, there are use cases where the function call that receives packets can block.
>
> > +{
> > + uint64_t t;
> > +
> > + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> > +
> > + /* Copy the current value of token.
> > + * The fence at the end of the function will ensure that
> > + * the following will not move down after the load of any shared
> > + * data structure.
> > + */
> > + t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
> > +
> > + /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> > + * 'cnt' (64b) is accessed atomically.
> > + */
> > + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> > + t, __ATOMIC_RELAXED);
> > +
> > + /* The subsequent load of the data structure should not
> > + * move above the store. Hence a store-load barrier
> > + * is required.
> > + * If the load of the data structure moves above the store,
> > + * writer might not see that the reader is online, even though
> > + * the reader is referencing the shared data structure.
> > + */
> > +#ifdef RTE_ARCH_X86_64
> > + /* rte_smp_mb() for x86 is lighter */
> > + rte_smp_mb();
> > +#else
> > + __atomic_thread_fence(__ATOMIC_SEQ_CST);
> > +#endif
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Remove a registered reader thread from the list of threads
> > +reporting their
> > + * quiescent state on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe.
> > + *
> > + * This can be called during initialization or as part of the packet
> > + * processing loop.
> > + *
> > + * The reader thread must call rte_rcu_thread_offline API, before
> > + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> > + * API does not wait indefinitely for the reader thread to update its QS.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * rte_rcu_qsbr_check API will not wait for the reader thread with
> > + * this thread ID to report its quiescent state on the QS variable.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
>
> Same here on inlining.
>
> > +{
> > + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> > +
> > +	/* The reader can go offline only after the load of the
> > +	 * data structure is completed, i.e. any load of the
> > +	 * data structure cannot move after this store.
> > +	 */
> > +
> > +	__atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> > +		RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Ask the reader threads to report the quiescent state
> > + * status.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe and can be called from worker threads.
> > + *
> > + * @param v
> > + * QS variable
> > + * @return
> > + * - This is the token for this call of the API. This should be
> > + * passed to rte_rcu_qsbr_check API.
> > + */
> > +static __rte_always_inline uint64_t __rte_experimental
> > +rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
> > +{
> > + uint64_t t;
> > +
> > + RTE_ASSERT(v != NULL);
> > +
> > + /* Release the changes to the shared data structure.
> > + * This store release will ensure that changes to any data
> > + * structure are visible to the workers before the token
> > + * update is visible.
> > + */
> > + t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
> > +
> > + return t;
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Update quiescent state for a reader thread.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread safe.
> > + * All the reader threads registered to report their quiescent state
> > + * on the QS variable must call this API.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Update the quiescent state for the reader with this thread ID.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > +{
> > + uint64_t t;
> > +
> > + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> > +
> > + /* Acquire the changes to the shared data structure released
> > + * by rte_rcu_qsbr_start.
> > + * Later loads of the shared data structure should not move
> > + * above this load. Hence, use load-acquire.
> > + */
> > + t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
> > +
> > + /* Inform the writer that updates are visible to this reader.
> > + * Prior loads of the shared data structure should not move
> > + * beyond this store. Hence use store-release.
> > + */
> > + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> > + t, __ATOMIC_RELEASE);
> > +
> > + RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
> > + __func__, t, thread_id);
> > +}
> > +
> > +/* Check the quiescent state counter for registered threads only,
> > + * assuming that not all threads have registered.
> > + */
> > +static __rte_always_inline int
> > +__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> > +{
> > + uint32_t i, j, id;
> > + uint64_t bmap;
> > + uint64_t c;
> > + uint64_t *reg_thread_id;
> > +
> > + for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
> > + i < v->num_elems;
> > + i++, reg_thread_id++) {
> > + /* Load the current registered thread bit map before
> > + * loading the reader thread quiescent state counters.
> > + */
> > +		bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
> > +		id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> > +
> > + while (bmap) {
> > + j = __builtin_ctzl(bmap);
> > +			RCU_DP_LOG(DEBUG,
> > +				"%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
> > +				__func__, t, wait, bmap, id + j);
> > +			c = __atomic_load_n(&v->qsbr_cnt[id + j].cnt,
> > +					__ATOMIC_ACQUIRE);
> > +			RCU_DP_LOG(DEBUG,
> > +				"%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> > +				__func__, t, wait, c, id + j);
> > +			/* Counter is not checked for wrap-around condition
> > +			 * as it is a 64b counter.
> > +			 */
> > +			if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
>
> This assumes that a 64-bit counter won't overflow, which is close enough
> to true given current CPU clock frequencies. ;-)
>
> > + /* This thread is not in quiescent state */
> > + if (!wait)
> > + return 0;
> > +
> > + rte_pause();
> > + /* This thread might have unregistered.
> > + * Re-read the bitmap.
> > + */
> > + bmap = __atomic_load_n(reg_thread_id,
> > + __ATOMIC_ACQUIRE);
> > +
> > + continue;
> > + }
> > +
> > + bmap &= ~(1UL << j);
> > + }
> > + }
> > +
> > + return 1;
> > +}
> > +
> > +/* Check the quiescent state counter for all threads, assuming that
> > + * all the threads have registered.
> > + */
> > +static __rte_always_inline int
> > +__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
>
> Does checking the bitmap really take long enough to make this worthwhile
> as a separate function? I would think that the bitmap-checking time
> would be lost in the noise of cache misses from the ->cnt loads.
It avoids accessing one extra cache line; I think this is where the savings come from (perhaps only in theory). This is the most probable use case.
On the other hand, __rcu_qsbr_check_selective() yields savings (depending on how many threads are currently registered) by avoiding accesses to the counters of unregistered threads.
>
> Sure, if you invoke __rcu_qsbr_check_selective() in a tight loop in the
> absence of readers, you might see __rcu_qsbr_check_all() being a bit
> faster. But is that really what DPDK does?
I see improvements in a synthetic test case (similar to the one you have described): around 27%. However, in the more practical test cases I do not see any difference.
>
> > +{
> > + uint32_t i;
> > + struct rte_rcu_qsbr_cnt *cnt;
> > + uint64_t c;
> > +
> > + for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
> > + RCU_DP_LOG(DEBUG,
> > + "%s: check: token = %lu, wait = %d, Thread ID = %d",
> > + __func__, t, wait, i);
> > +		while (1) {
> > +			c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
> > +			RCU_DP_LOG(DEBUG,
> > +				"%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> > +				__func__, t, wait, c, i);
> > +			/* Counter is not checked for wrap-around condition
> > +			 * as it is a 64b counter.
> > +			 */
> > +			if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
> > +				break;
> > +
> > + /* This thread is not in quiescent state */
> > + if (!wait)
> > + return 0;
> > +
> > + rte_pause();
> > + }
> > + }
> > +
> > + return 1;
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Checks if all the reader threads have entered the quiescent state
> > + * referenced by token.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe and can be called from the worker threads as well.
> > + *
> > + * If this API is called with 'wait' set to true, the following
> > + * factors must be considered:
> > + *
> > + * 1) If the calling thread is also reporting the status on the
> > + * same QS variable, it must update the quiescent state status, before
> > + * calling this API.
> > + *
> > + * 2) In addition, while calling from multiple threads, only
> > + * one of those threads can be reporting the quiescent state status
> > + * on a given QS variable.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param t
> > + * Token returned by rte_rcu_qsbr_start API
> > + * @param wait
> > + * If true, block till all the reader threads have completed entering
> > + * the quiescent state referenced by token 't'.
> > + * @return
> > + * - 0 if all reader threads have NOT passed through specified number
> > + * of quiescent states.
> > + * - 1 if all reader threads have passed through specified number
> > + * of quiescent states.
> > + */
> > +static __rte_always_inline int __rte_experimental
> > +rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> > +{
> > +	RTE_ASSERT(v != NULL);
> > +
> > +	if (likely(v->num_threads == v->max_threads))
> > +		return __rcu_qsbr_check_all(v, t, wait);
> > +	else
> > +		return __rcu_qsbr_check_selective(v, t, wait);
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Wait till the reader threads have entered quiescent state.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread safe.
> > + * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
> > + * rte_rcu_qsbr_check APIs.
> > + *
> > + * If this API is called from multiple threads, only one of
> > + * those threads can be reporting the quiescent state status on a
> > + * given QS variable.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Thread ID of the caller if it is registered to report quiescent state
> > + * on this QS variable (i.e. the calling thread is also part of the
> > + * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > +{
> > + uint64_t t;
> > +
> > + RTE_ASSERT(v != NULL);
> > +
> > + t = rte_rcu_qsbr_start(v);
> > +
> > + /* If the current thread has readside critical section,
> > + * update its quiescent state status.
> > + */
> > + if (thread_id != RTE_QSBR_THRID_INVALID)
> > + rte_rcu_qsbr_quiescent(v, thread_id);
> > +
> > + /* Wait for other readers to enter quiescent state */
> > + rte_rcu_qsbr_check(v, t, true);
>
> And you are presumably relying on 64-bit counters to avoid the need to
> execute the above code twice in succession. Which again works given
> current CPU clock rates combined with system and human lifespans.
> Otherwise, there are interesting race conditions that can happen, so don't
> try this with a 32-bit counter!!!
Yes. I am relying on 64-bit counters to avoid having to spend cycles (and time).
>
> (But think of the great^N grandchildren!!!)
(It is an interesting thought. I wonder what would happen to all the code we are writing today 😊)
>
> More seriously, a comment warning people not to make the counter be 32
> bits is in order.
Agree, I will add it in the structure definition.
>
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Dump the details of a single QS variables to a file.
> > + *
> > + * It is NOT multi-thread safe.
> > + *
> > + * @param f
> > + * A pointer to a file for output
> > + * @param v
> > + * QS variable
> > + * @return
> > + * On success - 0
> > + * On error - 1 with error code set in rte_errno.
> > + * Possible rte_errno codes are:
> > + * - EINVAL - NULL parameters are passed
> > + */
> > +int __rte_experimental
> > +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
> > +
> > +#ifdef __cplusplus
> > +}
> > +#endif
> > +
> > +#endif /* _RTE_RCU_QSBR_H_ */
> > diff --git a/lib/librte_rcu/rte_rcu_version.map
> > b/lib/librte_rcu/rte_rcu_version.map
> > new file mode 100644
> > index 000000000..ad8cb517c
> > --- /dev/null
> > +++ b/lib/librte_rcu/rte_rcu_version.map
> > @@ -0,0 +1,11 @@
> > +EXPERIMENTAL {
> > + global:
> > +
> > + rte_rcu_qsbr_get_memsize;
> > + rte_rcu_qsbr_init;
> > + rte_rcu_qsbr_thread_register;
> > + rte_rcu_qsbr_thread_unregister;
> > + rte_rcu_qsbr_dump;
> > +
> > + local: *;
> > +};
> > diff --git a/lib/meson.build b/lib/meson.build index
> > 595314d7d..67be10659 100644
> > --- a/lib/meson.build
> > +++ b/lib/meson.build
> > @@ -22,7 +22,7 @@ libraries = [
> > 'gro', 'gso', 'ip_frag', 'jobstats',
> > 'kni', 'latencystats', 'lpm', 'member',
> > 'power', 'pdump', 'rawdev',
> > - 'reorder', 'sched', 'security', 'stack', 'vhost',
> > + 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
> > #ipsec lib depends on crypto and security
> > 'ipsec',
> > # add pkt framework libs which use other libs from above diff --git
> > a/mk/rte.app.mk b/mk/rte.app.mk index 7d994bece..e93cc366d 100644
> > --- a/mk/rte.app.mk
> > +++ b/mk/rte.app.mk
> > @@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -
> lrte_eal
> > _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
> > _LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
> > _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
> > +_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
> >
> > ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> > _LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
> > --
> > 2.17.1
> >
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-11 4:35 ` Honnappa Nagarahalli
2019-04-11 4:35 ` Honnappa Nagarahalli
@ 2019-04-11 15:26 ` Paul E. McKenney
2019-04-11 15:26 ` Paul E. McKenney
2019-04-12 20:21 ` Honnappa Nagarahalli
1 sibling, 2 replies; 260+ messages in thread
From: Paul E. McKenney @ 2019-04-11 15:26 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: konstantin.ananyev, stephen, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
On Thu, Apr 11, 2019 at 04:35:04AM +0000, Honnappa Nagarahalli wrote:
> Hi Paul,
> Thank you for your feedback.
>
> > -----Original Message-----
> > From: Paul E. McKenney <paulmck@linux.ibm.com>
> > Sent: Wednesday, April 10, 2019 1:15 PM
> > To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> > Cc: konstantin.ananyev@intel.com; stephen@networkplumber.org;
> > marko.kovacevic@intel.com; dev@dpdk.org; Gavin Hu (Arm Technology
> > China) <Gavin.Hu@arm.com>; Dharmik Thakkar
> > <Dharmik.Thakkar@arm.com>; Malvika Gupta <Malvika.Gupta@arm.com>
> > Subject: Re: [PATCH v4 1/3] rcu: add RCU library supporting QSBR
> > mechanism
> >
> > On Wed, Apr 10, 2019 at 06:20:04AM -0500, Honnappa Nagarahalli wrote:
> > > Add RCU library supporting quiescent state based memory reclamation
> > method.
> > > This library helps identify the quiescent state of the reader threads
> > > so that the writers can free the memory associated with the lock less
> > > data structures.
> >
> > I don't see any sign of read-side markers (rcu_read_lock() and
> > rcu_read_unlock() in the Linux kernel, userspace RCU, etc.).
> >
> > Yes, strictly speaking, these are not needed for QSBR to operate, but they
> These APIs would be empty for QSBR.
>
> > make it way easier to maintain and debug code using RCU. For example,
> > given the read-side markers, you can check for errors like having a call to
> > rte_rcu_qsbr_quiescent() in the middle of a reader quite easily.
> > Without those read-side markers, life can be quite hard and you will really
> > hate yourself for failing to have provided them.
>
> Want to make sure I understood this, do you mean the application would mark before and after accessing the shared data structure on the reader side?
>
> rte_rcu_qsbr_lock()
> <begin access shared data structure>
> ...
> ...
> <end access shared data structure>
> rte_rcu_qsbr_unlock()
Yes, that is the idea.
> If someone is debugging this code, they have to make sure that there is an unlock for every lock and there is no call to rte_rcu_qsbr_quiescent in between.
> It sounds good to me. Obviously, they will not add any additional cycles as well.
> Please let me know if my understanding is correct.
Yes. And in some sort of debug mode, you could capture the counter at
rte_rcu_qsbr_lock() time and check it at rte_rcu_qsbr_unlock() time. If
the counter has advanced too far (more than one, if I am not too confused)
there is a bug. Also in debug mode, you could have rte_rcu_qsbr_lock()
increment a per-thread counter and rte_rcu_qsbr_unlock() decrement it.
If the counter is non-zero at a quiescent state, there is a bug.
And so on.
> > Some additional questions and comments interspersed.
> >
> > Thanx, Paul
> >
> > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > ---
> > > MAINTAINERS | 5 +
> > > config/common_base | 6 +
> > > lib/Makefile | 2 +
> > > lib/librte_rcu/Makefile | 23 ++
> > > lib/librte_rcu/meson.build | 5 +
> > > lib/librte_rcu/rte_rcu_qsbr.c | 237 ++++++++++++
> > > lib/librte_rcu/rte_rcu_qsbr.h | 554
> > +++++++++++++++++++++++++++++
> > > lib/librte_rcu/rte_rcu_version.map | 11 +
> > > lib/meson.build | 2 +-
> > > mk/rte.app.mk | 1 +
> > > 10 files changed, 845 insertions(+), 1 deletion(-) create mode
> > > 100644 lib/librte_rcu/Makefile create mode 100644
> > > lib/librte_rcu/meson.build create mode 100644
> > > lib/librte_rcu/rte_rcu_qsbr.c create mode 100644
> > > lib/librte_rcu/rte_rcu_qsbr.h create mode 100644
> > > lib/librte_rcu/rte_rcu_version.map
> > >
> > > diff --git a/MAINTAINERS b/MAINTAINERS index 9774344dd..6e9766eed
> > > 100644
> > > --- a/MAINTAINERS
> > > +++ b/MAINTAINERS
> > > @@ -1267,6 +1267,11 @@ F: examples/bpf/
> > > F: app/test/test_bpf.c
> > > F: doc/guides/prog_guide/bpf_lib.rst
> > >
> > > +RCU - EXPERIMENTAL
> > > +M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > +F: lib/librte_rcu/
> > > +F: doc/guides/prog_guide/rcu_lib.rst
> > > +
> > >
> > > Test Applications
> > > -----------------
> > > diff --git a/config/common_base b/config/common_base index
> > > 8da08105b..ad70c79e1 100644
> > > --- a/config/common_base
> > > +++ b/config/common_base
> > > @@ -829,6 +829,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y #
> > > CONFIG_RTE_LIBRTE_TELEMETRY=n
> > >
> > > +#
> > > +# Compile librte_rcu
> > > +#
> > > +CONFIG_RTE_LIBRTE_RCU=y
> > > +CONFIG_RTE_LIBRTE_RCU_DEBUG=n
> > > +
> > > #
> > > # Compile librte_lpm
> > > #
> > > diff --git a/lib/Makefile b/lib/Makefile index 26021d0c0..791e0d991
> > > 100644
> > > --- a/lib/Makefile
> > > +++ b/lib/Makefile
> > > @@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) +=
> > librte_ipsec
> > > DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev
> > > librte_security
> > > DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
> > > DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
> > > +DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
> > > +DEPDIRS-librte_rcu := librte_eal
> > >
> > > ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> > > DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni diff --git
> > > a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile new file mode
> > > 100644 index 000000000..6aa677bd1
> > > --- /dev/null
> > > +++ b/lib/librte_rcu/Makefile
> > > @@ -0,0 +1,23 @@
> > > +# SPDX-License-Identifier: BSD-3-Clause
> > > +# Copyright(c) 2018 Arm Limited
> > > +
> > > +include $(RTE_SDK)/mk/rte.vars.mk
> > > +
> > > +# library name
> > > +LIB = librte_rcu.a
> > > +
> > > +CFLAGS += -DALLOW_EXPERIMENTAL_API
> > > +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
> > > +LDLIBS += -lrte_eal
> > > +
> > > +EXPORT_MAP := rte_rcu_version.map
> > > +
> > > +LIBABIVER := 1
> > > +
> > > +# all source are stored in SRCS-y
> > > +SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
> > > +
> > > +# install includes
> > > +SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
> > > +
> > > +include $(RTE_SDK)/mk/rte.lib.mk
> > > diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
> > > new file mode 100644 index 000000000..c009ae4b7
> > > --- /dev/null
> > > +++ b/lib/librte_rcu/meson.build
> > > @@ -0,0 +1,5 @@
> > > +# SPDX-License-Identifier: BSD-3-Clause
> > > +# Copyright(c) 2018 Arm Limited
> > > +
> > > +sources = files('rte_rcu_qsbr.c')
> > > +headers = files('rte_rcu_qsbr.h')
> > > diff --git a/lib/librte_rcu/rte_rcu_qsbr.c
> > > b/lib/librte_rcu/rte_rcu_qsbr.c new file mode 100644 index
> > > 000000000..53d08446a
> > > --- /dev/null
> > > +++ b/lib/librte_rcu/rte_rcu_qsbr.c
> > > @@ -0,0 +1,237 @@
> > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > + *
> > > + * Copyright (c) 2018 Arm Limited
> > > + */
> > > +
> > > +#include <stdio.h>
> > > +#include <string.h>
> > > +#include <stdint.h>
> > > +#include <errno.h>
> > > +
> > > +#include <rte_common.h>
> > > +#include <rte_log.h>
> > > +#include <rte_memory.h>
> > > +#include <rte_malloc.h>
> > > +#include <rte_eal.h>
> > > +#include <rte_eal_memconfig.h>
> > > +#include <rte_atomic.h>
> > > +#include <rte_per_lcore.h>
> > > +#include <rte_lcore.h>
> > > +#include <rte_errno.h>
> > > +
> > > +#include "rte_rcu_qsbr.h"
> > > +
> > > +/* Get the memory size of QSBR variable */
> > > +size_t __rte_experimental
> > > +rte_rcu_qsbr_get_memsize(uint32_t max_threads)
> > > +{
> > > + size_t sz;
> > > +
> > > + if (max_threads == 0) {
> > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > + "%s(): Invalid max_threads %u\n",
> > > + __func__, max_threads);
> > > + rte_errno = EINVAL;
> > > +
> > > + return 1;
> > > + }
> > > +
> > > + sz = sizeof(struct rte_rcu_qsbr);
> > > +
> > > + /* Add the size of quiescent state counter array */
> > > + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> > > +
> > > + /* Add the size of the registered thread ID bitmap array */
> > > + sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
> > > +
> > > + return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
> >
> > Given that you align here, should you also align in the earlier steps in the
> > computation of sz?
>
> Agree. I will remove the align here and keep the earlier one as the intent is to align the thread ID array.
Sounds good!
> > > +}
> > > +
> > > +/* Initialize a quiescent state variable */
> > > +int __rte_experimental
> > > +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
> > > +{
> > > + size_t sz;
> > > +
> > > + if (v == NULL) {
> > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > + "%s(): Invalid input parameter\n", __func__);
> > > + rte_errno = EINVAL;
> > > +
> > > + return 1;
> > > + }
> > > +
> > > + sz = rte_rcu_qsbr_get_memsize(max_threads);
> > > + if (sz == 1)
> > > + return 1;
> > > +
> > > + /* Set all the threads to offline */
> > > + memset(v, 0, sz);
> >
> > We calculate sz here, but it looks like the caller must also calculate it in
> > order to correctly allocate the memory referenced by the "v" argument to
> > this function, with bad things happening if the two calculations get
> > different results. Should "v" instead be allocated within this function to
> > avoid this sort of problem?
>
An earlier version allocated the memory within this library. However, it was decided to go with the current implementation as it gives the application the flexibility to manage the memory as it sees fit. For example, it could allocate the variable as part of another structure in a single allocation. This also falls in line with the approach taken in other libraries.
So the allocator APIs vary too much to allow a pointer to the desired
allocator function to be passed in? Or do you also want to allow static
allocation? If the latter, would a DEFINE_RTE_RCU_QSBR() be of use?
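For concreteness, the layout behind this size computation can be sketched in stand-alone C (hypothetical names and a fixed 64-byte cache line; the real computation is the rte_rcu_qsbr_get_memsize() code quoted above):

```c
#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 64
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((size_t)(a) - 1))
#define THRID_ELM_BITS (sizeof(uint64_t) * 8)   /* 64 thread IDs per bitmap word */

/* Mirror of the layout under discussion: a fixed header, a per-thread
 * counter array (one cache line per counter), then a cache-line-aligned
 * thread-ID bitmap. These structs are illustrative stand-ins. */
struct qsbr_cnt { uint64_t cnt; } __attribute__((aligned(CACHE_LINE)));
struct qsbr_hdr {
	uint64_t token;
	uint32_t num_elems, num_threads, max_threads;
} __attribute__((aligned(CACHE_LINE)));

static size_t qsbr_memsize(uint32_t max_threads)
{
	size_t sz = sizeof(struct qsbr_hdr);
	/* quiescent state counter array */
	sz += sizeof(struct qsbr_cnt) * max_threads;
	/* bitmap: one bit per thread, rounded up to 64-bit words,
	 * then cache-line aligned (the alignment the thread ID array needs) */
	size_t words = (max_threads + THRID_ELM_BITS - 1) / THRID_ELM_BITS;
	sz += ALIGN_UP(words * sizeof(uint64_t), CACHE_LINE);
	return ALIGN_UP(sz, CACHE_LINE);
}
```

With max_threads = 128 this gives 64 (header) + 128 * 64 (counters) + 64 (bitmap) = 8320 bytes, already a multiple of the cache line size, which is why the final RTE_ALIGN was judged redundant above.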
> > > + v->max_threads = max_threads;
> > > + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> > > + v->token = RTE_QSBR_CNT_INIT;
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +/* Register a reader thread to report its quiescent state
> > > + * on a QS variable.
> > > + */
> > > +int __rte_experimental
> > > +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > > +{
> > > + unsigned int i, id, success;
> > > + uint64_t old_bmap, new_bmap;
> > > +
> > > + if (v == NULL || thread_id >= v->max_threads) {
> > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > + "%s(): Invalid input parameter\n", __func__);
> > > + rte_errno = EINVAL;
> > > +
> > > + return 1;
> > > + }
> > > +
> > > + id = thread_id & RTE_QSBR_THRID_MASK;
> > > + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > > +
> > > + /* Make sure that the counter for registered threads does not
> > > + * go out of sync. Hence, additional checks are required.
> > > + */
> > > + /* Check if the thread is already registered */
> > > + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > + __ATOMIC_RELAXED);
> > > + if (old_bmap & 1UL << id)
> > > + return 0;
> > > +
> > > + do {
> > > + new_bmap = old_bmap | (1UL << id);
> > > + success = __atomic_compare_exchange(
> > > + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > + &old_bmap, &new_bmap, 0,
> > > + __ATOMIC_RELEASE, __ATOMIC_RELAXED);
> > > +
> > > + if (success)
> > > + __atomic_fetch_add(&v->num_threads,
> > > + 1, __ATOMIC_RELAXED);
> > > + else if (old_bmap & (1UL << id))
> > > + /* Someone else registered this thread.
> > > + * Counter should not be incremented.
> > > + */
> > > + return 0;
> > > + } while (success == 0);
> >
> > This would be simpler if threads were required to register themselves.
> > Maybe you have use cases requiring registration of other threads, but this
> > capability is adding significant complexity, so it might be worth some
> > thought.
> >
> It was simpler earlier (a plain __atomic_fetch_or). The complexity was added so that 'num_threads' does not go out of sync with the bitmap.
Hmmm...
So threads are allowed to register other threads? Or is there some other
reason that concurrent registration is required?
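The bitmap/counter synchronization being discussed can be illustrated with a stand-alone miniature (hypothetical names, a single 64-thread bitmap word; not the library's API):

```c
#include <stdint.h>

/* Stand-alone miniature of the register logic: a 64-entry bitmap plus a
 * thread count that must stay in sync with it even when several threads
 * race to register the same ID concurrently. */
static uint64_t bmap;          /* bit i set => thread i registered */
static uint32_t num_threads;   /* must always equal popcount(bmap) */

static void mini_register(unsigned int id)
{
	uint64_t old = __atomic_load_n(&bmap, __ATOMIC_RELAXED);

	/* Already registered: the count was bumped by whoever set the bit. */
	if (old & (1UL << id))
		return;

	uint64_t new_bmap;
	do {
		new_bmap = old | (1UL << id);
		if (__atomic_compare_exchange_n(&bmap, &old, new_bmap, 0,
				__ATOMIC_RELEASE, __ATOMIC_RELAXED)) {
			/* We set the bit, so we own the increment. */
			__atomic_fetch_add(&num_threads, 1, __ATOMIC_RELAXED);
			return;
		}
		/* CAS failed: 'old' was reloaded. If another thread set our
		 * bit meanwhile, it also incremented the count; stop here. */
	} while (!(old & (1UL << id)));
}
```

The key point is that exactly one thread, the one whose CAS installs the bit, performs the increment; a plain fetch-or cannot tell the winner from the losers.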
> > > + return 0;
> > > +}
> > > +
> > > +/* Remove a reader thread, from the list of threads reporting their
> > > + * quiescent state on a QS variable.
> > > + */
> > > +int __rte_experimental
> > > +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > > +{
> > > + unsigned int i, id, success;
> > > + uint64_t old_bmap, new_bmap;
> > > +
> > > + if (v == NULL || thread_id >= v->max_threads) {
> > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > + "%s(): Invalid input parameter\n", __func__);
> > > + rte_errno = EINVAL;
> > > +
> > > + return 1;
> > > + }
> > > +
> > > + id = thread_id & RTE_QSBR_THRID_MASK;
> > > + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > > +
> > > + /* Make sure that the counter for registered threads does not
> > > + * go out of sync. Hence, additional checks are required.
> > > + */
> > > + /* Check if the thread is already unregistered */
> > > + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > + __ATOMIC_RELAXED);
> > > + if (!(old_bmap & (1UL << id)))
> > > + return 0;
> > > +
> > > + do {
> > > + new_bmap = old_bmap & ~(1UL << id);
> > > + /* Make sure any loads of the shared data structure are
> > > + * completed before removal of the thread from the list of
> > > + * reporting threads.
> > > + */
> > > + success = __atomic_compare_exchange(
> > > + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > + &old_bmap, &new_bmap, 0,
> > > + __ATOMIC_RELEASE, __ATOMIC_RELAXED);
> > > +
> > > + if (success)
> > > + __atomic_fetch_sub(&v->num_threads,
> > > + 1, __ATOMIC_RELAXED);
> > > + else if (!(old_bmap & (1UL << id)))
> > > + /* Someone else unregistered this thread.
> > > + * Counter should not be decremented.
> > > + */
> > > + return 0;
> > > + } while (success == 0);
> >
> > Ditto!
> >
> > > + return 0;
> > > +}
> > > +
> > > +/* Dump the details of a single quiescent state variable to a file. */
> > > +int __rte_experimental
> > > +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
> > > +{
> > > + uint64_t bmap;
> > > + uint32_t i, t;
> > > +
> > > + if (v == NULL || f == NULL) {
> > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > + "%s(): Invalid input parameter\n", __func__);
> > > + rte_errno = EINVAL;
> > > +
> > > + return 1;
> > > + }
> > > +
> > > + fprintf(f, "\nQuiescent State Variable @%p\n", v);
> > > +
> > > + fprintf(f, " QS variable memory size = %lu\n",
> > > + rte_rcu_qsbr_get_memsize(v->max_threads));
> > > + fprintf(f, " Given # max threads = %u\n", v->max_threads);
> > > + fprintf(f, " Current # threads = %u\n", v->num_threads);
> > > +
> > > + fprintf(f, " Registered thread ID mask = 0x");
> > > + for (i = 0; i < v->num_elems; i++)
> > > + fprintf(f, "%lx", __atomic_load_n(
> > > + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > + __ATOMIC_ACQUIRE));
> > > + fprintf(f, "\n");
> > > +
> > > + fprintf(f, " Token = %lu\n",
> > > + __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
> > > +
> > > + fprintf(f, "Quiescent State Counts for readers:\n");
> > > + for (i = 0; i < v->num_elems; i++) {
> > > + bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > + __ATOMIC_ACQUIRE);
> > > + while (bmap) {
> > > + t = __builtin_ctzl(bmap);
> > > + fprintf(f, "thread ID = %d, count = %lu\n", t,
> > > + __atomic_load_n(
> > > + &v->qsbr_cnt[i].cnt,
> > > + __ATOMIC_RELAXED));
> > > + bmap &= ~(1UL << t);
> > > + }
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +int rcu_log_type;
> > > +
> > > +RTE_INIT(rte_rcu_register)
> > > +{
> > > + rcu_log_type = rte_log_register("lib.rcu");
> > > + if (rcu_log_type >= 0)
> > > + rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
> > > +}
> > > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > > b/lib/librte_rcu/rte_rcu_qsbr.h new file mode 100644 index
> > > 000000000..ff696aeab
> > > --- /dev/null
> > > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > > @@ -0,0 +1,554 @@
> > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > + * Copyright (c) 2018 Arm Limited
> > > + */
> > > +
> > > +#ifndef _RTE_RCU_QSBR_H_
> > > +#define _RTE_RCU_QSBR_H_
> > > +
> > > +/**
> > > + * @file
> > > + * RTE Quiescent State Based Reclamation (QSBR)
> > > + *
> > > + * Quiescent State (QS) is any point in the thread execution
> > > + * where the thread does not hold a reference to a data structure
> > > + * in shared memory. While using lock-less data structures, the
> > > +writer
> > > + * can safely free memory once all the reader threads have entered
> > > + * quiescent state.
> > > + *
> > > + * This library provides the ability for the readers to report
> > > +quiescent
> > > + * state and for the writers to identify when all the readers have
> > > + * entered quiescent state.
> > > + */
> > > +
> > > +#ifdef __cplusplus
> > > +extern "C" {
> > > +#endif
> > > +
> > > +#include <stdio.h>
> > > +#include <stdint.h>
> > > +#include <errno.h>
> > > +#include <rte_common.h>
> > > +#include <rte_memory.h>
> > > +#include <rte_lcore.h>
> > > +#include <rte_debug.h>
> > > +#include <rte_atomic.h>
> > > +
> > > +extern int rcu_log_type;
> > > +
> > > +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
> > > +#define RCU_DP_LOG(level, fmt, args...) \
> > > + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> > > + "%s(): " fmt "\n", __func__, ## args)
> > > +#else
> > > +#define RCU_DP_LOG(level, fmt, args...)
> > > +#endif
> > > +
> > > +/* Registered thread IDs are stored as a bitmap of 64b element array.
> > > + * Given thread id needs to be converted to index into the array and
> > > + * the id within the array element.
> > > + */
> > > +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> > > +#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
> > > + RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
> > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
> > > +#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
> > > + ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
> > > +#define RTE_QSBR_THRID_INDEX_SHIFT 6
> > > +#define RTE_QSBR_THRID_MASK 0x3f
> > > +#define RTE_QSBR_THRID_INVALID 0xffffffff
> > > +
> > > +/* Worker thread counter */
> > > +struct rte_rcu_qsbr_cnt {
> > > + uint64_t cnt;
> > > + /**< Quiescent state counter. Value 0 indicates the thread is offline */
> > > +} __rte_cache_aligned;
> > > +
> > > +#define RTE_QSBR_CNT_THR_OFFLINE 0
> > > +#define RTE_QSBR_CNT_INIT 1
> > > +
> > > +/* RTE Quiescent State variable structure.
> > > + * This structure has two elements that vary in size based on the
> > > + * 'max_threads' parameter.
> > > + * 1) Quiescent state counter array
> > > + * 2) Register thread ID array
> > > + */
> > > +struct rte_rcu_qsbr {
> > > + uint64_t token __rte_cache_aligned;
> > > + /**< Counter to allow for multiple concurrent quiescent state
> > > +queries */
> > > +
> > > + uint32_t num_elems __rte_cache_aligned;
> > > + /**< Number of elements in the thread ID array */
> > > + uint32_t num_threads;
> > > + /**< Number of threads currently using this QS variable */
> > > + uint32_t max_threads;
> > > + /**< Maximum number of threads using this QS variable */
> > > +
> > > + struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
> > > + /**< Quiescent state counter array of 'max_threads' elements */
> > > +
> > > + /**< Registered thread IDs are stored in a bitmap array,
> > > + * after the quiescent state counter array.
> > > + */
> > > +} __rte_cache_aligned;
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Return the size of the memory occupied by a Quiescent State
> > variable.
> > > + *
> > > + * @param max_threads
> > > + * Maximum number of threads reporting quiescent state on this
> > variable.
> > > + * @return
> > > + * On success - size of memory in bytes required for this QS variable.
> > > + * On error - 1 with error code set in rte_errno.
> > > + * Possible rte_errno codes are:
> > > + * - EINVAL - max_threads is 0
> > > + */
> > > +size_t __rte_experimental
> > > +rte_rcu_qsbr_get_memsize(uint32_t max_threads);
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Initialize a Quiescent State (QS) variable.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @param max_threads
> > > + * Maximum number of threads reporting quiescent state on this
> > variable.
> > > + * This should be the same value as passed to
> > rte_rcu_qsbr_get_memsize.
> > > + * @return
> > > + * On success - 0
> > > + * On error - 1 with error code set in rte_errno.
> > > + * Possible rte_errno codes are:
> > > + * - EINVAL - max_threads is 0 or 'v' is NULL.
> > > + *
> > > + */
> > > +int __rte_experimental
> > > +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Register a reader thread to report its quiescent state
> > > + * on a QS variable.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread
> > > + * safe.
> > > + * Any reader thread that wants to report its quiescent state must
> > > + * call this API. This can be called during initialization or as part
> > > + * of the packet processing loop.
> > > + *
> > > + * Note that rte_rcu_qsbr_thread_online must be called before the
> > > + * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @param thread_id
> > > + * Reader thread with this thread ID will report its quiescent state on
> > > + * the QS variable. thread_id is a value between 0 and (max_threads -
> > 1).
> > > + * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
> > > + */
> > > +int __rte_experimental
> > > +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int
> > > +thread_id);
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Remove a reader thread, from the list of threads reporting their
> > > + * quiescent state on a QS variable.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread safe.
> > > + * This API can be called from the reader threads during shutdown.
> > > + * Ongoing quiescent state queries will stop waiting for the status
> > > +from this
> > > + * unregistered reader thread.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @param thread_id
> > > + * Reader thread with this thread ID will stop reporting its quiescent
> > > + * state on the QS variable.
> > > + */
> > > +int __rte_experimental
> > > +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int
> > > +thread_id);
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Add a registered reader thread, to the list of threads reporting
> > > +their
> > > + * quiescent state on a QS variable.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread
> > > + * safe.
> > > + *
> > > + * Any registered reader thread that wants to report its quiescent
> > > +state must
> > > + * call this API before calling rte_rcu_qsbr_quiescent. This can be
> > > +called
> > > + * during initialization or as part of the packet processing loop.
> > > + *
> > > + * The reader thread must call rte_rcu_thread_offline API, before
> > > + * calling any functions that block, to ensure that
> > > +rte_rcu_qsbr_check
> > > + * API does not wait indefinitely for the reader thread to update its QS.
> > > + *
> > > + * The reader thread must call rte_rcu_thread_online API, after the
> > > +blocking
> > > + * function call returns, to ensure that rte_rcu_qsbr_check API
> > > + * waits for the reader thread to update its quiescent state.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @param thread_id
> > > + * Reader thread with this thread ID will report its quiescent state on
> > > + * the QS variable.
> > > + */
> > > +static __rte_always_inline void __rte_experimental
> > > +rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
> >
> > I am not clear on why this function should be inline. Or do you have use
> > cases where threads go offline and come back online extremely frequently?
>
> Yes, there are use cases where the function call to receive the packets can block.
OK.
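A stand-alone miniature of the online/offline sequence under discussion (hypothetical names; not the library's API) may help show why inlining matters: the body is only two plain stores/loads plus one full fence.

```c
#include <stdint.h>

/* Miniature of the online sequence: copy the current token into this
 * thread's counter, then issue a full fence so that later loads of the
 * shared data structure cannot be reordered before the store that marks
 * the reader online. */
static uint64_t token = 1;     /* writer's grace-period counter */
static uint64_t reader_cnt;    /* 0 == reader offline */

static void mini_online(void)
{
	uint64_t t = __atomic_load_n(&token, __ATOMIC_RELAXED);
	__atomic_store_n(&reader_cnt, t, __ATOMIC_RELAXED);
	/* Store-load ordering is the one case release/acquire cannot
	 * provide; a SEQ_CST fence (mfence/dmb) is required here. */
	__atomic_thread_fence(__ATOMIC_SEQ_CST);
}

static void mini_offline(void)
{
	/* Prior loads of the shared structure must complete before the
	 * reader is marked offline; hence store-release. */
	__atomic_store_n(&reader_cnt, 0, __ATOMIC_RELEASE);
}
```

A reader wrapping a blocking receive call would do mini_offline() before the call and mini_online() after it returns, so the writer-side check never waits on a blocked thread.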
> > > +{
> > > + uint64_t t;
> > > +
> > > + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> > > +
> > > + /* Copy the current value of token.
> > > + * The fence at the end of the function will ensure that
> > > + * the following will not move down after the load of any shared
> > > + * data structure.
> > > + */
> > > + t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
> > > +
> > > + /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> > > + * 'cnt' (64b) is accessed atomically.
> > > + */
> > > + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> > > + t, __ATOMIC_RELAXED);
> > > +
> > > + /* The subsequent load of the data structure should not
> > > + * move above the store. Hence a store-load barrier
> > > + * is required.
> > > + * If the load of the data structure moves above the store,
> > > + * writer might not see that the reader is online, even though
> > > + * the reader is referencing the shared data structure.
> > > + */
> > > +#ifdef RTE_ARCH_X86_64
> > > + /* rte_smp_mb() for x86 is lighter */
> > > + rte_smp_mb();
> > > +#else
> > > + __atomic_thread_fence(__ATOMIC_SEQ_CST);
> > > +#endif
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Remove a registered reader thread from the list of threads
> > > +reporting their
> > > + * quiescent state on a QS variable.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread
> > > + * safe.
> > > + *
> > > + * This can be called during initialization or as part of the packet
> > > + * processing loop.
> > > + *
> > > + * The reader thread must call rte_rcu_thread_offline API, before
> > > + * calling any functions that block, to ensure that
> > > +rte_rcu_qsbr_check
> > > + * API does not wait indefinitely for the reader thread to update its QS.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @param thread_id
> > > + * rte_rcu_qsbr_check API will not wait for the reader thread with
> > > + * this thread ID to report its quiescent state on the QS variable.
> > > + */
> > > +static __rte_always_inline void __rte_experimental
> > > +rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
> >
> > Same here on inlining.
> >
> > > +{
> > > + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> > > +
> > > + /* The reader can go offline only after the load of the
> > > + * data structure is completed. i.e. any load of the
> > > + * data structure cannot move after this store.
> > > + */
> > > +
> > > + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> > > + RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Ask the reader threads to report the quiescent state
> > > + * status.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread
> > > + * safe and can be called from worker threads.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @return
> > > + * - This is the token for this call of the API. This should be
> > > + * passed to rte_rcu_qsbr_check API.
> > > + */
> > > +static __rte_always_inline uint64_t __rte_experimental
> > > +rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
> > > +{
> > > + uint64_t t;
> > > +
> > > + RTE_ASSERT(v != NULL);
> > > +
> > > + /* Release the changes to the shared data structure.
> > > + * This store release will ensure that changes to any data
> > > + * structure are visible to the workers before the token
> > > + * update is visible.
> > > + */
> > > + t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
> > > +
> > > + return t;
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Update quiescent state for a reader thread.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread safe.
> > > + * All the reader threads registered to report their quiescent state
> > > + * on the QS variable must call this API.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @param thread_id
> > > + * Update the quiescent state for the reader with this thread ID.
> > > + */
> > > +static __rte_always_inline void __rte_experimental
> > > +rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > > +{
> > > + uint64_t t;
> > > +
> > > + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> > > +
> > > + /* Acquire the changes to the shared data structure released
> > > + * by rte_rcu_qsbr_start.
> > > + * Later loads of the shared data structure should not move
> > > + * above this load. Hence, use load-acquire.
> > > + */
> > > + t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
> > > +
> > > + /* Inform the writer that updates are visible to this reader.
> > > + * Prior loads of the shared data structure should not move
> > > + * beyond this store. Hence use store-release.
> > > + */
> > > + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> > > + t, __ATOMIC_RELEASE);
> > > +
> > > + RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
> > > + __func__, t, thread_id);
> > > +}
> > > +
> > > +/* Check the quiescent state counter for registered threads only,
> > > +assuming
> > > + * that not all threads have registered.
> > > + */
> > > +static __rte_always_inline int
> > > +__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> > > +{
> > > + uint32_t i, j, id;
> > > + uint64_t bmap;
> > > + uint64_t c;
> > > + uint64_t *reg_thread_id;
> > > +
> > > + for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
> > > + i < v->num_elems;
> > > + i++, reg_thread_id++) {
> > > + /* Load the current registered thread bit map before
> > > + * loading the reader thread quiescent state counters.
> > > + */
> > > + bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
> > > + id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> > > +
> > > + while (bmap) {
> > > + j = __builtin_ctzl(bmap);
> > > + RCU_DP_LOG(DEBUG,
> > > + "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
> > > + __func__, t, wait, bmap, id + j);
> > > + c = __atomic_load_n(
> > > + &v->qsbr_cnt[id + j].cnt,
> > > + __ATOMIC_ACQUIRE);
> > > + RCU_DP_LOG(DEBUG,
> > > + "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> > > + __func__, t, wait, c, id+j);
> > > + /* Counter is not checked for wrap-around condition
> > > + * as it is a 64b counter.
> > > + */
> > > + if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
> >
> > This assumes that a 64-bit counter won't overflow, which is close enough
> > to true given current CPU clock frequencies. ;-)
> >
> > > + /* This thread is not in quiescent state */
> > > + if (!wait)
> > > + return 0;
> > > +
> > > + rte_pause();
> > > + /* This thread might have unregistered.
> > > + * Re-read the bitmap.
> > > + */
> > > + bmap = __atomic_load_n(reg_thread_id,
> > > + __ATOMIC_ACQUIRE);
> > > +
> > > + continue;
> > > + }
> > > +
> > > + bmap &= ~(1UL << j);
> > > + }
> > > + }
> > > +
> > > + return 1;
> > > +}
> > > +
> > > +/* Check the quiescent state counter for all threads, assuming that
> > > + * all the threads have registered.
> > > + */
> > > +static __rte_always_inline int
> > > +__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> >
> > Does checking the bitmap really take long enough to make this worthwhile
> > as a separate function? I would think that the bitmap-checking time
> > would be lost in the noise of cache misses from the ->cnt loads.
>
> It avoids accessing one cache line. I think this is where the savings are (maybe only in theory). This is the most probable use case.
> On the other hand, __rcu_qsbr_check_selective() will result in savings (depending on how many threads are currently registered) by avoiding access to unwanted counters.
Do you really expect to be calling this function on any kind of fastpath?
> > Sure, if you invoke __rcu_qsbr_check_selective() in a tight loop in the
> > absence of readers, you might see __rcu_qsbr_check_all() being a bit
> > faster. But is that really what DPDK does?
> I see improvements in the synthetic test case (similar to the one you have described, around 27%). However, in the more practical test cases I do not see any difference.
If the performance improvement only occurs in a synthetic test case,
does it really make sense to optimize for it?
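For reference, a single-reader miniature of the token/check handshake both variants implement (hypothetical names; the real code additionally iterates over the bitmap and the full counter array):

```c
#include <stdint.h>
#include <stdbool.h>

/* Single-reader miniature of the grace-period check. A counter of 0 means
 * the reader is offline and need not be waited on. */
static uint64_t w_token = 1;   /* writer's grace-period counter */
static uint64_t r_cnt;         /* reader's last reported token; 0 == offline */

static uint64_t mini_start(void)
{
	/* Release the writer's prior removal so readers observe it
	 * no later than the new token value. */
	return __atomic_add_fetch(&w_token, 1, __ATOMIC_RELEASE);
}

static bool mini_check(uint64_t t)
{
	uint64_t c = __atomic_load_n(&r_cnt, __ATOMIC_ACQUIRE);
	/* Offline readers pass; online readers must have reported a
	 * quiescent state at or after token t. 64-bit counters make a
	 * wrap-around check unnecessary in practice. */
	return c == 0 || c >= t;
}
```

The writer deletes an element, calls mini_start(), and frees the memory only once mini_check() returns true for the resulting token.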
> > > +{
> > > + uint32_t i;
> > > + struct rte_rcu_qsbr_cnt *cnt;
> > > + uint64_t c;
> > > +
> > > + for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
> > > + RCU_DP_LOG(DEBUG,
> > > + "%s: check: token = %lu, wait = %d, Thread ID = %d",
> > > + __func__, t, wait, i);
> > > + while (1) {
> > > + c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
> > > + RCU_DP_LOG(DEBUG,
> > > + "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> > > + __func__, t, wait, c, i);
> > > + /* Counter is not checked for wrap-around condition
> > > + * as it is a 64b counter.
> > > + */
> > > + if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
> > > + break;
> > > +
> > > + /* This thread is not in quiescent state */
> > > + if (!wait)
> > > + return 0;
> > > +
> > > + rte_pause();
> > > + }
> > > + }
> > > +
> > > + return 1;
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Checks if all the reader threads have entered the quiescent state
> > > + * referenced by token.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread
> > > + * safe and can be called from the worker threads as well.
> > > + *
> > > + * If this API is called with 'wait' set to true, the following
> > > + * factors must be considered:
> > > + *
> > > + * 1) If the calling thread is also reporting the status on the
> > > + * same QS variable, it must update the quiescent state status,
> > > +before
> > > + * calling this API.
> > > + *
> > > + * 2) In addition, while calling from multiple threads, only
> > > + * one of those threads can be reporting the quiescent state status
> > > + * on a given QS variable.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @param t
> > > + * Token returned by rte_rcu_qsbr_start API
> > > + * @param wait
> > > + * If true, block till all the reader threads have completed entering
> > > + * the quiescent state referenced by token 't'.
> > > + * @return
> > > + * - 0 if all reader threads have NOT passed through specified number
> > > + * of quiescent states.
> > > + * - 1 if all reader threads have passed through specified number
> > > + * of quiescent states.
> > > + */
> > > +static __rte_always_inline int __rte_experimental
> > > +rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> > > +{
> > > + RTE_ASSERT(v != NULL);
> > > +
> > > + if (likely(v->num_threads == v->max_threads))
> > > + return __rcu_qsbr_check_all(v, t, wait);
> > > + else
> > > + return __rcu_qsbr_check_selective(v, t, wait);
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Wait till the reader threads have entered quiescent state.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread safe.
> > > + * This API can be thought of as a wrapper around rte_rcu_qsbr_start
> > > +and
> > > + * rte_rcu_qsbr_check APIs.
> > > + *
> > > + * If this API is called from multiple threads, only one of
> > > + * those threads can be reporting the quiescent state status on a
> > > + * given QS variable.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @param thread_id
> > > + * Thread ID of the caller if it is registered to report quiescent state
> > > + * on this QS variable (i.e. the calling thread is also part of the
> > > + * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
> > > + */
> > > +static __rte_always_inline void __rte_experimental
> > > +rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > > +{
> > > + uint64_t t;
> > > +
> > > + RTE_ASSERT(v != NULL);
> > > +
> > > + t = rte_rcu_qsbr_start(v);
> > > +
> > > + /* If the current thread has readside critical section,
> > > + * update its quiescent state status.
> > > + */
> > > + if (thread_id != RTE_QSBR_THRID_INVALID)
> > > + rte_rcu_qsbr_quiescent(v, thread_id);
> > > +
> > > + /* Wait for other readers to enter quiescent state */
> > > + rte_rcu_qsbr_check(v, t, true);
> >
> > And you are presumably relying on 64-bit counters to avoid the need to
> > execute the above code twice in succession. Which again works given
> > current CPU clock rates combined with system and human lifespans.
> > Otherwise, there are interesting race conditions that can happen, so don't
> > try this with a 32-bit counter!!!
>
> Yes. I am relying on 64-bit counters to avoid having to spend cycles (and time).
>
> > (But think of the great^N grandchildren!!!)
>
> (It is an interesting thought. I wonder what would happen to all the code we are writing today 😊)
I suspect that most systems will be rebooted more than once per decade,
so unless CPU core clock rates manage to go up another order of magnitude,
we should be just fine.
Famous last words! ;-)
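A quick back-of-the-envelope check of the 64-bit assumption: even at an unrealistically high rate of one token increment per nanosecond (far faster than any plausible grace-period rate), wrap-around takes centuries:

```latex
t_{\mathrm{wrap}} = \frac{2^{64}\ \text{increments}}{10^{9}\ \text{increments/s}}
                  \approx 1.84 \times 10^{10}\ \text{s}
                  \approx 585\ \text{years}
```

A 32-bit counter at the same rate would wrap in about 4.3 seconds, which is why the warning comment below is worth adding.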
> > More seriously, a comment warning people not to make the counter be 32
> > bits is in order.
> Agree, I will add it in the structure definition.
Sounds good!
Thanx, Paul
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Dump the details of a single QS variables to a file.
> > > + *
> > > + * It is NOT multi-thread safe.
> > > + *
> > > + * @param f
> > > + * A pointer to a file for output
> > > + * @param v
> > > + * QS variable
> > > + * @return
> > > + * On success - 0
> > > + * On error - 1 with error code set in rte_errno.
> > > + * Possible rte_errno codes are:
> > > + * - EINVAL - NULL parameters are passed
> > > + */
> > > +int __rte_experimental
> > > +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
> > > +
> > > +#ifdef __cplusplus
> > > +}
> > > +#endif
> > > +
> > > +#endif /* _RTE_RCU_QSBR_H_ */
> > > diff --git a/lib/librte_rcu/rte_rcu_version.map
> > > b/lib/librte_rcu/rte_rcu_version.map
> > > new file mode 100644
> > > index 000000000..ad8cb517c
> > > --- /dev/null
> > > +++ b/lib/librte_rcu/rte_rcu_version.map
> > > @@ -0,0 +1,11 @@
> > > +EXPERIMENTAL {
> > > + global:
> > > +
> > > + rte_rcu_qsbr_get_memsize;
> > > + rte_rcu_qsbr_init;
> > > + rte_rcu_qsbr_thread_register;
> > > + rte_rcu_qsbr_thread_unregister;
> > > + rte_rcu_qsbr_dump;
> > > +
> > > + local: *;
> > > +};
> > > diff --git a/lib/meson.build b/lib/meson.build index
> > > 595314d7d..67be10659 100644
> > > --- a/lib/meson.build
> > > +++ b/lib/meson.build
> > > @@ -22,7 +22,7 @@ libraries = [
> > > 'gro', 'gso', 'ip_frag', 'jobstats',
> > > 'kni', 'latencystats', 'lpm', 'member',
> > > 'power', 'pdump', 'rawdev',
> > > - 'reorder', 'sched', 'security', 'stack', 'vhost',
> > > + 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
> > > #ipsec lib depends on crypto and security
> > > 'ipsec',
> > > # add pkt framework libs which use other libs from above diff --git
> > > a/mk/rte.app.mk b/mk/rte.app.mk index 7d994bece..e93cc366d 100644
> > > --- a/mk/rte.app.mk
> > > +++ b/mk/rte.app.mk
> > > @@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -
> > lrte_eal
> > > _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
> > > _LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
> > > _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
> > > +_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
> > >
> > > ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> > > _LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
> > > --
> > > 2.17.1
> > >
>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-11 15:26 ` Paul E. McKenney
@ 2019-04-11 15:26 ` Paul E. McKenney
2019-04-12 20:21 ` Honnappa Nagarahalli
1 sibling, 0 replies; 260+ messages in thread
From: Paul E. McKenney @ 2019-04-11 15:26 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: konstantin.ananyev, stephen, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
On Thu, Apr 11, 2019 at 04:35:04AM +0000, Honnappa Nagarahalli wrote:
> Hi Paul,
> Thank you for your feedback.
>
> > -----Original Message-----
> > From: Paul E. McKenney <paulmck@linux.ibm.com>
> > Sent: Wednesday, April 10, 2019 1:15 PM
> > To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> > Cc: konstantin.ananyev@intel.com; stephen@networkplumber.org;
> > marko.kovacevic@intel.com; dev@dpdk.org; Gavin Hu (Arm Technology
> > China) <Gavin.Hu@arm.com>; Dharmik Thakkar
> > <Dharmik.Thakkar@arm.com>; Malvika Gupta <Malvika.Gupta@arm.com>
> > Subject: Re: [PATCH v4 1/3] rcu: add RCU library supporting QSBR
> > mechanism
> >
> > On Wed, Apr 10, 2019 at 06:20:04AM -0500, Honnappa Nagarahalli wrote:
> > > Add RCU library supporting quiescent state based memory reclamation
> > method.
> > > This library helps identify the quiescent state of the reader threads
> > > so that the writers can free the memory associated with the lock less
> > > data structures.
> >
> > I don't see any sign of read-side markers (rcu_read_lock() and
> > rcu_read_unlock() in the Linux kernel, userspace RCU, etc.).
> >
> > Yes, strictly speaking, these are not needed for QSBR to operate, but they
> These APIs would be empty for QSBR.
>
> > make it way easier to maintain and debug code using RCU. For example,
> > given the read-side markers, you can check for errors like having a call to
> > rte_rcu_qsbr_quiescent() in the middle of a reader quite easily.
> > Without those read-side markers, life can be quite hard and you will really
> > hate yourself for failing to have provided them.
>
> I want to make sure I understood this: do you mean the application would mark before and after accessing the shared data structure on the reader side?
>
> rte_rcu_qsbr_lock()
> <begin access shared data structure>
> ...
> ...
> <end access shared data structure>
> rte_rcu_qsbr_unlock()
Yes, that is the idea.
> If someone is debugging this code, they have to make sure that there is an unlock for every lock and there is no call to rte_rcu_qsbr_quiescent in between.
> It sounds good to me. Obviously, they will not add any additional cycles either.
> Please let me know if my understanding is correct.
Yes. And in some sort of debug mode, you could capture the counter at
rte_rcu_qsbr_lock() time and check it at rte_rcu_qsbr_unlock() time. If
the counter has advanced too far (more than one, if I am not too confused)
there is a bug. Also in debug mode, you could have rte_rcu_qsbr_lock()
increment a per-thread counter and rte_rcu_qsbr_unlock() decrement it.
If the counter is non-zero at a quiescent state, there is a bug.
And so on.
> > Some additional questions and comments interspersed.
> >
> > Thanx, Paul
> >
> > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > ---
> > > MAINTAINERS | 5 +
> > > config/common_base | 6 +
> > > lib/Makefile | 2 +
> > > lib/librte_rcu/Makefile | 23 ++
> > > lib/librte_rcu/meson.build | 5 +
> > > lib/librte_rcu/rte_rcu_qsbr.c | 237 ++++++++++++
> > > lib/librte_rcu/rte_rcu_qsbr.h | 554
> > +++++++++++++++++++++++++++++
> > > lib/librte_rcu/rte_rcu_version.map | 11 +
> > > lib/meson.build | 2 +-
> > > mk/rte.app.mk | 1 +
> > > 10 files changed, 845 insertions(+), 1 deletion(-) create mode
> > > 100644 lib/librte_rcu/Makefile create mode 100644
> > > lib/librte_rcu/meson.build create mode 100644
> > > lib/librte_rcu/rte_rcu_qsbr.c create mode 100644
> > > lib/librte_rcu/rte_rcu_qsbr.h create mode 100644
> > > lib/librte_rcu/rte_rcu_version.map
> > >
> > > diff --git a/MAINTAINERS b/MAINTAINERS index 9774344dd..6e9766eed
> > > 100644
> > > --- a/MAINTAINERS
> > > +++ b/MAINTAINERS
> > > @@ -1267,6 +1267,11 @@ F: examples/bpf/
> > > F: app/test/test_bpf.c
> > > F: doc/guides/prog_guide/bpf_lib.rst
> > >
> > > +RCU - EXPERIMENTAL
> > > +M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > +F: lib/librte_rcu/
> > > +F: doc/guides/prog_guide/rcu_lib.rst
> > > +
> > >
> > > Test Applications
> > > -----------------
> > > diff --git a/config/common_base b/config/common_base index
> > > 8da08105b..ad70c79e1 100644
> > > --- a/config/common_base
> > > +++ b/config/common_base
> > > @@ -829,6 +829,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y #
> > > CONFIG_RTE_LIBRTE_TELEMETRY=n
> > >
> > > +#
> > > +# Compile librte_rcu
> > > +#
> > > +CONFIG_RTE_LIBRTE_RCU=y
> > > +CONFIG_RTE_LIBRTE_RCU_DEBUG=n
> > > +
> > > #
> > > # Compile librte_lpm
> > > #
> > > diff --git a/lib/Makefile b/lib/Makefile index 26021d0c0..791e0d991
> > > 100644
> > > --- a/lib/Makefile
> > > +++ b/lib/Makefile
> > > @@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) +=
> > librte_ipsec
> > > DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev
> > > librte_security
> > > DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
> > > DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
> > > +DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
> > > +DEPDIRS-librte_rcu := librte_eal
> > >
> > > ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> > > DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni diff --git
> > > a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile new file mode
> > > 100644 index 000000000..6aa677bd1
> > > --- /dev/null
> > > +++ b/lib/librte_rcu/Makefile
> > > @@ -0,0 +1,23 @@
> > > +# SPDX-License-Identifier: BSD-3-Clause
> > > +# Copyright(c) 2018 Arm Limited
> > > +
> > > +include $(RTE_SDK)/mk/rte.vars.mk
> > > +
> > > +# library name
> > > +LIB = librte_rcu.a
> > > +
> > > +CFLAGS += -DALLOW_EXPERIMENTAL_API
> > > +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
> > > +LDLIBS += -lrte_eal
> > > +
> > > +EXPORT_MAP := rte_rcu_version.map
> > > +
> > > +LIBABIVER := 1
> > > +
> > > +# all source are stored in SRCS-y
> > > +SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
> > > +
> > > +# install includes
> > > +SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
> > > +
> > > +include $(RTE_SDK)/mk/rte.lib.mk
> > > diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
> > > new file mode 100644 index 000000000..c009ae4b7
> > > --- /dev/null
> > > +++ b/lib/librte_rcu/meson.build
> > > @@ -0,0 +1,5 @@
> > > +# SPDX-License-Identifier: BSD-3-Clause
> > > +# Copyright(c) 2018 Arm Limited
> > > +
> > > +sources = files('rte_rcu_qsbr.c')
> > > +headers = files('rte_rcu_qsbr.h')
> > > diff --git a/lib/librte_rcu/rte_rcu_qsbr.c
> > > b/lib/librte_rcu/rte_rcu_qsbr.c new file mode 100644 index
> > > 000000000..53d08446a
> > > --- /dev/null
> > > +++ b/lib/librte_rcu/rte_rcu_qsbr.c
> > > @@ -0,0 +1,237 @@
> > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > + *
> > > + * Copyright (c) 2018 Arm Limited
> > > + */
> > > +
> > > +#include <stdio.h>
> > > +#include <string.h>
> > > +#include <stdint.h>
> > > +#include <errno.h>
> > > +
> > > +#include <rte_common.h>
> > > +#include <rte_log.h>
> > > +#include <rte_memory.h>
> > > +#include <rte_malloc.h>
> > > +#include <rte_eal.h>
> > > +#include <rte_eal_memconfig.h>
> > > +#include <rte_atomic.h>
> > > +#include <rte_per_lcore.h>
> > > +#include <rte_lcore.h>
> > > +#include <rte_errno.h>
> > > +
> > > +#include "rte_rcu_qsbr.h"
> > > +
> > > +/* Get the memory size of QSBR variable */
> > > +size_t __rte_experimental
> > > +rte_rcu_qsbr_get_memsize(uint32_t max_threads)
> > > +{
> > > + size_t sz;
> > > +
> > > + if (max_threads == 0) {
> > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > + "%s(): Invalid max_threads %u\n",
> > > + __func__, max_threads);
> > > + rte_errno = EINVAL;
> > > +
> > > + return 1;
> > > + }
> > > +
> > > + sz = sizeof(struct rte_rcu_qsbr);
> > > +
> > > + /* Add the size of quiescent state counter array */
> > > + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> > > +
> > > + /* Add the size of the registered thread ID bitmap array */
> > > + sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
> > > +
> > > + return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
> >
> > Given that you align here, should you also align in the earlier steps in the
> > computation of sz?
>
> Agree. I will remove the align here and keep the earlier one as the intent is to align the thread ID array.
Sounds good!
> > > +}
> > > +
> > > +/* Initialize a quiescent state variable */
> > > +int __rte_experimental
> > > +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
> > > +{
> > > + size_t sz;
> > > +
> > > + if (v == NULL) {
> > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > + "%s(): Invalid input parameter\n", __func__);
> > > + rte_errno = EINVAL;
> > > +
> > > + return 1;
> > > + }
> > > +
> > > + sz = rte_rcu_qsbr_get_memsize(max_threads);
> > > + if (sz == 1)
> > > + return 1;
> > > +
> > > + /* Set all the threads to offline */
> > > + memset(v, 0, sz);
> >
> > We calculate sz here, but it looks like the caller must also calculate it in
> > order to correctly allocate the memory referenced by the "v" argument to
> > this function, with bad things happening if the two calculations get
> > different results. Should "v" instead be allocated within this function to
> > avoid this sort of problem?
>
> An earlier version allocated the memory within this library. However, it was decided to go with the current implementation, as it provides flexibility for the application to manage the memory as it sees fit. For example, it could allocate this as part of another structure in a single allocation. This also falls in line with the approach taken in other libraries.
So the allocator APIs vary too much to allow a pointer to the desired
allocator function to be passed in? Or do you also want to allow static
allocation? If the latter, would a DEFINE_RTE_RCU_QSBR() be of use?
> > > + v->max_threads = max_threads;
> > > + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> > > + v->token = RTE_QSBR_CNT_INIT;
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +/* Register a reader thread to report its quiescent state
> > > + * on a QS variable.
> > > + */
> > > +int __rte_experimental
> > > +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > > +{
> > > + unsigned int i, id, success;
> > > + uint64_t old_bmap, new_bmap;
> > > +
> > > + if (v == NULL || thread_id >= v->max_threads) {
> > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > + "%s(): Invalid input parameter\n", __func__);
> > > + rte_errno = EINVAL;
> > > +
> > > + return 1;
> > > + }
> > > +
> > > + id = thread_id & RTE_QSBR_THRID_MASK;
> > > + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > > +
> > > + /* Make sure that the counter for registered threads does not
> > > + * go out of sync. Hence, additional checks are required.
> > > + */
> > > + /* Check if the thread is already registered */
> > > + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > + __ATOMIC_RELAXED);
> > > + if (old_bmap & 1UL << id)
> > > + return 0;
> > > +
> > > + do {
> > > + new_bmap = old_bmap | (1UL << id);
> > > + success = __atomic_compare_exchange(
> > > + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > + &old_bmap, &new_bmap, 0,
> > > + __ATOMIC_RELEASE,
> > __ATOMIC_RELAXED);
> > > +
> > > + if (success)
> > > + __atomic_fetch_add(&v->num_threads,
> > > + 1, __ATOMIC_RELAXED);
> > > + else if (old_bmap & (1UL << id))
> > > + /* Someone else registered this thread.
> > > + * Counter should not be incremented.
> > > + */
> > > + return 0;
> > > + } while (success == 0);
> >
> > This would be simpler if threads were required to register themselves.
> > Maybe you have use cases requiring registration of other threads, but this
> > capability is adding significant complexity, so it might be worth some
> > thought.
> >
> It was simpler earlier (__atomic_fetch_or). The complexity was added to keep 'num_threads' from going out of sync.
Hmmm...
So threads are allowed to register other threads? Or is there some other
reason that concurrent registration is required?
> > > + return 0;
> > > +}
> > > +
> > > +/* Remove a reader thread, from the list of threads reporting their
> > > + * quiescent state on a QS variable.
> > > + */
> > > +int __rte_experimental
> > > +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > > +{
> > > + unsigned int i, id, success;
> > > + uint64_t old_bmap, new_bmap;
> > > +
> > > + if (v == NULL || thread_id >= v->max_threads) {
> > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > + "%s(): Invalid input parameter\n", __func__);
> > > + rte_errno = EINVAL;
> > > +
> > > + return 1;
> > > + }
> > > +
> > > + id = thread_id & RTE_QSBR_THRID_MASK;
> > > + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > > +
> > > + /* Make sure that the counter for registered threads does not
> > > + * go out of sync. Hence, additional checks are required.
> > > + */
> > > + /* Check if the thread is already unregistered */
> > > + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > + __ATOMIC_RELAXED);
> > > + if (!(old_bmap & (1UL << id)))
> > > + return 0;
> > > +
> > > + do {
> > > + new_bmap = old_bmap & ~(1UL << id);
> > > + /* Make sure any loads of the shared data structure are
> > > + * completed before removal of the thread from the list of
> > > + * reporting threads.
> > > + */
> > > + success = __atomic_compare_exchange(
> > > + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > + &old_bmap, &new_bmap, 0,
> > > + __ATOMIC_RELEASE,
> > __ATOMIC_RELAXED);
> > > +
> > > + if (success)
> > > + __atomic_fetch_sub(&v->num_threads,
> > > + 1, __ATOMIC_RELAXED);
> > > + else if (!(old_bmap & (1UL << id)))
> > > + /* Someone else unregistered this thread.
> > > + * Counter should not be decremented.
> > > + */
> > > + return 0;
> > > + } while (success == 0);
> >
> > Ditto!
> >
> > > + return 0;
> > > +}
> > > +
> > > +/* Dump the details of a single quiescent state variable to a file. */
> > > +int __rte_experimental
> > > +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
> > > +{
> > > + uint64_t bmap;
> > > + uint32_t i, t;
> > > +
> > > + if (v == NULL || f == NULL) {
> > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > + "%s(): Invalid input parameter\n", __func__);
> > > + rte_errno = EINVAL;
> > > +
> > > + return 1;
> > > + }
> > > +
> > > + fprintf(f, "\nQuiescent State Variable @%p\n", v);
> > > +
> > > + fprintf(f, " QS variable memory size = %lu\n",
> > > + rte_rcu_qsbr_get_memsize(v-
> > >max_threads));
> > > + fprintf(f, " Given # max threads = %u\n", v->max_threads);
> > > + fprintf(f, " Current # threads = %u\n", v->num_threads);
> > > +
> > > + fprintf(f, " Registered thread ID mask = 0x");
> > > + for (i = 0; i < v->num_elems; i++)
> > > + fprintf(f, "%lx", __atomic_load_n(
> > > + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > + __ATOMIC_ACQUIRE));
> > > + fprintf(f, "\n");
> > > +
> > > + fprintf(f, " Token = %lu\n",
> > > + __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
> > > +
> > > + fprintf(f, "Quiescent State Counts for readers:\n");
> > > + for (i = 0; i < v->num_elems; i++) {
> > > + bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v,
> > i),
> > > + __ATOMIC_ACQUIRE);
> > > + while (bmap) {
> > > + t = __builtin_ctzl(bmap);
> > > + fprintf(f, "thread ID = %d, count = %lu\n", t,
> > > + __atomic_load_n(
> > > + &v->qsbr_cnt[i].cnt,
> > > + __ATOMIC_RELAXED));
> > > + bmap &= ~(1UL << t);
> > > + }
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +int rcu_log_type;
> > > +
> > > +RTE_INIT(rte_rcu_register)
> > > +{
> > > + rcu_log_type = rte_log_register("lib.rcu");
> > > + if (rcu_log_type >= 0)
> > > + rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
> > > +}
> > > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > > b/lib/librte_rcu/rte_rcu_qsbr.h new file mode 100644 index
> > > 000000000..ff696aeab
> > > --- /dev/null
> > > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > > @@ -0,0 +1,554 @@
> > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > + * Copyright (c) 2018 Arm Limited
> > > + */
> > > +
> > > +#ifndef _RTE_RCU_QSBR_H_
> > > +#define _RTE_RCU_QSBR_H_
> > > +
> > > +/**
> > > + * @file
> > > + * RTE Quiescent State Based Reclamation (QSBR)
> > > + *
> > > + * Quiescent State (QS) is any point in the thread execution
> > > + * where the thread does not hold a reference to a data structure
> > > + * in shared memory. While using lock-less data structures, the
> > > +writer
> > > + * can safely free memory once all the reader threads have entered
> > > + * quiescent state.
> > > + *
> > > + * This library provides the ability for the readers to report
> > > +quiescent
> > > + * state and for the writers to identify when all the readers have
> > > + * entered quiescent state.
> > > + */
> > > +
> > > +#ifdef __cplusplus
> > > +extern "C" {
> > > +#endif
> > > +
> > > +#include <stdio.h>
> > > +#include <stdint.h>
> > > +#include <errno.h>
> > > +#include <rte_common.h>
> > > +#include <rte_memory.h>
> > > +#include <rte_lcore.h>
> > > +#include <rte_debug.h>
> > > +#include <rte_atomic.h>
> > > +
> > > +extern int rcu_log_type;
> > > +
> > > +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
> > > +#define RCU_DP_LOG(level, fmt, args...) \
> > > +	rte_log(RTE_LOG_ ## level, rcu_log_type, \
> > > +		"%s(): " fmt "\n", __func__, ## args)
> > > +#else
> > > +#define RCU_DP_LOG(level, fmt, args...)
> > > +#endif
> > > +
> > > +/* Registered thread IDs are stored as a bitmap of 64b element array.
> > > + * Given thread id needs to be converted to index into the array and
> > > + * the id within the array element.
> > > + */
> > > +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> > > +#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
> > > +	RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
> > > +		RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
> > > +#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
> > > +	((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
> > > +#define RTE_QSBR_THRID_INDEX_SHIFT 6
> > > +#define RTE_QSBR_THRID_MASK 0x3f
> > > +#define RTE_QSBR_THRID_INVALID 0xffffffff
> > > +
> > > +/* Worker thread counter */
> > > +struct rte_rcu_qsbr_cnt {
> > > + uint64_t cnt;
> > > +	/**< Quiescent state counter. Value 0 indicates the thread is offline */
> > > +} __rte_cache_aligned;
> > > +
> > > +#define RTE_QSBR_CNT_THR_OFFLINE 0
> > > +#define RTE_QSBR_CNT_INIT 1
> > > +
> > > +/* RTE Quiescent State variable structure.
> > > + * This structure has two elements that vary in size based on the
> > > + * 'max_threads' parameter.
> > > + * 1) Quiescent state counter array
> > > + * 2) Register thread ID array
> > > + */
> > > +struct rte_rcu_qsbr {
> > > + uint64_t token __rte_cache_aligned;
> > > + /**< Counter to allow for multiple concurrent quiescent state
> > > +queries */
> > > +
> > > + uint32_t num_elems __rte_cache_aligned;
> > > + /**< Number of elements in the thread ID array */
> > > + uint32_t num_threads;
> > > + /**< Number of threads currently using this QS variable */
> > > + uint32_t max_threads;
> > > + /**< Maximum number of threads using this QS variable */
> > > +
> > > + struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
> > > + /**< Quiescent state counter array of 'max_threads' elements */
> > > +
> > > + /**< Registered thread IDs are stored in a bitmap array,
> > > + * after the quiescent state counter array.
> > > + */
> > > +} __rte_cache_aligned;
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Return the size of the memory occupied by a Quiescent State
> > variable.
> > > + *
> > > + * @param max_threads
> > > + * Maximum number of threads reporting quiescent state on this
> > variable.
> > > + * @return
> > > + * On success - size of memory in bytes required for this QS variable.
> > > + * On error - 1 with error code set in rte_errno.
> > > + * Possible rte_errno codes are:
> > > + * - EINVAL - max_threads is 0
> > > + */
> > > +size_t __rte_experimental
> > > +rte_rcu_qsbr_get_memsize(uint32_t max_threads);
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Initialize a Quiescent State (QS) variable.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @param max_threads
> > > + * Maximum number of threads reporting quiescent state on this
> > variable.
> > > + * This should be the same value as passed to
> > rte_rcu_qsbr_get_memsize.
> > > + * @return
> > > + * On success - 0
> > > + * On error - 1 with error code set in rte_errno.
> > > + * Possible rte_errno codes are:
> > > + * - EINVAL - max_threads is 0 or 'v' is NULL.
> > > + *
> > > + */
> > > +int __rte_experimental
> > > +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Register a reader thread to report its quiescent state
> > > + * on a QS variable.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread
> > > + * safe.
> > > + * Any reader thread that wants to report its quiescent state must
> > > + * call this API. This can be called during initialization or as part
> > > + * of the packet processing loop.
> > > + *
> > > + * Note that rte_rcu_qsbr_thread_online must be called before the
> > > + * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @param thread_id
> > > + * Reader thread with this thread ID will report its quiescent state on
> > > + * the QS variable. thread_id is a value between 0 and (max_threads -
> > 1).
> > > + * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
> > > + */
> > > +int __rte_experimental
> > > +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int
> > > +thread_id);
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Remove a reader thread, from the list of threads reporting their
> > > + * quiescent state on a QS variable.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread safe.
> > > + * This API can be called from the reader threads during shutdown.
> > > + * Ongoing quiescent state queries will stop waiting for the status
> > > +from this
> > > + * unregistered reader thread.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @param thread_id
> > > + * Reader thread with this thread ID will stop reporting its quiescent
> > > + * state on the QS variable.
> > > + */
> > > +int __rte_experimental
> > > +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int
> > > +thread_id);
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Add a registered reader thread, to the list of threads reporting
> > > +their
> > > + * quiescent state on a QS variable.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread
> > > + * safe.
> > > + *
> > > + * Any registered reader thread that wants to report its quiescent
> > > +state must
> > > + * call this API before calling rte_rcu_qsbr_quiescent. This can be
> > > +called
> > > + * during initialization or as part of the packet processing loop.
> > > + *
> > > + * The reader thread must call rte_rcu_thread_offline API, before
> > > + * calling any functions that block, to ensure that
> > > +rte_rcu_qsbr_check
> > > + * API does not wait indefinitely for the reader thread to update its QS.
> > > + *
> > > + * The reader thread must call rte_rcu_thread_online API, after the
> > > +blocking
> > > + * function call returns, to ensure that rte_rcu_qsbr_check API
> > > + * waits for the reader thread to update its quiescent state.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @param thread_id
> > > + * Reader thread with this thread ID will report its quiescent state on
> > > + * the QS variable.
> > > + */
> > > +static __rte_always_inline void __rte_experimental
> > > +rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
> >
> > I am not clear on why this function should be inline. Or do you have use
> > cases where threads go offline and come back online extremely frequently?
>
> Yes, there are use cases where the function call to receive the packets can block.
OK.
> > > +{
> > > + uint64_t t;
> > > +
> > > + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> > > +
> > > + /* Copy the current value of token.
> > > + * The fence at the end of the function will ensure that
> > > + * the following will not move down after the load of any shared
> > > + * data structure.
> > > + */
> > > + t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
> > > +
> > > + /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> > > + * 'cnt' (64b) is accessed atomically.
> > > + */
> > > + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> > > + t, __ATOMIC_RELAXED);
> > > +
> > > + /* The subsequent load of the data structure should not
> > > + * move above the store. Hence a store-load barrier
> > > + * is required.
> > > + * If the load of the data structure moves above the store,
> > > + * writer might not see that the reader is online, even though
> > > + * the reader is referencing the shared data structure.
> > > + */
> > > +#ifdef RTE_ARCH_X86_64
> > > + /* rte_smp_mb() for x86 is lighter */
> > > + rte_smp_mb();
> > > +#else
> > > + __atomic_thread_fence(__ATOMIC_SEQ_CST);
> > > +#endif
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Remove a registered reader thread from the list of threads
> > > +reporting their
> > > + * quiescent state on a QS variable.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread
> > > + * safe.
> > > + *
> > > + * This can be called during initialization or as part of the packet
> > > + * processing loop.
> > > + *
> > > + * The reader thread must call rte_rcu_thread_offline API, before
> > > + * calling any functions that block, to ensure that
> > > +rte_rcu_qsbr_check
> > > + * API does not wait indefinitely for the reader thread to update its QS.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @param thread_id
> > > + * rte_rcu_qsbr_check API will not wait for the reader thread with
> > > + * this thread ID to report its quiescent state on the QS variable.
> > > + */
> > > +static __rte_always_inline void __rte_experimental
> > > +rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
> >
> > Same here on inlining.
> >
> > > +{
> > > + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> > > +
> > > + /* The reader can go offline only after the load of the
> > > + * data structure is completed. i.e. any load of the
> > > + * data structure cannot move after this store.
> > > + */
> > > +
> > > + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> > > +		RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Ask the reader threads to report the quiescent state
> > > + * status.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread
> > > + * safe and can be called from worker threads.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @return
> > > + * - This is the token for this call of the API. This should be
> > > + * passed to rte_rcu_qsbr_check API.
> > > + */
> > > +static __rte_always_inline uint64_t __rte_experimental
> > > +rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
> > > +{
> > > + uint64_t t;
> > > +
> > > + RTE_ASSERT(v != NULL);
> > > +
> > > + /* Release the changes to the shared data structure.
> > > + * This store release will ensure that changes to any data
> > > + * structure are visible to the workers before the token
> > > + * update is visible.
> > > + */
> > > + t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
> > > +
> > > + return t;
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Update quiescent state for a reader thread.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread safe.
> > > + * All the reader threads registered to report their quiescent state
> > > + * on the QS variable must call this API.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @param thread_id
> > > + * Update the quiescent state for the reader with this thread ID.
> > > + */
> > > +static __rte_always_inline void __rte_experimental
> > > +rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > > +{
> > > + uint64_t t;
> > > +
> > > + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> > > +
> > > + /* Acquire the changes to the shared data structure released
> > > + * by rte_rcu_qsbr_start.
> > > + * Later loads of the shared data structure should not move
> > > + * above this load. Hence, use load-acquire.
> > > + */
> > > + t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
> > > +
> > > + /* Inform the writer that updates are visible to this reader.
> > > + * Prior loads of the shared data structure should not move
> > > + * beyond this store. Hence use store-release.
> > > + */
> > > + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> > > + t, __ATOMIC_RELEASE);
> > > +
> > > + RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
> > > + __func__, t, thread_id);
> > > +}
> > > +
> > > +/* Check the quiescent state counter for registered threads only,
> > > +assuming
> > > + * that not all threads have registered.
> > > + */
> > > +static __rte_always_inline int
> > > +__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> > > +{
> > > + uint32_t i, j, id;
> > > + uint64_t bmap;
> > > + uint64_t c;
> > > + uint64_t *reg_thread_id;
> > > +
> > > + for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
> > > + i < v->num_elems;
> > > + i++, reg_thread_id++) {
> > > + /* Load the current registered thread bit map before
> > > + * loading the reader thread quiescent state counters.
> > > + */
> > > + bmap = __atomic_load_n(reg_thread_id,
> > __ATOMIC_ACQUIRE);
> > > + id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> > > +
> > > + while (bmap) {
> > > + j = __builtin_ctzl(bmap);
> > > + RCU_DP_LOG(DEBUG,
> > > + "%s: check: token = %lu, wait = %d, Bit Map
> > = 0x%lx, Thread ID = %d",
> > > + __func__, t, wait, bmap, id + j);
> > > + c = __atomic_load_n(
> > > + &v->qsbr_cnt[id + j].cnt,
> > > + __ATOMIC_ACQUIRE);
> > > + RCU_DP_LOG(DEBUG,
> > > + "%s: status: token = %lu, wait = %d, Thread
> > QS cnt = %lu, Thread ID = %d",
> > > + __func__, t, wait, c, id+j);
> > > + /* Counter is not checked for wrap-around
> > condition
> > > + * as it is a 64b counter.
> > > + */
> > > + if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c
> > < t)) {
> >
> > This assumes that a 64-bit counter won't overflow, which is close enough
> > to true given current CPU clock frequencies. ;-)
> >
> > > + /* This thread is not in quiescent state */
> > > + if (!wait)
> > > + return 0;
> > > +
> > > + rte_pause();
> > > + /* This thread might have unregistered.
> > > + * Re-read the bitmap.
> > > + */
> > > + bmap = __atomic_load_n(reg_thread_id,
> > > + __ATOMIC_ACQUIRE);
> > > +
> > > + continue;
> > > + }
> > > +
> > > + bmap &= ~(1UL << j);
> > > + }
> > > + }
> > > +
> > > + return 1;
> > > +}
> > > +
> > > +/* Check the quiescent state counter for all threads, assuming that
> > > + * all the threads have registered.
> > > + */
> > > +static __rte_always_inline int
> > > +__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> >
> > Does checking the bitmap really take long enough to make this worthwhile
> > as a separate function? I would think that the bitmap-checking time
> > would be lost in the noise of cache misses from the ->cnt loads.
>
> It avoids accessing one cache line. I think this is where the savings are (maybe only in theory). This is the most probable use case.
> On the other hand, __rcu_qsbr_check_selective() will result in savings (depending on how many threads are currently registered) by avoiding accessing unwanted counters.
Do you really expect to be calling this function on any kind of fastpath?
> > Sure, if you invoke __rcu_qsbr_check_selective() in a tight loop in the
> > absence of readers, you might see __rcu_qsbr_check_all() being a bit
> > faster. But is that really what DPDK does?
> I see improvements in the synthetic test case (similar to the one you have described, around 27%). However, in the more practical test cases I do not see any difference.
If the performance improvement only occurs in a synthetic test case,
does it really make sense to optimize for it?
> > > +{
> > > + uint32_t i;
> > > + struct rte_rcu_qsbr_cnt *cnt;
> > > + uint64_t c;
> > > +
> > > + for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
> > > + RCU_DP_LOG(DEBUG,
> > > + "%s: check: token = %lu, wait = %d, Thread ID = %d",
> > > + __func__, t, wait, i);
> > > + while (1) {
> > > + c = __atomic_load_n(&cnt->cnt,
> > __ATOMIC_ACQUIRE);
> > > + RCU_DP_LOG(DEBUG,
> > > + "%s: status: token = %lu, wait = %d, Thread
> > QS cnt = %lu, Thread ID = %d",
> > > + __func__, t, wait, c, i);
> > > + /* Counter is not checked for wrap-around
> > condition
> > > + * as it is a 64b counter.
> > > + */
> > > + if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >=
> > t))
> > > + break;
> > > +
> > > + /* This thread is not in quiescent state */
> > > + if (!wait)
> > > + return 0;
> > > +
> > > + rte_pause();
> > > + }
> > > + }
> > > +
> > > + return 1;
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Checks if all the reader threads have entered the quiescent state
> > > + * referenced by token.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread
> > > + * safe and can be called from the worker threads as well.
> > > + *
> > > + * If this API is called with 'wait' set to true, the following
> > > + * factors must be considered:
> > > + *
> > > + * 1) If the calling thread is also reporting the status on the
> > > + * same QS variable, it must update the quiescent state status,
> > > +before
> > > + * calling this API.
> > > + *
> > > + * 2) In addition, while calling from multiple threads, only
> > > + * one of those threads can be reporting the quiescent state status
> > > + * on a given QS variable.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @param t
> > > + * Token returned by rte_rcu_qsbr_start API
> > > + * @param wait
> > > + * If true, block till all the reader threads have completed entering
> > > + * the quiescent state referenced by token 't'.
> > > + * @return
> > > + * - 0 if all reader threads have NOT passed through specified number
> > > + * of quiescent states.
> > > + * - 1 if all reader threads have passed through specified number
> > > + * of quiescent states.
> > > + */
> > > +static __rte_always_inline int __rte_experimental
> > > +rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait) {
> > > + RTE_ASSERT(v != NULL);
> > > +
> > > + if (likely(v->num_threads == v->max_threads))
> > > + return __rcu_qsbr_check_all(v, t, wait);
> > > + else
> > > + return __rcu_qsbr_check_selective(v, t, wait); }
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Wait till the reader threads have entered quiescent state.
> > > + *
> > > + * This is implemented as a lock-free function. It is multi-thread safe.
> > > + * This API can be thought of as a wrapper around rte_rcu_qsbr_start
> > > +and
> > > + * rte_rcu_qsbr_check APIs.
> > > + *
> > > + * If this API is called from multiple threads, only one of
> > > + * those threads can be reporting the quiescent state status on a
> > > + * given QS variable.
> > > + *
> > > + * @param v
> > > + * QS variable
> > > + * @param thread_id
> > > + * Thread ID of the caller if it is registered to report quiescent state
> > > + * on this QS variable (i.e. the calling thread is also part of the
> > > + * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
> > > + */
> > > +static __rte_always_inline void __rte_experimental
> > > +rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int
> > > +thread_id) {
> > > + uint64_t t;
> > > +
> > > + RTE_ASSERT(v != NULL);
> > > +
> > > + t = rte_rcu_qsbr_start(v);
> > > +
> > > + /* If the current thread has readside critical section,
> > > + * update its quiescent state status.
> > > + */
> > > + if (thread_id != RTE_QSBR_THRID_INVALID)
> > > + rte_rcu_qsbr_quiescent(v, thread_id);
> > > +
> > > + /* Wait for other readers to enter quiescent state */
> > > + rte_rcu_qsbr_check(v, t, true);
> >
> > And you are presumably relying on 64-bit counters to avoid the need to
> > execute the above code twice in succession. Which again works given
> > current CPU clock rates combined with system and human lifespans.
> > Otherwise, there are interesting race conditions that can happen, so don't
> > try this with a 32-bit counter!!!
>
> Yes. I am relying on 64-bit counters to avoid having to spend cycles (and time).
>
> > (But think of the great^N grandchildren!!!)
>
> (It is an interesting thought. I wonder what would happen to all the code we are writing today 😊)
I suspect that most systems will be rebooted more than once per decade,
so unless CPU core clock rates manage to go up another order of magnitude,
we should be just fine.
Famous last words! ;-)
> > More seriously, a comment warning people not to make the counter be 32
> > bits is in order.
> Agree, I will add it in the structure definition.
Sounds good!
Thanx, Paul
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > + *
> > > + * Dump the details of a single QS variable to a file.
> > > + *
> > > + * It is NOT multi-thread safe.
> > > + *
> > > + * @param f
> > > + * A pointer to a file for output
> > > + * @param v
> > > + * QS variable
> > > + * @return
> > > + * On success - 0
> > > + * On error - 1 with error code set in rte_errno.
> > > + * Possible rte_errno codes are:
> > > + * - EINVAL - NULL parameters are passed
> > > + */
> > > +int __rte_experimental
> > > +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
> > > +
> > > +#ifdef __cplusplus
> > > +}
> > > +#endif
> > > +
> > > +#endif /* _RTE_RCU_QSBR_H_ */
> > > diff --git a/lib/librte_rcu/rte_rcu_version.map
> > > b/lib/librte_rcu/rte_rcu_version.map
> > > new file mode 100644
> > > index 000000000..ad8cb517c
> > > --- /dev/null
> > > +++ b/lib/librte_rcu/rte_rcu_version.map
> > > @@ -0,0 +1,11 @@
> > > +EXPERIMENTAL {
> > > + global:
> > > +
> > > + rte_rcu_qsbr_get_memsize;
> > > + rte_rcu_qsbr_init;
> > > + rte_rcu_qsbr_thread_register;
> > > + rte_rcu_qsbr_thread_unregister;
> > > + rte_rcu_qsbr_dump;
> > > +
> > > + local: *;
> > > +};
> > > diff --git a/lib/meson.build b/lib/meson.build index
> > > 595314d7d..67be10659 100644
> > > --- a/lib/meson.build
> > > +++ b/lib/meson.build
> > > @@ -22,7 +22,7 @@ libraries = [
> > > 'gro', 'gso', 'ip_frag', 'jobstats',
> > > 'kni', 'latencystats', 'lpm', 'member',
> > > 'power', 'pdump', 'rawdev',
> > > - 'reorder', 'sched', 'security', 'stack', 'vhost',
> > > + 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
> > > #ipsec lib depends on crypto and security
> > > 'ipsec',
> > > # add pkt framework libs which use other libs from above diff --git
> > > a/mk/rte.app.mk b/mk/rte.app.mk index 7d994bece..e93cc366d 100644
> > > --- a/mk/rte.app.mk
> > > +++ b/mk/rte.app.mk
> > > @@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -
> > lrte_eal
> > > _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
> > > _LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
> > > _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
> > > +_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
> > >
> > > ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> > > _LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
> > > --
> > > 2.17.1
> > >
>
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism
2018-11-22 3:30 [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library Honnappa Nagarahalli
` (9 preceding siblings ...)
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
@ 2019-04-12 20:20 ` Honnappa Nagarahalli
2019-04-12 20:20 ` Honnappa Nagarahalli
` (4 more replies)
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 " Honnappa Nagarahalli
` (3 subsequent siblings)
14 siblings, 5 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-12 20:20 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Lock-less data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
(for ex: real-time applications).
In the following paras, the term 'memory' refers to memory allocated
by typical APIs like malloc or anything that is representative of
memory, for ex: an index of a free element array.
Since these data structures are lock-less, the writers and readers
access the data structures concurrently. Hence, while removing
an element from a data structure, the writers cannot return the memory
to the allocator, without knowing that the readers are not
referencing that element/memory anymore. Hence, it is required to
separate the operation of removing an element into 2 steps:
Delete: in this step, the writer removes the reference to the element from
the data structure but does not return the associated memory to the
allocator. This will ensure that new readers will not get a reference to
the removed element. Removing the reference is an atomic operation.
Free(Reclaim): in this step, the writer returns the memory to the
memory allocator, only after knowing that all the readers have stopped
referencing the deleted element.
This library helps the writer determine when it is safe to free the
memory.
This library makes use of thread Quiescent State (QS). QS can be
defined as 'any point in the thread execution where the thread does
not hold a reference to shared memory'. It is up to the application to
determine its quiescent state. Let us consider the following diagram:
Time -------------------------------------------------->
| |
RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
| |
RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
| |
RT3 $++++****D1****+++***|D2***|++++++**D2*****++++$
| |
|<--->|
Del | Free
|
Cannot free memory
during this period
(Grace Period)
RTx - Reader thread
< and > - Start and end of while(1) loop
***Dx*** - Reader thread is accessing the shared data structure Dx.
i.e. critical section.
+++ - Reader thread is not accessing any shared data structure.
i.e. non critical section or quiescent state.
Del - Point in time when the reference to the entry is removed using
atomic operation.
Free - Point in time when the writer can free the entry.
Grace Period - Time duration between Del and Free, during which memory cannot
be freed.
As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
accessing D2, if the writer has to remove an element from D2, the
writer cannot free the memory associated with that element immediately.
The writer can return the memory to the allocator only after the reader
stops referencing D2. In other words, reader thread RT1 has to enter
a quiescent state.
Similarly, since thread RT3 is also accessing D2, the writer has to wait
till RT3 enters a quiescent state as well.
However, the writer does not need to wait for RT2 to enter quiescent state.
Thread RT2 was not accessing D2 when the delete operation happened.
So, RT2 will not get a reference to the deleted entry.
It can be noted that, the critical sections for D2 and D3 are quiescent states
for D1. i.e. for a given data structure Dx, any point in the thread execution
that does not reference Dx is a quiescent state.
Since memory is not freed immediately, there might be a need for
provisioning of additional memory, depending on the application requirements.
It is important to make sure that this library keeps the overhead of
identifying the end of grace period and subsequent freeing of memory,
to a minimum. The following paras explain how grace period and critical
section affect this overhead.
The writer has to poll the readers to identify the end of grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during the grace period. Longer grace periods
exacerbate these conditions.
The length of the critical section and the number of reader threads
are proportional to the duration of the grace period. Keeping the critical
sections smaller will keep the grace period smaller. However, keeping the
critical sections smaller requires additional CPU cycles (due to additional
reporting) in the readers.
Hence, we need the characteristics of small grace period and large critical
section. This library addresses this by allowing the writer to do
other work without having to block till the readers report their quiescent
state.
For DPDK applications, the start and end of the while(1) loop (where no
references to shared data structures are kept) act as perfect quiescent
states. This will combine all the shared data structure accesses into a
single, large critical section which helps keep the overhead on the
reader side to a minimum.
DPDK supports the pipeline model of packet processing and service cores.
In these use cases, a given data structure may not be used by all the
workers in the application. The writer does not have to wait for all
the workers to report their quiescent state. To provide the required
flexibility, this library has a concept of QS variable. The application
can create one QS variable per data structure to help it track the
end of grace period for each data structure. This helps keep the grace
period to a minimum.
The application has to allocate memory and initialize a QS variable.
The application can call rte_rcu_qsbr_get_memsize to calculate the size
of memory to allocate. This API takes the maximum number of reader threads
that will use this variable as a parameter. Currently, a maximum of 1024
threads is supported.
Further, the application can initialize a QS variable using the API
rte_rcu_qsbr_init.
Each reader thread is assumed to have a unique thread ID. Currently, the
management of the thread ID (for ex: allocation/free) is left to the
application. The thread ID should be in the range of 0 to
the maximum number of threads provided while creating the QS variable.
The application could also use lcore_id as the thread ID where applicable.
rte_rcu_qsbr_thread_register API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
The reader thread must call rte_rcu_qsbr_thread_online API to start reporting
its quiescent state.
Some of the use cases might require the reader threads to make
blocking API calls (for ex: while using eventdev APIs). The writer thread
should not wait for such reader threads to enter quiescent state.
The reader thread must call rte_rcu_qsbr_thread_offline API, before calling
blocking APIs. It can call rte_rcu_qsbr_thread_online API once the blocking
API call returns.
The writer thread can trigger the reader threads to report their quiescent
state by calling the API rte_rcu_qsbr_start. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
rte_rcu_qsbr_start returns a token to each caller.
The writer thread has to call rte_rcu_qsbr_check API with the token to get the
current quiescent state status. Option to block till all the reader threads
enter the quiescent state is provided. If this API indicates that all the
reader threads have entered the quiescent state, the application can free the
deleted entry.
The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock free. Hence, they
can be called concurrently from multiple writers even while running
as worker threads.
The separation of triggering the reporting from querying the status provides
the writer threads flexibility to do useful work instead of blocking for the
reader threads to enter the quiescent state or go offline. This reduces the
memory accesses due to continuous polling for the status.
rte_rcu_qsbr_synchronize API combines the functionality of rte_rcu_qsbr_start
and blocking rte_rcu_qsbr_check into a single API. This API triggers the reader
threads to report their quiescent state and polls till all the readers enter
the quiescent state or go offline. This API does not allow the writer to
do useful work while waiting and also introduces additional memory accesses
due to continuous polling.
The reader thread must call rte_rcu_qsbr_thread_offline and
rte_rcu_qsbr_thread_unregister APIs to remove itself from reporting its
quiescent state. The rte_rcu_qsbr_check API will not wait for this reader
thread to report the quiescent state status anymore.
The reader threads should call rte_rcu_qsbr_quiescent API to indicate that
they entered a quiescent state. This API checks if a writer has triggered a
quiescent state query and updates the state accordingly.
Patch v5:
1) Library changes
a) Removed extra alignment in rte_rcu_qsbr_get_memsize API (Paul)
b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock APIs (Paul)
c) Clarified the need for 64b counters (Paul)
2) Test cases
a) Added additional performance test cases to benchmark
__rcu_qsbr_check_all
b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock calls in various test cases
3) Documentation
a) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock usage description
Patch v4:
1) Library changes
a) Fixed the compilation issue on x86 (Konstantin)
b) Rebased with latest master
Patch v3:
1) Library changes
a) Moved the registered thread ID array to the end of the
structure (Konstantin)
b) Removed the compile time constant RTE_RCU_MAX_THREADS
c) Added code to keep track of registered number of threads
Patch v2:
1) Library changes
a) Corrected the RTE_ASSERT checks (Konstantin)
b) Replaced RTE_ASSERT with 'if' checks for non-datapath APIs (Konstantin)
c) Made rte_rcu_qsbr_thread_register/unregister non-datapath critical APIs
d) Renamed rte_rcu_qsbr_update to rte_rcu_qsbr_quiescent (Ola)
e) Used rte_smp_mb() in rte_rcu_qsbr_thread_online API for x86 (Konstantin)
f) Removed the macro to access the thread QS counters (Konstantin)
2) Test cases
a) Added additional test cases for removing RTE_ASSERT
3) Documentation
a) Changed the figure to make it bigger (Marko)
b) Spelling and format corrections (Marko)
Patch v1:
1) Library changes
a) Changed the maximum number of reader threads to 1024
b) Renamed rte_rcu_qsbr_register/unregister_thread to
rte_rcu_qsbr_thread_register/unregister
c) Added rte_rcu_qsbr_thread_online/offline API. These are optimized
version of rte_rcu_qsbr_thread_register/unregister API. These
also provide the flexibility for performance when the requested
maximum number of threads is higher than the current number of
threads.
d) Corrected memory orderings in rte_rcu_qsbr_update
e) Changed the signature of rte_rcu_qsbr_start API to return the token
f) Changed the signature of rte_rcu_qsbr_start API to not take the
expected number of QS states to wait.
g) Added debug logs
h) Added API and programmer guide documentation.
RFC v3:
1) Library changes
a) Rebased to latest master
b) Added new API rte_rcu_qsbr_get_memsize
c) Add support for memory allocation for QSBR variable (Konstantin)
d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
2) Testcase changes
a) Separated stress tests into a performance test case file
b) Added performance statistics
RFC v2:
1) Cover letter changes
a) Explain the parameters that affect the overhead of using RCU
and their effect
b) Explain how this library addresses these effects to keep
the overhead to a minimum
2) Library changes
a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
b) Simplify the code/remove APIs to keep this library inline with
other synchronisation mechanisms like locks (Konstantin)
c) Change the design to support more than 64 threads (Konstantin)
d) Fixed version map to remove static inline functions
3) Testcase changes
a) Add boundary and additional functional test cases
b) Add stress test cases (Paul E. McKenney)
Dharmik Thakkar (1):
test/rcu_qsbr: add API and functional tests
Honnappa Nagarahalli (2):
rcu: add RCU library supporting QSBR mechanism
doc/rcu: add lib_rcu documentation
MAINTAINERS | 5 +
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 5 +
app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 703 ++++++++++++
config/common_base | 6 +
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 +++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 237 ++++
lib/librte_rcu/rte_rcu_qsbr.h | 645 +++++++++++
lib/librte_rcu/rte_rcu_version.map | 11 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
20 files changed, 3370 insertions(+), 2 deletions(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
@ 2019-04-12 20:20 ` Honnappa Nagarahalli
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 1/3] rcu: " Honnappa Nagarahalli
` (3 subsequent siblings)
4 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-12 20:20 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Lock-less data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
(for ex: real-time applications).
In the following paras, the term 'memory' refers to memory allocated
by typical APIs like malloc or anything that is representative of
memory, for ex: an index of a free element array.
Since these data structures are lock-less, the writers and readers
access the data structures concurrently. Hence, while removing
an element from a data structure, the writers cannot return the memory
to the allocator, without knowing that the readers are not
referencing that element/memory anymore. Hence, it is required to
separate the operation of removing an element into 2 steps:
Delete: in this step, the writer removes the reference to the element from
the data structure but does not return the associated memory to the
allocator. This will ensure that new readers will not get a reference to
the removed element. Removing the reference is an atomic operation.
Free(Reclaim): in this step, the writer returns the memory to the
memory allocator, only after knowing that all the readers have stopped
referencing the deleted element.
This library helps the writer determine when it is safe to free the
memory.
This library makes use of thread Quiescent State (QS). QS can be
defined as 'any point in the thread execution where the thread does
not hold a reference to shared memory'. It is up to the application to
determine its quiescent state. Let us consider the following diagram:
Time -------------------------------------------------->
| |
RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
| |
RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
| |
RT3 $++++****D1****+++***|D2***|++++++**D2*****++++$
| |
|<--->|
Del | Free
|
Cannot free memory
during this period
(Grace Period)
RTx - Reader thread
< and > - Start and end of while(1) loop
***Dx*** - Reader thread is accessing the shared data structure Dx.
i.e. critical section.
+++ - Reader thread is not accessing any shared data structure.
i.e. non critical section or quiescent state.
Del - Point in time when the reference to the entry is removed using
atomic operation.
Free - Point in time when the writer can free the entry.
Grace Period - Time duration between Del and Free, during which memory cannot
be freed.
As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
accessing D2, if the writer has to remove an element from D2, the
writer cannot free the memory associated with that element immediately.
The writer can return the memory to the allocator only after the reader
stops referencing D2. In other words, reader thread RT1 has to enter
a quiescent state.
Similarly, since thread RT3 is also accessing D2, the writer has to wait
till RT3 enters a quiescent state as well.
However, the writer does not need to wait for RT2 to enter quiescent state.
Thread RT2 was not accessing D2 when the delete operation happened.
So, RT2 will not get a reference to the deleted entry.
It can be noted that, the critical sections for D2 and D3 are quiescent states
for D1. i.e. for a given data structure Dx, any point in the thread execution
that does not reference Dx is a quiescent state.
Since memory is not freed immediately, there might be a need for
provisioning of additional memory, depending on the application requirements.
It is important to make sure that this library keeps the overhead of
identifying the end of grace period and subsequent freeing of memory,
to a minimum. The following paras explain how grace period and critical
section affect this overhead.
The writer has to poll the readers to identify the end of grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during the grace period. Longer grace periods
exacerbate these conditions.
The length of the critical section and the number of reader threads
are proportional to the duration of the grace period. Keeping the critical
sections smaller will keep the grace period smaller. However, keeping the
critical sections smaller requires additional CPU cycles (due to additional
reporting) in the readers.
Hence, we need the characteristics of small grace period and large critical
section. This library addresses this by allowing the writer to do
other work without having to block till the readers report their quiescent
state.
For DPDK applications, the start and end of the while(1) loop (where no
references to shared data structures are kept) act as perfect quiescent
states. This will combine all the shared data structure accesses into a
single, large critical section which helps keep the overhead on the
reader side to a minimum.
DPDK supports the pipeline model of packet processing and service cores.
In these use cases, a given data structure may not be used by all the
workers in the application. The writer does not have to wait for all
the workers to report their quiescent state. To provide the required
flexibility, this library has a concept of QS variable. The application
can create one QS variable per data structure to help it track the
end of grace period for each data structure. This helps keep the grace
period to a minimum.
The application has to allocate memory and initialize a QS variable.
The application can call rte_rcu_qsbr_get_memsize to calculate the size
of memory to allocate. This API takes the maximum number of reader threads
that will use this variable as a parameter. Currently, a maximum of 1024
threads is supported.
Further, the application can initialize a QS variable using the API
rte_rcu_qsbr_init.
Each reader thread is assumed to have a unique thread ID. Currently, the
management of the thread ID (for ex: allocation/free) is left to the
application. The thread ID should be in the range of 0 to
the maximum number of threads provided while creating the QS variable.
The application could also use lcore_id as the thread ID where applicable.
rte_rcu_qsbr_thread_register API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
The reader thread must call rte_rcu_qsbr_thread_online API to start reporting
its quiescent state.
Some of the use cases might require the reader threads to make
blocking API calls (for ex: while using eventdev APIs). The writer thread
should not wait for such reader threads to enter quiescent state.
The reader thread must call rte_rcu_qsbr_thread_offline API, before calling
blocking APIs. It can call rte_rcu_qsbr_thread_online API once the blocking
API call returns.
The writer thread can trigger the reader threads to report their quiescent
state by calling the API rte_rcu_qsbr_start. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
rte_rcu_qsbr_start returns a token to each caller.
The writer thread has to call rte_rcu_qsbr_check API with the token to get the
current quiescent state status. Option to block till all the reader threads
enter the quiescent state is provided. If this API indicates that all the
reader threads have entered the quiescent state, the application can free the
deleted entry.
The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock free. Hence, they
can be called concurrently from multiple writers even while running
as worker threads.
The separation of triggering the reporting from querying the status provides
the writer threads flexibility to do useful work instead of blocking for the
reader threads to enter the quiescent state or go offline. This reduces the
memory accesses due to continuous polling for the status.
rte_rcu_qsbr_synchronize API combines the functionality of rte_rcu_qsbr_start
and blocking rte_rcu_qsbr_check into a single API. This API triggers the reader
threads to report their quiescent state and polls till all the readers enter
the quiescent state or go offline. This API does not allow the writer to
do useful work while waiting and also introduces additional memory accesses
due to continuous polling.
The reader thread must call rte_rcu_qsbr_thread_offline and
rte_rcu_qsbr_thread_unregister APIs to remove itself from reporting its
quiescent state. The rte_rcu_qsbr_check API will not wait for this reader
thread to report the quiescent state status anymore.
The reader threads should call rte_rcu_qsbr_quiescent API to indicate that
they entered a quiescent state. This API checks if a writer has triggered a
quiescent state query and updates the state accordingly.
Patch v5:
1) Library changes
a) Removed extra alignment in rte_rcu_qsbr_get_memsize API (Paul)
b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock APIs (Paul)
c) Clarified the need for 64b counters (Paul)
2) Test cases
a) Added additional performance test cases to benchmark
__rcu_qsbr_check_all
b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock calls in various test cases
3) Documentation
a) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock usage description
Patch v4:
1) Library changes
a) Fixed the compilation issue on x86 (Konstantin)
b) Rebased with latest master
Patch v3:
1) Library changes
a) Moved the registered thread ID array to the end of the
structure (Konstantin)
b) Removed the compile time constant RTE_RCU_MAX_THREADS
c) Added code to keep track of registered number of threads
Patch v2:
1) Library changes
a) Corrected the RTE_ASSERT checks (Konstantin)
b) Replaced RTE_ASSERT with 'if' checks for non-datapath APIs (Konstantin)
c) Made rte_rcu_qsbr_thread_register/unregister non-datapath critical APIs
d) Renamed rte_rcu_qsbr_update to rte_rcu_qsbr_quiescent (Ola)
e) Used rte_smp_mb() in rte_rcu_qsbr_thread_online API for x86 (Konstantin)
f) Removed the macro to access the thread QS counters (Konstantin)
2) Test cases
a) Added additional test cases for removing RTE_ASSERT
3) Documentation
a) Changed the figure to make it bigger (Marko)
b) Spelling and format corrections (Marko)
Patch v1:
1) Library changes
a) Changed the maximum number of reader threads to 1024
b) Renamed rte_rcu_qsbr_register/unregister_thread to
rte_rcu_qsbr_thread_register/unregister
c) Added rte_rcu_qsbr_thread_online/offline APIs. These are optimized
versions of the rte_rcu_qsbr_thread_register/unregister APIs. They
also provide better performance when the requested maximum number
of threads is higher than the current number of threads.
d) Corrected memory orderings in rte_rcu_qsbr_update
e) Changed the signature of rte_rcu_qsbr_start API to return the token
f) Changed the signature of rte_rcu_qsbr_start API to not take the
expected number of QS states to wait.
g) Added debug logs
h) Added API and programmer guide documentation.
RFC v3:
1) Library changes
a) Rebased to latest master
b) Added new API rte_rcu_qsbr_get_memsize
c) Add support for memory allocation for QSBR variable (Konstantin)
d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
2) Testcase changes
a) Separated stress tests into a performance test case file
b) Added performance statistics
RFC v2:
1) Cover letter changes
a) Explain the parameters that affect the overhead of using RCU
and their effect
b) Explain how this library addresses these effects to keep
the overhead to a minimum
2) Library changes
a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
b) Simplify the code/remove APIs to keep this library inline with
other synchronisation mechanisms like locks (Konstantin)
c) Change the design to support more than 64 threads (Konstantin)
d) Fixed version map to remove static inline functions
3) Testcase changes
a) Add boundary and additional functional test cases
b) Add stress test cases (Paul E. McKenney)
Dharmik Thakkar (1):
test/rcu_qsbr: add API and functional tests
Honnappa Nagarahalli (2):
rcu: add RCU library supporting QSBR mechanism
doc/rcu: add lib_rcu documentation
MAINTAINERS | 5 +
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 5 +
app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 703 ++++++++++++
config/common_base | 6 +
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 +++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 237 ++++
lib/librte_rcu/rte_rcu_qsbr.h | 645 +++++++++++
lib/librte_rcu/rte_rcu_version.map | 11 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
20 files changed, 3370 insertions(+), 2 deletions(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
--
2.17.1
* [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-04-12 20:20 ` Honnappa Nagarahalli
@ 2019-04-12 20:20 ` Honnappa Nagarahalli
2019-04-12 20:20 ` Honnappa Nagarahalli
2019-04-12 22:06 ` Stephen Hemminger
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
` (2 subsequent siblings)
4 siblings, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-12 20:20 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add RCU library supporting the quiescent state based memory reclamation
method. This library helps identify the quiescent state of the reader
threads so that the writers can free the memory associated with the
lock-less data structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
MAINTAINERS | 5 +
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 237 +++++++++++
lib/librte_rcu/rte_rcu_qsbr.h | 645 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 11 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
10 files changed, 936 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 9774344dd..6e9766eed 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1267,6 +1267,11 @@ F: examples/bpf/
F: app/test/test_bpf.c
F: doc/guides/prog_guide/bpf_lib.rst
+RCU - EXPERIMENTAL
+M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+F: lib/librte_rcu/
+F: doc/guides/prog_guide/rcu_lib.rst
+
Test Applications
-----------------
diff --git a/config/common_base b/config/common_base
index 8da08105b..ad70c79e1 100644
--- a/config/common_base
+++ b/config/common_base
@@ -829,6 +829,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_rcu
+#
+CONFIG_RTE_LIBRTE_RCU=y
+CONFIG_RTE_LIBRTE_RCU_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index 26021d0c0..791e0d991 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
+DEPDIRS-librte_rcu := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
new file mode 100644
index 000000000..6aa677bd1
--- /dev/null
+++ b/lib/librte_rcu/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_rcu.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_rcu_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
new file mode 100644
index 000000000..c009ae4b7
--- /dev/null
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..4aeb5f37f
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,237 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Get the memory size of QSBR variable */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ size_t sz;
+
+ if (max_threads == 0) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid max_threads %u\n",
+ __func__, max_threads);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ /* Add the size of the registered thread ID bitmap array */
+ sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
+
+ return sz;
+}
+
+/* Initialize a quiescent state variable */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
+{
+ size_t sz;
+
+ if (v == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = rte_rcu_qsbr_get_memsize(max_threads);
+ if (sz == 1)
+ return 1;
+
+ /* Set all the threads to offline */
+ memset(v, 0, sz);
+ v->max_threads = max_threads;
+ v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->token = RTE_QSBR_CNT_INIT;
+
+ return 0;
+}
+
+/* Register a reader thread to report its quiescent state
+ * on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already registered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (old_bmap & 1UL << id)
+ return 0;
+
+ do {
+ new_bmap = old_bmap | (1UL << id);
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_add(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (old_bmap & (1UL << id))
+ /* Someone else registered this thread.
+ * Counter should not be incremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Remove a reader thread, from the list of threads reporting their
+ * quiescent state on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already unregistered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (!(old_bmap & (1UL << id)))
+ return 0;
+
+ do {
+ new_bmap = old_bmap & ~(1UL << id);
+ /* Make sure any loads of the shared data structure are
+ * completed before removal of the thread from the list of
+ * reporting threads.
+ */
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_sub(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (!(old_bmap & (1UL << id)))
+ /* Someone else unregistered this thread.
+ * Counter should not be decremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t;
+
+ if (v == NULL || f == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %lu\n",
+ rte_rcu_qsbr_get_memsize(v->max_threads));
+ fprintf(f, " Given # max threads = %u\n", v->max_threads);
+ fprintf(f, " Current # threads = %u\n", v->num_threads);
+
+ fprintf(f, " Registered thread ID mask = 0x");
+ for (i = 0; i < v->num_elems; i++)
+ fprintf(f, "%lx", __atomic_load_n(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE));
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %lu\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %lu\n", t,
+ __atomic_load_n(
+ &v->qsbr_cnt[i].cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ return 0;
+}
+
+int rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rcu_log_type = rte_log_register("lib.rcu");
+ if (rcu_log_type >= 0)
+ rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..304534a2d
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,645 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+#include <rte_atomic.h>
+extern int rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define RCU_DP_LOG(level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored as a bitmap of 64b element array.
+ * Given thread id needs to be converted to index into the array and
+ * the id within the array element.
+ */
+#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
+ RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
+#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
+ ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
+#define RTE_QSBR_THRID_INDEX_SHIFT 6
+#define RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline
+ * 64b counter is used to avoid adding more code to address
+ * counter overflow. Changing this to 32b would require additional
+ * changes to various APIs.
+ */
+ uint32_t lock_cnt;
+ /**< Lock counter. Used when CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled */
+} __rte_cache_aligned;
+
+#define RTE_QSBR_CNT_THR_OFFLINE 0
+#define RTE_QSBR_CNT_INIT 1
+
+/* RTE Quiescent State variable structure.
+ * This structure has two elements that vary in size based on the
+ * 'max_threads' parameter.
+ * 1) Quiescent state counter array
+ * 2) Register thread ID array
+ */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple concurrent quiescent state queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t num_threads;
+ /**< Number of threads currently using this QS variable */
+ uint32_t max_threads;
+ /**< Maximum number of threads using this QS variable */
+
+ struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
+ /**< Quiescent state counter array of 'max_threads' elements */
+
+ /**< Registered thread IDs are stored in a bitmap array,
+ * after the quiescent state counter array.
+ */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * On success - size of memory in bytes required for this QS variable.
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0 or 'v' is NULL.
+ *
+ */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable. thread_id is a value between 0 and (max_threads - 1).
+ * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread, from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing quiescent state queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread, to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_quiescent. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_online API, after the
+ * function call returns, to ensure that rte_rcu_qsbr_check API
+ * waits for the reader thread to update its quiescent state.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that
+ * the following will not move down after the load of any shared
+ * data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+#ifdef RTE_ARCH_X86_64
+ /* rte_smp_mb() for x86 is lighter */
+ rte_smp_mb();
+#else
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed. i.e. any load of the
+ * data structure cannot move after this store.
+ */
+
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Acquire a lock for accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called before
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, a lock counter is incremented.
+ * Similarly, rte_rcu_qsbr_unlock will decrement the counter. The
+ * rte_rcu_qsbr_check API will verify that this counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_lock(struct rte_rcu_qsbr *v __rte_unused,
+ unsigned int thread_id __rte_unused)
+{
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Increment the lock counter */
+ __atomic_fetch_add(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_ACQUIRE);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Release a lock after accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called after
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, rte_rcu_qsbr_unlock will
+ * decrement a lock counter. rte_rcu_qsbr_check API will verify that this
+ * counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_unlock(struct rte_rcu_qsbr *v __rte_unused,
+ unsigned int thread_id __rte_unused)
+{
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Decrement the lock counter */
+ __atomic_fetch_sub(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_RELEASE);
+
+ if (v->qsbr_cnt[thread_id].lock_cnt)
+ rte_log(RTE_LOG_WARNING, rcu_log_type,
+ "%s(): Lock counter %u. Nested locks?\n",
+ __func__, v->qsbr_cnt[thread_id].lock_cnt);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ /* Validate that the lock counter is 0 */
+ if (v->qsbr_cnt[thread_id].lock_cnt)
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Lock counter %u, should be 0\n",
+ __func__, v->qsbr_cnt[thread_id].lock_cnt);
+#endif
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELEASE);
+
+ RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
+ __func__, t, thread_id);
+}
+
+/* Check the quiescent state counter for registered threads only, assuming
+ * that not all threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t c;
+ uint64_t *reg_thread_id;
+
+ for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
+ i < v->num_elems;
+ i++, reg_thread_id++) {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
+ __func__, t, wait, bmap, id + j);
+ c = __atomic_load_n(
+ &v->qsbr_cnt[id + j].cnt,
+ __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, id+j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(reg_thread_id,
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+ }
+
+ return 1;
+}
+
+/* Check the quiescent state counter for all threads, assuming that
+ * all the threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i;
+ struct rte_rcu_qsbr_cnt *cnt;
+ uint64_t c;
+
+ for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Thread ID = %d",
+ __func__, t, wait, i);
+ while (1) {
+ c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, i);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
+ break;
+
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ }
+ }
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same QS variable, it must update the quiescent state status, before
+ * calling this API.
+ *
+ * 2) In addition, while calling from multiple threads, only
+ * one of those threads can be reporting the quiescent state status
+ * on a given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state referenced by token 't'.
+ * @return
+ * - 0 if all reader threads have NOT passed through specified number
+ * of quiescent states.
+ * - 1 if all reader threads have passed through specified number
+ * of quiescent states.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ RTE_ASSERT(v != NULL);
+
+ if (likely(v->num_threads == v->max_threads))
+ return __rcu_qsbr_check_all(v, t, wait);
+ else
+ return __rcu_qsbr_check_selective(v, t, wait);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, only one of
+ * those threads can be reporting the quiescent state status on a
+ * given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report quiescent state
+ * on this QS variable (i.e. the calling thread is also part of the
+ * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread is in a read-side critical section,
+ * update its quiescent state status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_quiescent(v, thread_id);
+
+ /* Wait for other readers to enter quiescent state */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - NULL parameters are passed
+ */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..ad8cb517c
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,11 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_thread_register;
+ rte_rcu_qsbr_thread_unregister;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 595314d7d..67be10659 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'stack', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 7d994bece..e93cc366d 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 1/3] rcu: " Honnappa Nagarahalli
@ 2019-04-12 20:20 ` Honnappa Nagarahalli
2019-04-12 22:06 ` Stephen Hemminger
1 sibling, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-12 20:20 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add RCU library supporting quiescent state based memory reclamation method.
This library helps identify the quiescent state of the reader threads so
that the writers can free the memory associated with the lock less data
structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
MAINTAINERS | 5 +
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 5 +
lib/librte_rcu/rte_rcu_qsbr.c | 237 +++++++++++
lib/librte_rcu/rte_rcu_qsbr.h | 645 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 11 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
10 files changed, 936 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 9774344dd..6e9766eed 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1267,6 +1267,11 @@ F: examples/bpf/
F: app/test/test_bpf.c
F: doc/guides/prog_guide/bpf_lib.rst
+RCU - EXPERIMENTAL
+M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+F: lib/librte_rcu/
+F: doc/guides/prog_guide/rcu_lib.rst
+
Test Applications
-----------------
diff --git a/config/common_base b/config/common_base
index 8da08105b..ad70c79e1 100644
--- a/config/common_base
+++ b/config/common_base
@@ -829,6 +829,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_rcu
+#
+CONFIG_RTE_LIBRTE_RCU=y
+CONFIG_RTE_LIBRTE_RCU_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index 26021d0c0..791e0d991 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
+DEPDIRS-librte_rcu := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
new file mode 100644
index 000000000..6aa677bd1
--- /dev/null
+++ b/lib/librte_rcu/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_rcu.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_rcu_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
new file mode 100644
index 000000000..c009ae4b7
--- /dev/null
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..4aeb5f37f
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,237 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Get the memory size of QSBR variable */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ size_t sz;
+
+ if (max_threads == 0) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid max_threads %u\n",
+ __func__, max_threads);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ /* Add the size of the registered thread ID bitmap array */
+ sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
+
+ return sz;
+}
+
+/* Initialize a quiescent state variable */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
+{
+ size_t sz;
+
+ if (v == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = rte_rcu_qsbr_get_memsize(max_threads);
+ if (sz == 1)
+ return 1;
+
+ /* Set all the threads to offline */
+ memset(v, 0, sz);
+ v->max_threads = max_threads;
+ v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->token = RTE_QSBR_CNT_INIT;
+
+ return 0;
+}
+
+/* Register a reader thread to report its quiescent state
+ * on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already registered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (old_bmap & (1UL << id))
+ return 0;
+
+ do {
+ new_bmap = old_bmap | (1UL << id);
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_add(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (old_bmap & (1UL << id))
+ /* Someone else registered this thread.
+ * Counter should not be incremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already unregistered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (!(old_bmap & (1UL << id)))
+ return 0;
+
+ do {
+ new_bmap = old_bmap & ~(1UL << id);
+ /* Make sure any loads of the shared data structure are
+ * completed before removal of the thread from the list of
+ * reporting threads.
+ */
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_sub(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (!(old_bmap & (1UL << id)))
+ /* Someone else unregistered this thread.
+ * Counter should not be decremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t, id;
+
+ if (v == NULL || f == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %lu\n",
+ rte_rcu_qsbr_get_memsize(v->max_threads));
+ fprintf(f, " Given # max threads = %u\n", v->max_threads);
+ fprintf(f, " Current # threads = %u\n", v->num_threads);
+
+ fprintf(f, " Registered thread ID mask = 0x");
+ for (i = 0; i < v->num_elems; i++)
+ fprintf(f, "%lx", __atomic_load_n(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE));
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %lu\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %lu\n", id + t,
+ __atomic_load_n(
+ &v->qsbr_cnt[id + t].cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ return 0;
+}
+
+int rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rcu_log_type = rte_log_register("lib.rcu");
+ if (rcu_log_type >= 0)
+ rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..304534a2d
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,645 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+#include <rte_atomic.h>
+extern int rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define RCU_DP_LOG(level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored in an array of 64b bitmap elements.
+ * A given thread ID needs to be converted to an index into the array and
+ * a bit position within that array element.
+ */
+#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
+ RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
+#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
+ ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
+#define RTE_QSBR_THRID_INDEX_SHIFT 6
+#define RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline
+ * 64b counter is used to avoid adding more code to address
+ * counter overflow. Changing this to 32b would require additional
+ * changes to various APIs.
+ */
+ uint32_t lock_cnt;
+ /**< Lock counter. Used when CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled */
+} __rte_cache_aligned;
+
+#define RTE_QSBR_CNT_THR_OFFLINE 0
+#define RTE_QSBR_CNT_INIT 1
+
+/* RTE Quiescent State variable structure.
+ * This structure has two elements that vary in size based on the
+ * 'max_threads' parameter.
+ * 1) Quiescent state counter array
+ * 2) Register thread ID array
+ */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple concurrent quiescent state queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t num_threads;
+ /**< Number of threads currently using this QS variable */
+ uint32_t max_threads;
+ /**< Maximum number of threads using this QS variable */
+
+ struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
+ /**< Quiescent state counter array of 'max_threads' elements */
+
+ /**< Registered thread IDs are stored in a bitmap array,
+ * after the quiescent state counter array.
+ */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * On success - size of memory in bytes required for this QS variable.
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0 or 'v' is NULL.
+ *
+ */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable. thread_id is a value between 0 and (max_threads - 1).
+ * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing quiescent state queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_quiescent. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_online API after the
+ * blocking function call returns, to ensure that rte_rcu_qsbr_check API
+ * waits for the reader thread to update its quiescent state.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that
+ * the following will not move down after the load of any shared
+ * data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+#ifdef RTE_ARCH_X86_64
+ /* rte_smp_mb() for x86 is lighter */
+ rte_smp_mb();
+#else
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed, i.e. no load of the
+ * data structure can move after this store.
+ */
+
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Acquire a lock for accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called before
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, a lock counter is incremented.
+ * Similarly, rte_rcu_qsbr_unlock will decrement the counter. The
+ * rte_rcu_qsbr_quiescent API will verify that this counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_lock(struct rte_rcu_qsbr *v __rte_unused,
+ unsigned int thread_id __rte_unused)
+{
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Increment the lock counter */
+ __atomic_fetch_add(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_ACQUIRE);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Release a lock after accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called after
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, rte_rcu_qsbr_unlock will
+ * decrement a lock counter. rte_rcu_qsbr_check API will verify that this
+ * counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_unlock(struct rte_rcu_qsbr *v __rte_unused,
+ unsigned int thread_id __rte_unused)
+{
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Decrement the lock counter */
+ __atomic_fetch_sub(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_RELEASE);
+
+ if (v->qsbr_cnt[thread_id].lock_cnt)
+ rte_log(RTE_LOG_WARNING, rcu_log_type,
+ "%s(): Lock counter %u. Nested locks?\n",
+ __func__, v->qsbr_cnt[thread_id].lock_cnt);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ /* Validate that the lock counter is 0 */
+ if (v->qsbr_cnt[thread_id].lock_cnt)
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Lock counter %u, should be 0\n",
+ __func__, v->qsbr_cnt[thread_id].lock_cnt);
+#endif
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELEASE);
+
+ RCU_DP_LOG(DEBUG, "update: token = %lu, Thread ID = %d",
+ t, thread_id);
+}
+
+/* Check the quiescent state counter for registered threads only, assuming
+ * that not all threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t c;
+ uint64_t *reg_thread_id;
+
+ for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
+ i < v->num_elems;
+ i++, reg_thread_id++) {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ RCU_DP_LOG(DEBUG,
+ "check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
+ t, wait, bmap, id + j);
+ c = __atomic_load_n(
+ &v->qsbr_cnt[id + j].cnt,
+ __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ t, wait, c, id + j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(reg_thread_id,
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+ }
+
+ return 1;
+}
+
+/* Check the quiescent state counter for all threads, assuming that
+ * all the threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i;
+ struct rte_rcu_qsbr_cnt *cnt;
+ uint64_t c;
+
+ for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
+ RCU_DP_LOG(DEBUG,
+ "check: token = %lu, wait = %d, Thread ID = %d",
+ t, wait, i);
+ while (1) {
+ c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ t, wait, c, i);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
+ break;
+
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ }
+ }
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same QS variable, it must update the quiescent state status, before
+ * calling this API.
+ *
+ * 2) In addition, while calling from multiple threads, only
+ * one of those threads can be reporting the quiescent state status
+ * on a given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state referenced by token 't'.
+ * @return
+ * - 0 if all reader threads have NOT passed through specified number
+ * of quiescent states.
+ * - 1 if all reader threads have passed through specified number
+ * of quiescent states.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ RTE_ASSERT(v != NULL);
+
+ if (likely(v->num_threads == v->max_threads))
+ return __rcu_qsbr_check_all(v, t, wait);
+ else
+ return __rcu_qsbr_check_selective(v, t, wait);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, only one of
+ * those threads can be reporting the quiescent state status on a
+ * given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report quiescent state
+ * on this QS variable (i.e. the calling thread is also part of the
+ * read-side critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread is in a read-side critical section,
+ * update its quiescent state status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_quiescent(v, thread_id);
+
+ /* Wait for other readers to enter quiescent state */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - NULL parameters are passed
+ */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..ad8cb517c
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,11 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_thread_register;
+ rte_rcu_qsbr_thread_unregister;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 595314d7d..67be10659 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'stack', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 7d994bece..e93cc366d 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
* [dpdk-dev] [PATCH v5 2/3] test/rcu_qsbr: add API and functional tests
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-04-12 20:20 ` Honnappa Nagarahalli
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 1/3] rcu: " Honnappa Nagarahalli
@ 2019-04-12 20:20 ` Honnappa Nagarahalli
2019-04-12 20:20 ` Honnappa Nagarahalli
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
2019-04-15 17:29 ` [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism Ananyev, Konstantin
4 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-12 20:20 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 5 +
app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 703 +++++++++++++++++++++++
5 files changed, 1736 insertions(+)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index b28bed2d4..10f551ecb 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -217,6 +217,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index db2527489..5f259e838 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -700,6 +700,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 867cc5863..1a2ee18a5 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -110,6 +110,8 @@ test_sources = files('commands.c',
'test_timer_perf.c',
'test_timer_racecond.c',
'test_ticketlock.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -137,6 +139,7 @@ test_deps = ['acl',
'ring',
'stack',
- 'timer'
+ 'timer',
+ 'rcu'
]
# All test cases in fast_parallel_test_names list are parallel
@@ -175,6 +178,7 @@ fast_parallel_test_names = [
'ring_autotest',
'ring_pmd_autotest',
'rwlock_autotest',
+ 'rcu_qsbr_autotest',
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
@@ -242,6 +246,7 @@ perf_test_names = [
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
'stack_nb_perf_autotest',
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..b16872de5
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,1014 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_get_memsize: Get the memory size required for the given
+ * maximum number of reader threads.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
+ printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(0);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1), "Get Memsize for 0 threads");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with cache line size of 64B - 8384
+ * for machines with cache line size of 128 - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
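The two expected sizes in the test above are consistent with a layout of three cache lines of control data plus one cache line of counter storage per thread (3*64 + 128*64 = 8384; 3*128 + 128*128 = 16768). A small sanity check of that arithmetic; note this is a reconstruction inferred from the test's expected values, not the library's actual formula:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical reconstruction of the size implied by the test:
 * three cache lines of control data, then one cache line per thread
 * so each reader's counter lives on its own cache line. */
static size_t
model_qsbr_memsize(size_t cache_line, size_t max_threads)
{
	return 3 * cache_line + max_threads * cache_line;
}
```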
+
+/*
+ * rte_rcu_qsbr_init: Initialize a QSBR variable.
+ */
+static int
+test_rcu_qsbr_init(void)
+{
+ int r;
+
+ printf("\nTest rte_rcu_qsbr_init()\n");
+
+ r = rte_rcu_qsbr_init(NULL, TEST_RCU_MAX_LCORE);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "NULL variable");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ int ret;
+
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register valid thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Valid thread id");
+
+ /* Re-registering should not return error */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already registered thread id");
+
+ /* Register valid thread id - max allowed thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], TEST_RCU_MAX_LCORE - 1);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Max thread id");
+
+ ret = rte_rcu_qsbr_thread_register(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Valid variable, invalid thread id");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ ret = rte_rcu_qsbr_thread_unregister(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Valid variable, invalid thread id");
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "disabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /* Test with enabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "enabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_quiescent(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_quiescent(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the worker threads have entered the
+ * quiescent state 'n' times. 'n' is provided by the rte_rcu_qsbr_start API.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait until all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+ /* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread to
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread from
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ /* Negative tests */
+ rte_rcu_qsbr_dump(NULL, t[0]);
+ rte_rcu_qsbr_dump(stdout, NULL);
+ rte_rcu_qsbr_dump(NULL, NULL);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, lcore_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, lcore_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
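The writer above follows the library's two-step removal: delete (unlink) the element first, then free it only once the grace period identified by the token has elapsed. A condensed, DPDK-free model of that ordering; all names are hypothetical and the single reader counter stands in for the full per-thread bookkeeping:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* One element of a shared structure in this toy model. */
struct model_slot {
	uint32_t *data;    /* memory that readers may still reference */
	bool deleted;      /* step 1 done: unlinked, not yet freed */
};

static uint64_t model_gp;          /* grace-period counter ("token") */
static uint64_t model_reader_seen; /* last token the reader reported */

/* Step 1: unlink so new readers cannot find the element, and start a
 * grace period (the role rte_hash_del_key + rte_rcu_qsbr_start play). */
static uint64_t
model_delete(struct model_slot *s)
{
	s->deleted = true;
	return ++model_gp;
}

/* Step 2: free only if every reader has passed the token; otherwise a
 * reader may still hold a reference to the memory. */
static bool
model_try_free(struct model_slot *s, uint64_t token)
{
	if (model_reader_seen < token)
		return false;
	free(s->data);
	s->data = NULL;
	return true;
}
```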
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Multi writer, Multiple QS variable, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ uint8_t test_cores;
+
+ writer_done = 0;
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+ printf("Test: %d writers, %d QSBR variables, simultaneous QSBR queries\n"
+ , test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_init() < 0)
+ goto test_fail;
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..bb3b8e9b6
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,703 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static volatile uint8_t writer_done;
+static volatile uint8_t all_registered;
+static volatile uint32_t thr_id;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+/* Simple way to allocate thread ids in 0 to TEST_RCU_MAX_LCORE space */
+static inline uint32_t
+alloc_thread_id(void)
+{
+ uint32_t tmp_thr_id;
+
+ tmp_thr_id = __atomic_fetch_add(&thr_id, 1, __ATOMIC_RELAXED);
+ if (tmp_thr_id >= TEST_RCU_MAX_LCORE)
+ printf("Invalid thread id %u\n", tmp_thr_id);
+
+ return tmp_thr_id;
+}
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t thread_id = alloc_thread_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ /* Register for report QS */
+ rte_rcu_qsbr_thread_register(t[0], thread_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], thread_id);
+ /* Unregister before exiting so the writer does not wait forever */
+ rte_rcu_qsbr_thread_unregister(t[0], thread_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple readers, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: %d Readers/1 Writer ('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores - 1;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Single writer, Multiple readers, Single QS variable
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writer, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i, sz;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* The number of readers does not matter for the QS variable in this
+ * test case, as no reader will be registered.
+ */
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+ /* Wait until all writers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t thread_id = alloc_thread_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, thread_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, thread_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, thread_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, thread_id);
+ rte_rcu_qsbr_thread_offline(temp, thread_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, thread_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Blocking QSBR Check\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("The following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(online/update/offline): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Non-Blocking QSBR check\n", num_cores);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("The following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(online/update/offline): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ printf("Number of cores provided = %d\n", num_cores);
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+ if (num_cores > TEST_RCU_MAX_LCORE) {
+ printf("Test failed! At most %d cores are supported\n", TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with all reader threads registered\n");
+ printf("--------------------------------------------\n");
+ all_registered = 1;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ /* Make sure the actual number of cores provided is less than
+ * TEST_RCU_MAX_LCORE. This will allow for some threads not
+ * to be registered on the QS variable.
+ */
+ if (num_cores >= TEST_RCU_MAX_LCORE) {
+ printf("Test failed! number of cores provided should be less than %d\n",
+ TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with some of reader threads registered\n");
+ printf("------------------------------------------------\n");
+ all_registered = 0;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ return 0;
+
+test_fail:
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v5 2/3] test/rcu_qsbr: add API and functional tests
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-04-12 20:20 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-12 20:20 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 5 +
app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 703 +++++++++++++++++++++++
5 files changed, 1736 insertions(+)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index b28bed2d4..10f551ecb 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -217,6 +217,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index db2527489..5f259e838 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -700,6 +700,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 867cc5863..1a2ee18a5 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -110,6 +110,8 @@ test_sources = files('commands.c',
'test_timer_perf.c',
'test_timer_racecond.c',
'test_ticketlock.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -137,6 +139,7 @@ test_deps = ['acl',
'ring',
'stack',
'timer'
+ 'rcu'
]
# All test cases in fast_parallel_test_names list are parallel
@@ -175,6 +178,7 @@ fast_parallel_test_names = [
'ring_autotest',
'ring_pmd_autotest',
'rwlock_autotest',
+ 'rcu_qsbr_autotest',
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
@@ -242,6 +246,7 @@ perf_test_names = [
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
'stack_nb_perf_autotest',
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..b16872de5
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,1014 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_get_memsize: Returns the size of memory, in bytes, required
+ * for a QS variable that supports the given maximum number of threads.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
+ printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(0);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1), "Get Memsize for 0 threads");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with cache line size of 64B - 8384
+ * for machines with cache line size of 128 - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_init: Initialize a QSBR variable.
+ */
+static int
+test_rcu_qsbr_init(void)
+{
+ int r;
+
+ printf("\nTest rte_rcu_qsbr_init()\n");
+
+ r = rte_rcu_qsbr_init(NULL, TEST_RCU_MAX_LCORE);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "NULL variable");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread, to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ int ret;
+
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register valid thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Valid thread id");
+
+ /* Re-registering should not return error */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already registered thread id");
+
+ /* Register valid thread id - max allowed thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], TEST_RCU_MAX_LCORE - 1);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Max thread id");
+
+ ret = rte_rcu_qsbr_thread_register(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Invalid thread id");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread, from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ ret = rte_rcu_qsbr_thread_unregister(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Invalid thread id");
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "disabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /* Test with enabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "enabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_quiescent(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_quiescent(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the worker threads have entered the
+ * quiescent state 'n' number of times. 'n' is provided by the
+ * rte_rcu_qsbr_start API.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait till all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+ /* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread, to
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread, from
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ /* Negative tests */
+ rte_rcu_qsbr_dump(NULL, t[0]);
+ rte_rcu_qsbr_dump(stdout, NULL);
+ rte_rcu_qsbr_dump(NULL, NULL);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, lcore_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, lcore_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Multiple writers, multiple QS variables, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ uint8_t test_cores;
+
+ writer_done = 0;
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+ printf("Test: %d writers, %d QSBR variables, simultaneous QSBR queries\n"
+ , test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_init() < 0)
+ goto test_fail;
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..bb3b8e9b6
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,703 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Maximum number of lcores supported by these tests */
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static volatile uint8_t writer_done;
+static volatile uint8_t all_registered;
+static volatile uint32_t thr_id;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+/* Simple way to allocate thread ids in the [0, TEST_RCU_MAX_LCORE) range */
+static inline uint32_t
+alloc_thread_id(void)
+{
+ uint32_t tmp_thr_id;
+
+ tmp_thr_id = __atomic_fetch_add(&thr_id, 1, __ATOMIC_RELAXED);
+ if (tmp_thr_id >= TEST_RCU_MAX_LCORE)
+ printf("Invalid thread id %u\n", tmp_thr_id);
+
+ return tmp_thr_id;
+}
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t thread_id = alloc_thread_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ /* Register for report QS */
+ rte_rcu_qsbr_thread_register(t[0], thread_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], thread_id);
+ /* Unregister before exiting to keep the writer from waiting */
+ rte_rcu_qsbr_thread_unregister(t[0], thread_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple Readers, Single QS var, Non-Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: %d Readers/1 Writer('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores - 1;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Single writer, Multiple readers, Single QS variable
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writer, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i, sz;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* Number of readers does not matter for QS variable in this test
+ * case as no reader will be registered.
+ */
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+ /* Wait until all writers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t thread_id = alloc_thread_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, thread_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, thread_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, thread_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, thread_id);
+ rte_rcu_qsbr_thread_offline(temp, thread_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, thread_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Performance test:
+ * Single writer, Single QS variable, Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Blocking QSBR Check\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(online/update/offline): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+/*
+ * Performance test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Non-Blocking QSBR check\n", num_cores);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(online/update/offline): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ printf("Number of cores provided = %d\n", num_cores);
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+ if (num_cores > TEST_RCU_MAX_LCORE) {
+ printf("Test failed! %d cores supported\n", TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with all reader threads registered\n");
+ printf("--------------------------------------------\n");
+ all_registered = 1;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ /* Make sure the actual number of cores provided is less than
+ * TEST_RCU_MAX_LCORE. This allows some threads to remain
+ * unregistered on the QS variable.
+ */
+ if (num_cores >= TEST_RCU_MAX_LCORE) {
+ printf("Test failed! Number of cores provided should be less than %d\n",
+ TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with some of reader threads registered\n");
+ printf("------------------------------------------------\n");
+ all_registered = 0;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ return 0;
+
+test_fail:
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
* [dpdk-dev] [PATCH v5 3/3] doc/rcu: add lib_rcu documentation
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
` (2 preceding siblings ...)
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-04-12 20:20 ` Honnappa Nagarahalli
2019-04-12 20:20 ` Honnappa Nagarahalli
2019-04-15 17:29 ` [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism Ananyev, Konstantin
4 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-12 20:20 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++++++
5 files changed, 698 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index de1e215dd..8f0e84de6 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 7722fc3e9..b9896cb63 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..e7ca1dacb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,509 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em;font-weight:bold}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.75001em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st13 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st14 {fill:#000000;font-family:Century Gothic;font-size:1.99999em}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.99999em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-239);marker-start:url(#mrkr5-237);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-248);marker-start:url(#mrkr5-246);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-239);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-237" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-239" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-246" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-248" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(327.227,-946.908)">
+ <title>Sheet.3</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(460.665,-944.869)">
+ <title>Sheet.4</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(519.302,-950.79)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(612.438,-944.869)">
+ <title>Sheet.6</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(664.388,-945.889)">
+ <title>Sheet.7</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(723.025,-951.494)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(814.123,-945.889)">
+ <title>Sheet.9</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-952.759)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="146.259" cy="1169.64" width="292.52" height="36.7136"/>
+ <path d="M292.52 1151.29 L0 1151.29 L0 1188 L292.52 1188 L292.52 1151.29" class="st3"/>
+ <text x="58.76" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(379.176,-863.295)">
+ <title>Sheet.11</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(512.614,-861.255)">
+ <title>Sheet.12</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(561.284,-867.106)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(664.388,-861.255)">
+ <title>Sheet.14</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(716.337,-862.275)">
+ <title>Sheet.15</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(775.009,-867.81)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(866.073,-862.275)">
+ <title>Sheet.17</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(143.348,-873.294)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(474.188,-777.642)">
+ <title>Sheet.19</title>
+ <path d="M0 1143.01 C0 1138.04 4.07 1133.96 9.04 1133.96 L124.46 1133.96 C129.43 1133.96 133.44 1138.04 133.44 1143.01
+ L133.44 1179.01 C133.44 1183.99 129.43 1188 124.46 1188 L9.04 1188 C4.07 1188 0 1183.99 0 1179.01 L0 1143.01
+ Z" class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(608.645,-775.602)">
+ <title>Sheet.20</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(666.862,-781.311)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(760.418,-775.602)">
+ <title>Sheet.22</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(812.367,-776.622)">
+ <title>Sheet.23</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(870.584,-782.015)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(962.103,-776.622)">
+ <title>Sheet.25</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(142.645,-787.5)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(882.826,-574.263)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="21.32" y="1173.77" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(419.545,-660.119)">
+ <title>Sheet.29</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(419.545,-684.783)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L82.7 1187.36 L151.2 1172.07" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(214.454,-663.095)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(571.287,-681.326)">
+ <title>Sheet.33</title>
+ <path d="M0 738.67 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(515.013,-1130.65)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="60.7243" cy="1166.58" width="121.45" height="42.8314"/>
+ <path d="M121.45 1145.17 L0 1145.17 L0 1188 L121.45 1188 L121.45 1145.17" class="st3"/>
+ <text x="26.02" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(434.372,-1096.8)">
+ <title>Sheet.35</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(434.372,-1100.37)">
+ <title>Sheet.36</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(193.5,-1103.76)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(714.3,-675.425)">
+ <title>Sheet.38</title>
+ <path d="M0 732.77 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(795.979,-637.904)">
+ <title>Sheet.39</title>
+ <path d="M0 1112.54 L0 1188 L0 1112.54" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(716.782,-675.425)">
+ <title>Sheet.40</title>
+ <path d="M79.2 1188 L52.71 1187.94 L0 1147.21" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(803.572,-639.285)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="172.421" cy="1152.51" width="344.85" height="70.9752"/>
+ <path d="M344.84 1117.02 L0 1117.02 L0 1188 L344.84 1188 L344.84 1117.02" class="st3"/>
+ <text x="0" y="1133.61" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(680.801,-1130.65)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="42.0169" cy="1166.58" width="84.04" height="42.8314"/>
+ <path d="M84.03 1145.17 L0 1145.17 L0 1188 L84.03 1188 L84.03 1145.17" class="st3"/>
+ <text x="18.89" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(811.005,-1110.05)">
+ <title>Sheet.48</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(658.61,-1083.99)">
+ <title>Sheet.49</title>
+ <path d="M153.05 1149.63 L113.7 1149.57 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(798.359,-1110.46)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="107.799" cy="1167.81" width="215.6" height="40.3845"/>
+ <path d="M215.6 1147.62 L0 1147.62 L0 1188 L215.6 1188 L215.6 1147.62" class="st3"/>
+ <text x="43.79" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(599.196,-662.779)">
+ <title>Sheet.51</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(464.931,-1052.95)">
+ <title>Sheet.52</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(464.931,-1056.52)">
+ <title>Sheet.53</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(225,-1058.76)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(711.244,-662.779)">
+ <title>Sheet.56</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(664.897,-1045.31)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L146.76 1112.94" class="st13"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(619.059,-848.701)">
+ <title>Sheet.58</title>
+ <path d="M432.2 1184.24 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(1038.62,-837.364)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="130" cy="1167.81" width="260.01" height="40.3845"/>
+ <path d="M260 1147.62 L0 1147.62 L0 1188 L260 1188 L260 1147.62" class="st3"/>
+ <text x="52.25" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(621.606,-848.828)">
+ <title>Sheet.60</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(824.31,-849.848)">
+ <title>Sheet.61</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(345.944,-933.143)">
+ <title>Sheet.62</title>
+ <path d="M705.32 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(1038.62,-915.684)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="137.691" cy="1167.81" width="275.39" height="40.3845"/>
+ <path d="M275.38 1147.62 L0 1147.62 L0 1188 L275.38 1188 L275.38 1147.62" class="st3"/>
+ <text x="55.18" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(346.581,-932.442)">
+ <title>Sheet.64</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(621.606,-933.461)">
+ <title>Sheet.65</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(856.905,-934.481)">
+ <title>Sheet.66</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(472.82,-756.389)">
+ <title>Sheet.67</title>
+ <path d="M578.44 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(473.456,-755.688)">
+ <title>Sheet.68</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(1016.87,-757.728)">
+ <title>Sheet.69</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(1060.04,-738.651)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1167.81" width="193.55" height="40.3845"/>
+ <path d="M193.55 1147.62 L0 1147.62 L0 1188 L193.55 1188 L193.55 1147.62" class="st3"/>
+ <text x="31.03" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(190.02,-464.886)">
+ <title>Sheet.71</title>
+ <path d="M0 1151.91 C0 1148.19 3.88 1145.17 8.66 1145.17 L43.29 1145.17 C48.13 1145.17 51.95 1148.19 51.95 1151.91 L51.95
+ 1181.26 C51.95 1185.03 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1185.03 0 1181.26 L0 1151.91 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(259.003,-466.895)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. non critical section or quiescent state.</tspan></text> </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(190.02,-389.169)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="25.9746" cy="1166.58" width="51.95" height="42.8314"/>
+ <path d="M0 1152.31 C0 1148.39 1.43 1145.17 3.16 1145.17 L48.79 1145.17 C50.55 1145.17 51.95 1148.39 51.95 1152.31 L51.95
+ 1180.86 C51.95 1184.83 50.55 1188 48.79 1188 L3.16 1188 C1.43 1188 0 1184.83 0 1180.86 L0 1152.31 Z"
+ class="st2"/>
+ <text x="12.9" y="1173.78" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(259.003,-388.777)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(289.017,-301.151)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="332.491" cy="1160.47" width="664.99" height="55.0626"/>
+ <path d="M664.98 1132.94 L0 1132.94 L0 1188 L664.98 1188 L664.98 1132.94" class="st3"/>
+ <text x="0" y="1153.27" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is removed <tspan
+ x="0" dy="1.2em" class="st9">using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(177.543,-315.596)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(288,-239.327)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1175.01" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted entry.</text> </g>
+ <g id="shape78-217" v:mID="78" v:groupContext="shape" transform="translate(177.543,-240.744)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="34.3786" cy="1166.58" width="68.76" height="42.8314"/>
+ <path d="M68.76 1145.17 L0 1145.17 L0 1188 L68.76 1188 L68.76 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-221" v:mID="79" v:groupContext="shape" transform="translate(289.228,-163.612)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1160.61" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during which <tspan
+ x="0" dy="1.2em" class="st9">memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-226" v:mID="80" v:groupContext="shape" transform="translate(187.999,-162)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="39.5985" cy="1166.58" width="79.2" height="42.8314"/>
+ <path d="M79.2 1145.17 L0 1145.17 L0 1188 L79.2 1188 L79.2 1145.17" class="st3"/>
+ <text x="0" y="1158.96" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-231" v:mID="83" v:groupContext="shape" transform="translate(572.146,-1080.07)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L132.49 1188" class="st16"/>
+ </g>
+ <g id="shape84-240" v:mID="84" v:groupContext="shape" transform="translate(599.196,-1042.14)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L104.28 1188" class="st18"/>
+ </g>
+ <g id="shape85-249" v:mID="85" v:groupContext="shape" transform="translate(980.637,-595.338)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L92.16 1188" class="st20"/>
+ </g>
+ <g id="shape86-254" v:mID="86" v:groupContext="shape" transform="translate(444.835,-603.428)">
+ <title>Sheet.86</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape87-257" v:mID="87" v:groupContext="shape" transform="translate(444.835,-637.489)">
+ <title>Sheet.87</title>
+ <path d="M0 1188 L84.43 1186.61 L154.36 1153.31" class="st7"/>
+ </g>
+ <g id="shape88-260" v:mID="88" v:groupContext="shape" transform="translate(241.369,-607.028)">
+ <title>Sheet.88</title>
+ <desc>Remove reference to entry2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry2</tspan></text> </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 95f5e7964..17df2c563 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -56,6 +56,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..55d44e15d
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,185 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-less data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for ex: real-time applications).
+
+In the following sections, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or anything that is representative of
+memory, for ex: an index into a free element array.
+
+Since these data structures are lock-less, writers and readers access
+them concurrently. Hence, while removing an element from a data
+structure, a writer cannot return the associated memory to the
+allocator without knowing that no reader is still referencing that
+element/memory. Hence, the operation of removing an element needs to be
+separated into 2 steps:
+
+Delete: in this step, the writer removes the reference to the element from
+the data structure but does not return the associated memory to the
+allocator. This will ensure that new readers will not get a reference to
+the removed element. Removing the reference is an atomic operation.
+
+Free(Reclaim): in this step, the writer returns the memory to the
+memory allocator, only after knowing that all the readers have stopped
+referencing the deleted element.
+
+This library helps the writer determine when it is safe to free the
+memory.
+
+This library makes use of thread Quiescent State (QS).
+
+What is Quiescent State
+-----------------------
+Quiescent State can be defined as 'any point in the thread execution where the
+thread does not hold a reference to shared memory'. It is up to the application
+to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. When it is
+accessing D1, if the writer has to remove an element from D1, the
+writer cannot free the memory associated with that element immediately.
+The writer can return the memory to the allocator only after the reader
+stops referencing D1. In other words, reader thread RT1 has to enter
+a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, the writer has to
+wait till thread 2 enters a quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+a quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened. So, reader thread 3 will not have a reference to the
+deleted entry.
+
+It can be noted that the critical section for D2 is a quiescent state
+for D1, i.e. for a given data structure Dx, any point in the thread's
+execution that does not reference Dx is a quiescent state for Dx.
+
+Since memory is not freed immediately, there might be a need for
+provisioning of additional memory, depending on the application requirements.
+
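The bookkeeping that detects the end of a grace period can be sketched with a toy model: the writer publishes a token when it deletes an element, and each reader copies the current token whenever it passes through a quiescent state. Once every reader's counter has caught up with the token, all readers have passed at least one quiescent state and the memory can be freed. The sketch below is a simplified, sequential illustration of this idea only, not the DPDK API; all ``toy_*`` names are hypothetical.

```c
/* Toy model of quiescent-state tracking (NOT the DPDK API).
 * The writer bumps a global token when it deletes an element;
 * each reader copies the token at its next quiescent state.
 * Once every reader's counter has caught up with the token,
 * the grace period has ended. */
#include <stdbool.h>
#include <stdint.h>

#define TOY_NUM_READERS 2

static uint64_t toy_token = 1;            /* writer's token     */
static uint64_t toy_cnt[TOY_NUM_READERS]; /* per-reader counter */

/* Writer: start tracking a new grace period, return its token. */
static uint64_t toy_start(void) { return ++toy_token; }

/* Reader 'r': report a quiescent state. */
static void toy_quiescent(int r) { toy_cnt[r] = toy_token; }

/* Writer: has every reader passed a quiescent state since 'token'? */
static bool toy_check(uint64_t token)
{
	int r;

	for (r = 0; r < TOY_NUM_READERS; r++)
		if (toy_cnt[r] < token)
			return false;
	return true;
}
```

The real library performs these updates with atomic loads and stores so that readers and the writer can run concurrently; the toy above only illustrates the token/counter bookkeeping.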
+Factors affecting RCU mechanism
+---------------------------------
+
+It is important to make sure that this library keeps the overhead of
+identifying the end of the grace period, and the subsequent freeing of
+memory, to a minimum. The following paragraphs explain how the grace
+period and the critical section affect this overhead.
+
+The writer has to poll the readers to identify the end of the grace
+period. Polling introduces memory accesses and wastes CPU cycles. The
+memory is not available for reuse during the grace period. Longer grace
+periods exacerbate these conditions.
+
+The duration of the grace period is proportional to the length of the
+critical sections and the number of reader threads. Keeping the critical
+sections smaller will keep the grace period smaller. However, smaller
+critical sections require additional CPU cycles (due to additional
+reporting) in the readers.
+
+Hence, the goal is to combine a small grace period with large critical
+sections. This library addresses this by allowing the writer to do
+other work without having to block till the readers report their
+quiescent state.
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the start and end of the ``while(1)`` loop (where
+no references to shared data structures are kept) act as perfect
+quiescent states. This combines all the shared data structure accesses
+into a single, large critical section, which helps keep the overhead on
+the reader side to a minimum.
+
+DPDK supports the pipeline model of packet processing, as well as
+service cores. In these use cases, a given data structure may not be
+used by all the workers in the application, and the writer does not
+have to wait for all of them to report their quiescent state. To
+provide the required flexibility, this library has the concept of a QS
+variable. The application can create one QS variable per data structure
+to help it track the end of the grace period for each data structure.
+This helps keep the grace period to a minimum.
+
+How to use this library
+-----------------------
+
+The application must allocate memory and initialize a QS variable.
+
+The application can call ``rte_rcu_qsbr_get_memsize`` to calculate the
+size of memory to allocate. This API takes, as a parameter, the maximum
+number of reader threads using this QS variable. Currently, a maximum
+of 1024 threads is supported.
+
+Further, the application can initialize a QS variable using the API
+``rte_rcu_qsbr_init``.
+
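Put together, allocation and initialization could look like the following sketch (illustrative only: the ``max_threads`` value is an example, ``rte_zmalloc`` is used here as one possible way to obtain cache-line aligned, zeroed memory, and error checking is omitted):

```c
/* Sketch: allocate and initialize a QS variable tracking up to
 * 'max_threads' reader threads. */
uint32_t max_threads = 8;     /* example value */
size_t sz = rte_rcu_qsbr_get_memsize(max_threads);
struct rte_rcu_qsbr *v =
	rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
rte_rcu_qsbr_init(v, max_threads);
```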
+Each reader thread is assumed to have a unique thread ID. Currently,
+the management of the thread IDs (for ex: allocation/free) is left to
+the application. The thread ID should be in the range of 0 to the
+maximum number of threads provided while creating the QS variable. The
+application could also use ``lcore_id`` as the thread ID where
+applicable.
+
+``rte_rcu_qsbr_thread_register`` API will register a reader thread
+to report its quiescent state. This can be called from a reader thread.
+A control plane thread can also call this on behalf of a reader thread.
+The reader thread must call ``rte_rcu_qsbr_thread_online`` API to start
+reporting its quiescent state.
+
+Some of the use cases might require the reader threads to make
+blocking API calls (for ex: while using eventdev APIs). The writer thread
+should not wait for such reader threads to enter quiescent state.
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` API, before calling
+blocking APIs. It can call ``rte_rcu_qsbr_thread_online`` API once the blocking
+API call returns.
+
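Putting the reader-side calls above together, a reader thread could be structured as in the following sketch (illustrative only: ``v``, ``thread_id``, ``quit``, ``need_blocking_call``, ``process_packets`` and ``blocking_api_call`` are placeholders, and return codes are not checked):

```c
/* Reader-side sketch, assuming 'v' points to an initialized QS
 * variable and 'thread_id' is unique to this thread. */
rte_rcu_qsbr_thread_register(v, thread_id);
rte_rcu_qsbr_thread_online(v, thread_id);

while (!quit) {
	/* Critical section: references to shared entries are valid. */
	process_packets();

	/* No references held past this point: report quiescent state. */
	rte_rcu_qsbr_quiescent(v, thread_id);

	if (need_blocking_call) {
		/* Do not make the writer wait across a blocking call. */
		rte_rcu_qsbr_thread_offline(v, thread_id);
		blocking_api_call();
		rte_rcu_qsbr_thread_online(v, thread_id);
	}
}

rte_rcu_qsbr_thread_offline(v, thread_id);
rte_rcu_qsbr_thread_unregister(v, thread_id);
```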
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the API ``rte_rcu_qsbr_start``. It is possible for multiple
+writer threads to query the quiescent state status simultaneously. Hence,
+``rte_rcu_qsbr_start`` returns a token to each caller.
+
+The writer thread must call the ``rte_rcu_qsbr_check`` API with the
+token to get the current quiescent state status. An option to block
+till all the reader threads enter the quiescent state is provided. If
+this API indicates that all the reader threads have entered the
+quiescent state, the application can free the deleted entry.
+
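For example, the writer side of the Delete/Free split could look like the following sketch (illustrative only: ``remove_reference``, ``do_other_work`` and ``entry`` are placeholders):

```c
/* Writer-side sketch: Delete, then Free after the grace period,
 * overlapping the wait with other useful work. */
remove_reference(entry);                /* atomic 'Delete' step */

uint64_t token = rte_rcu_qsbr_start(v); /* trigger QS reporting */

/* Non-blocking poll; passing 'true' would block instead. */
while (rte_rcu_qsbr_check(v, token, false) == 0)
	do_other_work();

free(entry);                            /* 'Free' step is now safe */
```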
+The APIs ``rte_rcu_qsbr_start`` and ``rte_rcu_qsbr_check`` are
+lock-free. Hence, they can be called concurrently from multiple
+writers, even while running as worker threads.
+
+The separation of triggering the reporting from querying the status provides
+the writer threads flexibility to do useful work instead of blocking for the
+reader threads to enter the quiescent state or go offline. This reduces the
+memory accesses due to continuous polling for the status.
+
+``rte_rcu_qsbr_synchronize`` API combines the functionality of
+``rte_rcu_qsbr_start`` and blocking ``rte_rcu_qsbr_check`` into a single API.
+This API triggers the reader threads to report their quiescent state and
+polls till all the readers enter the quiescent state or go offline. This
+API does not allow the writer to do useful work while waiting and
+introduces additional memory accesses due to continuous polling.
+
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` and
+``rte_rcu_qsbr_thread_unregister`` APIs to remove itself from reporting its
+quiescent state. The ``rte_rcu_qsbr_check`` API will not wait for this reader
+thread to report the quiescent state status anymore.
+
+The reader threads should call the ``rte_rcu_qsbr_quiescent`` API to
+indicate that they have entered a quiescent state. This API checks if a
+writer has triggered a quiescent state query and updates the state
+accordingly.
+
+The ``rte_rcu_qsbr_lock`` and ``rte_rcu_qsbr_unlock`` APIs are empty
+functions. However, when ``CONFIG_RTE_LIBRTE_RCU_DEBUG`` is enabled,
+these APIs aid in debugging issues: one can mark the accesses to shared
+data structures on the reader side using them, and
+``rte_rcu_qsbr_quiescent`` will then check that all such locks have
+been unlocked.
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v5 3/3] doc/rcu: add lib_rcu documentation
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
@ 2019-04-12 20:20 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-12 20:20 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++++++
5 files changed, 698 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index de1e215dd..8f0e84de6 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 7722fc3e9..b9896cb63 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..e7ca1dacb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,509 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em;font-weight:bold}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.75001em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st13 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st14 {fill:#000000;font-family:Century Gothic;font-size:1.99999em}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.99999em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-239);marker-start:url(#mrkr5-237);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-248);marker-start:url(#mrkr5-246);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-239);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-237" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-239" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-246" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-248" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(327.227,-946.908)">
+ <title>Sheet.3</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(460.665,-944.869)">
+ <title>Sheet.4</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(519.302,-950.79)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(612.438,-944.869)">
+ <title>Sheet.6</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(664.388,-945.889)">
+ <title>Sheet.7</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(723.025,-951.494)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(814.123,-945.889)">
+ <title>Sheet.9</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-952.759)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="146.259" cy="1169.64" width="292.52" height="36.7136"/>
+ <path d="M292.52 1151.29 L0 1151.29 L0 1188 L292.52 1188 L292.52 1151.29" class="st3"/>
+ <text x="58.76" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(379.176,-863.295)">
+ <title>Sheet.11</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(512.614,-861.255)">
+ <title>Sheet.12</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(561.284,-867.106)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(664.388,-861.255)">
+ <title>Sheet.14</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(716.337,-862.275)">
+ <title>Sheet.15</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(775.009,-867.81)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(866.073,-862.275)">
+ <title>Sheet.17</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(143.348,-873.294)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(474.188,-777.642)">
+ <title>Sheet.19</title>
+ <path d="M0 1143.01 C0 1138.04 4.07 1133.96 9.04 1133.96 L124.46 1133.96 C129.43 1133.96 133.44 1138.04 133.44 1143.01
+ L133.44 1179.01 C133.44 1183.99 129.43 1188 124.46 1188 L9.04 1188 C4.07 1188 0 1183.99 0 1179.01 L0 1143.01
+ Z" class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(608.645,-775.602)">
+ <title>Sheet.20</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(666.862,-781.311)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(760.418,-775.602)">
+ <title>Sheet.22</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(812.367,-776.622)">
+ <title>Sheet.23</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(870.584,-782.015)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(962.103,-776.622)">
+ <title>Sheet.25</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(142.645,-787.5)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(882.826,-574.263)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="21.32" y="1173.77" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(419.545,-660.119)">
+ <title>Sheet.29</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(419.545,-684.783)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L82.7 1187.36 L151.2 1172.07" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(214.454,-663.095)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(571.287,-681.326)">
+ <title>Sheet.33</title>
+ <path d="M0 738.67 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(515.013,-1130.65)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="60.7243" cy="1166.58" width="121.45" height="42.8314"/>
+ <path d="M121.45 1145.17 L0 1145.17 L0 1188 L121.45 1188 L121.45 1145.17" class="st3"/>
+ <text x="26.02" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(434.372,-1096.8)">
+ <title>Sheet.35</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(434.372,-1100.37)">
+ <title>Sheet.36</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(193.5,-1103.76)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(714.3,-675.425)">
+ <title>Sheet.38</title>
+ <path d="M0 732.77 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(795.979,-637.904)">
+ <title>Sheet.39</title>
+ <path d="M0 1112.54 L0 1188 L0 1112.54" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(716.782,-675.425)">
+ <title>Sheet.40</title>
+ <path d="M79.2 1188 L52.71 1187.94 L0 1147.21" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(803.572,-639.285)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="172.421" cy="1152.51" width="344.85" height="70.9752"/>
+ <path d="M344.84 1117.02 L0 1117.02 L0 1188 L344.84 1188 L344.84 1117.02" class="st3"/>
+ <text x="0" y="1133.61" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(680.801,-1130.65)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="42.0169" cy="1166.58" width="84.04" height="42.8314"/>
+ <path d="M84.03 1145.17 L0 1145.17 L0 1188 L84.03 1188 L84.03 1145.17" class="st3"/>
+ <text x="18.89" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(811.005,-1110.05)">
+ <title>Sheet.48</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(658.61,-1083.99)">
+ <title>Sheet.49</title>
+ <path d="M153.05 1149.63 L113.7 1149.57 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(798.359,-1110.46)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="107.799" cy="1167.81" width="215.6" height="40.3845"/>
+ <path d="M215.6 1147.62 L0 1147.62 L0 1188 L215.6 1188 L215.6 1147.62" class="st3"/>
+ <text x="43.79" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(599.196,-662.779)">
+ <title>Sheet.51</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(464.931,-1052.95)">
+ <title>Sheet.52</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(464.931,-1056.52)">
+ <title>Sheet.53</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(225,-1058.76)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(711.244,-662.779)">
+ <title>Sheet.56</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(664.897,-1045.31)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L146.76 1112.94" class="st13"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(619.059,-848.701)">
+ <title>Sheet.58</title>
+ <path d="M432.2 1184.24 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(1038.62,-837.364)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="130" cy="1167.81" width="260.01" height="40.3845"/>
+ <path d="M260 1147.62 L0 1147.62 L0 1188 L260 1188 L260 1147.62" class="st3"/>
+ <text x="52.25" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(621.606,-848.828)">
+ <title>Sheet.60</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(824.31,-849.848)">
+ <title>Sheet.61</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(345.944,-933.143)">
+ <title>Sheet.62</title>
+ <path d="M705.32 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(1038.62,-915.684)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="137.691" cy="1167.81" width="275.39" height="40.3845"/>
+ <path d="M275.38 1147.62 L0 1147.62 L0 1188 L275.38 1188 L275.38 1147.62" class="st3"/>
+ <text x="55.18" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(346.581,-932.442)">
+ <title>Sheet.64</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(621.606,-933.461)">
+ <title>Sheet.65</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(856.905,-934.481)">
+ <title>Sheet.66</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(472.82,-756.389)">
+ <title>Sheet.67</title>
+ <path d="M578.44 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(473.456,-755.688)">
+ <title>Sheet.68</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(1016.87,-757.728)">
+ <title>Sheet.69</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(1060.04,-738.651)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1167.81" width="193.55" height="40.3845"/>
+ <path d="M193.55 1147.62 L0 1147.62 L0 1188 L193.55 1188 L193.55 1147.62" class="st3"/>
+ <text x="31.03" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(190.02,-464.886)">
+ <title>Sheet.71</title>
+ <path d="M0 1151.91 C0 1148.19 3.88 1145.17 8.66 1145.17 L43.29 1145.17 C48.13 1145.17 51.95 1148.19 51.95 1151.91 L51.95
+ 1181.26 C51.95 1185.03 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1185.03 0 1181.26 L0 1151.91 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(259.003,-466.895)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. non critical section or quiescent state.</tspan></text> </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(190.02,-389.169)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="25.9746" cy="1166.58" width="51.95" height="42.8314"/>
+ <path d="M0 1152.31 C0 1148.39 1.43 1145.17 3.16 1145.17 L48.79 1145.17 C50.55 1145.17 51.95 1148.39 51.95 1152.31 L51.95
+ 1180.86 C51.95 1184.83 50.55 1188 48.79 1188 L3.16 1188 C1.43 1188 0 1184.83 0 1180.86 L0 1152.31 Z"
+ class="st2"/>
+ <text x="12.9" y="1173.78" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(259.003,-388.777)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(289.017,-301.151)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="332.491" cy="1160.47" width="664.99" height="55.0626"/>
+ <path d="M664.98 1132.94 L0 1132.94 L0 1188 L664.98 1188 L664.98 1132.94" class="st3"/>
+ <text x="0" y="1153.27" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is removed <tspan
+ x="0" dy="1.2em" class="st9">using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(177.543,-315.596)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(288,-239.327)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1175.01" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted entry.</text> </g>
+ <g id="shape78-217" v:mID="78" v:groupContext="shape" transform="translate(177.543,-240.744)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="34.3786" cy="1166.58" width="68.76" height="42.8314"/>
+ <path d="M68.76 1145.17 L0 1145.17 L0 1188 L68.76 1188 L68.76 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-221" v:mID="79" v:groupContext="shape" transform="translate(289.228,-163.612)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1160.61" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during which <tspan
+ x="0" dy="1.2em" class="st9">memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-226" v:mID="80" v:groupContext="shape" transform="translate(187.999,-162)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="39.5985" cy="1166.58" width="79.2" height="42.8314"/>
+ <path d="M79.2 1145.17 L0 1145.17 L0 1188 L79.2 1188 L79.2 1145.17" class="st3"/>
+ <text x="0" y="1158.96" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-231" v:mID="83" v:groupContext="shape" transform="translate(572.146,-1080.07)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L132.49 1188" class="st16"/>
+ </g>
+ <g id="shape84-240" v:mID="84" v:groupContext="shape" transform="translate(599.196,-1042.14)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L104.28 1188" class="st18"/>
+ </g>
+ <g id="shape85-249" v:mID="85" v:groupContext="shape" transform="translate(980.637,-595.338)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L92.16 1188" class="st20"/>
+ </g>
+ <g id="shape86-254" v:mID="86" v:groupContext="shape" transform="translate(444.835,-603.428)">
+ <title>Sheet.86</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape87-257" v:mID="87" v:groupContext="shape" transform="translate(444.835,-637.489)">
+ <title>Sheet.87</title>
+ <path d="M0 1188 L84.43 1186.61 L154.36 1153.31" class="st7"/>
+ </g>
+ <g id="shape88-260" v:mID="88" v:groupContext="shape" transform="translate(241.369,-607.028)">
+ <title>Sheet.88</title>
+ <desc>Remove reference to entry2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry2</tspan></text> </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 95f5e7964..17df2c563 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -56,6 +56,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..55d44e15d
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,185 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-less data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for ex: real-time applications).
+
+In the following sections, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or anything that is representative of
+memory, for example: an index of a free element array.
+
+Since these data structures are lock less, the writers and readers
+access them concurrently. Hence, while removing an element from a
+data structure, the writer cannot return the associated memory to the
+allocator without knowing that no reader is still referencing that
+element/memory. This requires the operation of removing an element to
+be separated into 2 steps:
+
+Delete: in this step, the writer removes the reference to the element from
+the data structure but does not return the associated memory to the
+allocator. This will ensure that new readers will not get a reference to
+the removed element. Removing the reference is an atomic operation.
+
+Free(Reclaim): in this step, the writer returns the memory to the
+memory allocator, only after knowing that all the readers have stopped
+referencing the deleted element.
+
+This library helps the writer determine when it is safe to free the
+memory.
+
+This library makes use of thread Quiescent State (QS).
+
+What is Quiescent State
+-----------------------
+Quiescent State can be defined as 'any point in the thread execution where the
+thread does not hold a reference to shared memory'. It is up to the application
+to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. When it is
+accessing D1, if the writer has to remove an element from D1, the
+writer cannot free the memory associated with that element immediately.
+The writer can return the memory to the allocator only after the reader
+stops referencing D1. In other words, reader thread RT1 has to enter
+a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, the writer has to
+wait till reader thread 2 enters a quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+a quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened. So, reader thread 3 will not have a reference to the
+deleted entry.
+
+It can be noted that the critical section for D2 is a quiescent state
+for D1, i.e. for a given data structure Dx, any point in the thread
+execution that does not reference Dx is a quiescent state for Dx.
+
+Since memory is not freed immediately, there might be a need for
+provisioning of additional memory, depending on the application requirements.
+
+Factors affecting RCU mechanism
+---------------------------------
+
+It is important to make sure that this library keeps the overhead of
+identifying the end of the grace period, and of the subsequent freeing
+of memory, to a minimum. The following paragraphs explain how the grace
+period and critical sections affect this overhead.
+
+The writer has to poll the readers to identify the end of the grace
+period. Polling introduces memory accesses and wastes CPU cycles. The
+memory is not available for reuse during the grace period. Longer grace
+periods exacerbate these conditions.
+
+The duration of the grace period is proportional to the length of the
+critical sections and the number of reader threads. Keeping the critical
+sections smaller will keep the grace period smaller. However, smaller
+critical sections require additional CPU cycles (due to additional
+reporting) in the readers.
+
+Hence, we need a small grace period combined with large critical
+sections. This library addresses this by allowing the writer to do
+other work without having to block till the readers report their
+quiescent state.
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the start and end of while(1) loop (where no
+references to shared data structures are kept) act as perfect quiescent
+states. This will combine all the shared data structure accesses into a
+single, large critical section which helps keep the overhead on the
+reader side to a minimum.
+
+DPDK supports the pipeline model of packet processing, as well as
+service cores. In these use cases, a given data structure may not be
+used by all the workers in the application. The writer does not have to
+wait for all the workers to report their quiescent state. To provide the
+required flexibility, this library has a concept of a QS variable. The
+application can create one QS variable per data structure to help it
+track the end of the grace period for each data structure. This helps
+keep the grace period to a minimum.
+
+How to use this library
+-----------------------
+
+The application must allocate memory and initialize a QS variable.
+
+The application can call ``rte_rcu_qsbr_get_memsize`` to calculate the
+size of memory to allocate. This API takes, as a parameter, the maximum
+number of reader threads that will use this QS variable. Currently, a
+maximum of 1024 threads is supported.
+
+Further, the application can initialize a QS variable using the API
+``rte_rcu_qsbr_init``.
+
+Each reader thread is assumed to have a unique thread ID. Currently, the
+management of the thread ID (for ex: allocation/free) is left to the
+application. The thread ID should be in the range of 0 to (maximum
+number of threads - 1) provided while creating the QS variable.
+The application could also use lcore_id as the thread ID where applicable.
+
+``rte_rcu_qsbr_thread_register`` API will register a reader thread
+to report its quiescent state. This can be called from a reader thread.
+A control plane thread can also call this on behalf of a reader thread.
+The reader thread must call ``rte_rcu_qsbr_thread_online`` API to start
+reporting its quiescent state.
+
+Some of the use cases might require the reader threads to make
+blocking API calls (for ex: while using eventdev APIs). The writer thread
+should not wait for such reader threads to enter quiescent state.
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` API, before calling
+blocking APIs. It can call ``rte_rcu_qsbr_thread_online`` API once the blocking
+API call returns.
+
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the API ``rte_rcu_qsbr_start``. It is possible for multiple
+writer threads to query the quiescent state status simultaneously. Hence,
+``rte_rcu_qsbr_start`` returns a token to each caller.
+
+The writer thread must call ``rte_rcu_qsbr_check`` API with the token to
+get the current quiescent state status. Option to block till all the reader
+threads enter the quiescent state is provided. If this API indicates that
+all the reader threads have entered the quiescent state, the application
+can free the deleted entry.
+
+The APIs ``rte_rcu_qsbr_start`` and ``rte_rcu_qsbr_check`` are lock free.
+Hence, they can be called concurrently from multiple writers even while
+running as worker threads.
+
+The separation of triggering the reporting from querying the status provides
+the writer threads flexibility to do useful work instead of blocking for the
+reader threads to enter the quiescent state or go offline. This reduces the
+memory accesses due to continuous polling for the status.
+
+``rte_rcu_qsbr_synchronize`` API combines the functionality of
+``rte_rcu_qsbr_start`` and blocking ``rte_rcu_qsbr_check`` into a single API.
+This API triggers the reader threads to report their quiescent state and
+polls till all the readers enter the quiescent state or go offline. This
+API does not allow the writer to do useful work while waiting and
+introduces additional memory accesses due to continuous polling.
+
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` and
+``rte_rcu_qsbr_thread_unregister`` APIs to stop reporting its
+quiescent state. The ``rte_rcu_qsbr_check`` API will no longer wait for
+this reader thread to report its quiescent state.
+
+The reader threads should call ``rte_rcu_qsbr_quiescent`` API to indicate that
+they entered a quiescent state. This API checks if a writer has triggered a
+quiescent state query and updates the state accordingly.
+
+The ``rte_rcu_qsbr_lock`` and ``rte_rcu_qsbr_unlock`` APIs are empty
+functions. However, when ``CONFIG_RTE_LIBRTE_RCU_DEBUG`` is enabled,
+these APIs aid in debugging issues. One can mark the access to shared
+data structures on the reader side using these APIs. The
+``rte_rcu_qsbr_quiescent`` API will then check that all such locks have
+been unlocked.
--
2.17.1
* Re: [dpdk-dev] [PATCH v4 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-11 15:26 ` Paul E. McKenney
2019-04-11 15:26 ` Paul E. McKenney
@ 2019-04-12 20:21 ` Honnappa Nagarahalli
2019-04-12 20:21 ` Honnappa Nagarahalli
2019-04-15 16:51 ` Ananyev, Konstantin
1 sibling, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-12 20:21 UTC (permalink / raw)
To: paulmck
Cc: konstantin.ananyev, stephen, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd, nd
<snip>
> > >
> > > On Wed, Apr 10, 2019 at 06:20:04AM -0500, Honnappa Nagarahalli
> wrote:
> > > > Add RCU library supporting quiescent state based memory
> > > > reclamation
> > > method.
> > > > This library helps identify the quiescent state of the reader
> > > > threads so that the writers can free the memory associated with
> > > > the lock less data structures.
> > >
> > > I don't see any sign of read-side markers (rcu_read_lock() and
> > > rcu_read_unlock() in the Linux kernel, userspace RCU, etc.).
> > >
> > > Yes, strictly speaking, these are not needed for QSBR to operate,
> > > but they
> > These APIs would be empty for QSBR.
> >
> > > make it way easier to maintain and debug code using RCU. For
> > > example, given the read-side markers, you can check for errors like
> > > having a call to
> > > rte_rcu_qsbr_quiescent() in the middle of a reader quite easily.
> > > Without those read-side markers, life can be quite hard and you will
> > > really hate yourself for failing to have provided them.
> >
> > Want to make sure I understood this, do you mean the application
> would mark before and after accessing the shared data structure on the
> reader side?
> >
> > rte_rcu_qsbr_lock()
> > <begin access shared data structure>
> > ...
> > ...
> > <end access shared data structure>
> > rte_rcu_qsbr_unlock()
>
> Yes, that is the idea.
>
> > If someone is debugging this code, they have to make sure that there is
> an unlock for every lock and there is no call to rte_rcu_qsbr_quiescent in
> between.
> > It sounds good to me. Obviously, they will not add any additional cycles
> as well.
> > Please let me know if my understanding is correct.
>
> Yes. And in some sort of debug mode, you could capture the counter at
> rte_rcu_qsbr_lock() time and check it at rte_rcu_qsbr_unlock() time. If the
> counter has advanced too far (more than one, if I am not too confused)
> there is a bug. Also in debug mode, you could have rte_rcu_qsbr_lock()
> increment a per-thread counter and rte_rcu_qsbr_unlock() decrement it.
> If the counter is non-zero at a quiescent state, there is a bug.
> And so on.
>
Added this in V5
<snip>
> > > > +
> > > > +/* Get the memory size of QSBR variable */ size_t
> > > > +__rte_experimental rte_rcu_qsbr_get_memsize(uint32_t
> max_threads) {
> > > > + size_t sz;
> > > > +
> > > > + if (max_threads == 0) {
> > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > + "%s(): Invalid max_threads %u\n",
> > > > + __func__, max_threads);
> > > > + rte_errno = EINVAL;
> > > > +
> > > > + return 1;
> > > > + }
> > > > +
> > > > + sz = sizeof(struct rte_rcu_qsbr);
> > > > +
> > > > + /* Add the size of quiescent state counter array */
> > > > + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> > > > +
> > > > + /* Add the size of the registered thread ID bitmap array */
> > > > + sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
> > > > +
> > > > + return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
> > >
> > > Given that you align here, should you also align in the earlier
> > > steps in the computation of sz?
> >
> > Agree. I will remove the align here and keep the earlier one as the intent
> is to align the thread ID array.
>
> Sounds good!
Added this in V5
>
> > > > +}
> > > > +
> > > > +/* Initialize a quiescent state variable */ int
> > > > +__rte_experimental rte_rcu_qsbr_init(struct rte_rcu_qsbr *v,
> uint32_t max_threads) {
> > > > + size_t sz;
> > > > +
> > > > + if (v == NULL) {
> > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > + "%s(): Invalid input parameter\n", __func__);
> > > > + rte_errno = EINVAL;
> > > > +
> > > > + return 1;
> > > > + }
> > > > +
> > > > + sz = rte_rcu_qsbr_get_memsize(max_threads);
> > > > + if (sz == 1)
> > > > + return 1;
> > > > +
> > > > + /* Set all the threads to offline */
> > > > + memset(v, 0, sz);
> > >
> > > We calculate sz here, but it looks like the caller must also
> > > calculate it in order to correctly allocate the memory referenced by
> > > the "v" argument to this function, with bad things happening if the
> > > two calculations get different results. Should "v" instead be
> > > allocated within this function to avoid this sort of problem?
> >
> > Earlier version allocated the memory with-in this library. However, it was
> decided to go with the current implementation as it provides flexibility for
> the application to manage the memory as it sees fit. For ex: it could
> allocate this as part of another structure in a single allocation. This also
> falls inline with similar approach taken in other libraries.
>
> So the allocator APIs vary too much to allow a pointer to the desired
> allocator function to be passed in? Or do you also want to allow static
> allocation? If the latter, would a DEFINE_RTE_RCU_QSBR() be of use?
>
This is done to allow for allocation of memory for the QS variable as part of another, bigger data structure. This will help in not fragmenting the memory. For ex:
struct xyz {
rte_ring *ring;
rte_rcu_qsbr *v;
abc *t;
};
struct xyz c;
Memory for the above structure can be allocated in one chunk by calculating the size required.
In some use cases static allocation might be enough as 'max_threads' might be a compile time constant. I am not sure on how to support both dynamic and static 'max_threads'.
> > > > + v->max_threads = max_threads;
> > > > + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> > > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> > > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> > > > + v->token = RTE_QSBR_CNT_INIT;
> > > > +
> > > > + return 0;
> > > > +}
> > > > +
> > > > +/* Register a reader thread to report its quiescent state
> > > > + * on a QS variable.
> > > > + */
> > > > +int __rte_experimental
> > > > +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int
> > > > +thread_id) {
> > > > + unsigned int i, id, success;
> > > > + uint64_t old_bmap, new_bmap;
> > > > +
> > > > + if (v == NULL || thread_id >= v->max_threads) {
> > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > + "%s(): Invalid input parameter\n", __func__);
> > > > + rte_errno = EINVAL;
> > > > +
> > > > + return 1;
> > > > + }
> > > > +
> > > > + id = thread_id & RTE_QSBR_THRID_MASK;
> > > > + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > > > +
> > > > + /* Make sure that the counter for registered threads does not
> > > > + * go out of sync. Hence, additional checks are required.
> > > > + */
> > > > + /* Check if the thread is already registered */
> > > > + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > > + __ATOMIC_RELAXED);
> > > > + if (old_bmap & 1UL << id)
> > > > + return 0;
> > > > +
> > > > + do {
> > > > + new_bmap = old_bmap | (1UL << id);
> > > > + success = __atomic_compare_exchange(
> > > > + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > > + &old_bmap, &new_bmap, 0,
> > > > + __ATOMIC_RELEASE,
> > > __ATOMIC_RELAXED);
> > > > +
> > > > + if (success)
> > > > + __atomic_fetch_add(&v->num_threads,
> > > > + 1, __ATOMIC_RELAXED);
> > > > + else if (old_bmap & (1UL << id))
> > > > + /* Someone else registered this thread.
> > > > + * Counter should not be incremented.
> > > > + */
> > > > + return 0;
> > > > + } while (success == 0);
> > >
> > > This would be simpler if threads were required to register themselves.
> > > Maybe you have use cases requiring registration of other threads,
> > > but this capability is adding significant complexity, so it might be
> > > worth some thought.
> > >
> > It was simple earlier (__atomic_fetch_or). The complexity is added as
> 'num_threads' should not go out of sync.
>
> Hmmm...
>
> So threads are allowed to register other threads? Or is there some other
> reason that concurrent registration is required?
>
Yes, control plane threads can register the fast path threads. Though, I am not sure how useful it is. I did not want to add the restriction. I expect that reader threads will register themselves. The reader threads require concurrent registration as they all will be running in parallel.
If the requirement of keeping track of the number of threads registered currently goes away, then this function will be simple.
<snip>
> > > > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > > > b/lib/librte_rcu/rte_rcu_qsbr.h new file mode 100644 index
> > > > 000000000..ff696aeab
> > > > --- /dev/null
> > > > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > > > @@ -0,0 +1,554 @@
> > > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > > + * Copyright (c) 2018 Arm Limited */
> > > > +
> > > > +#ifndef _RTE_RCU_QSBR_H_
> > > > +#define _RTE_RCU_QSBR_H_
> > > > +
> > > > +/**
> > > > + * @file
> > > > + * RTE Quiescent State Based Reclamation (QSBR)
> > > > + *
> > > > + * Quiescent State (QS) is any point in the thread execution
> > > > + * where the thread does not hold a reference to a data structure
> > > > + * in shared memory. While using lock-less data structures, the
> > > > +writer
> > > > + * can safely free memory once all the reader threads have
> > > > +entered
> > > > + * quiescent state.
> > > > + *
> > > > + * This library provides the ability for the readers to report
> > > > +quiescent
> > > > + * state and for the writers to identify when all the readers
> > > > +have
> > > > + * entered quiescent state.
> > > > + */
> > > > +
> > > > +#ifdef __cplusplus
> > > > +extern "C" {
> > > > +#endif
> > > > +
> > > > +#include <stdio.h>
> > > > +#include <stdint.h>
> > > > +#include <errno.h>
> > > > +#include <rte_common.h>
> > > > +#include <rte_memory.h>
> > > > +#include <rte_lcore.h>
> > > > +#include <rte_debug.h>
> > > > +#include <rte_atomic.h>
> > > > +
> > > > +extern int rcu_log_type;
> > > > +
> > > > +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG #define
> RCU_DP_LOG(level,
> > > fmt,
> > > > +args...) \
> > > > + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> > > > + "%s(): " fmt "\n", __func__, ## args) #else #define
> > > > +RCU_DP_LOG(level, fmt, args...) #endif
> > > > +
> > > > +/* Registered thread IDs are stored as a bitmap of 64b element
> array.
> > > > + * Given thread id needs to be converted to index into the array
> > > > +and
> > > > + * the id within the array element.
> > > > + */
> > > > +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> > > #define
> > > > +RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
> > > > + RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
> > > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3,
> > > RTE_CACHE_LINE_SIZE) #define
> > > > +RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
> > > > + ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
> > > > +#define RTE_QSBR_THRID_INDEX_SHIFT 6 #define
> RTE_QSBR_THRID_MASK
> > > > +0x3f
> > > #define
> > > > +RTE_QSBR_THRID_INVALID 0xffffffff
> > > > +
> > > > +/* Worker thread counter */
> > > > +struct rte_rcu_qsbr_cnt {
> > > > + uint64_t cnt;
> > > > + /**< Quiescent state counter. Value 0 indicates the thread is
> > > > +offline */ } __rte_cache_aligned;
> > > > +
> > > > +#define RTE_QSBR_CNT_THR_OFFLINE 0 #define
> RTE_QSBR_CNT_INIT 1
> > > > +
> > > > +/* RTE Quiescent State variable structure.
> > > > + * This structure has two elements that vary in size based on the
> > > > + * 'max_threads' parameter.
> > > > + * 1) Quiescent state counter array
> > > > + * 2) Register thread ID array
> > > > + */
> > > > +struct rte_rcu_qsbr {
> > > > + uint64_t token __rte_cache_aligned;
> > > > + /**< Counter to allow for multiple concurrent quiescent state
> > > > +queries */
> > > > +
> > > > + uint32_t num_elems __rte_cache_aligned;
> > > > + /**< Number of elements in the thread ID array */
> > > > + uint32_t num_threads;
> > > > + /**< Number of threads currently using this QS variable */
> > > > + uint32_t max_threads;
> > > > + /**< Maximum number of threads using this QS variable */
> > > > +
> > > > + struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
> > > > + /**< Quiescent state counter array of 'max_threads' elements */
> > > > +
> > > > + /**< Registered thread IDs are stored in a bitmap array,
> > > > + * after the quiescent state counter array.
> > > > + */
> > > > +} __rte_cache_aligned;
> > > > +
<snip>
> > > > +
> > > > +/* Check the quiescent state counter for registered threads only,
> > > > +assuming
> > > > + * that not all threads have registered.
> > > > + */
> > > > +static __rte_always_inline int
> > > > +__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t,
> > > > +bool
> > > > +wait) {
> > > > + uint32_t i, j, id;
> > > > + uint64_t bmap;
> > > > + uint64_t c;
> > > > + uint64_t *reg_thread_id;
> > > > +
> > > > + for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
> > > > + i < v->num_elems;
> > > > + i++, reg_thread_id++) {
> > > > + /* Load the current registered thread bit map before
> > > > + * loading the reader thread quiescent state counters.
> > > > + */
> > > > + bmap = __atomic_load_n(reg_thread_id,
> > > __ATOMIC_ACQUIRE);
> > > > + id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> > > > +
> > > > + while (bmap) {
> > > > + j = __builtin_ctzl(bmap);
> > > > + RCU_DP_LOG(DEBUG,
> > > > + "%s: check: token = %lu, wait = %d, Bit Map
> > > = 0x%lx, Thread ID = %d",
> > > > + __func__, t, wait, bmap, id + j);
> > > > + c = __atomic_load_n(
> > > > + &v->qsbr_cnt[id + j].cnt,
> > > > + __ATOMIC_ACQUIRE);
> > > > + RCU_DP_LOG(DEBUG,
> > > > + "%s: status: token = %lu, wait = %d, Thread
> > > QS cnt = %lu, Thread ID = %d",
> > > > + __func__, t, wait, c, id+j);
> > > > + /* Counter is not checked for wrap-around
> > > condition
> > > > + * as it is a 64b counter.
> > > > + */
> > > > + if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c
> > > < t)) {
> > >
> > > This assumes that a 64-bit counter won't overflow, which is close
> > > enough to true given current CPU clock frequencies. ;-)
> > >
> > > > + /* This thread is not in quiescent state */
> > > > + if (!wait)
> > > > + return 0;
> > > > +
> > > > + rte_pause();
> > > > + /* This thread might have unregistered.
> > > > + * Re-read the bitmap.
> > > > + */
> > > > + bmap = __atomic_load_n(reg_thread_id,
> > > > + __ATOMIC_ACQUIRE);
> > > > +
> > > > + continue;
> > > > + }
> > > > +
> > > > + bmap &= ~(1UL << j);
> > > > + }
> > > > + }
> > > > +
> > > > + return 1;
> > > > +}
> > > > +
> > > > +/* Check the quiescent state counter for all threads, assuming
> > > > +that
> > > > + * all the threads have registered.
> > > > + */
> > > > +static __rte_always_inline int
> > > > +__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool
> > > > +wait)
> > >
> > > Does checking the bitmap really take long enough to make this
> > > worthwhile as a separate function? I would think that the
> > > bitmap-checking time would be lost in the noise of cache misses from
> the ->cnt loads.
> >
> > It avoids accessing one cache line. I think this is where the savings are
> (may be in theory). This is the most probable use case.
> > On the other hand, __rcu_qsbr_check_selective() will result in savings
> (depending on how many threads are currently registered) by avoiding
> accessing unwanted counters.
>
> Do you really expect to be calling this function on any kind of fastpath?
Yes. For some of the libraries (rte_hash), the writer is on the fast path.
>
> > > Sure, if you invoke __rcu_qsbr_check_selective() in a tight loop in
> > > the absence of readers, you might see __rcu_qsbr_check_all() being a
> > > bit faster. But is that really what DPDK does?
> > I see improvements in the synthetic test case (similar to the one you
> have described, around 27%). However, in the more practical test cases I
> do not see any difference.
>
> If the performance improvement only occurs in a synthetic test case, does
> it really make sense to optimize for it?
I had to fix a few issues in the performance test cases and added more tests to do the comparison. These changes are in v5.
There are 4 performance tests involving this API.
1) 1 Writer, 'N' readers
Writer: qsbr_start, qsbr_check(wait = true)
Readers: qsbr_quiescent
2) 'N' writers
Writers: qsbr_start, qsbr_check(wait == false)
3) 1 Writer, 'N' readers (this test uses the lock-free rte_hash data structure)
Writer: hash_del, qsbr_start, qsbr_check(wait = true), validate that the reader was able to complete its work successfully
Readers: thread_online, hash_lookup, access the pointer - do some work on it, qsbr_quiescent, thread_offline
4) Same as test 3) but qsbr_check (wait == false)
There are 2 sets of these tests.
a) QS variable is created with number of threads same as number of readers - this will exercise __rcu_qsbr_check_all
b) QS variable is created with 128 threads, number of registered threads is same as in a) - this will exercise __rcu_qsbr_check_selective
Following are the results on x86 (E5-2660 v4 @ 2.00GHz), comparing a) to b) (in my setup, the results are not very stable between runs)
1) 25%
2) -3%
3) -0.4%
4) 1.38%
Following are the results on an Arm system, comparing a) to b) (the results are not very stable between runs)
1) -3.45%
2) 0%
3) -0.03%
4) -0.04%
Konstantin, is it possible to run the tests on your setup and look at the results?
>
> > > > +{
> > > > + uint32_t i;
> > > > + struct rte_rcu_qsbr_cnt *cnt;
> > > > + uint64_t c;
> > > > +
> > > > + for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
> > > > + RCU_DP_LOG(DEBUG,
> > > > + "%s: check: token = %lu, wait = %d, Thread ID = %d",
> > > > + __func__, t, wait, i);
> > > > + while (1) {
> > > > + c = __atomic_load_n(&cnt->cnt,
> > > __ATOMIC_ACQUIRE);
> > > > + RCU_DP_LOG(DEBUG,
> > > > + "%s: status: token = %lu, wait = %d, Thread
> > > QS cnt = %lu, Thread ID = %d",
> > > > + __func__, t, wait, c, i);
> > > > + /* Counter is not checked for wrap-around
> > > condition
> > > > + * as it is a 64b counter.
> > > > + */
> > > > + if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >=
> > > t))
> > > > + break;
> > > > +
> > > > + /* This thread is not in quiescent state */
> > > > + if (!wait)
> > > > + return 0;
> > > > +
> > > > + rte_pause();
> > > > + }
> > > > + }
> > > > +
> > > > + return 1;
> > > > +}
> > > > +
> > > > +/**
> > > > + * @warning
> > > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > > + *
> > > > + * Checks if all the reader threads have entered the quiescent
> > > > +state
> > > > + * referenced by token.
> > > > + *
> > > > + * This is implemented as a lock-free function. It is
> > > > +multi-thread
> > > > + * safe and can be called from the worker threads as well.
> > > > + *
> > > > + * If this API is called with 'wait' set to true, the following
> > > > + * factors must be considered:
> > > > + *
> > > > + * 1) If the calling thread is also reporting the status on the
> > > > + * same QS variable, it must update the quiescent state status,
> > > > +before
> > > > + * calling this API.
> > > > + *
> > > > + * 2) In addition, while calling from multiple threads, only
> > > > + * one of those threads can be reporting the quiescent state
> > > > +status
> > > > + * on a given QS variable.
> > > > + *
> > > > + * @param v
> > > > + * QS variable
> > > > + * @param t
> > > > + * Token returned by rte_rcu_qsbr_start API
> > > > + * @param wait
> > > > + * If true, block till all the reader threads have completed entering
> > > > + * the quiescent state referenced by token 't'.
> > > > + * @return
> > > > + * - 0 if all reader threads have NOT passed through specified
> number
> > > > + * of quiescent states.
> > > > + * - 1 if all reader threads have passed through specified number
> > > > + * of quiescent states.
> > > > + */
> > > > +static __rte_always_inline int __rte_experimental
> > > > +rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait) {
> > > > + RTE_ASSERT(v != NULL);
> > > > +
> > > > + if (likely(v->num_threads == v->max_threads))
> > > > + return __rcu_qsbr_check_all(v, t, wait);
> > > > + else
> > > > + return __rcu_qsbr_check_selective(v, t, wait); }
> > > > +
> > > > +/**
> > > > + * @warning
> > > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > > + *
> > > > + * Wait till the reader threads have entered quiescent state.
> > > > + *
> > > > + * This is implemented as a lock-free function. It is multi-thread safe.
> > > > + * This API can be thought of as a wrapper around
> > > > +rte_rcu_qsbr_start and
> > > > + * rte_rcu_qsbr_check APIs.
> > > > + *
> > > > + * If this API is called from multiple threads, only one of
> > > > + * those threads can be reporting the quiescent state status on a
> > > > + * given QS variable.
> > > > + *
> > > > + * @param v
> > > > + * QS variable
> > > > + * @param thread_id
> > > > + * Thread ID of the caller if it is registered to report quiescent state
> > > > + * on this QS variable (i.e. the calling thread is also part of the
> > > > + * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
> > > > + */
> > > > +static __rte_always_inline void __rte_experimental
> > > > +rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int
> > > > +thread_id) {
> > > > + uint64_t t;
> > > > +
> > > > + RTE_ASSERT(v != NULL);
> > > > +
> > > > + t = rte_rcu_qsbr_start(v);
> > > > +
> > > > + /* If the current thread has readside critical section,
> > > > + * update its quiescent state status.
> > > > + */
> > > > + if (thread_id != RTE_QSBR_THRID_INVALID)
> > > > + rte_rcu_qsbr_quiescent(v, thread_id);
> > > > +
> > > > + /* Wait for other readers to enter quiescent state */
> > > > + rte_rcu_qsbr_check(v, t, true);
> > >
> > > And you are presumably relying on 64-bit counters to avoid the need
> > > to execute the above code twice in succession. Which again works
> > > given current CPU clock rates combined with system and human
> lifespans.
> > > Otherwise, there are interesting race conditions that can happen, so
> > > don't try this with a 32-bit counter!!!
> >
> > Yes. I am relying on 64-bit counters to avoid having to spend cycles (and
> time).
> >
> > > (But think of the great^N grandchildren!!!)
> >
> > (It is an interesting thought. I wonder what would happen to all the
> > code we are writing today 😊)
>
> I suspect that most systems will be rebooted more than once per decade,
> so unless CPU core clock rates manage to go up another order of
> magnitude, we should be just fine.
>
> Famous last words! ;-)
>
> > > More seriously, a comment warning people not to make the counter
> be
> > > 32 bits is in order.
> > Agree, I will add it in the structure definition.
>
> Sounds good!
Done in V5
<snip>
* Re: [dpdk-dev] [PATCH v4 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-12 20:21 ` Honnappa Nagarahalli
@ 2019-04-12 20:21 ` Honnappa Nagarahalli
2019-04-15 16:51 ` Ananyev, Konstantin
1 sibling, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-12 20:21 UTC (permalink / raw)
To: paulmck
Cc: konstantin.ananyev, stephen, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd, nd
<snip>
> > >
> > > On Wed, Apr 10, 2019 at 06:20:04AM -0500, Honnappa Nagarahalli
> wrote:
> > > > Add RCU library supporting quiescent state based memory
> > > > reclamation
> > > method.
> > > > This library helps identify the quiescent state of the reader
> > > > threads so that the writers can free the memory associated with
> > > > the lock less data structures.
> > >
> > > I don't see any sign of read-side markers (rcu_read_lock() and
> > > rcu_read_unlock() in the Linux kernel, userspace RCU, etc.).
> > >
> > > Yes, strictly speaking, these are not needed for QSBR to operate,
> > > but they
> > These APIs would be empty for QSBR.
> >
> > > make it way easier to maintain and debug code using RCU. For
> > > example, given the read-side markers, you can check for errors like
> > > having a call to
> > > rte_rcu_qsbr_quiescent() in the middle of a reader quite easily.
> > > Without those read-side markers, life can be quite hard and you will
> > > really hate yourself for failing to have provided them.
> >
> > Want to make sure I understood this, do you mean the application
> would mark before and after accessing the shared data structure on the
> reader side?
> >
> > rte_rcu_qsbr_lock()
> > <begin access shared data structure>
> > ...
> > ...
> > <end access shared data structure>
> > rte_rcu_qsbr_unlock()
>
> Yes, that is the idea.
>
> > If someone is debugging this code, they have to make sure that there is
> an unlock for every lock and there is no call to rte_rcu_qsbr_quiescent in
> between.
> > It sounds good to me. Obviously, they will not add any additional cycles
> as well.
> > Please let me know if my understanding is correct.
>
> Yes. And in some sort of debug mode, you could capture the counter at
> rte_rcu_qsbr_lock() time and check it at rte_rcu_qsbr_unlock() time. If the
> counter has advanced too far (more than one, if I am not too confused)
> there is a bug. Also in debug mode, you could have rte_rcu_qsbr_lock()
> increment a per-thread counter and rte_rcu_qsbr_unlock() decrement it.
> If the counter is non-zero at a quiescent state, there is a bug.
> And so on.
>
Added this in V5
<snip>
> > > > +
> > > > +/* Get the memory size of QSBR variable */ size_t
> > > > +__rte_experimental rte_rcu_qsbr_get_memsize(uint32_t
> max_threads) {
> > > > + size_t sz;
> > > > +
> > > > + if (max_threads == 0) {
> > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > + "%s(): Invalid max_threads %u\n",
> > > > + __func__, max_threads);
> > > > + rte_errno = EINVAL;
> > > > +
> > > > + return 1;
> > > > + }
> > > > +
> > > > + sz = sizeof(struct rte_rcu_qsbr);
> > > > +
> > > > + /* Add the size of quiescent state counter array */
> > > > + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> > > > +
> > > > + /* Add the size of the registered thread ID bitmap array */
> > > > + sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
> > > > +
> > > > + return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
> > >
> > > Given that you align here, should you also align in the earlier
> > > steps in the computation of sz?
> >
> > Agree. I will remove the align here and keep the earlier one as the intent
> is to align the thread ID array.
>
> Sounds good!
Added this in V5
>
> > > > +}
> > > > +
> > > > +/* Initialize a quiescent state variable */ int
> > > > +__rte_experimental rte_rcu_qsbr_init(struct rte_rcu_qsbr *v,
> uint32_t max_threads) {
> > > > + size_t sz;
> > > > +
> > > > + if (v == NULL) {
> > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > + "%s(): Invalid input parameter\n", __func__);
> > > > + rte_errno = EINVAL;
> > > > +
> > > > + return 1;
> > > > + }
> > > > +
> > > > + sz = rte_rcu_qsbr_get_memsize(max_threads);
> > > > + if (sz == 1)
> > > > + return 1;
> > > > +
> > > > + /* Set all the threads to offline */
> > > > + memset(v, 0, sz);
> > >
> > > We calculate sz here, but it looks like the caller must also
> > > calculate it in order to correctly allocate the memory referenced by
> > > the "v" argument to this function, with bad things happening if the
> > > two calculations get different results. Should "v" instead be
> > > allocated within this function to avoid this sort of problem?
> >
> > Earlier version allocated the memory with-in this library. However, it was
> decided to go with the current implementation as it provides flexibility for
> the application to manage the memory as it sees fit. For ex: it could
> allocate this as part of another structure in a single allocation. This also
> falls inline with similar approach taken in other libraries.
>
> So the allocator APIs vary too much to allow a pointer to the desired
> allocator function to be passed in? Or do you also want to allow static
> allocation? If the latter, would a DEFINE_RTE_RCU_QSBR() be of use?
>
This is done to allow the memory for the QS variable to be allocated as part of another, bigger data structure. This helps avoid fragmenting the memory. For ex:
struct xyz {
	rte_ring *ring;
	rte_rcu_qsbr *v;
	abc *t;
};
struct xyz c;
Memory for the above structure can be allocated in one chunk by calculating the size required.
In some use cases static allocation might be enough, as 'max_threads' might be a compile-time constant. I am not sure how to support both dynamic and static 'max_threads'.
> > > > + v->max_threads = max_threads;
> > > > + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> > > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> > > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> > > > + v->token = RTE_QSBR_CNT_INIT;
> > > > +
> > > > + return 0;
> > > > +}
> > > > +
> > > > +/* Register a reader thread to report its quiescent state
> > > > + * on a QS variable.
> > > > + */
> > > > +int __rte_experimental
> > > > +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int
> > > > +thread_id) {
> > > > + unsigned int i, id, success;
> > > > + uint64_t old_bmap, new_bmap;
> > > > +
> > > > + if (v == NULL || thread_id >= v->max_threads) {
> > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > + "%s(): Invalid input parameter\n", __func__);
> > > > + rte_errno = EINVAL;
> > > > +
> > > > + return 1;
> > > > + }
> > > > +
> > > > + id = thread_id & RTE_QSBR_THRID_MASK;
> > > > + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > > > +
> > > > + /* Make sure that the counter for registered threads does not
> > > > + * go out of sync. Hence, additional checks are required.
> > > > + */
> > > > + /* Check if the thread is already registered */
> > > > + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > > + __ATOMIC_RELAXED);
> > > > + if (old_bmap & 1UL << id)
> > > > + return 0;
> > > > +
> > > > + do {
> > > > + new_bmap = old_bmap | (1UL << id);
> > > > + success = __atomic_compare_exchange(
> > > > + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > > + &old_bmap, &new_bmap, 0,
> > > > + __ATOMIC_RELEASE,
> > > __ATOMIC_RELAXED);
> > > > +
> > > > + if (success)
> > > > + __atomic_fetch_add(&v->num_threads,
> > > > + 1, __ATOMIC_RELAXED);
> > > > + else if (old_bmap & (1UL << id))
> > > > + /* Someone else registered this thread.
> > > > + * Counter should not be incremented.
> > > > + */
> > > > + return 0;
> > > > + } while (success == 0);
> > >
> > > This would be simpler if threads were required to register themselves.
> > > Maybe you have use cases requiring registration of other threads,
> > > but this capability is adding significant complexity, so it might be
> > > worth some thought.
> > >
> > It was simple earlier (__atomic_fetch_or). The complexity is added as
> 'num_threads' should not go out of sync.
>
> Hmmm...
>
> So threads are allowed to register other threads? Or is there some other
> reason that concurrent registration is required?
>
Yes, control plane threads can register the fast path threads. Though I am not sure how useful that is, I did not want to add the restriction. I expect that reader threads will register themselves; they require concurrent registration as they will all be running in parallel.
If the requirement to keep track of the number of currently registered threads goes away, this function becomes simple.
<snip>
> > > > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > > > b/lib/librte_rcu/rte_rcu_qsbr.h new file mode 100644 index
> > > > 000000000..ff696aeab
> > > > --- /dev/null
> > > > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > > > @@ -0,0 +1,554 @@
> > > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > > + * Copyright (c) 2018 Arm Limited */
> > > > +
> > > > +#ifndef _RTE_RCU_QSBR_H_
> > > > +#define _RTE_RCU_QSBR_H_
> > > > +
> > > > +/**
> > > > + * @file
> > > > + * RTE Quiescent State Based Reclamation (QSBR)
> > > > + *
> > > > + * Quiescent State (QS) is any point in the thread execution
> > > > + * where the thread does not hold a reference to a data structure
> > > > + * in shared memory. While using lock-less data structures, the
> > > > +writer
> > > > + * can safely free memory once all the reader threads have
> > > > +entered
> > > > + * quiescent state.
> > > > + *
> > > > + * This library provides the ability for the readers to report
> > > > +quiescent
> > > > + * state and for the writers to identify when all the readers
> > > > +have
> > > > + * entered quiescent state.
> > > > + */
> > > > +
> > > > +#ifdef __cplusplus
> > > > +extern "C" {
> > > > +#endif
> > > > +
> > > > +#include <stdio.h>
> > > > +#include <stdint.h>
> > > > +#include <errno.h>
> > > > +#include <rte_common.h>
> > > > +#include <rte_memory.h>
> > > > +#include <rte_lcore.h>
> > > > +#include <rte_debug.h>
> > > > +#include <rte_atomic.h>
> > > > +
> > > > +extern int rcu_log_type;
> > > > +
> > > > +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG #define
> RCU_DP_LOG(level,
> > > fmt,
> > > > +args...) \
> > > > + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> > > > + "%s(): " fmt "\n", __func__, ## args) #else #define
> > > > +RCU_DP_LOG(level, fmt, args...) #endif
> > > > +
> > > > +/* Registered thread IDs are stored as a bitmap of 64b element
> array.
> > > > + * Given thread id needs to be converted to index into the array
> > > > +and
> > > > + * the id within the array element.
> > > > + */
> > > > +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> > > #define
> > > > +RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
> > > > + RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
> > > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3,
> > > RTE_CACHE_LINE_SIZE) #define
> > > > +RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
> > > > + ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
> > > > +#define RTE_QSBR_THRID_INDEX_SHIFT 6 #define
> RTE_QSBR_THRID_MASK
> > > > +0x3f
> > > #define
> > > > +RTE_QSBR_THRID_INVALID 0xffffffff
> > > > +
> > > > +/* Worker thread counter */
> > > > +struct rte_rcu_qsbr_cnt {
> > > > + uint64_t cnt;
> > > > + /**< Quiescent state counter. Value 0 indicates the thread is
> > > > +offline */ } __rte_cache_aligned;
> > > > +
> > > > +#define RTE_QSBR_CNT_THR_OFFLINE 0 #define
> RTE_QSBR_CNT_INIT 1
> > > > +
> > > > +/* RTE Quiescent State variable structure.
> > > > + * This structure has two elements that vary in size based on the
> > > > + * 'max_threads' parameter.
> > > > + * 1) Quiescent state counter array
> > > > + * 2) Register thread ID array
> > > > + */
> > > > +struct rte_rcu_qsbr {
> > > > + uint64_t token __rte_cache_aligned;
> > > > + /**< Counter to allow for multiple concurrent quiescent state
> > > > +queries */
> > > > +
> > > > + uint32_t num_elems __rte_cache_aligned;
> > > > + /**< Number of elements in the thread ID array */
> > > > + uint32_t num_threads;
> > > > + /**< Number of threads currently using this QS variable */
> > > > + uint32_t max_threads;
> > > > + /**< Maximum number of threads using this QS variable */
> > > > +
> > > > + struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
> > > > + /**< Quiescent state counter array of 'max_threads' elements */
> > > > +
> > > > + /**< Registered thread IDs are stored in a bitmap array,
> > > > + * after the quiescent state counter array.
> > > > + */
> > > > +} __rte_cache_aligned;
> > > > +
<snip>
> > > > +
> > > > +/* Check the quiescent state counter for registered threads only,
> > > > +assuming
> > > > + * that not all threads have registered.
> > > > + */
> > > > +static __rte_always_inline int
> > > > +__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t,
> > > > +bool
> > > > +wait) {
> > > > + uint32_t i, j, id;
> > > > + uint64_t bmap;
> > > > + uint64_t c;
> > > > + uint64_t *reg_thread_id;
> > > > +
> > > > + for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
> > > > + i < v->num_elems;
> > > > + i++, reg_thread_id++) {
> > > > + /* Load the current registered thread bit map before
> > > > + * loading the reader thread quiescent state counters.
> > > > + */
> > > > + bmap = __atomic_load_n(reg_thread_id,
> > > __ATOMIC_ACQUIRE);
> > > > + id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> > > > +
> > > > + while (bmap) {
> > > > + j = __builtin_ctzl(bmap);
> > > > + RCU_DP_LOG(DEBUG,
> > > > + "%s: check: token = %lu, wait = %d, Bit Map
> > > = 0x%lx, Thread ID = %d",
> > > > + __func__, t, wait, bmap, id + j);
> > > > + c = __atomic_load_n(
> > > > + &v->qsbr_cnt[id + j].cnt,
> > > > + __ATOMIC_ACQUIRE);
> > > > + RCU_DP_LOG(DEBUG,
> > > > + "%s: status: token = %lu, wait = %d, Thread
> > > QS cnt = %lu, Thread ID = %d",
> > > > + __func__, t, wait, c, id+j);
> > > > + /* Counter is not checked for wrap-around
> > > condition
> > > > + * as it is a 64b counter.
> > > > + */
> > > > + if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c
> > > < t)) {
> > >
> > > This assumes that a 64-bit counter won't overflow, which is close
> > > enough to true given current CPU clock frequencies. ;-)
> > >
> > > > + /* This thread is not in quiescent state */
> > > > + if (!wait)
> > > > + return 0;
> > > > +
> > > > + rte_pause();
> > > > + /* This thread might have unregistered.
> > > > + * Re-read the bitmap.
> > > > + */
> > > > + bmap = __atomic_load_n(reg_thread_id,
> > > > + __ATOMIC_ACQUIRE);
> > > > +
> > > > + continue;
> > > > + }
> > > > +
> > > > + bmap &= ~(1UL << j);
> > > > + }
> > > > + }
> > > > +
> > > > + return 1;
> > > > +}
> > > > +
> > > > +/* Check the quiescent state counter for all threads, assuming
> > > > +that
> > > > + * all the threads have registered.
> > > > + */
> > > > +static __rte_always_inline int
> > > > +__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool
> > > > +wait)
> > >
> > > Does checking the bitmap really take long enough to make this
> > > worthwhile as a separate function? I would think that the
> > > bitmap-checking time would be lost in the noise of cache misses from
> the ->cnt loads.
> >
> > It avoids accessing one cache line. I think this is where the savings are
> (may be in theory). This is the most probable use case.
> > On the other hand, __rcu_qsbr_check_selective() will result in savings
> (depending on how many threads are currently registered) by avoiding
> accessing unwanted counters.
>
> Do you really expect to be calling this function on any kind of fastpath?
Yes. For some of the libraries (rte_hash), the writer is on the fast path.
>
> > > Sure, if you invoke __rcu_qsbr_check_selective() in a tight loop in
> > > the absence of readers, you might see __rcu_qsbr_check_all() being a
> > > bit faster. But is that really what DPDK does?
> > I see improvements in the synthetic test case (similar to the one you
> have described, around 27%). However, in the more practical test cases I
> do not see any difference.
>
> If the performance improvement only occurs in a synthetic test case, does
> it really make sense to optimize for it?
I had to fix a few issues in the performance test cases and added more tests to do the comparison. These changes are in v5.
There are 4 performance tests involving this API.
1) 1 Writer, 'N' readers
Writer: qsbr_start, qsbr_check(wait = true)
Readers: qsbr_quiescent
2) 'N' writers
Writers: qsbr_start, qsbr_check(wait == false)
3) 1 Writer, 'N' readers (this test uses the lock-free rte_hash data structure)
Writer: hash_del, qsbr_start, qsbr_check(wait = true), validate that the reader was able to complete its work successfully
Readers: thread_online, hash_lookup, access the pointer - do some work on it, qsbr_quiescent, thread_offline
4) Same as test 3) but qsbr_check (wait == false)
There are 2 sets of these tests.
a) QS variable is created with number of threads same as number of readers - this will exercise __rcu_qsbr_check_all
b) QS variable is created with 128 threads, number of registered threads is same as in a) - this will exercise __rcu_qsbr_check_selective
Following are the results on x86 (E5-2660 v4 @ 2.00GHz), comparing a) to b) (in my setup, the results are not very stable between runs)
1) 25%
2) -3%
3) -0.4%
4) 1.38%
Following are the results on an Arm system, comparing a) to b) (the results are not very stable between runs)
1) -3.45%
2) 0%
3) -0.03%
4) -0.04%
Konstantin, is it possible to run the tests on your setup and look at the results?
>
> > > > +{
> > > > + uint32_t i;
> > > > + struct rte_rcu_qsbr_cnt *cnt;
> > > > + uint64_t c;
> > > > +
> > > > + for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
> > > > + RCU_DP_LOG(DEBUG,
> > > > + "%s: check: token = %lu, wait = %d, Thread ID = %d",
> > > > + __func__, t, wait, i);
> > > > + while (1) {
> > > > + c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
> > > > + RCU_DP_LOG(DEBUG,
> > > > + "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> > > > + __func__, t, wait, c, i);
> > > > + /* Counter is not checked for wrap-around condition
> > > > + * as it is a 64b counter.
> > > > + */
> > > > + if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
> > > > + break;
> > > > +
> > > > + /* This thread is not in quiescent state */
> > > > + if (!wait)
> > > > + return 0;
> > > > +
> > > > + rte_pause();
> > > > + }
> > > > + }
> > > > +
> > > > + return 1;
> > > > +}
> > > > +
> > > > +/**
> > > > + * @warning
> > > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > > + *
> > > > + * Checks if all the reader threads have entered the quiescent state
> > > > + * referenced by token.
> > > > + *
> > > > + * This is implemented as a lock-free function. It is multi-thread
> > > > + * safe and can be called from the worker threads as well.
> > > > + *
> > > > + * If this API is called with 'wait' set to true, the following
> > > > + * factors must be considered:
> > > > + *
> > > > + * 1) If the calling thread is also reporting the status on the
> > > > + * same QS variable, it must update the quiescent state status, before
> > > > + * calling this API.
> > > > + *
> > > > + * 2) In addition, while calling from multiple threads, only
> > > > + * one of those threads can be reporting the quiescent state status
> > > > + * on a given QS variable.
> > > > + *
> > > > + * @param v
> > > > + * QS variable
> > > > + * @param t
> > > > + * Token returned by rte_rcu_qsbr_start API
> > > > + * @param wait
> > > > + * If true, block till all the reader threads have completed entering
> > > > + * the quiescent state referenced by token 't'.
> > > > + * @return
> > > > + * - 0 if all reader threads have NOT passed through specified number
> > > > + * of quiescent states.
> > > > + * - 1 if all reader threads have passed through specified number
> > > > + * of quiescent states.
> > > > + */
> > > > +static __rte_always_inline int __rte_experimental
> > > > +rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> > > > +{
> > > > + RTE_ASSERT(v != NULL);
> > > > +
> > > > + if (likely(v->num_threads == v->max_threads))
> > > > + return __rcu_qsbr_check_all(v, t, wait);
> > > > + else
> > > > + return __rcu_qsbr_check_selective(v, t, wait);
> > > > +}
> > > > +
> > > > +/**
> > > > + * @warning
> > > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > > + *
> > > > + * Wait till the reader threads have entered quiescent state.
> > > > + *
> > > > + * This is implemented as a lock-free function. It is multi-thread safe.
> > > > + * This API can be thought of as a wrapper around rte_rcu_qsbr_start
> > > > + * and rte_rcu_qsbr_check APIs.
> > > > + *
> > > > + * If this API is called from multiple threads, only one of
> > > > + * those threads can be reporting the quiescent state status on a
> > > > + * given QS variable.
> > > > + *
> > > > + * @param v
> > > > + * QS variable
> > > > + * @param thread_id
> > > > + * Thread ID of the caller if it is registered to report quiescent state
> > > > + * on this QS variable (i.e. the calling thread is also part of the
> > > > + * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
> > > > + */
> > > > +static __rte_always_inline void __rte_experimental
> > > > +rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > > > +{
> > > > + uint64_t t;
> > > > +
> > > > + RTE_ASSERT(v != NULL);
> > > > +
> > > > + t = rte_rcu_qsbr_start(v);
> > > > +
> > > > + /* If the current thread has readside critical section,
> > > > + * update its quiescent state status.
> > > > + */
> > > > + if (thread_id != RTE_QSBR_THRID_INVALID)
> > > > + rte_rcu_qsbr_quiescent(v, thread_id);
> > > > +
> > > > + /* Wait for other readers to enter quiescent state */
> > > > + rte_rcu_qsbr_check(v, t, true);
> > >
> > > And you are presumably relying on 64-bit counters to avoid the need
> > > to execute the above code twice in succession. Which again works
> > > given current CPU clock rates combined with system and human
> lifespans.
> > > Otherwise, there are interesting race conditions that can happen, so
> > > don't try this with a 32-bit counter!!!
> >
> > Yes. I am relying on 64-bit counters to avoid having to spend cycles (and time).
> >
> > > (But think of the great^N grandchildren!!!)
> >
> > (It is an interesting thought. I wonder what would happen to all the
> > code we are writing today 😊)
>
> I suspect that most systems will be rebooted more than once per decade,
> so unless CPU core clock rates manage to go up another order of
> magnitude, we should be just fine.
>
> Famous last words! ;-)
>
> > > More seriously, a comment warning people not to make the counter be
> > > 32 bits is in order.
> > Agree, I will add it in the structure definition.
>
> Sounds good!
Done in V5
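A rough sketch of the arithmetic behind trusting a 64-bit counter (the one-token-per-nanosecond update rate below is a deliberately pessimistic assumption, far faster than any real grace-period rate, not a measurement):

```c
#include <assert.h>

/* seconds until an n-bit counter wraps, at a given update rate */
static double wrap_seconds(int bits, double updates_per_sec)
{
	double range = 1.0;
	for (int i = 0; i < bits; i++)
		range *= 2.0;              /* 2^bits, exact in a double for bits <= 64 */
	return range / updates_per_sec;
}
```

At 1e9 updates/second, a 32-bit counter wraps in about 4.3 seconds, while a 64-bit counter takes on the order of 585 years, which is why the wrap-around check can be omitted.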
<snip>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 1/3] rcu: " Honnappa Nagarahalli
2019-04-12 20:20 ` Honnappa Nagarahalli
@ 2019-04-12 22:06 ` Stephen Hemminger
2019-04-12 22:06 ` Stephen Hemminger
2019-04-12 22:24 ` Honnappa Nagarahalli
1 sibling, 2 replies; 260+ messages in thread
From: Stephen Hemminger @ 2019-04-12 22:06 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: konstantin.ananyev, paulmck, marko.kovacevic, dev, gavin.hu,
dharmik.thakkar, malvika.gupta
On Fri, 12 Apr 2019 15:20:37 -0500
Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> wrote:
> Add RCU library supporting quiescent state based memory reclamation method.
> This library helps identify the quiescent state of the reader threads so
> that the writers can free the memory associated with the lock less data
> structures.
>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Steve Capper <steve.capper@arm.com>
> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
After evaluating long-term API/ABI issues, I think you need to get rid
of almost all use of inline functions and visible structures. Yes, it might be marginally
slower, but you'll thank me the first time you have to fix something.
Even the log macro should be private.
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-12 22:06 ` Stephen Hemminger
2019-04-12 22:06 ` Stephen Hemminger
@ 2019-04-12 22:24 ` Honnappa Nagarahalli
2019-04-12 22:24 ` Honnappa Nagarahalli
2019-04-12 23:06 ` Stephen Hemminger
1 sibling, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-12 22:24 UTC (permalink / raw)
To: Stephen Hemminger
Cc: konstantin.ananyev, paulmck, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
>
> On Fri, 12 Apr 2019 15:20:37 -0500
> Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> wrote:
>
> > Add RCU library supporting quiescent state based memory reclamation
> method.
> > This library helps identify the quiescent state of the reader threads
> > so that the writers can free the memory associated with the lock less
> > data structures.
> >
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>
> After evaluating long term API/ABI issues, I think you need to get rid of almost
> all use of inline and visible structures. Yes it might be marginally slower, but
> you thank me the first time you have to fix something.
>
Agree, I was planning another version to address this (I have yet to take a look at your patch addressing the ABI).
The structure visibility definitely needs to be addressed.
For the inline functions, is the plan to convert all the inline functions in DPDK? If yes, I think we need to consider the performance difference. Maybe take the L3-fwd application, change all the inline functions in its path, and run a test?
> Even the log macro should be private.
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-12 22:24 ` Honnappa Nagarahalli
2019-04-12 22:24 ` Honnappa Nagarahalli
@ 2019-04-12 23:06 ` Stephen Hemminger
2019-04-12 23:06 ` Stephen Hemminger
2019-04-15 12:24 ` Ananyev, Konstantin
1 sibling, 2 replies; 260+ messages in thread
From: Stephen Hemminger @ 2019-04-12 23:06 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: konstantin.ananyev, paulmck, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
On Fri, 12 Apr 2019 22:24:45 +0000
Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
> >
> > On Fri, 12 Apr 2019 15:20:37 -0500
> > Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> wrote:
> >
> > > Add RCU library supporting quiescent state based memory reclamation
> > method.
> > > This library helps identify the quiescent state of the reader threads
> > > so that the writers can free the memory associated with the lock less
> > > data structures.
> > >
> > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> >
> > After evaluating long term API/ABI issues, I think you need to get rid of almost
> > all use of inline and visible structures. Yes it might be marginally slower, but
> > you thank me the first time you have to fix something.
> >
> Agree, I was planning on another version to address this (I am yet to take a look at your patch addressing the ABI).
> The structure visibility definitely needs to be addressed.
> For the inline functions, is the plan to convert all the inline functions in DPDK? If yes, I think we need to consider the performance difference. May be consider L3-fwd application, change all the inline functions in its path and run a test?
Every function that is not in the direct datapath should not be inline.
Exceptions are things like rx/tx burst, ring enqueue/dequeue, and packet alloc/free.
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-12 23:06 ` Stephen Hemminger
2019-04-12 23:06 ` Stephen Hemminger
@ 2019-04-15 12:24 ` Ananyev, Konstantin
2019-04-15 12:24 ` Ananyev, Konstantin
2019-04-15 15:38 ` Stephen Hemminger
1 sibling, 2 replies; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-04-15 12:24 UTC (permalink / raw)
To: Stephen Hemminger, Honnappa Nagarahalli
Cc: paulmck, Kovacevic, Marko, dev, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Saturday, April 13, 2019 12:06 AM
> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; paulmck@linux.ibm.com; Kovacevic, Marko <marko.kovacevic@intel.com>;
> dev@dpdk.org; Gavin Hu (Arm Technology China) <Gavin.Hu@arm.com>; Dharmik Thakkar <Dharmik.Thakkar@arm.com>; Malvika Gupta
> <Malvika.Gupta@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
>
> On Fri, 12 Apr 2019 22:24:45 +0000
> Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
>
> > >
> > > On Fri, 12 Apr 2019 15:20:37 -0500
> > > Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> wrote:
> > >
> > > > Add RCU library supporting quiescent state based memory reclamation
> > > method.
> > > > This library helps identify the quiescent state of the reader threads
> > > > so that the writers can free the memory associated with the lock less
> > > > data structures.
> > > >
> > > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > > > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > > > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > > > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > >
> > > After evaluating long term API/ABI issues, I think you need to get rid of almost
> > > all use of inline and visible structures. Yes it might be marginally slower, but
> > > you thank me the first time you have to fix something.
> > >
> > Agree, I was planning on another version to address this (I am yet to take a look at your patch addressing the ABI).
> > The structure visibility definitely needs to be addressed.
> > For the inline functions, is the plan to convert all the inline functions in DPDK? If yes, I think we need to consider the performance
> difference. May be consider L3-fwd application, change all the inline functions in its path and run a test?
>
> Every function that is not in the direct datapath should not be inline.
> Exceptions or things like rx/tx burst, ring enqueue/dequeue, and packet alloc/free
Plus synchronization routines: spin/rwlock/barrier, etc.
I think rcu should be one of such exceptions - it is just another synchronization mechanism after all
(just a bit more sophisticated).
Konstantin
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-15 12:24 ` Ananyev, Konstantin
2019-04-15 12:24 ` Ananyev, Konstantin
@ 2019-04-15 15:38 ` Stephen Hemminger
2019-04-15 15:38 ` Stephen Hemminger
2019-04-15 17:39 ` Ananyev, Konstantin
1 sibling, 2 replies; 260+ messages in thread
From: Stephen Hemminger @ 2019-04-15 15:38 UTC (permalink / raw)
To: Ananyev, Konstantin
Cc: Honnappa Nagarahalli, paulmck, Kovacevic, Marko, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
On Mon, 15 Apr 2019 12:24:47 +0000
"Ananyev, Konstantin" <konstantin.ananyev@intel.com> wrote:
> > -----Original Message-----
> > From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > Sent: Saturday, April 13, 2019 12:06 AM
> > To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; paulmck@linux.ibm.com; Kovacevic, Marko <marko.kovacevic@intel.com>;
> > dev@dpdk.org; Gavin Hu (Arm Technology China) <Gavin.Hu@arm.com>; Dharmik Thakkar <Dharmik.Thakkar@arm.com>; Malvika Gupta
> > <Malvika.Gupta@arm.com>; nd <nd@arm.com>
> > Subject: Re: [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
> >
> > On Fri, 12 Apr 2019 22:24:45 +0000
> > Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
> >
> > > >
> > > > On Fri, 12 Apr 2019 15:20:37 -0500
> > > > Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> wrote:
> > > >
> > > > > Add RCU library supporting quiescent state based memory reclamation
> > > > method.
> > > > > This library helps identify the quiescent state of the reader threads
> > > > > so that the writers can free the memory associated with the lock less
> > > > > data structures.
> > > > >
> > > > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > > > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > > > > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > > > > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > > > > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > >
> > > > After evaluating long term API/ABI issues, I think you need to get rid of almost
> > > > all use of inline and visible structures. Yes it might be marginally slower, but
> > > > you thank me the first time you have to fix something.
> > > >
> > > Agree, I was planning on another version to address this (I am yet to take a look at your patch addressing the ABI).
> > > The structure visibility definitely needs to be addressed.
> > > For the inline functions, is the plan to convert all the inline functions in DPDK? If yes, I think we need to consider the performance
> > difference. May be consider L3-fwd application, change all the inline functions in its path and run a test?
> >
> > Every function that is not in the direct datapath should not be inline.
> > Exceptions or things like rx/tx burst, ring enqueue/dequeue, and packet alloc/free
>
> Plus synchronization routines: spin/rwlock/barrier, etc.
> I think rcu should be one of such exceptions - it is just another synchronization mechanism after all
> (just a bit more sophisticated).
> Konstantin
If you look at the existing userspace RCU library, you will see that the only inlines
are rcu_read_lock, rcu_read_unlock, and rcu_dereference/rcu_assign_pointer.
The synchronization logic is all real functions.
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-12 20:21 ` Honnappa Nagarahalli
2019-04-12 20:21 ` Honnappa Nagarahalli
@ 2019-04-15 16:51 ` Ananyev, Konstantin
2019-04-15 16:51 ` Ananyev, Konstantin
2019-04-15 19:46 ` Honnappa Nagarahalli
1 sibling, 2 replies; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-04-15 16:51 UTC (permalink / raw)
To: Honnappa Nagarahalli, paulmck
Cc: stephen, Kovacevic, Marko, dev, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd, nd
> > > >
> > > > On Wed, Apr 10, 2019 at 06:20:04AM -0500, Honnappa Nagarahalli
> > wrote:
> > > > > Add RCU library supporting quiescent state based memory
> > > > > reclamation
> > > > method.
> > > > > This library helps identify the quiescent state of the reader
> > > > > threads so that the writers can free the memory associated with
> > > > > the lock less data structures.
> > > >
> > > > I don't see any sign of read-side markers (rcu_read_lock() and
> > > > rcu_read_unlock() in the Linux kernel, userspace RCU, etc.).
> > > >
> > > > Yes, strictly speaking, these are not needed for QSBR to operate,
> > > > but they
> > > These APIs would be empty for QSBR.
> > >
> > > > make it way easier to maintain and debug code using RCU. For
> > > > example, given the read-side markers, you can check for errors like
> > > > having a call to
> > > > rte_rcu_qsbr_quiescent() in the middle of a reader quite easily.
> > > > Without those read-side markers, life can be quite hard and you will
> > > > really hate yourself for failing to have provided them.
> > >
> > > Want to make sure I understood this, do you mean the application
> > would mark before and after accessing the shared data structure on the
> > reader side?
> > >
> > > rte_rcu_qsbr_lock()
> > > <begin access shared data structure>
> > > ...
> > > ...
> > > <end access shared data structure>
> > > rte_rcu_qsbr_unlock()
> >
> > Yes, that is the idea.
> >
> > > If someone is debugging this code, they have to make sure that there is
> > an unlock for every lock and there is no call to rte_rcu_qsbr_quiescent in
> > between.
> > > It sounds good to me. Obviously, they will not add any additional cycles
> > as well.
> > > Please let me know if my understanding is correct.
> >
> > Yes. And in some sort of debug mode, you could capture the counter at
> > rte_rcu_qsbr_lock() time and check it at rte_rcu_qsbr_unlock() time. If the
> > counter has advanced too far (more than one, if I am not too confused)
> > there is a bug. Also in debug mode, you could have rte_rcu_qsbr_lock()
> > increment a per-thread counter and rte_rcu_qsbr_unlock() decrement it.
> > If the counter is non-zero at a quiescent state, there is a bug.
> > And so on.
> >
> Added this in V5
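The debug-mode idea discussed above (a per-thread nesting counter bumped by lock/unlock, checked when reporting a quiescent state) could look roughly like the following sketch; the dbg_* names are hypothetical, not the library's API, and a single thread stands in for the per-thread counters:

```c
#include <assert.h>

/* per-thread in a real implementation; one thread is modeled here */
static int nest_cnt;

static void dbg_rcu_lock(void)
{
	nest_cnt++;                   /* entering a read-side critical section */
}

static void dbg_rcu_unlock(void)
{
	assert(nest_cnt > 0);         /* unlock without a matching lock is a bug */
	nest_cnt--;
}

/* returns 1 if reporting a quiescent state at this point would be legal */
static int dbg_rcu_quiescent_ok(void)
{
	return nest_cnt == 0;         /* non-zero: still inside a reader */
}
```

A debug build could make rte_rcu_qsbr_quiescent() assert on the non-zero case, catching the "quiescent state reported in the middle of a reader" error the discussion describes.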
>
> <snip>
>
> > > > > +
> > > > > +/* Get the memory size of QSBR variable */
> > > > > +size_t __rte_experimental
> > > > > +rte_rcu_qsbr_get_memsize(uint32_t max_threads)
> > > > > +{
> > > > > + size_t sz;
> > > > > +
> > > > > + if (max_threads == 0) {
> > > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > > + "%s(): Invalid max_threads %u\n",
> > > > > + __func__, max_threads);
> > > > > + rte_errno = EINVAL;
> > > > > +
> > > > > + return 1;
> > > > > + }
> > > > > +
> > > > > + sz = sizeof(struct rte_rcu_qsbr);
> > > > > +
> > > > > + /* Add the size of quiescent state counter array */
> > > > > + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> > > > > +
> > > > > + /* Add the size of the registered thread ID bitmap array */
> > > > > + sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
> > > > > +
> > > > > + return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
> > > >
> > > > Given that you align here, should you also align in the earlier
> > > > steps in the computation of sz?
> > >
> > > Agree. I will remove the align here and keep the earlier one as the intent
> > is to align the thread ID array.
> >
> > Sounds good!
> Added this in V5
>
> >
> > > > > +}
> > > > > +
> > > > > +/* Initialize a quiescent state variable */
> > > > > +int __rte_experimental
> > > > > +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
> > > > > +{
> > > > > + size_t sz;
> > > > > +
> > > > > + if (v == NULL) {
> > > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > > + "%s(): Invalid input parameter\n", __func__);
> > > > > + rte_errno = EINVAL;
> > > > > +
> > > > > + return 1;
> > > > > + }
> > > > > +
> > > > > + sz = rte_rcu_qsbr_get_memsize(max_threads);
> > > > > + if (sz == 1)
> > > > > + return 1;
> > > > > +
> > > > > + /* Set all the threads to offline */
> > > > > + memset(v, 0, sz);
> > > >
> > > > We calculate sz here, but it looks like the caller must also
> > > > calculate it in order to correctly allocate the memory referenced by
> > > > the "v" argument to this function, with bad things happening if the
> > > > two calculations get different results. Should "v" instead be
> > > > allocated within this function to avoid this sort of problem?
> > >
> > > Earlier version allocated the memory within this library. However, it was
> > decided to go with the current implementation as it provides flexibility for
> > the application to manage the memory as it sees fit. For ex: it could
> > allocate this as part of another structure in a single allocation. This also
> > falls in line with the approach taken in other libraries.
> >
> > So the allocator APIs vary too much to allow a pointer to the desired
> > allocator function to be passed in? Or do you also want to allow static
> > allocation? If the latter, would a DEFINE_RTE_RCU_QSBR() be of use?
> >
> This is done to allow allocation of memory for the QS variable as part of another, bigger data structure. This helps avoid fragmenting
> the memory. For ex:
>
> struct xyz {
> rte_ring *ring;
> rte_rcu_qsbr *v;
> abc *t;
> };
> struct xyz c;
>
> Memory for the above structure can be allocated in one chunk by calculating the size required.
>
> In some use cases static allocation might be enough as 'max_threads' might be a compile time constant. I am not sure how to support
> both dynamic and static 'max_threads'.
Same thought here - it would be good to have a static initializer (DEFINE_RTE_RCU_QSBR),
but that means a new compile-time limit ('max_threads') - something we try to avoid.
>
> > > > > + v->max_threads = max_threads;
> > > > > + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> > > > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> > > > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> > > > > + v->token = RTE_QSBR_CNT_INIT;
> > > > > +
> > > > > + return 0;
> > > > > +}
> > > > > +
> > > > > +/* Register a reader thread to report its quiescent state
> > > > > + * on a QS variable.
> > > > > + */
> > > > > +int __rte_experimental
> > > > > +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > > > > +{
> > > > > + unsigned int i, id, success;
> > > > > + uint64_t old_bmap, new_bmap;
> > > > > +
> > > > > + if (v == NULL || thread_id >= v->max_threads) {
> > > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > > + "%s(): Invalid input parameter\n", __func__);
> > > > > + rte_errno = EINVAL;
> > > > > +
> > > > > + return 1;
> > > > > + }
> > > > > +
> > > > > + id = thread_id & RTE_QSBR_THRID_MASK;
> > > > > + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > > > > +
> > > > > + /* Make sure that the counter for registered threads does not
> > > > > + * go out of sync. Hence, additional checks are required.
> > > > > + */
> > > > > + /* Check if the thread is already registered */
> > > > > + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > > > + __ATOMIC_RELAXED);
> > > > > + if (old_bmap & 1UL << id)
> > > > > + return 0;
> > > > > +
> > > > > + do {
> > > > > + new_bmap = old_bmap | (1UL << id);
> > > > > + success = __atomic_compare_exchange(
> > > > > + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > > > + &old_bmap, &new_bmap, 0,
> > > > > + __ATOMIC_RELEASE, __ATOMIC_RELAXED);
> > > > > +
> > > > > + if (success)
> > > > > + __atomic_fetch_add(&v->num_threads,
> > > > > + 1, __ATOMIC_RELAXED);
> > > > > + else if (old_bmap & (1UL << id))
> > > > > + /* Someone else registered this thread.
> > > > > + * Counter should not be incremented.
> > > > > + */
> > > > > + return 0;
> > > > > + } while (success == 0);
> > > >
> > > > This would be simpler if threads were required to register themselves.
> > > > Maybe you have use cases requiring registration of other threads,
> > > > but this capability is adding significant complexity, so it might be
> > > > worth some thought.
> > > >
> > > It was simple earlier (__atomic_fetch_or). The complexity is added as
> > 'num_threads' should not go out of sync.
> >
> > Hmmm...
> >
> > So threads are allowed to register other threads? Or is there some other
> > reason that concurrent registration is required?
> >
> Yes, control plane threads can register the fast path threads, though I am not sure how useful that is; I did not want to add the restriction. I
> expect that reader threads will register themselves. The reader threads require concurrent registration as they will all be running in parallel.
> If the requirement of keeping track of the number of currently registered threads goes away, then this function will be simple.
>
> <snip>
>
> > > > > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > > > > b/lib/librte_rcu/rte_rcu_qsbr.h new file mode 100644 index
> > > > > 000000000..ff696aeab
> > > > > --- /dev/null
> > > > > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > > > > @@ -0,0 +1,554 @@
> > > > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > > > + * Copyright (c) 2018 Arm Limited
> > > > > + */
> > > > > +
> > > > > +#ifndef _RTE_RCU_QSBR_H_
> > > > > +#define _RTE_RCU_QSBR_H_
> > > > > +
> > > > > +/**
> > > > > + * @file
> > > > > + * RTE Quiescent State Based Reclamation (QSBR)
> > > > > + *
> > > > > + * Quiescent State (QS) is any point in the thread execution
> > > > > + * where the thread does not hold a reference to a data structure
> > > > > + * in shared memory. While using lock-less data structures, the
> > > > > +writer
> > > > > + * can safely free memory once all the reader threads have
> > > > > +entered
> > > > > + * quiescent state.
> > > > > + *
> > > > > + * This library provides the ability for the readers to report
> > > > > +quiescent
> > > > > + * state and for the writers to identify when all the readers
> > > > > +have
> > > > > + * entered quiescent state.
> > > > > + */
> > > > > +
> > > > > +#ifdef __cplusplus
> > > > > +extern "C" {
> > > > > +#endif
> > > > > +
> > > > > +#include <stdio.h>
> > > > > +#include <stdint.h>
> > > > > +#include <errno.h>
> > > > > +#include <rte_common.h>
> > > > > +#include <rte_memory.h>
> > > > > +#include <rte_lcore.h>
> > > > > +#include <rte_debug.h>
> > > > > +#include <rte_atomic.h>
> > > > > +
> > > > > +extern int rcu_log_type;
> > > > > +
> > > > > +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
> > > > > +#define RCU_DP_LOG(level, fmt, args...) \
> > > > > + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> > > > > + "%s(): " fmt "\n", __func__, ## args)
> > > > > +#else
> > > > > +#define RCU_DP_LOG(level, fmt, args...)
> > > > > +#endif
> > > > > +
> > > > > +/* Registered thread IDs are stored as a bitmap of 64b element array.
> > > > > + * Given thread id needs to be converted to index into the array and
> > > > > + * the id within the array element.
> > > > > + */
> > > > > +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> > > > > +#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
> > > > > + RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
> > > > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
> > > > > +#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
> > > > > + ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
> > > > > +#define RTE_QSBR_THRID_INDEX_SHIFT 6
> > > > > +#define RTE_QSBR_THRID_MASK 0x3f
> > > > > +#define RTE_QSBR_THRID_INVALID 0xffffffff
> > > > > +
> > > > > +/* Worker thread counter */
> > > > > +struct rte_rcu_qsbr_cnt {
> > > > > + uint64_t cnt;
> > > > > + /**< Quiescent state counter. Value 0 indicates the thread is offline */
> > > > > +} __rte_cache_aligned;
> > > > > +
> > > > > +#define RTE_QSBR_CNT_THR_OFFLINE 0
> > > > > +#define RTE_QSBR_CNT_INIT 1
> > > > > +
> > > > > +/* RTE Quiescent State variable structure.
> > > > > + * This structure has two elements that vary in size based on the
> > > > > + * 'max_threads' parameter.
> > > > > + * 1) Quiescent state counter array
> > > > > + * 2) Register thread ID array
> > > > > + */
> > > > > +struct rte_rcu_qsbr {
> > > > > + uint64_t token __rte_cache_aligned;
> > > > > + /**< Counter to allow for multiple concurrent quiescent state
> > > > > +queries */
> > > > > +
> > > > > + uint32_t num_elems __rte_cache_aligned;
> > > > > + /**< Number of elements in the thread ID array */
> > > > > + uint32_t num_threads;
> > > > > + /**< Number of threads currently using this QS variable */
> > > > > + uint32_t max_threads;
> > > > > + /**< Maximum number of threads using this QS variable */
> > > > > +
> > > > > + struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
> > > > > + /**< Quiescent state counter array of 'max_threads' elements */
> > > > > +
> > > > > + /**< Registered thread IDs are stored in a bitmap array,
> > > > > + * after the quiescent state counter array.
> > > > > + */
> > > > > +} __rte_cache_aligned;
> > > > > +
>
> <snip>
>
> > > > > +
> > > > > +/* Check the quiescent state counter for registered threads only,
> > > > > +assuming
> > > > > + * that not all threads have registered.
> > > > > + */
> > > > > +static __rte_always_inline int
> > > > > +__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> > > > > +{
> > > > > + uint32_t i, j, id;
> > > > > + uint64_t bmap;
> > > > > + uint64_t c;
> > > > > + uint64_t *reg_thread_id;
> > > > > +
> > > > > + for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
> > > > > + i < v->num_elems;
> > > > > + i++, reg_thread_id++) {
> > > > > + /* Load the current registered thread bit map before
> > > > > + * loading the reader thread quiescent state counters.
> > > > > + */
> > > > > + bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
> > > > > + id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> > > > > +
> > > > > + while (bmap) {
> > > > > + j = __builtin_ctzl(bmap);
> > > > > + RCU_DP_LOG(DEBUG,
> > > > > + "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
> > > > > + __func__, t, wait, bmap, id + j);
> > > > > + c = __atomic_load_n(&v->qsbr_cnt[id + j].cnt,
> > > > > + __ATOMIC_ACQUIRE);
> > > > > + RCU_DP_LOG(DEBUG,
> > > > > + "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> > > > > + __func__, t, wait, c, id + j);
> > > > > + /* Counter is not checked for wrap-around condition
> > > > > + * as it is a 64b counter.
> > > > > + */
> > > > > + if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
> > > >
> > > > This assumes that a 64-bit counter won't overflow, which is close
> > > > enough to true given current CPU clock frequencies. ;-)
> > > >
> > > > > + /* This thread is not in quiescent state */
> > > > > + if (!wait)
> > > > > + return 0;
> > > > > +
> > > > > + rte_pause();
> > > > > + /* This thread might have unregistered.
> > > > > + * Re-read the bitmap.
> > > > > + */
> > > > > + bmap = __atomic_load_n(reg_thread_id,
> > > > > + __ATOMIC_ACQUIRE);
> > > > > +
> > > > > + continue;
> > > > > + }
> > > > > +
> > > > > + bmap &= ~(1UL << j);
> > > > > + }
> > > > > + }
> > > > > +
> > > > > + return 1;
> > > > > +}
> > > > > +
> > > > > +/* Check the quiescent state counter for all threads, assuming
> > > > > +that
> > > > > + * all the threads have registered.
> > > > > + */
> > > > > +static __rte_always_inline int
> > > > > +__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> > > >
> > > > Does checking the bitmap really take long enough to make this
> > > > worthwhile as a separate function? I would think that the
> > > > bitmap-checking time would be lost in the noise of cache misses from
> > the ->cnt loads.
> > >
> > > It avoids accessing one cache line. I think this is where the savings are
> > (maybe in theory). This is the most probable use case.
> > > On the other hand, __rcu_qsbr_check_selective() will result in savings
> > (depending on how many threads are currently registered) by avoiding
> > accessing unwanted counters.
> >
> > Do you really expect to be calling this function on any kind of fastpath?
>
> Yes. For some of the libraries (rte_hash), the writer is on the fast path.
>
> >
> > > > Sure, if you invoke __rcu_qsbr_check_selective() in a tight loop in
> > > > the absence of readers, you might see __rcu_qsbr_check_all() being a
> > > > bit faster. But is that really what DPDK does?
> > > I see improvements in the synthetic test case (similar to the one you
> > have described, around 27%). However, in the more practical test cases I
> > do not see any difference.
> >
> > If the performance improvement only occurs in a synthetic test case, does
> > it really make sense to optimize for it?
> I had to fix a few issues in the performance test cases and added more tests to do the comparison. These changes are in v5.
> There are 4 performance tests involving this API.
> 1) 1 Writer, 'N' readers
> Writer: qsbr_start, qsbr_check(wait = true)
> Readers: qsbr_quiescent
> 2) 'N' writers
> Writers: qsbr_start, qsbr_check(wait == false)
> 3) 1 Writer, 'N' readers (this test uses the lock-free rte_hash data structure)
> Writer: hash_del, qsbr_start, qsbr_check(wait = true), validate that the reader was able to complete its work successfully
> Readers: thread_online, hash_lookup, access the pointer - do some work on it, qsbr_quiescent, thread_offline
> 4) Same as test 3) but qsbr_check (wait == false)
>
> There are 2 sets of these tests.
> a) QS variable is created with number of threads same as number of readers - this will exercise __rcu_qsbr_check_all
> b) QS variable is created with 128 threads, number of registered threads is same as in a) - this will exercise __rcu_qsbr_check_selective
>
> Following are the results on x86 (E5-2660 v4 @ 2.00GHz) comparing from a) to b) (on x86 in my setup, the results are not very stable
> between runs)
> 1) 25%
> 2) -3%
> 3) -0.4%
> 4) 1.38%
>
> Following are the results on an Arm system comparing from a) to b) (the results are not very stable between runs)
> 1) -3.45%
> 2) 0%
> 3) -0.03%
> 4) -0.04%
>
> Konstantin, is it possible to run the tests on your setup and look at the results?
I did run V5 on my box (SKX 2.1 GHz) with 17 lcores (1 physical core per thread).
Didn't notice any significant fluctuations between runs, output below.
> rcu_qsbr_perf_autotest
Number of cores provided = 17
Perf test with all reader threads registered
--------------------------------------------
Perf Test: 16 Readers/1 Writer('wait' in qsbr_check == true)
Total RCU updates = 65707232899
Cycles per 1000 updates: 18482
Total RCU checks = 20000000
Cycles per 1000 checks: 3794991
Perf Test: 17 Readers
Total RCU updates = 1700000000
Cycles per 1000 updates: 2128
Perf test: 17 Writers ('wait' in qsbr_check == false)
Total RCU checks = 340000000
Cycles per 1000 checks: 10030
Perf test: 1 writer, 17 readers, 1 QSBR variable, 1 QSBR Query, Blocking QSBR Check
Following numbers include calls to rte_hash functions
Cycles per 1 update(online/update/offline): 1984696
Cycles per 1 check(start, check): 2619002
Perf test: 1 writer, 17 readers, 1 QSBR variable, 1 QSBR Query, Non-Blocking QSBR check
Following numbers include calls to rte_hash functions
Cycles per 1 update(online/update/offline): 2028030
Cycles per 1 check(start, check): 2876667
Perf test with some of reader threads registered
------------------------------------------------
Perf Test: 16 Readers/1 Writer('wait' in qsbr_check == true)
Total RCU updates = 68850073055
Cycles per 1000 updates: 25490
Total RCU checks = 20000000
Cycles per 1000 checks: 5484403
Perf Test: 17 Readers
Total RCU updates = 1700000000
Cycles per 1000 updates: 2127
Perf test: 17 Writers ('wait' in qsbr_check == false)
Total RCU checks = 340000000
Cycles per 1000 checks: 10034
Perf test: 1 writer, 17 readers, 1 QSBR variable, 1 QSBR Query, Blocking QSBR Check
Following numbers include calls to rte_hash functions
Cycles per 1 update(online/update/offline): 3604489
Cycles per 1 check(start, check): 7077372
Perf test: 1 writer, 17 readers, 1 QSBR variable, 1 QSBR Query, Non-Blocking QSBR check
Following numbers include calls to rte_hash functions
Cycles per 1 update(online/update/offline): 3936831
Cycles per 1 check(start, check): 7262738
Test OK
Konstantin
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-15 16:51 ` Ananyev, Konstantin
@ 2019-04-15 16:51 ` Ananyev, Konstantin
2019-04-15 19:46 ` Honnappa Nagarahalli
1 sibling, 0 replies; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-04-15 16:51 UTC (permalink / raw)
To: Honnappa Nagarahalli, paulmck
Cc: stephen, Kovacevic, Marko, dev, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd, nd
> > > >
> > > > On Wed, Apr 10, 2019 at 06:20:04AM -0500, Honnappa Nagarahalli
> > wrote:
> > > > > Add RCU library supporting quiescent state based memory
> > > > > reclamation
> > > > method.
> > > > > This library helps identify the quiescent state of the reader
> > > > > threads so that the writers can free the memory associated with
> > > > > the lock less data structures.
> > > >
> > > > I don't see any sign of read-side markers (rcu_read_lock() and
> > > > rcu_read_unlock() in the Linux kernel, userspace RCU, etc.).
> > > >
> > > > Yes, strictly speaking, these are not needed for QSBR to operate,
> > > > but they
> > > These APIs would be empty for QSBR.
> > >
> > > > make it way easier to maintain and debug code using RCU. For
> > > > example, given the read-side markers, you can check for errors like
> > > > having a call to
> > > > rte_rcu_qsbr_quiescent() in the middle of a reader quite easily.
> > > > Without those read-side markers, life can be quite hard and you will
> > > > really hate yourself for failing to have provided them.
> > >
> > > Want to make sure I understood this, do you mean the application
> > would mark before and after accessing the shared data structure on the
> > reader side?
> > >
> > > rte_rcu_qsbr_lock()
> > > <begin access shared data structure>
> > > ...
> > > ...
> > > <end access shared data structure>
> > > rte_rcu_qsbr_unlock()
> >
> > Yes, that is the idea.
> >
> > > If someone is debugging this code, they have to make sure that there is
> > an unlock for every lock and there is no call to rte_rcu_qsbr_quiescent in
> > between.
> > > It sounds good to me. Obviously, they will not add any additional cycles
> > as well.
> > > Please let me know if my understanding is correct.
> >
> > Yes. And in some sort of debug mode, you could capture the counter at
> > rte_rcu_qsbr_lock() time and check it at rte_rcu_qsbr_unlock() time. If the
> > counter has advanced too far (more than one, if I am not too confused)
> > there is a bug. Also in debug mode, you could have rte_rcu_qsbr_lock()
> > increment a per-thread counter and rte_rcu_qsbr_unlock() decrement it.
> > If the counter is non-zero at a quiescent state, there is a bug.
> > And so on.
> >
> Added this in V5
>
> <snip>
>
> > > > > +
> > > > > +/* Get the memory size of QSBR variable */ size_t
> > > > > +__rte_experimental rte_rcu_qsbr_get_memsize(uint32_t
> > max_threads) {
> > > > > + size_t sz;
> > > > > +
> > > > > + if (max_threads == 0) {
> > > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > > + "%s(): Invalid max_threads %u\n",
> > > > > + __func__, max_threads);
> > > > > + rte_errno = EINVAL;
> > > > > +
> > > > > + return 1;
> > > > > + }
> > > > > +
> > > > > + sz = sizeof(struct rte_rcu_qsbr);
> > > > > +
> > > > > + /* Add the size of quiescent state counter array */
> > > > > + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> > > > > +
> > > > > + /* Add the size of the registered thread ID bitmap array */
> > > > > + sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
> > > > > +
> > > > > + return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
> > > >
> > > > Given that you align here, should you also align in the earlier
> > > > steps in the computation of sz?
> > >
> > > Agree. I will remove the align here and keep the earlier one as the intent
> > is to align the thread ID array.
> >
> > Sounds good!
> Added this in V5
>
> >
> > > > > +}
> > > > > +
> > > > > +/* Initialize a quiescent state variable */ int
> > > > > +__rte_experimental rte_rcu_qsbr_init(struct rte_rcu_qsbr *v,
> > uint32_t max_threads) {
> > > > > + size_t sz;
> > > > > +
> > > > > + if (v == NULL) {
> > > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > > + "%s(): Invalid input parameter\n", __func__);
> > > > > + rte_errno = EINVAL;
> > > > > +
> > > > > + return 1;
> > > > > + }
> > > > > +
> > > > > + sz = rte_rcu_qsbr_get_memsize(max_threads);
> > > > > + if (sz == 1)
> > > > > + return 1;
> > > > > +
> > > > > + /* Set all the threads to offline */
> > > > > + memset(v, 0, sz);
> > > >
> > > > We calculate sz here, but it looks like the caller must also
> > > > calculate it in order to correctly allocate the memory referenced by
> > > > the "v" argument to this function, with bad things happening if the
> > > > two calculations get different results. Should "v" instead be
> > > > allocated within this function to avoid this sort of problem?
> > >
> > > Earlier version allocated the memory with-in this library. However, it was
> > decided to go with the current implementation as it provides flexibility for
> > the application to manage the memory as it sees fit. For ex: it could
> > allocate this as part of another structure in a single allocation. This also
> > falls inline with similar approach taken in other libraries.
> >
> > So the allocator APIs vary too much to allow a pointer to the desired
> > allocator function to be passed in? Or do you also want to allow static
> > allocation? If the latter, would a DEFINE_RTE_RCU_QSBR() be of use?
> >
> This is done to allow for allocation of memory for QS variable as part of a another bigger data structure. This will help in not fragmenting
> the memory. For ex:
>
> struct xyz {
> rte_ring *ring;
> rte_rcu_qsbr *v;
> abc *t;
> };
> struct xyz c;
>
> Memory for the above structure can be allocated in one chunk by calculating the size required.
>
> In some use cases static allocation might be enough as 'max_threads' might be a compile time constant. I am not sure on how to support
> both dynamic and static 'max_threads'.
Same thought here- would be good to have a static initializer (DEFINE_RTE_RCU_QSBR),
but that means new compile time limit ('max_threads') - thing that we try to avoid.
>
> > > > > + v->max_threads = max_threads;
> > > > > + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> > > > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> > > > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> > > > > + v->token = RTE_QSBR_CNT_INIT;
> > > > > +
> > > > > + return 0;
> > > > > +}
> > > > > +
> > > > > +/* Register a reader thread to report its quiescent state
> > > > > + * on a QS variable.
> > > > > + */
> > > > > +int __rte_experimental
> > > > > +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int
> > > > > +thread_id) {
> > > > > + unsigned int i, id, success;
> > > > > + uint64_t old_bmap, new_bmap;
> > > > > +
> > > > > + if (v == NULL || thread_id >= v->max_threads) {
> > > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > > + "%s(): Invalid input parameter\n", __func__);
> > > > > + rte_errno = EINVAL;
> > > > > +
> > > > > + return 1;
> > > > > + }
> > > > > +
> > > > > + id = thread_id & RTE_QSBR_THRID_MASK;
> > > > > + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > > > > +
> > > > > + /* Make sure that the counter for registered threads does not
> > > > > + * go out of sync. Hence, additional checks are required.
> > > > > + */
> > > > > + /* Check if the thread is already registered */
> > > > > + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > > > + __ATOMIC_RELAXED);
> > > > > + if (old_bmap & 1UL << id)
> > > > > + return 0;
> > > > > +
> > > > > + do {
> > > > > + new_bmap = old_bmap | (1UL << id);
> > > > > + success = __atomic_compare_exchange(
> > > > > + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > > > + &old_bmap, &new_bmap, 0,
> > > > > + __ATOMIC_RELEASE,
> > > > __ATOMIC_RELAXED);
> > > > > +
> > > > > + if (success)
> > > > > + __atomic_fetch_add(&v->num_threads,
> > > > > + 1, __ATOMIC_RELAXED);
> > > > > + else if (old_bmap & (1UL << id))
> > > > > + /* Someone else registered this thread.
> > > > > + * Counter should not be incremented.
> > > > > + */
> > > > > + return 0;
> > > > > + } while (success == 0);
> > > >
> > > > This would be simpler if threads were required to register themselves.
> > > > Maybe you have use cases requiring registration of other threads,
> > > > but this capability is adding significant complexity, so it might be
> > > > worth some thought.
> > > >
> > > It was simple earlier (__atomic_fetch_or). The complexity is added as
> > 'num_threads' should not go out of sync.
> >
> > Hmmm...
> >
> > So threads are allowed to register other threads? Or is there some other
> > reason that concurrent registration is required?
> >
> Yes, control plane threads can register the fast path threads. Though, I am not sure how useful it is. I did not want to add the restriction. I
> expect that reader threads will register themselves. The reader threads require concurrent registration as they all will be running in parallel.
> If the requirement of keeping track of the number of threads registered currently goes away, then this function will be simple.
>
> <snip>
>
> > > > > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > > > > b/lib/librte_rcu/rte_rcu_qsbr.h new file mode 100644 index
> > > > > 000000000..ff696aeab
> > > > > --- /dev/null
> > > > > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > > > > @@ -0,0 +1,554 @@
> > > > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > > > + * Copyright (c) 2018 Arm Limited */
> > > > > +
> > > > > +#ifndef _RTE_RCU_QSBR_H_
> > > > > +#define _RTE_RCU_QSBR_H_
> > > > > +
> > > > > +/**
> > > > > + * @file
> > > > > + * RTE Quiescent State Based Reclamation (QSBR)
> > > > > + *
> > > > > + * Quiescent State (QS) is any point in the thread execution
> > > > > + * where the thread does not hold a reference to a data structure
> > > > > + * in shared memory. While using lock-less data structures, the
> > > > > +writer
> > > > > + * can safely free memory once all the reader threads have
> > > > > +entered
> > > > > + * quiescent state.
> > > > > + *
> > > > > + * This library provides the ability for the readers to report
> > > > > +quiescent
> > > > > + * state and for the writers to identify when all the readers
> > > > > +have
> > > > > + * entered quiescent state.
> > > > > + */
> > > > > +
> > > > > +#ifdef __cplusplus
> > > > > +extern "C" {
> > > > > +#endif
> > > > > +
> > > > > +#include <stdio.h>
> > > > > +#include <stdint.h>
> > > > > +#include <errno.h>
> > > > > +#include <rte_common.h>
> > > > > +#include <rte_memory.h>
> > > > > +#include <rte_lcore.h>
> > > > > +#include <rte_debug.h>
> > > > > +#include <rte_atomic.h>
> > > > > +
> > > > > +extern int rcu_log_type;
> > > > > +
> > > > > +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG #define
> > RCU_DP_LOG(level,
> > > > fmt,
> > > > > +args...) \
> > > > > + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> > > > > + "%s(): " fmt "\n", __func__, ## args) #else #define
> > > > > +RCU_DP_LOG(level, fmt, args...) #endif
> > > > > +
> > > > > +/* Registered thread IDs are stored as a bitmap of 64b element
> > array.
> > > > > + * Given thread id needs to be converted to index into the array
> > > > > +and
> > > > > + * the id within the array element.
> > > > > + */
> > > > > +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> > > > #define
> > > > > +RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
> > > > > + RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
> > > > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3,
> > > > RTE_CACHE_LINE_SIZE) #define
> > > > > +RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
> > > > > + ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
> > > > > +#define RTE_QSBR_THRID_INDEX_SHIFT 6 #define
> > RTE_QSBR_THRID_MASK
> > > > > +0x3f
> > > > #define
> > > > > +RTE_QSBR_THRID_INVALID 0xffffffff
> > > > > +
> > > > > +/* Worker thread counter */
> > > > > +struct rte_rcu_qsbr_cnt {
> > > > > + uint64_t cnt;
> > > > > + /**< Quiescent state counter. Value 0 indicates the thread is
> > > > > +offline */ } __rte_cache_aligned;
> > > > > +
> > > > > +#define RTE_QSBR_CNT_THR_OFFLINE 0 #define
> > RTE_QSBR_CNT_INIT 1
> > > > > +
> > > > > +/* RTE Quiescent State variable structure.
> > > > > + * This structure has two elements that vary in size based on the
> > > > > + * 'max_threads' parameter.
> > > > > + * 1) Quiescent state counter array
> > > > > + * 2) Register thread ID array
> > > > > + */
> > > > > +struct rte_rcu_qsbr {
> > > > > + uint64_t token __rte_cache_aligned;
> > > > > + /**< Counter to allow for multiple concurrent quiescent state
> > > > > +queries */
> > > > > +
> > > > > + uint32_t num_elems __rte_cache_aligned;
> > > > > + /**< Number of elements in the thread ID array */
> > > > > + uint32_t num_threads;
> > > > > + /**< Number of threads currently using this QS variable */
> > > > > + uint32_t max_threads;
> > > > > + /**< Maximum number of threads using this QS variable */
> > > > > +
> > > > > + struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
> > > > > + /**< Quiescent state counter array of 'max_threads' elements */
> > > > > +
> > > > > + /**< Registered thread IDs are stored in a bitmap array,
> > > > > + * after the quiescent state counter array.
> > > > > + */
> > > > > +} __rte_cache_aligned;
> > > > > +
>
> <snip>
>
> > > > > +
> > > > > +/* Check the quiescent state counter for registered threads only,
> > > > > +assuming
> > > > > + * that not all threads have registered.
> > > > > + */
> > > > > +static __rte_always_inline int
> > > > > +__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t,
> > > > > +bool
> > > > > +wait) {
> > > > > + uint32_t i, j, id;
> > > > > + uint64_t bmap;
> > > > > + uint64_t c;
> > > > > + uint64_t *reg_thread_id;
> > > > > +
> > > > > + for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
> > > > > + i < v->num_elems;
> > > > > + i++, reg_thread_id++) {
> > > > > + /* Load the current registered thread bit map before
> > > > > + * loading the reader thread quiescent state counters.
> > > > > + */
> > > > > + bmap = __atomic_load_n(reg_thread_id,
> > > > __ATOMIC_ACQUIRE);
> > > > > + id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> > > > > +
> > > > > + while (bmap) {
> > > > > + j = __builtin_ctzl(bmap);
> > > > > + RCU_DP_LOG(DEBUG,
> > > > > + "%s: check: token = %lu, wait = %d, Bit Map
> > > > = 0x%lx, Thread ID = %d",
> > > > > + __func__, t, wait, bmap, id + j);
> > > > > + c = __atomic_load_n(
> > > > > + &v->qsbr_cnt[id + j].cnt,
> > > > > + __ATOMIC_ACQUIRE);
> > > > > + RCU_DP_LOG(DEBUG,
> > > > > + "%s: status: token = %lu, wait = %d, Thread
> > > > QS cnt = %lu, Thread ID = %d",
> > > > > + __func__, t, wait, c, id+j);
> > > > > + /* Counter is not checked for wrap-around
> > > > condition
> > > > > + * as it is a 64b counter.
> > > > > + */
> > > > > + if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c
> > > > < t)) {
> > > >
> > > > This assumes that a 64-bit counter won't overflow, which is close
> > > > enough to true given current CPU clock frequencies. ;-)
> > > >
> > > > > + /* This thread is not in quiescent state */
> > > > > + if (!wait)
> > > > > + return 0;
> > > > > +
> > > > > + rte_pause();
> > > > > + /* This thread might have unregistered.
> > > > > + * Re-read the bitmap.
> > > > > + */
> > > > > + bmap = __atomic_load_n(reg_thread_id,
> > > > > + __ATOMIC_ACQUIRE);
> > > > > +
> > > > > + continue;
> > > > > + }
> > > > > +
> > > > > + bmap &= ~(1UL << j);
> > > > > + }
> > > > > + }
> > > > > +
> > > > > + return 1;
> > > > > +}
> > > > > +
> > > > > +/* Check the quiescent state counter for all threads, assuming
> > > > > +that
> > > > > + * all the threads have registered.
> > > > > + */
> > > > > +static __rte_always_inline int
> > > > > +__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> > > >
> > > > Does checking the bitmap really take long enough to make this
> > > > worthwhile as a separate function? I would think that the
> > > > bitmap-checking time would be lost in the noise of cache misses from
> > > > the ->cnt loads.
> > >
> > > It avoids accessing one cache line. I think this is where the savings are
> > > (maybe in theory). This is the most probable use case.
> > > On the other hand, __rcu_qsbr_check_selective() will result in savings
> > > (depending on how many threads are currently registered) by avoiding
> > > accessing unwanted counters.
> >
> > Do you really expect to be calling this function on any kind of fastpath?
>
> Yes. For some of the libraries (rte_hash), the writer is on the fast path.
>
> >
> > > > Sure, if you invoke __rcu_qsbr_check_selective() in a tight loop in
> > > > the absence of readers, you might see __rcu_qsbr_check_all() being a
> > > > bit faster. But is that really what DPDK does?
> > > I see improvements in the synthetic test case (similar to the one you
> > > have described, around 27%). However, in the more practical test cases I
> > > do not see any difference.
> >
> > If the performance improvement only occurs in a synthetic test case, does
> > it really make sense to optimize for it?
> I had to fix a few issues in the performance test cases and added more to do the comparison. These changes are in v5.
> There are 4 performance tests involving this API.
> 1) 1 Writer, 'N' readers
> Writer: qsbr_start, qsbr_check(wait = true)
> Readers: qsbr_quiescent
> 2) 'N' writers
> Writers: qsbr_start, qsbr_check(wait == false)
> 3) 1 Writer, 'N' readers (this test uses the lock-free rte_hash data structure)
> Writer: hash_del, qsbr_start, qsbr_check(wait = true), validate that the reader was able to complete its work successfully
> Readers: thread_online, hash_lookup, access the pointer - do some work on it, qsbr_quiescent, thread_offline
> 4) Same as test 3) but qsbr_check (wait == false)
>
> There are 2 sets of these tests.
> a) QS variable is created with number of threads same as number of readers - this will exercise __rcu_qsbr_check_all
> b) QS variable is created with 128 threads, number of registered threads is same as in a) - this will exercise __rcu_qsbr_check_selective
>
> Following are the results on x86 (E5-2660 v4 @ 2.00GHz) comparing a) to b) (on x86 in my setup, the results are not very stable
> between runs)
> 1) 25%
> 2) -3%
> 3) -0.4%
> 4) 1.38%
>
> Following are the results on an Arm system comparing a) to b) (results are not very stable between runs)
> 1) -3.45%
> 2) 0%
> 3) -0.03%
> 4) -0.04%
>
> Konstantin, is it possible to run the tests on your setup and look at the results?
I did run V5 on my box (SKX 2.1 GHz) with 17 lcores (1 physical core per thread).
Didn't notice any significant fluctuations between runs, output below.
>rcu_qsbr_perf_autotest
Number of cores provided = 17
Perf test with all reader threads registered
--------------------------------------------
Perf Test: 16 Readers/1 Writer('wait' in qsbr_check == true)
Total RCU updates = 65707232899
Cycles per 1000 updates: 18482
Total RCU checks = 20000000
Cycles per 1000 checks: 3794991
Perf Test: 17 Readers
Total RCU updates = 1700000000
Cycles per 1000 updates: 2128
Perf test: 17 Writers ('wait' in qsbr_check == false)
Total RCU checks = 340000000
Cycles per 1000 checks: 10030
Perf test: 1 writer, 17 readers, 1 QSBR variable, 1 QSBR Query, Blocking QSBR Check
Following numbers include calls to rte_hash functions
Cycles per 1 update(online/update/offline): 1984696
Cycles per 1 check(start, check): 2619002
Perf test: 1 writer, 17 readers, 1 QSBR variable, 1 QSBR Query, Non-Blocking QSBR check
Following numbers include calls to rte_hash functions
Cycles per 1 update(online/update/offline): 2028030
Cycles per 1 check(start, check): 2876667
Perf test with some of reader threads registered
------------------------------------------------
Perf Test: 16 Readers/1 Writer('wait' in qsbr_check == true)
Total RCU updates = 68850073055
Cycles per 1000 updates: 25490
Total RCU checks = 20000000
Cycles per 1000 checks: 5484403
Perf Test: 17 Readers
Total RCU updates = 1700000000
Cycles per 1000 updates: 2127
Perf test: 17 Writers ('wait' in qsbr_check == false)
Total RCU checks = 340000000
Cycles per 1000 checks: 10034
Perf test: 1 writer, 17 readers, 1 QSBR variable, 1 QSBR Query, Blocking QSBR Check
Following numbers include calls to rte_hash functions
Cycles per 1 update(online/update/offline): 3604489
Cycles per 1 check(start, check): 7077372
Perf test: 1 writer, 17 readers, 1 QSBR variable, 1 QSBR Query, Non-Blocking QSBR check
Following numbers include calls to rte_hash functions
Cycles per 1 update(online/update/offline): 3936831
Cycles per 1 check(start, check): 7262738
Test OK
Konstantin
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
` (3 preceding siblings ...)
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
@ 2019-04-15 17:29 ` Ananyev, Konstantin
2019-04-15 17:29 ` Ananyev, Konstantin
2019-04-16 5:10 ` Honnappa Nagarahalli
4 siblings, 2 replies; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-04-15 17:29 UTC (permalink / raw)
To: Honnappa Nagarahalli, stephen, paulmck, Kovacevic, Marko, dev
Cc: gavin.hu, dharmik.thakkar, malvika.gupta
Hi guys,
> -----Original Message-----
> From: Honnappa Nagarahalli [mailto:honnappa.nagarahalli@arm.com]
> Sent: Friday, April 12, 2019 9:21 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; stephen@networkplumber.org; paulmck@linux.ibm.com; Kovacevic, Marko
> <marko.kovacevic@intel.com>; dev@dpdk.org
> Cc: honnappa.nagarahalli@arm.com; gavin.hu@arm.com; dharmik.thakkar@arm.com; malvika.gupta@arm.com
> Subject: [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism
>
> Lock-less data structures provide scalability and determinism.
> They enable use cases where locking may not be allowed
> (for ex: real-time applications).
>
> In the following paras, the term 'memory' refers to memory allocated
> by typical APIs like malloc or anything that is representative of
> memory, for ex: an index of a free element array.
>
> Since these data structures are lock-less, the writers and readers
> are accessing the data structures concurrently. Hence, while removing
> an element from a data structure, the writers cannot return the memory
> to the allocator, without knowing that the readers are not
> referencing that element/memory anymore. Hence, it is required to
> separate the operation of removing an element into 2 steps:
>
> Delete: in this step, the writer removes the reference to the element from
> the data structure but does not return the associated memory to the
> allocator. This will ensure that new readers will not get a reference to
> the removed element. Removing the reference is an atomic operation.
>
> Free(Reclaim): in this step, the writer returns the memory to the
> memory allocator, only after knowing that all the readers have stopped
> referencing the deleted element.
>
> This library helps the writer determine when it is safe to free the
> memory.
>
> This library makes use of thread Quiescent State (QS). QS can be
> defined as 'any point in the thread execution where the thread does
> not hold a reference to shared memory'. It is up to the application to
> determine its quiescent state. Let us consider the following diagram:
>
> Time -------------------------------------------------->
>
> | |
> RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
> | |
> RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
> | |
> RT3 $++++****D1****+++***|D2***|++++++**D2*****++++$
> | |
> |<--->|
> Del | Free
> |
> Cannot free memory
> during this period
> (Grace Period)
>
> RTx - Reader thread
> < and > - Start and end of while(1) loop
> ***Dx*** - Reader thread is accessing the shared data structure Dx.
> i.e. critical section.
> +++ - Reader thread is not accessing any shared data structure.
> i.e. non critical section or quiescent state.
> Del - Point in time when the reference to the entry is removed using
> atomic operation.
> Free - Point in time when the writer can free the entry.
> Grace Period - Time duration between Del and Free, during which memory cannot
> be freed.
>
> As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
> accessing D2, if the writer has to remove an element from D2, the
> writer cannot free the memory associated with that element immediately.
> The writer can return the memory to the allocator only after the reader
> stops referencing D2. In other words, reader thread RT1 has to enter
> a quiescent state.
>
> Similarly, since thread RT3 is also accessing D2, the writer has to wait till
> RT3 enters quiescent state as well.
>
> However, the writer does not need to wait for RT2 to enter quiescent state.
> Thread RT2 was not accessing D2 when the delete operation happened.
> So, RT2 will not get a reference to the deleted entry.
>
> It can be noted that, the critical sections for D2 and D3 are quiescent states
> for D1. i.e. for a given data structure Dx, any point in the thread execution
> that does not reference Dx is a quiescent state.
>
> Since memory is not freed immediately, there might be a need for
> provisioning of additional memory, depending on the application requirements.
>
> It is important to make sure that this library keeps the overhead of
> identifying the end of grace period and subsequent freeing of memory,
> to a minimum. The following paras explain how grace period and critical
> section affect this overhead.
>
> The writer has to poll the readers to identify the end of grace period.
> Polling introduces memory accesses and wastes CPU cycles. The memory
> is not available for reuse during grace period. Longer grace periods
> exacerbate these conditions.
>
> The duration of the grace period is proportional to the length of the
> critical sections and the number of reader threads. Keeping the critical
> sections smaller will keep the grace period smaller. However, keeping the
> critical sections smaller requires additional CPU cycles (due to additional
> reporting) in the readers.
>
> Hence, we need the characteristics of small grace period and large critical
> section. This library addresses this by allowing the writer to do
> other work without having to block till the readers report their quiescent
> state.
>
> For DPDK applications, the start and end of while(1) loop (where no
> references to shared data structures are kept) act as perfect quiescent
> states. This will combine all the shared data structure accesses into a
> single, large critical section which helps keep the overhead on the
> reader side to a minimum.
>
> DPDK supports the pipeline model of packet processing and service cores.
> In these use cases, a given data structure may not be used by all the
> workers in the application. The writer does not have to wait for all
> the workers to report their quiescent state. To provide the required
> flexibility, this library has a concept of QS variable. The application
> can create one QS variable per data structure to help it track the
> end of grace period for each data structure. This helps keep the grace
> period to a minimum.
>
> The application has to allocate memory and initialize a QS variable.
>
> Application can call rte_rcu_qsbr_get_memsize to calculate the size
> of memory to allocate. This API takes the maximum number of reader threads
> that will use this variable as a parameter. Currently, a maximum of 1024 threads
> are supported.
>
> Further, the application can initialize a QS variable using the API
> rte_rcu_qsbr_init.
>
> Each reader thread is assumed to have a unique thread ID. Currently, the
> management of the thread ID (for ex: allocation/free) is left to the
> application. The thread ID should be in the range of 0 to
> 'maximum number of threads - 1' provided while creating the QS variable.
> The application could also use lcore_id as the thread ID where applicable.
>
> rte_rcu_qsbr_thread_register API will register a reader thread
> to report its quiescent state. This can be called from a reader thread.
> A control plane thread can also call this on behalf of a reader thread.
> The reader thread must call rte_rcu_qsbr_thread_online API to start reporting
> its quiescent state.
>
> Some of the use cases might require the reader threads to make
> blocking API calls (for ex: while using eventdev APIs). The writer thread
> should not wait for such reader threads to enter quiescent state.
> The reader thread must call rte_rcu_qsbr_thread_offline API, before calling
> blocking APIs. It can call rte_rcu_qsbr_thread_online API once the blocking
> API call returns.
>
> The writer thread can trigger the reader threads to report their quiescent
> state by calling the API rte_rcu_qsbr_start. It is possible for multiple
> writer threads to query the quiescent state status simultaneously. Hence,
> rte_rcu_qsbr_start returns a token to each caller.
>
> The writer thread has to call rte_rcu_qsbr_check API with the token to get the
> current quiescent state status. Option to block till all the reader threads
> enter the quiescent state is provided. If this API indicates that all the
> reader threads have entered the quiescent state, the application can free the
> deleted entry.
>
> The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock free. Hence, they
> can be called concurrently from multiple writers even while running
> as worker threads.
>
> The separation of triggering the reporting from querying the status provides
> the writer threads flexibility to do useful work instead of blocking for the
> reader threads to enter the quiescent state or go offline. This reduces the
> memory accesses due to continuous polling for the status.
>
> rte_rcu_qsbr_synchronize API combines the functionality of rte_rcu_qsbr_start
> and blocking rte_rcu_qsbr_check into a single API. This API triggers the reader
> threads to report their quiescent state and polls till all the readers enter
> the quiescent state or go offline. This API does not allow the writer to
> do useful work while waiting and also introduces additional memory accesses
> due to continuous polling.
>
> The reader thread must call rte_rcu_qsbr_thread_offline and
> rte_rcu_qsbr_thread_unregister APIs to remove itself from reporting its
> quiescent state. The rte_rcu_qsbr_check API will not wait for this reader
> thread to report the quiescent state status anymore.
>
> The reader threads should call rte_rcu_qsbr_update API to indicate that they
> entered a quiescent state. This API checks if a writer has triggered a
> quiescent state query and updates the state accordingly.
>
> Patch v5:
> 1) Library changes
> a) Removed extra alignment in rte_rcu_qsbr_get_memsize API (Paul)
> b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock APIs (Paul)
> c) Clarified the need for 64b counters (Paul)
> 2) Test cases
> a) Added additional performance test cases to benchmark
> __rcu_qsbr_check_all
> b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock calls in various test cases
> 3) Documentation
> a) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock usage description
>
> Patch v4:
> 1) Library changes
> a) Fixed the compilation issue on x86 (Konstantin)
> b) Rebased with latest master
>
> Patch v3:
> 1) Library changes
> a) Moved the registered thread ID array to the end of the
> structure (Konstantin)
> b) Removed the compile time constant RTE_RCU_MAX_THREADS
> c) Added code to keep track of registered number of threads
>
> Patch v2:
> 1) Library changes
> a) Corrected the RTE_ASSERT checks (Konstantin)
> b) Replaced RTE_ASSERT with 'if' checks for non-datapath APIs (Konstantin)
> c) Made rte_rcu_qsbr_thread_register/unregister non-datapath critical APIs
> d) Renamed rte_rcu_qsbr_update to rte_rcu_qsbr_quiescent (Ola)
> e) Used rte_smp_mb() in rte_rcu_qsbr_thread_online API for x86 (Konstantin)
> f) Removed the macro to access the thread QS counters (Konstantin)
> 2) Test cases
> a) Added additional test cases for removing RTE_ASSERT
> 3) Documentation
> a) Changed the figure to make it bigger (Marko)
> b) Spelling and format corrections (Marko)
>
> Patch v1:
> 1) Library changes
> a) Changed the maximum number of reader threads to 1024
> b) Renamed rte_rcu_qsbr_register/unregister_thread to
> rte_rcu_qsbr_thread_register/unregister
> c) Added rte_rcu_qsbr_thread_online/offline API. These are optimized
> version of rte_rcu_qsbr_thread_register/unregister API. These
> also provide the flexibility for performance when the requested
> maximum number of threads is higher than the current number of
> threads.
> d) Corrected memory orderings in rte_rcu_qsbr_update
> e) Changed the signature of rte_rcu_qsbr_start API to return the token
> f) Changed the signature of rte_rcu_qsbr_start API to not take the
> expected number of QS states to wait.
> g) Added debug logs
> h) Added API and programmer guide documentation.
>
> RFC v3:
> 1) Library changes
> a) Rebased to latest master
> b) Added new API rte_rcu_qsbr_get_memsize
> c) Add support for memory allocation for QSBR variable (Konstantin)
> d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
> 2) Testcase changes
> a) Separated stress tests into a performance test case file
> b) Added performance statistics
>
> RFC v2:
> 1) Cover letter changes
> a) Explain the parameters that affect the overhead of using RCU
> and their effect
> b) Explain how this library addresses these effects to keep
> the overhead to minimum
> 2) Library changes
> a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
> b) Simplify the code/remove APIs to keep this library inline with
> other synchronisation mechanisms like locks (Konstantin)
> c) Change the design to support more than 64 threads (Konstantin)
> d) Fixed version map to remove static inline functions
> 3) Testcase changes
> a) Add boundary and additional functional test cases
> b) Add stress test cases (Paul E. McKenney)
>
> Dharmik Thakkar (1):
> test/rcu_qsbr: add API and functional tests
>
> Honnappa Nagarahalli (2):
> rcu: add RCU library supporting QSBR mechanism
> doc/rcu: add lib_rcu documentation
>
> MAINTAINERS | 5 +
> app/test/Makefile | 2 +
> app/test/autotest_data.py | 12 +
> app/test/meson.build | 5 +
> app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++
> app/test/test_rcu_qsbr_perf.c | 703 ++++++++++++
> config/common_base | 6 +
> doc/api/doxy-api-index.md | 3 +-
> doc/api/doxy-api.conf.in | 1 +
> .../prog_guide/img/rcu_general_info.svg | 509 +++++++++
> doc/guides/prog_guide/index.rst | 1 +
> doc/guides/prog_guide/rcu_lib.rst | 185 +++
> lib/Makefile | 2 +
> lib/librte_rcu/Makefile | 23 +
> lib/librte_rcu/meson.build | 5 +
> lib/librte_rcu/rte_rcu_qsbr.c | 237 ++++
> lib/librte_rcu/rte_rcu_qsbr.h | 645 +++++++++++
> lib/librte_rcu/rte_rcu_version.map | 11 +
> lib/meson.build | 2 +-
> mk/rte.app.mk | 1 +
> 20 files changed, 3370 insertions(+), 2 deletions(-)
> create mode 100644 app/test/test_rcu_qsbr.c
> create mode 100644 app/test/test_rcu_qsbr_perf.c
> create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
> create mode 100644 doc/guides/prog_guide/rcu_lib.rst
> create mode 100644 lib/librte_rcu/Makefile
> create mode 100644 lib/librte_rcu/meson.build
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
> create mode 100644 lib/librte_rcu/rte_rcu_version.map
>
> --
> 2.17.1
Just to let you know - I observed some failures with it for meson builds.
Fixed them locally by:
diff --git a/app/test/meson.build b/app/test/meson.build
index 1a2ee18a5..e3e566bce 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -138,7 +138,7 @@ test_deps = ['acl',
'reorder',
'ring',
'stack',
- 'timer'
+ 'timer',
'rcu'
]
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
index c009ae4b7..0c2d5a2e0 100644
--- a/lib/librte_rcu/meson.build
+++ b/lib/librte_rcu/meson.build
@@ -1,5 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2018 Arm Limited
+allow_experimental_apis = true
+
sources = files('rte_rcu_qsbr.c')
headers = files('rte_rcu_qsbr.h')
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism
2019-04-15 17:29 ` [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism Ananyev, Konstantin
@ 2019-04-15 17:29 ` Ananyev, Konstantin
2019-04-16 5:10 ` Honnappa Nagarahalli
1 sibling, 0 replies; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-04-15 17:29 UTC (permalink / raw)
To: Honnappa Nagarahalli, stephen, paulmck, Kovacevic, Marko, dev
Cc: gavin.hu, dharmik.thakkar, malvika.gupta
Hi quys,
> -----Original Message-----
> From: Honnappa Nagarahalli [mailto:honnappa.nagarahalli@arm.com]
> Sent: Friday, April 12, 2019 9:21 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; stephen@networkplumber.org; paulmck@linux.ibm.com; Kovacevic, Marko
> <marko.kovacevic@intel.com>; dev@dpdk.org
> Cc: honnappa.nagarahalli@arm.com; gavin.hu@arm.com; dharmik.thakkar@arm.com; malvika.gupta@arm.com
> Subject: [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism
>
> Lock-less data structures provide scalability and determinism.
> They enable use cases where locking may not be allowed
> (for ex: real-time applications).
>
> In the following paras, the term 'memory' refers to memory allocated
> by typical APIs like malloc or anything that is representative of
> memory, for ex: an index of a free element array.
>
> Since these data structures are lock less, the writers and readers
> are accessing the data structures concurrently. Hence, while removing
> an element from a data structure, the writers cannot return the memory
> to the allocator, without knowing that the readers are not
> referencing that element/memory anymore. Hence, it is required to
> separate the operation of removing an element into 2 steps:
>
> Delete: in this step, the writer removes the reference to the element from
> the data structure but does not return the associated memory to the
> allocator. This will ensure that new readers will not get a reference to
> the removed element. Removing the reference is an atomic operation.
>
> Free(Reclaim): in this step, the writer returns the memory to the
> memory allocator, only after knowing that all the readers have stopped
> referencing the deleted element.
>
> This library helps the writer determine when it is safe to free the
> memory.
>
> This library makes use of thread Quiescent State (QS). QS can be
> defined as 'any point in the thread execution where the thread does
> not hold a reference to shared memory'. It is upto the application to
> determine its quiescent state. Let us consider the following diagram:
>
> Time -------------------------------------------------->
>
> | |
> RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
> | |
> RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
> | |
> RT3 $++++****D1****+++***|D2***|++++++**D2*****++++$
> | |
> |<--->|
> Del | Free
> |
> Cannot free memory
> during this period
> (Grace Period)
>
> RTx - Reader thread
> < and > - Start and end of while(1) loop
> ***Dx*** - Reader thread is accessing the shared data structure Dx.
> i.e. critical section.
> +++ - Reader thread is not accessing any shared data structure.
> i.e. non critical section or quiescent state.
> Del - Point in time when the reference to the entry is removed using
> atomic operation.
> Free - Point in time when the writer can free the entry.
> Grace Period - Time duration between Del and Free, during which memory cannot
> be freed.
>
> As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
> accessing D2, if the writer has to remove an element from D2, the
> writer cannot free the memory associated with that element immediately.
> The writer can return the memory to the allocator only after the reader
> stops referencing D2. In other words, reader thread RT1 has to enter
> a quiescent state.
>
> Similarly, since thread RT3 is also accessing D2, writer has to wait till
> RT3 enters quiescent state as well.
>
> However, the writer does not need to wait for RT2 to enter quiescent state.
> Thread RT2 was not accessing D2 when the delete operation happened.
> So, RT2 will not get a reference to the deleted entry.
>
> It can be noted that, the critical sections for D2 and D3 are quiescent states
> for D1. i.e. for a given data structure Dx, any point in the thread execution
> that does not reference Dx is a quiescent state.
>
> Since memory is not freed immediately, there might be a need for
> provisioning of additional memory, depending on the application requirements.
>
> It is important to make sure that this library keeps the overhead of
> identifying the end of grace period and subsequent freeing of memory,
> to a minimum. The following paras explain how grace period and critical
> section affect this overhead.
>
> The writer has to poll the readers to identify the end of grace period.
> Polling introduces memory accesses and wastes CPU cycles. The memory
> is not available for reuse during grace period. Longer grace periods
> exasperate these conditions.
>
> The length of the critical section and the number of reader threads
> is proportional to the duration of the grace period. Keeping the critical
> sections smaller will keep the grace period smaller. However, keeping the
> critical sections smaller requires additional CPU cycles(due to additional
> reporting) in the readers.
>
> Hence, we need the characteristics of small grace period and large critical
> section. This library addresses this by allowing the writer to do
> other work without having to block till the readers report their quiescent
> state.
>
> For DPDK applications, the start and end of while(1) loop (where no
> references to shared data structures are kept) act as perfect quiescent
> states. This will combine all the shared data structure accesses into a
> single, large critical section which helps keep the overhead on the
> reader side to a minimum.
>
> DPDK supports pipeline model of packet processing and service cores.
> In these use cases, a given data structure may not be used by all the
> workers in the application. The writer does not have to wait for all
> the workers to report their quiescent state. To provide the required
> flexibility, this library has a concept of QS variable. The application
> can create one QS variable per data structure to help it track the
> end of grace period for each data structure. This helps keep the grace
> period to a minimum.
>
> The application has to allocate memory and initialize a QS variable.
>
> Application can call rte_rcu_qsbr_get_memsize to calculate the size
> of memory to allocate. This API takes maximum number of reader threads,
> using this variable, as a parameter. Currently, a maximum of 1024 threads
> are supported.
>
> Further, the application can initialize a QS variable using the API
> rte_rcu_qsbr_init.
>
> Each reader thread is assumed to have a unique thread ID. Currently, the
> management of the thread ID (for ex: allocation/free) is left to the
> application. The thread ID should be in the range of 0 to
> maximum number of threads provided while creating the QS variable.
> The application could also use lcore_id as the thread ID where applicable.
>
> rte_rcu_qsbr_thread_register API will register a reader thread
> to report its quiescent state. This can be called from a reader thread.
> A control plane thread can also call this on behalf of a reader thread.
> The reader thread must call rte_rcu_qsbr_thread_online API to start reporting
> its quiescent state.
>
> Some of the use cases might require the reader threads to make
> blocking API calls (for ex: while using eventdev APIs). The writer thread
> should not wait for such reader threads to enter quiescent state.
> The reader thread must call rte_rcu_qsbr_thread_offline API, before calling
> blocking APIs. It can call rte_rcu_qsbr_thread_online API once the blocking
> API call returns.
>
> The writer thread can trigger the reader threads to report their quiescent
> state by calling the API rte_rcu_qsbr_start. It is possible for multiple
> writer threads to query the quiescent state status simultaneously. Hence,
> rte_rcu_qsbr_start returns a token to each caller.
>
> The writer thread has to call rte_rcu_qsbr_check API with the token to get the
> current quiescent state status. Option to block till all the reader threads
> enter the quiescent state is provided. If this API indicates that all the
> reader threads have entered the quiescent state, the application can free the
> deleted entry.
>
> The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock free. Hence, they
> can be called concurrently from multiple writers even while running
> as worker threads.
>
> The separation of triggering the reporting from querying the status provides
> the writer threads flexibility to do useful work instead of blocking for the
> reader threads to enter the quiescent state or go offline. This reduces the
> memory accesses due to continuous polling for the status.
>
> rte_rcu_qsbr_synchronize API combines the functionality of rte_rcu_qsbr_start
> and blocking rte_rcu_qsbr_check into a single API. This API triggers the reader
> threads to report their quiescent state and polls till all the readers enter
> the quiescent state or go offline. This API does not allow the writer to
> do useful work while waiting and also introduces additional memory accesses
> due to continuous polling.
>
> The reader thread must call rte_rcu_qsbr_thread_offline and
> rte_rcu_qsbr_thread_unregister APIs to remove itself from reporting its
> quiescent state. The rte_rcu_qsbr_check API will not wait for this reader
> thread to report the quiescent state status anymore.
>
> The reader threads should call rte_rcu_qsbr_update API to indicate that they
> entered a quiescent state. This API checks if a writer has triggered a
> quiescent state query and update the state accordingly.
>
> Patch v5:
> 1) Library changes
> a) Removed extra alignment in rte_rcu_qsbr_get_memsize API (Paul)
> b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock APIs (Paul)
> c) Clarified the need for 64b counters (Paul)
> 2) Test cases
> a) Added additional performance test cases to benchmark
> __rcu_qsbr_check_all
> b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock calls in various test cases
> 3) Documentation
> a) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock usage description
>
> Patch v4:
> 1) Library changes
> a) Fixed the compilation issue on x86 (Konstantin)
> b) Rebased with latest master
>
> Patch v3:
> 1) Library changes
> a) Moved the registered thread ID array to the end of the
> structure (Konstantin)
> b) Removed the compile time constant RTE_RCU_MAX_THREADS
> c) Added code to keep track of registered number of threads
>
> Patch v2:
> 1) Library changes
> a) Corrected the RTE_ASSERT checks (Konstantin)
> b) Replaced RTE_ASSERT with 'if' checks for non-datapath APIs (Konstantin)
> c) Made rte_rcu_qsbr_thread_register/unregister non-datapath critical APIs
> d) Renamed rte_rcu_qsbr_update to rte_rcu_qsbr_quiescent (Ola)
> e) Used rte_smp_mb() in rte_rcu_qsbr_thread_online API for x86 (Konstantin)
> f) Removed the macro to access the thread QS counters (Konstantin)
> 2) Test cases
> a) Added additional test cases for removing RTE_ASSERT
> 3) Documentation
> a) Changed the figure to make it bigger (Marko)
> b) Spelling and format corrections (Marko)
>
> Patch v1:
> 1) Library changes
> a) Changed the maximum number of reader threads to 1024
> b) Renamed rte_rcu_qsbr_register/unregister_thread to
> rte_rcu_qsbr_thread_register/unregister
> c) Added rte_rcu_qsbr_thread_online/offline API. These are optimized
> version of rte_rcu_qsbr_thread_register/unregister API. These
> also provide the flexibility for performance when the requested
> maximum number of threads is higher than the current number of
> threads.
> d) Corrected memory orderings in rte_rcu_qsbr_update
> e) Changed the signature of rte_rcu_qsbr_start API to return the token
> f) Changed the signature of rte_rcu_qsbr_start API to not take the
> expected number of QS states to wait.
> g) Added debug logs
> h) Added API and programmer guide documentation.
>
> RFC v3:
> 1) Library changes
> a) Rebased to latest master
> b) Added new API rte_rcu_qsbr_get_memsize
> c) Add support for memory allocation for QSBR variable (Konstantin)
> d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
> 2) Testcase changes
> a) Separated stress tests into a performance test case file
> b) Added performance statistics
>
> RFC v2:
> 1) Cover letter changes
> a) Explain the parameters that affect the overhead of using RCU
> and their effects
> b) Explain how this library addresses these effects to keep
> the overhead to a minimum
> 2) Library changes
> a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
> b) Simplify the code/remove APIs to keep this library inline with
> other synchronisation mechanisms like locks (Konstantin)
> c) Change the design to support more than 64 threads (Konstantin)
> d) Fixed version map to remove static inline functions
> 3) Testcase changes
> a) Add boundary and additional functional test cases
> b) Add stress test cases (Paul E. McKenney)
>
> Dharmik Thakkar (1):
> test/rcu_qsbr: add API and functional tests
>
> Honnappa Nagarahalli (2):
> rcu: add RCU library supporting QSBR mechanism
> doc/rcu: add lib_rcu documentation
>
> MAINTAINERS | 5 +
> app/test/Makefile | 2 +
> app/test/autotest_data.py | 12 +
> app/test/meson.build | 5 +
> app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++
> app/test/test_rcu_qsbr_perf.c | 703 ++++++++++++
> config/common_base | 6 +
> doc/api/doxy-api-index.md | 3 +-
> doc/api/doxy-api.conf.in | 1 +
> .../prog_guide/img/rcu_general_info.svg | 509 +++++++++
> doc/guides/prog_guide/index.rst | 1 +
> doc/guides/prog_guide/rcu_lib.rst | 185 +++
> lib/Makefile | 2 +
> lib/librte_rcu/Makefile | 23 +
> lib/librte_rcu/meson.build | 5 +
> lib/librte_rcu/rte_rcu_qsbr.c | 237 ++++
> lib/librte_rcu/rte_rcu_qsbr.h | 645 +++++++++++
> lib/librte_rcu/rte_rcu_version.map | 11 +
> lib/meson.build | 2 +-
> mk/rte.app.mk | 1 +
> 20 files changed, 3370 insertions(+), 2 deletions(-)
> create mode 100644 app/test/test_rcu_qsbr.c
> create mode 100644 app/test/test_rcu_qsbr_perf.c
> create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
> create mode 100644 doc/guides/prog_guide/rcu_lib.rst
> create mode 100644 lib/librte_rcu/Makefile
> create mode 100644 lib/librte_rcu/meson.build
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
> create mode 100644 lib/librte_rcu/rte_rcu_version.map
>
> --
> 2.17.1
Just to let you know - I observed some failures with it in meson builds.
I fixed them locally with:
diff --git a/app/test/meson.build b/app/test/meson.build
index 1a2ee18a5..e3e566bce 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -138,7 +138,7 @@ test_deps = ['acl',
'reorder',
'ring',
'stack',
- 'timer'
+ 'timer',
'rcu'
]
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
index c009ae4b7..0c2d5a2e0 100644
--- a/lib/librte_rcu/meson.build
+++ b/lib/librte_rcu/meson.build
@@ -1,5 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2018 Arm Limited
+allow_experimental_apis = true
+
sources = files('rte_rcu_qsbr.c')
headers = files('rte_rcu_qsbr.h')
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-15 15:38 ` Stephen Hemminger
@ 2019-04-15 17:39 ` Ananyev, Konstantin
` (2 more replies)
1 sibling, 3 replies; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-04-15 17:39 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Honnappa Nagarahalli, paulmck, Kovacevic, Marko, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Monday, April 15, 2019 4:39 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; paulmck@linux.ibm.com; Kovacevic, Marko
> <marko.kovacevic@intel.com>; dev@dpdk.org; Gavin Hu (Arm Technology China) <Gavin.Hu@arm.com>; Dharmik Thakkar
> <Dharmik.Thakkar@arm.com>; Malvika Gupta <Malvika.Gupta@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
>
> On Mon, 15 Apr 2019 12:24:47 +0000
> "Ananyev, Konstantin" <konstantin.ananyev@intel.com> wrote:
>
> > > -----Original Message-----
> > > From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > > Sent: Saturday, April 13, 2019 12:06 AM
> > > To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> > > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; paulmck@linux.ibm.com; Kovacevic, Marko <marko.kovacevic@intel.com>;
> > > dev@dpdk.org; Gavin Hu (Arm Technology China) <Gavin.Hu@arm.com>; Dharmik Thakkar <Dharmik.Thakkar@arm.com>; Malvika
> Gupta
> > > <Malvika.Gupta@arm.com>; nd <nd@arm.com>
> > > Subject: Re: [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
> > >
> > > On Fri, 12 Apr 2019 22:24:45 +0000
> > > Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
> > >
> > > > >
> > > > > On Fri, 12 Apr 2019 15:20:37 -0500
> > > > > Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> wrote:
> > > > >
> > > > > > Add RCU library supporting quiescent state based memory reclamation
> > > > > method.
> > > > > > This library helps identify the quiescent state of the reader threads
> > > > > > so that the writers can free the memory associated with the lock less
> > > > > > data structures.
> > > > > >
> > > > > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > > > > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > > > > > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > > > > > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > > > > > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > > >
> > > > > After evaluating long term API/ABI issues, I think you need to get rid of almost
> > > > > all use of inline and visible structures. Yes it might be marginally slower, but
> > > > > you thank me the first time you have to fix something.
> > > > >
> > > > Agree, I was planning on another version to address this (I am yet to take a look at your patch addressing the ABI).
> > > > The structure visibility definitely needs to be addressed.
> > > > For the inline functions, is the plan to convert all the inline functions in DPDK? If yes, I think we need to consider the performance
> > > difference. Maybe consider the L3-fwd application, change all the inline functions in its path and run a test?
> > >
> > > Every function that is not in the direct datapath should not be inline.
> > > Exceptions or things like rx/tx burst, ring enqueue/dequeue, and packet alloc/free
> >
> > Plus synchronization routines: spin/rwlock/barrier, etc.
> > I think rcu should be one of such exceptions - it is just another synchronization mechanism after all
> > (just a bit more sophisticated).
> > Konstantin
>
> If you look at the other userspace RCU, you will see that the only inlines
> are rcu_read_lock, rcu_read_unlock and rcu_dereference/rcu_assign_pointer.
>
> The synchronization logic is all real functions.
In fact, I think urcu provides both flavors:
https://github.com/urcu/userspace-rcu/blob/master/include/urcu/static/urcu-qsbr.h
I still don't understand why we have to treat it differently than, say, a spin-lock/ticket-lock or rwlock.
If we have gone all the way to create our own version of RCU, we probably want it to be as fast as possible
(I know that the main speedup should come from the fact that readers don't have to wait for the writer to finish, but still...)
Konstantin
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-15 17:39 ` Ananyev, Konstantin
@ 2019-04-15 18:56 ` Honnappa Nagarahalli
2019-04-15 21:26 ` Stephen Hemminger
2 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-15 18:56 UTC (permalink / raw)
To: Ananyev, Konstantin, Stephen Hemminger
Cc: paulmck, Kovacevic, Marko, dev, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
> > > > > >
> > > > > > After evaluating long term API/ABI issues, I think you need to
> > > > > > get rid of almost all use of inline and visible structures.
> > > > > > Yes it might be marginally slower, but you thank me the first time
> you have to fix something.
> > > > > >
> > > > > Agree, I was planning on another version to address this (I am yet
> to take a look at your patch addressing the ABI).
> > > > > The structure visibility definitely needs to be addressed.
> > > > > For the inline functions, is the plan to convert all the inline
> > > > > functions in DPDK? If yes, I think we need to consider the
> > > > > performance
> > > > difference. Maybe consider the L3-fwd application, change all the inline
> functions in its path and run a test?
> > > >
> > > > Every function that is not in the direct datapath should not be inline.
> > > > Exceptions or things like rx/tx burst, ring enqueue/dequeue, and
> > > > packet alloc/free
> > >
> > > Plus synchronization routines: spin/rwlock/barrier, etc.
> > > I think rcu should be one of such exceptions - it is just another
> > > synchronization mechanism after all (just a bit more sophisticated).
> > > Konstantin
> >
> > If you look at the other userspace RCU, you will see that the only
> > inlines are rcu_read_lock, rcu_read_unlock and
> rcu_dereference/rcu_assign_pointer.
> >
> > The synchronization logic is all real functions.
>
> In fact, I think urcu provides both flavors:
> https://github.com/urcu/userspace-
> rcu/blob/master/include/urcu/static/urcu-qsbr.h
> I still don't understand why we have to treat it differently than, say,
> spin-lock/ticket-lock or rwlock.
> If we have gone all the way to create our own version of RCU, we probably want
> it to be as fast as possible (I know that the main speedup should come from
> the fact that readers don't have to wait for the writer to finish, but still...)
>
Except for 'rte_rcu_qsbr_synchronize' (will correct in the next version), we have the correct APIs marked as inline; they are all part of the fast path.
> Konstantin
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-15 16:51 ` Ananyev, Konstantin
@ 2019-04-15 19:46 ` Honnappa Nagarahalli
1 sibling, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-15 19:46 UTC (permalink / raw)
To: Ananyev, Konstantin, paulmck
Cc: stephen, Kovacevic, Marko, dev, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
>
> > > > >
> > > > > On Wed, Apr 10, 2019 at 06:20:04AM -0500, Honnappa Nagarahalli
> > > wrote:
> > > > > > Add RCU library supporting quiescent state based memory
> > > > > > reclamation
> > > > > method.
> > > > > > This library helps identify the quiescent state of the reader
> > > > > > threads so that the writers can free the memory associated
> > > > > > with the lock less data structures.
> > > > >
> > > > > I don't see any sign of read-side markers (rcu_read_lock() and
> > > > > rcu_read_unlock() in the Linux kernel, userspace RCU, etc.).
> > > > >
> > > > > Yes, strictly speaking, these are not needed for QSBR to
> > > > > operate, but they
> > > > These APIs would be empty for QSBR.
> > > >
> > > > > make it way easier to maintain and debug code using RCU. For
> > > > > example, given the read-side markers, you can check for errors
> > > > > like having a call to
> > > > > rte_rcu_qsbr_quiescent() in the middle of a reader quite easily.
> > > > > Without those read-side markers, life can be quite hard and you
> > > > > will really hate yourself for failing to have provided them.
> > > >
> > > > Want to make sure I understood this, do you mean the application
> > > would mark before and after accessing the shared data structure on
> > > the reader side?
> > > >
> > > > rte_rcu_qsbr_lock()
> > > > <begin access shared data structure> ...
> > > > ...
> > > > <end access shared data structure>
> > > > rte_rcu_qsbr_unlock()
> > >
> > > Yes, that is the idea.
> > >
> > > > If someone is debugging this code, they have to make sure that
> > > > there is
> > > an unlock for every lock and there is no call to
> > > rte_rcu_qsbr_quiescent in between.
> > > > It sounds good to me. Obviously, they will not add any additional
> > > > cycles
> > > as well.
> > > > Please let me know if my understanding is correct.
> > >
> > > Yes. And in some sort of debug mode, you could capture the counter
> > > at
> > > rte_rcu_qsbr_lock() time and check it at rte_rcu_qsbr_unlock() time.
> > > If the counter has advanced too far (more than one, if I am not too
> > > confused) there is a bug. Also in debug mode, you could have
> > > rte_rcu_qsbr_lock() increment a per-thread counter and
> rte_rcu_qsbr_unlock() decrement it.
> > > If the counter is non-zero at a quiescent state, there is a bug.
> > > And so on.
> > >
> > Added this in V5
> >
> > <snip>
> >
> > > > > > +
> > > > > > +/* Get the memory size of QSBR variable */ size_t
> > > > > > +__rte_experimental rte_rcu_qsbr_get_memsize(uint32_t
> > > max_threads) {
> > > > > > + size_t sz;
> > > > > > +
> > > > > > + if (max_threads == 0) {
> > > > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > > > + "%s(): Invalid max_threads %u\n",
> > > > > > + __func__, max_threads);
> > > > > > + rte_errno = EINVAL;
> > > > > > +
> > > > > > + return 1;
> > > > > > + }
> > > > > > +
> > > > > > + sz = sizeof(struct rte_rcu_qsbr);
> > > > > > +
> > > > > > + /* Add the size of quiescent state counter array */
> > > > > > + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> > > > > > +
> > > > > > + /* Add the size of the registered thread ID bitmap array */
> > > > > > + sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
> > > > > > +
> > > > > > + return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
> > > > >
> > > > > Given that you align here, should you also align in the earlier
> > > > > steps in the computation of sz?
> > > >
> > > > Agree. I will remove the align here and keep the earlier one as
> > > > the intent
> > > is to align the thread ID array.
> > >
> > > Sounds good!
> > Added this in V5
> >
> > >
> > > > > > +}
> > > > > > +
> > > > > > +/* Initialize a quiescent state variable */ int
> > > > > > +__rte_experimental rte_rcu_qsbr_init(struct rte_rcu_qsbr *v,
> > > uint32_t max_threads) {
> > > > > > + size_t sz;
> > > > > > +
> > > > > > + if (v == NULL) {
> > > > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > > > + "%s(): Invalid input parameter\n", __func__);
> > > > > > + rte_errno = EINVAL;
> > > > > > +
> > > > > > + return 1;
> > > > > > + }
> > > > > > +
> > > > > > + sz = rte_rcu_qsbr_get_memsize(max_threads);
> > > > > > + if (sz == 1)
> > > > > > + return 1;
> > > > > > +
> > > > > > + /* Set all the threads to offline */
> > > > > > + memset(v, 0, sz);
> > > > >
> > > > > We calculate sz here, but it looks like the caller must also
> > > > > calculate it in order to correctly allocate the memory
> > > > > referenced by the "v" argument to this function, with bad things
> > > > > happening if the two calculations get different results. Should
> > > > > "v" instead be allocated within this function to avoid this sort of
> problem?
> > > >
> > > > Earlier version allocated the memory with-in this library.
> > > > However, it was
> > > decided to go with the current implementation as it provides
> > > flexibility for the application to manage the memory as it sees fit.
> > > For ex: it could allocate this as part of another structure in a
> > > single allocation. This also falls inline with similar approach taken in
> other libraries.
> > >
> > > So the allocator APIs vary too much to allow a pointer to the
> > > desired allocator function to be passed in? Or do you also want to
> > > allow static allocation? If the latter, would a DEFINE_RTE_RCU_QSBR()
> be of use?
> > >
> > This is done to allow for allocation of memory for QS variable as part
> > of a another bigger data structure. This will help in not fragmenting the
> memory. For ex:
> >
> > struct xyz {
> > rte_ring *ring;
> > rte_rcu_qsbr *v;
> > abc *t;
> > };
> > struct xyz c;
> >
> > Memory for the above structure can be allocated in one chunk by
> calculating the size required.
> >
> > In some use cases static allocation might be enough as 'max_threads'
> > might be a compile-time constant. I am not sure how to support both
> dynamic and static 'max_threads'.
>
> Same thought here: it would be good to have a static initializer
> (DEFINE_RTE_RCU_QSBR), but that means a new compile-time limit
> ('max_threads'), which is something we try to avoid.
>
> >
> > > > > > + v->max_threads = max_threads;
> > > > > > + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> > > > > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> > > > > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> > > > > > + v->token = RTE_QSBR_CNT_INIT;
> > > > > > +
> > > > > > + return 0;
> > > > > > +}
> > > > > > +
> > > > > > +/* Register a reader thread to report its quiescent state
> > > > > > + * on a QS variable.
> > > > > > + */
> > > > > > +int __rte_experimental
> > > > > > +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned
> > > > > > +int
> > > > > > +thread_id) {
> > > > > > + unsigned int i, id, success;
> > > > > > + uint64_t old_bmap, new_bmap;
> > > > > > +
> > > > > > + if (v == NULL || thread_id >= v->max_threads) {
> > > > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > > > + "%s(): Invalid input parameter\n", __func__);
> > > > > > + rte_errno = EINVAL;
> > > > > > +
> > > > > > + return 1;
> > > > > > + }
> > > > > > +
> > > > > > + id = thread_id & RTE_QSBR_THRID_MASK;
> > > > > > + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > > > > > +
> > > > > > + /* Make sure that the counter for registered threads does
> not
> > > > > > + * go out of sync. Hence, additional checks are required.
> > > > > > + */
> > > > > > + /* Check if the thread is already registered */
> > > > > > + old_bmap =
> __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > > > > + __ATOMIC_RELAXED);
> > > > > > + if (old_bmap & 1UL << id)
> > > > > > + return 0;
> > > > > > +
> > > > > > + do {
> > > > > > + new_bmap = old_bmap | (1UL << id);
> > > > > > + success = __atomic_compare_exchange(
> > > > > > +
> RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > > > > + &old_bmap, &new_bmap, 0,
> > > > > > + __ATOMIC_RELEASE,
> > > > > __ATOMIC_RELAXED);
> > > > > > +
> > > > > > + if (success)
> > > > > > + __atomic_fetch_add(&v->num_threads,
> > > > > > + 1,
> __ATOMIC_RELAXED);
> > > > > > + else if (old_bmap & (1UL << id))
> > > > > > + /* Someone else registered this thread.
> > > > > > + * Counter should not be incremented.
> > > > > > + */
> > > > > > + return 0;
> > > > > > + } while (success == 0);
> > > > >
> > > > > This would be simpler if threads were required to register
> themselves.
> > > > > Maybe you have use cases requiring registration of other
> > > > > threads, but this capability is adding significant complexity,
> > > > > so it might be worth some thought.
> > > > >
> > > > It was simple earlier (__atomic_fetch_or). The complexity is added
> > > > as
> > > 'num_threads' should not go out of sync.
> > >
> > > Hmmm...
> > >
> > > So threads are allowed to register other threads? Or is there some
> > > other reason that concurrent registration is required?
> > >
> > Yes, control plane threads can register the fast path threads. Though,
> > I am not sure how useful it is. I did not want to add the restriction. I
> expect that reader threads will register themselves. The reader threads
> require concurrent registration as they all will be running in parallel.
> > If the requirement of keeping track of the number of threads registered
> currently goes away, then this function will be simple.
> >
> > <snip>
> >
> > > > > > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > > > > > b/lib/librte_rcu/rte_rcu_qsbr.h new file mode 100644 index
> > > > > > 000000000..ff696aeab
> > > > > > --- /dev/null
> > > > > > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > > > > > @@ -0,0 +1,554 @@
> > > > > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > > > > + * Copyright (c) 2018 Arm Limited */
> > > > > > +
> > > > > > +#ifndef _RTE_RCU_QSBR_H_
> > > > > > +#define _RTE_RCU_QSBR_H_
> > > > > > +
> > > > > > +/**
> > > > > > + * @file
> > > > > > + * RTE Quiescent State Based Reclamation (QSBR)
> > > > > > + *
> > > > > > + * Quiescent State (QS) is any point in the thread execution
> > > > > > + * where the thread does not hold a reference to a data
> > > > > > +structure
> > > > > > + * in shared memory. While using lock-less data structures,
> > > > > > +the writer
> > > > > > + * can safely free memory once all the reader threads have
> > > > > > +entered
> > > > > > + * quiescent state.
> > > > > > + *
> > > > > > + * This library provides the ability for the readers to
> > > > > > +report quiescent
> > > > > > + * state and for the writers to identify when all the readers
> > > > > > +have
> > > > > > + * entered quiescent state.
> > > > > > + */
> > > > > > +
> > > > > > +#ifdef __cplusplus
> > > > > > +extern "C" {
> > > > > > +#endif
> > > > > > +
> > > > > > +#include <stdio.h>
> > > > > > +#include <stdint.h>
> > > > > > +#include <errno.h>
> > > > > > +#include <rte_common.h>
> > > > > > +#include <rte_memory.h>
> > > > > > +#include <rte_lcore.h>
> > > > > > +#include <rte_debug.h>
> > > > > > +#include <rte_atomic.h>
> > > > > > +
> > > > > > +extern int rcu_log_type;
> > > > > > +
> > > > > > +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG #define
> > > RCU_DP_LOG(level,
> > > > > fmt,
> > > > > > +args...) \
> > > > > > + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> > > > > > + "%s(): " fmt "\n", __func__, ## args) #else #define
> > > > > > +RCU_DP_LOG(level, fmt, args...) #endif
> > > > > > +
> > > > > > +/* Registered thread IDs are stored as a bitmap of 64b
> > > > > > +element
> > > array.
> > > > > > + * Given thread id needs to be converted to index into the
> > > > > > +array and
> > > > > > + * the id within the array element.
> > > > > > + */
> > > > > > +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> > > > > #define
> > > > > > +RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
> > > > > > + RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
> > > > > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3,
> > > > > RTE_CACHE_LINE_SIZE) #define
> > > > > > +RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
> > > > > > + ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
> > > > > > +#define RTE_QSBR_THRID_INDEX_SHIFT 6 #define
> > > RTE_QSBR_THRID_MASK
> > > > > > +0x3f
> > > > > #define
> > > > > > +RTE_QSBR_THRID_INVALID 0xffffffff
> > > > > > +
> > > > > > +/* Worker thread counter */
> > > > > > +struct rte_rcu_qsbr_cnt {
> > > > > > + uint64_t cnt;
> > > > > > + /**< Quiescent state counter. Value 0 indicates the thread
> > > > > > +is offline */ } __rte_cache_aligned;
> > > > > > +
> > > > > > +#define RTE_QSBR_CNT_THR_OFFLINE 0 #define
> > > RTE_QSBR_CNT_INIT 1
> > > > > > +
> > > > > > +/* RTE Quiescent State variable structure.
> > > > > > + * This structure has two elements that vary in size based on
> > > > > > +the
> > > > > > + * 'max_threads' parameter.
> > > > > > + * 1) Quiescent state counter array
> > > > > > + * 2) Registered thread ID array
> > > > > > + */
> > > > > > +struct rte_rcu_qsbr {
> > > > > > + uint64_t token __rte_cache_aligned;
> > > > > > + /**< Counter to allow for multiple concurrent quiescent
> > > > > > +state queries */
> > > > > > +
> > > > > > + uint32_t num_elems __rte_cache_aligned;
> > > > > > + /**< Number of elements in the thread ID array */
> > > > > > + uint32_t num_threads;
> > > > > > + /**< Number of threads currently using this QS variable */
> > > > > > + uint32_t max_threads;
> > > > > > + /**< Maximum number of threads using this QS variable */
> > > > > > +
> > > > > > + struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
> > > > > > + /**< Quiescent state counter array of 'max_threads'
> elements
> > > > > > +*/
> > > > > > +
> > > > > > + /**< Registered thread IDs are stored in a bitmap array,
> > > > > > + * after the quiescent state counter array.
> > > > > > + */
> > > > > > +} __rte_cache_aligned;
> > > > > > +
> >
> > <snip>
> >
> > > > > > +
> > > > > > +/* Check the quiescent state counter for registered threads
> > > > > > +only, assuming
> > > > > > + * that not all threads have registered.
> > > > > > + */
> > > > > > +static __rte_always_inline int
> > > > > > +__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t
> > > > > > +t, bool
> > > > > > +wait) {
> > > > > > + uint32_t i, j, id;
> > > > > > + uint64_t bmap;
> > > > > > + uint64_t c;
> > > > > > + uint64_t *reg_thread_id;
> > > > > > +
> > > > > > + for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v,
> 0);
> > > > > > + i < v->num_elems;
> > > > > > + i++, reg_thread_id++) {
> > > > > > + /* Load the current registered thread bit map
> before
> > > > > > + * loading the reader thread quiescent state
> counters.
> > > > > > + */
> > > > > > + bmap = __atomic_load_n(reg_thread_id,
> > > > > __ATOMIC_ACQUIRE);
> > > > > > + id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> > > > > > +
> > > > > > + while (bmap) {
> > > > > > + j = __builtin_ctzl(bmap);
> > > > > > + RCU_DP_LOG(DEBUG,
> > > > > > + "%s: check: token = %lu, wait = %d,
> Bit Map
> > > > > = 0x%lx, Thread ID = %d",
> > > > > > + __func__, t, wait, bmap, id + j);
> > > > > > + c = __atomic_load_n(
> > > > > > + &v->qsbr_cnt[id + j].cnt,
> > > > > > + __ATOMIC_ACQUIRE);
> > > > > > + RCU_DP_LOG(DEBUG,
> > > > > > + "%s: status: token = %lu, wait = %d,
> Thread
> > > > > QS cnt = %lu, Thread ID = %d",
> > > > > > + __func__, t, wait, c, id+j);
> > > > > > + /* Counter is not checked for wrap-around
> > > > > condition
> > > > > > + * as it is a 64b counter.
> > > > > > + */
> > > > > > + if (unlikely(c !=
> RTE_QSBR_CNT_THR_OFFLINE && c
> > > > > < t)) {
> > > > >
> > > > > This assumes that a 64-bit counter won't overflow, which is
> > > > > close enough to true given current CPU clock frequencies. ;-)
> > > > >
> > > > > > + /* This thread is not in quiescent
> state */
> > > > > > + if (!wait)
> > > > > > + return 0;
> > > > > > +
> > > > > > + rte_pause();
> > > > > > + /* This thread might have
> unregistered.
> > > > > > + * Re-read the bitmap.
> > > > > > + */
> > > > > > + bmap =
> __atomic_load_n(reg_thread_id,
> > > > > > + __ATOMIC_ACQUIRE);
> > > > > > +
> > > > > > + continue;
> > > > > > + }
> > > > > > +
> > > > > > + bmap &= ~(1UL << j);
> > > > > > + }
> > > > > > + }
> > > > > > +
> > > > > > + return 1;
> > > > > > +}
> > > > > > +
> > > > > > +/* Check the quiescent state counter for all threads,
> > > > > > +assuming that
> > > > > > + * all the threads have registered.
> > > > > > + */
> > > > > > +static __rte_always_inline int __rcu_qsbr_check_all(struct
> > > > > > +rte_rcu_qsbr *v, uint64_t t, bool
> > > > > > +wait)
> > > > >
> > > > > Does checking the bitmap really take long enough to make this
> > > > > worthwhile as a separate function? I would think that the
> > > > > bitmap-checking time would be lost in the noise of cache misses
> > > > > from
> > > the ->cnt loads.
> > > >
> > > > It avoids accessing one cache line. I think this is where the
> > > > savings are
> > > (may be in theory). This is the most probable use case.
> > > > On the other hand, __rcu_qsbr_check_selective() will result in
> > > > savings
> > > (depending on how many threads are currently registered) by avoiding
> > > accessing unwanted counters.
> > >
> > > Do you really expect to be calling this function on any kind of fastpath?
> >
> > Yes. For some of the libraries (rte_hash), the writer is on the fast path.
> >
> > >
> > > > > Sure, if you invoke __rcu_qsbr_check_selective() in a tight loop
> > > > > in the absence of readers, you might see __rcu_qsbr_check_all()
> > > > > being a bit faster. But is that really what DPDK does?
> > > > I see improvements in the synthetic test case (similar to the one
> > > > you
> > > have described, around 27%). However, in the more practical test
> > > cases I do not see any difference.
> > >
> > > If the performance improvement only occurs in a synthetic test case,
> > > does it really make sense to optimize for it?
> > I had to fix a few issues in the performance test cases and added more to
> > do the comparison. These changes are in v5.
> > There are 4 performance tests involving this API.
> > 1) 1 Writer, 'N' readers
> > Writer: qsbr_start, qsbr_check(wait = true)
> > Readers: qsbr_quiescent
> > 2) 'N' writers
> > Writers: qsbr_start, qsbr_check(wait == false)
> > 3) 1 Writer, 'N' readers (this test uses the lock-free rte_hash data
> structure)
> > Writer: hash_del, qsbr_start, qsbr_check(wait = true), validate that
> the reader was able to complete its work successfully
> > Readers: thread_online, hash_lookup, access the pointer - do some
> > work on it, qsbr_quiescent, thread_offline
> > 4) Same as test 3) but qsbr_check (wait == false)
> >
> > There are 2 sets of these tests.
> > a) QS variable is created with number of threads same as number of
> > readers - this will exercise __rcu_qsbr_check_all
> > b) QS variable is created with 128 threads, number of registered
> > threads is same as in a) - this will exercise
> > __rcu_qsbr_check_selective
> >
> > Following are the results on x86 (E5-2660 v4 @ 2.00GHz) comparing from
> > a) to b) (on x86 in my setup, the results are not very stable between
> > runs)
> > 1) 25%
> > 2) -3%
> > 3) -0.4%
> > 4) 1.38%
> >
> > Following are the results on an Arm system comparing from a) to b)
> > (results are not pretty stable between runs)
^^^
Correction, on the Arm system, the results *are* stable (copy-paste error)
> > 1) -3.45%
> > 2) 0%
> > 3) -0.03%
> > 4) -0.04%
> >
> > Konstantin, is it possible to run the tests on your setup and look at the
> results?
>
> I did run V5 on my box (SKX 2.1 GHz) with 17 lcores (1 physical core per
> thread).
> Didn't notice any significant fluctuations between runs, output below.
>
> >rcu_qsbr_perf_autotest
> Number of cores provided = 17
> Perf test with all reader threads registered
> --------------------------------------------
>
> Perf Test: 16 Readers/1 Writer('wait' in qsbr_check == true) Total RCU
> updates = 65707232899 Cycles per 1000 updates: 18482 Total RCU checks =
> 20000000 Cycles per 1000 checks: 3794991
>
> Perf Test: 17 Readers
> Total RCU updates = 1700000000
> Cycles per 1000 updates: 2128
>
> Perf test: 17 Writers ('wait' in qsbr_check == false) Total RCU checks =
> 340000000 Cycles per 1000 checks: 10030
>
> Perf test: 1 writer, 17 readers, 1 QSBR variable, 1 QSBR Query, Blocking
> QSBR Check Following numbers include calls to rte_hash functions Cycles
> per 1 update(online/update/offline): 1984696 Cycles per 1 check(start,
> check): 2619002
>
> Perf test: 1 writer, 17 readers, 1 QSBR variable, 1 QSBR Query, Non-
> Blocking QSBR check Following numbers include calls to rte_hash functions
> Cycles per 1 update(online/update/offline): 2028030 Cycles per 1
> check(start, check): 2876667
>
> Perf test with some of reader threads registered
> ------------------------------------------------
>
> Perf Test: 16 Readers/1 Writer('wait' in qsbr_check == true) Total RCU
> updates = 68850073055 Cycles per 1000 updates: 25490 Total RCU checks =
> 20000000 Cycles per 1000 checks: 5484403
>
> Perf Test: 17 Readers
> Total RCU updates = 1700000000
> Cycles per 1000 updates: 2127
>
> Perf test: 17 Writers ('wait' in qsbr_check == false) Total RCU checks =
> 340000000 Cycles per 1000 checks: 10034
>
> Perf test: 1 writer, 17 readers, 1 QSBR variable, 1 QSBR Query, Blocking
> QSBR Check Following numbers include calls to rte_hash functions Cycles
> per 1 update(online/update/offline): 3604489 Cycles per 1 check(start,
> check): 7077372
>
> Perf test: 1 writer, 17 readers, 1 QSBR variable, 1 QSBR Query, Non-
> Blocking QSBR check Following numbers include calls to rte_hash functions
> Cycles per 1 update(online/update/offline): 3936831 Cycles per 1
> check(start, check): 7262738
>
>
> Test OK
Thanks for running the test. From the numbers, the comparison is as follows:
1) -44%
2) 0.03%
3) -170%
4) -152%
The trend is the same between x86 and Arm. However, x86 shows a drastic improvement with the __rcu_qsbr_check_all function.
>
> Konstantin
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-15 19:46 ` Honnappa Nagarahalli
@ 2019-04-15 19:46 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-15 19:46 UTC (permalink / raw)
To: Ananyev, Konstantin, paulmck
Cc: stephen, Kovacevic, Marko, dev, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
>
> > > > >
> > > > > On Wed, Apr 10, 2019 at 06:20:04AM -0500, Honnappa Nagarahalli
> > > wrote:
> > > > > > Add RCU library supporting quiescent state based memory
> > > > > > reclamation
> > > > > method.
> > > > > > This library helps identify the quiescent state of the reader
> > > > > > threads so that the writers can free the memory associated
> > > > > > with the lock less data structures.
> > > > >
> > > > > I don't see any sign of read-side markers (rcu_read_lock() and
> > > > > rcu_read_unlock() in the Linux kernel, userspace RCU, etc.).
> > > > >
> > > > > Yes, strictly speaking, these are not needed for QSBR to
> > > > > operate, but they
> > > > These APIs would be empty for QSBR.
> > > >
> > > > > make it way easier to maintain and debug code using RCU. For
> > > > > example, given the read-side markers, you can check for errors
> > > > > like having a call to
> > > > > rte_rcu_qsbr_quiescent() in the middle of a reader quite easily.
> > > > > Without those read-side markers, life can be quite hard and you
> > > > > will really hate yourself for failing to have provided them.
> > > >
> > > > Want to make sure I understood this, do you mean the application
> > > would mark before and after accessing the shared data structure on
> > > the reader side?
> > > >
> > > > rte_rcu_qsbr_lock()
> > > > <begin access shared data structure> ...
> > > > ...
> > > > <end access shared data structure>
> > > > rte_rcu_qsbr_unlock()
> > >
> > > Yes, that is the idea.
> > >
> > > > If someone is debugging this code, they have to make sure that
> > > > there is
> > > an unlock for every lock and there is no call to
> > > rte_rcu_qsbr_quiescent in between.
> > > > It sounds good to me. Obviously, they will not add any additional
> > > > cycles
> > > as well.
> > > > Please let me know if my understanding is correct.
> > >
> > > Yes. And in some sort of debug mode, you could capture the counter
> > > at
> > > rte_rcu_qsbr_lock() time and check it at rte_rcu_qsbr_unlock() time.
> > > If the counter has advanced too far (more than one, if I am not too
> > > confused) there is a bug. Also in debug mode, you could have
> > > rte_rcu_qsbr_lock() increment a per-thread counter and
> rte_rcu_qsbr_unlock() decrement it.
> > > If the counter is non-zero at a quiescent state, there is a bug.
> > > And so on.
> > >
> > Added this in V5
> >
> > <snip>
> >
> > > > > > +
> > > > > > +/* Get the memory size of QSBR variable */ size_t
> > > > > > +__rte_experimental rte_rcu_qsbr_get_memsize(uint32_t
> > > max_threads) {
> > > > > > + size_t sz;
> > > > > > +
> > > > > > + if (max_threads == 0) {
> > > > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > > > + "%s(): Invalid max_threads %u\n",
> > > > > > + __func__, max_threads);
> > > > > > + rte_errno = EINVAL;
> > > > > > +
> > > > > > + return 1;
> > > > > > + }
> > > > > > +
> > > > > > + sz = sizeof(struct rte_rcu_qsbr);
> > > > > > +
> > > > > > + /* Add the size of quiescent state counter array */
> > > > > > + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> > > > > > +
> > > > > > + /* Add the size of the registered thread ID bitmap array */
> > > > > > + sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
> > > > > > +
> > > > > > + return RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
> > > > >
> > > > > Given that you align here, should you also align in the earlier
> > > > > steps in the computation of sz?
> > > >
> > > > Agree. I will remove the align here and keep the earlier one as
> > > > the intent
> > > is to align the thread ID array.
> > >
> > > Sounds good!
> > Added this in V5
> >
> > >
> > > > > > +}
> > > > > > +
> > > > > > +/* Initialize a quiescent state variable */ int
> > > > > > +__rte_experimental rte_rcu_qsbr_init(struct rte_rcu_qsbr *v,
> > > uint32_t max_threads) {
> > > > > > + size_t sz;
> > > > > > +
> > > > > > + if (v == NULL) {
> > > > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > > > + "%s(): Invalid input parameter\n", __func__);
> > > > > > + rte_errno = EINVAL;
> > > > > > +
> > > > > > + return 1;
> > > > > > + }
> > > > > > +
> > > > > > + sz = rte_rcu_qsbr_get_memsize(max_threads);
> > > > > > + if (sz == 1)
> > > > > > + return 1;
> > > > > > +
> > > > > > + /* Set all the threads to offline */
> > > > > > + memset(v, 0, sz);
> > > > >
> > > > > We calculate sz here, but it looks like the caller must also
> > > > > calculate it in order to correctly allocate the memory
> > > > > referenced by the "v" argument to this function, with bad things
> > > > > happening if the two calculations get different results. Should
> > > > > "v" instead be allocated within this function to avoid this sort of
> problem?
> > > >
> > > > Earlier version allocated the memory with-in this library.
> > > > However, it was
> > > decided to go with the current implementation as it provides
> > > flexibility for the application to manage the memory as it sees fit.
> > > For ex: it could allocate this as part of another structure in a
> > > single allocation. This also falls in line with the similar approach taken in
> > > other libraries.
> > >
> > > So the allocator APIs vary too much to allow a pointer to the
> > > desired allocator function to be passed in? Or do you also want to
> > > allow static allocation? If the latter, would a DEFINE_RTE_RCU_QSBR()
> be of use?
> > >
> > This is done to allow for allocation of memory for QS variable as part
> > of another bigger data structure. This will help in not fragmenting the
> memory. For ex:
> >
> > struct xyz {
> > rte_ring *ring;
> > rte_rcu_qsbr *v;
> > abc *t;
> > };
> > struct xyz c;
> >
> > Memory for the above structure can be allocated in one chunk by
> calculating the size required.
> >
> > In some use cases static allocation might be enough as 'max_threads'
> > might be a compile time constant. I am not sure on how to support both
> dynamic and static 'max_threads'.
>
> Same thought here- would be good to have a static initializer
> (DEFINE_RTE_RCU_QSBR), but that means new compile time limit
> ('max_threads') - thing that we try to avoid.
>
> >
> > > > > > + v->max_threads = max_threads;
> > > > > > + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> > > > > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> > > > > > + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> > > > > > + v->token = RTE_QSBR_CNT_INIT;
> > > > > > +
> > > > > > + return 0;
> > > > > > +}
> > > > > > +
> > > > > > +/* Register a reader thread to report its quiescent state
> > > > > > + * on a QS variable.
> > > > > > + */
> > > > > > +int __rte_experimental
> > > > > > +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned
> > > > > > +int
> > > > > > +thread_id) {
> > > > > > + unsigned int i, id, success;
> > > > > > + uint64_t old_bmap, new_bmap;
> > > > > > +
> > > > > > + if (v == NULL || thread_id >= v->max_threads) {
> > > > > > + rte_log(RTE_LOG_ERR, rcu_log_type,
> > > > > > + "%s(): Invalid input parameter\n", __func__);
> > > > > > + rte_errno = EINVAL;
> > > > > > +
> > > > > > + return 1;
> > > > > > + }
> > > > > > +
> > > > > > + id = thread_id & RTE_QSBR_THRID_MASK;
> > > > > > + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> > > > > > +
> > > > > > + /* Make sure that the counter for registered threads does
> not
> > > > > > + * go out of sync. Hence, additional checks are required.
> > > > > > + */
> > > > > > + /* Check if the thread is already registered */
> > > > > > + old_bmap =
> __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > > > > + __ATOMIC_RELAXED);
> > > > > > + if (old_bmap & 1UL << id)
> > > > > > + return 0;
> > > > > > +
> > > > > > + do {
> > > > > > + new_bmap = old_bmap | (1UL << id);
> > > > > > + success = __atomic_compare_exchange(
> > > > > > +
> RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > > > > > + &old_bmap, &new_bmap, 0,
> > > > > > + __ATOMIC_RELEASE,
> > > > > __ATOMIC_RELAXED);
> > > > > > +
> > > > > > + if (success)
> > > > > > + __atomic_fetch_add(&v->num_threads,
> > > > > > + 1,
> __ATOMIC_RELAXED);
> > > > > > + else if (old_bmap & (1UL << id))
> > > > > > + /* Someone else registered this thread.
> > > > > > + * Counter should not be incremented.
> > > > > > + */
> > > > > > + return 0;
> > > > > > + } while (success == 0);
> > > > >
> > > > > This would be simpler if threads were required to register
> themselves.
> > > > > Maybe you have use cases requiring registration of other
> > > > > threads, but this capability is adding significant complexity,
> > > > > so it might be worth some thought.
> > > > >
> > > > It was simple earlier (__atomic_fetch_or). The complexity is added
> > > > as
> > > 'num_threads' should not go out of sync.
> > >
> > > Hmmm...
> > >
> > > So threads are allowed to register other threads? Or is there some
> > > other reason that concurrent registration is required?
> > >
> > Yes, control plane threads can register the fast path threads. Though,
> > I am not sure how useful it is. I did not want to add the restriction. I
> expect that reader threads will register themselves. The reader threads
> require concurrent registration as they all will be running in parallel.
> > If the requirement of keeping track of the number of threads registered
> currently goes away, then this function will be simple.
> >
> > <snip>
> >
> > > > > > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > > > > > b/lib/librte_rcu/rte_rcu_qsbr.h new file mode 100644 index
> > > > > > 000000000..ff696aeab
> > > > > > --- /dev/null
> > > > > > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > > > > > @@ -0,0 +1,554 @@
> > > > > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > > > > + * Copyright (c) 2018 Arm Limited */
> > > > > > +
> > > > > > +#ifndef _RTE_RCU_QSBR_H_
> > > > > > +#define _RTE_RCU_QSBR_H_
> > > > > > +
> > > > > > +/**
> > > > > > + * @file
> > > > > > + * RTE Quiescent State Based Reclamation (QSBR)
> > > > > > + *
> > > > > > + * Quiescent State (QS) is any point in the thread execution
> > > > > > + * where the thread does not hold a reference to a data
> > > > > > +structure
> > > > > > + * in shared memory. While using lock-less data structures,
> > > > > > +the writer
> > > > > > + * can safely free memory once all the reader threads have
> > > > > > +entered
> > > > > > + * quiescent state.
> > > > > > + *
> > > > > > + * This library provides the ability for the readers to
> > > > > > +report quiescent
> > > > > > + * state and for the writers to identify when all the readers
> > > > > > +have
> > > > > > + * entered quiescent state.
> > > > > > + */
> > > > > > +
> > > > > > +#ifdef __cplusplus
> > > > > > +extern "C" {
> > > > > > +#endif
> > > > > > +
> > > > > > +#include <stdio.h>
> > > > > > +#include <stdint.h>
> > > > > > +#include <errno.h>
> > > > > > +#include <rte_common.h>
> > > > > > +#include <rte_memory.h>
> > > > > > +#include <rte_lcore.h>
> > > > > > +#include <rte_debug.h>
> > > > > > +#include <rte_atomic.h>
> > > > > > +
> > > > > > +extern int rcu_log_type;
> > > > > > +
> > > > > > +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
> > > > > > +#define RCU_DP_LOG(level, fmt, args...) \
> > > > > > +	rte_log(RTE_LOG_ ## level, rcu_log_type, \
> > > > > > +		"%s(): " fmt "\n", __func__, ## args)
> > > > > > +#else
> > > > > > +#define RCU_DP_LOG(level, fmt, args...)
> > > > > > +#endif
> > > > > > +
> > > > > > +/* Registered thread IDs are stored as a bitmap of 64b
> > > > > > +element
> > > array.
> > > > > > + * Given thread id needs to be converted to index into the
> > > > > > +array and
> > > > > > + * the id within the array element.
> > > > > > + */
> > > > > > +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> > > > > > +#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
> > > > > > +	RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
> > > > > > +		RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
> > > > > > +#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
> > > > > > +	((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
> > > > > > +#define RTE_QSBR_THRID_INDEX_SHIFT 6
> > > > > > +#define RTE_QSBR_THRID_MASK 0x3f
> > > > > > +#define RTE_QSBR_THRID_INVALID 0xffffffff
> > > > > > +
> > > > > > +/* Worker thread counter */
> > > > > > +struct rte_rcu_qsbr_cnt {
> > > > > > + uint64_t cnt;
> > > > > > +	/**< Quiescent state counter. Value 0 indicates the thread is offline */
> > > > > > +} __rte_cache_aligned;
> > > > > > +
> > > > > > +#define RTE_QSBR_CNT_THR_OFFLINE 0
> > > > > > +#define RTE_QSBR_CNT_INIT 1
> > > > > > +
> > > > > > +/* RTE Quiescent State variable structure.
> > > > > > + * This structure has two elements that vary in size based on
> > > > > > +the
> > > > > > + * 'max_threads' parameter.
> > > > > > + * 1) Quiescent state counter array
> > > > > > + * 2) Registered thread ID array
> > > > > > + */
> > > > > > +struct rte_rcu_qsbr {
> > > > > > + uint64_t token __rte_cache_aligned;
> > > > > > + /**< Counter to allow for multiple concurrent quiescent
> > > > > > +state queries */
> > > > > > +
> > > > > > + uint32_t num_elems __rte_cache_aligned;
> > > > > > + /**< Number of elements in the thread ID array */
> > > > > > + uint32_t num_threads;
> > > > > > + /**< Number of threads currently using this QS variable */
> > > > > > + uint32_t max_threads;
> > > > > > + /**< Maximum number of threads using this QS variable */
> > > > > > +
> > > > > > + struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
> > > > > > + /**< Quiescent state counter array of 'max_threads'
> elements
> > > > > > +*/
> > > > > > +
> > > > > > + /**< Registered thread IDs are stored in a bitmap array,
> > > > > > + * after the quiescent state counter array.
> > > > > > + */
> > > > > > +} __rte_cache_aligned;
> > > > > > +
> >
> > <snip>
> >
> > > > > > +
> > > > > > +/* Check the quiescent state counter for registered threads
> > > > > > +only, assuming
> > > > > > + * that not all threads have registered.
> > > > > > + */
> > > > > > +static __rte_always_inline int
> > > > > > +__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t
> > > > > > +t, bool
> > > > > > +wait) {
> > > > > > + uint32_t i, j, id;
> > > > > > + uint64_t bmap;
> > > > > > + uint64_t c;
> > > > > > + uint64_t *reg_thread_id;
> > > > > > +
> > > > > > + for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v,
> 0);
> > > > > > + i < v->num_elems;
> > > > > > + i++, reg_thread_id++) {
> > > > > > + /* Load the current registered thread bit map
> before
> > > > > > + * loading the reader thread quiescent state
> counters.
> > > > > > + */
> > > > > > + bmap = __atomic_load_n(reg_thread_id,
> > > > > __ATOMIC_ACQUIRE);
> > > > > > + id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> > > > > > +
> > > > > > + while (bmap) {
> > > > > > + j = __builtin_ctzl(bmap);
> > > > > > + RCU_DP_LOG(DEBUG,
> > > > > > + "%s: check: token = %lu, wait = %d,
> Bit Map
> > > > > = 0x%lx, Thread ID = %d",
> > > > > > + __func__, t, wait, bmap, id + j);
> > > > > > + c = __atomic_load_n(
> > > > > > + &v->qsbr_cnt[id + j].cnt,
> > > > > > + __ATOMIC_ACQUIRE);
> > > > > > + RCU_DP_LOG(DEBUG,
> > > > > > + "%s: status: token = %lu, wait = %d,
> Thread
> > > > > QS cnt = %lu, Thread ID = %d",
> > > > > > + __func__, t, wait, c, id+j);
> > > > > > + /* Counter is not checked for wrap-around
> > > > > condition
> > > > > > + * as it is a 64b counter.
> > > > > > + */
> > > > > > + if (unlikely(c !=
> RTE_QSBR_CNT_THR_OFFLINE && c
> > > > > < t)) {
> > > > >
> > > > > This assumes that a 64-bit counter won't overflow, which is
> > > > > close enough to true given current CPU clock frequencies. ;-)
> > > > >
> > > > > > + /* This thread is not in quiescent
> state */
> > > > > > + if (!wait)
> > > > > > + return 0;
> > > > > > +
> > > > > > + rte_pause();
> > > > > > + /* This thread might have
> unregistered.
> > > > > > + * Re-read the bitmap.
> > > > > > + */
> > > > > > + bmap =
> __atomic_load_n(reg_thread_id,
> > > > > > + __ATOMIC_ACQUIRE);
> > > > > > +
> > > > > > + continue;
> > > > > > + }
> > > > > > +
> > > > > > + bmap &= ~(1UL << j);
> > > > > > + }
> > > > > > + }
> > > > > > +
> > > > > > + return 1;
> > > > > > +}
> > > > > > +
> > > > > > +/* Check the quiescent state counter for all threads,
> > > > > > +assuming that
> > > > > > + * all the threads have registered.
> > > > > > + */
> > > > > > +static __rte_always_inline int __rcu_qsbr_check_all(struct
> > > > > > +rte_rcu_qsbr *v, uint64_t t, bool
> > > > > > +wait)
> > > > >
> > > > > Does checking the bitmap really take long enough to make this
> > > > > worthwhile as a separate function? I would think that the
> > > > > bitmap-checking time would be lost in the noise of cache misses
> > > > > from
> > > the ->cnt loads.
> > > >
> > > > It avoids accessing one cache line. I think this is where the
> > > > savings are
> > > (may be in theory). This is the most probable use case.
> > > > On the other hand, __rcu_qsbr_check_selective() will result in
> > > > savings
> > > (depending on how many threads are currently registered) by avoiding
> > > accessing unwanted counters.
> > >
> > > Do you really expect to be calling this function on any kind of fastpath?
> >
> > Yes. For some of the libraries (rte_hash), the writer is on the fast path.
> >
> > >
> > > > > Sure, if you invoke __rcu_qsbr_check_selective() in a tight loop
> > > > > in the absence of readers, you might see __rcu_qsbr_check_all()
> > > > > being a bit faster. But is that really what DPDK does?
> > > > I see improvements in the synthetic test case (similar to the one
> > > > you
> > > have described, around 27%). However, in the more practical test
> > > cases I do not see any difference.
> > >
> > > If the performance improvement only occurs in a synthetic test case,
> > > does it really make sense to optimize for it?
> > I had to fix a few issues in the performance test cases and added more to
> > do the comparison. These changes are in v5.
> > There are 4 performance tests involving this API.
> > 1) 1 Writer, 'N' readers
> > Writer: qsbr_start, qsbr_check(wait = true)
> > Readers: qsbr_quiescent
> > 2) 'N' writers
> > Writers: qsbr_start, qsbr_check(wait == false)
> > 3) 1 Writer, 'N' readers (this test uses the lock-free rte_hash data
> structure)
> > Writer: hash_del, qsbr_start, qsbr_check(wait = true), validate that
> the reader was able to complete its work successfully
> > Readers: thread_online, hash_lookup, access the pointer - do some
> > work on it, qsbr_quiescent, thread_offline
> > 4) Same as test 3) but qsbr_check (wait == false)
> >
> > There are 2 sets of these tests.
> > a) QS variable is created with number of threads same as number of
> > readers - this will exercise __rcu_qsbr_check_all
> > b) QS variable is created with 128 threads, number of registered
> > threads is same as in a) - this will exercise
> > __rcu_qsbr_check_selective
> >
> > Following are the results on x86 (E5-2660 v4 @ 2.00GHz) comparing from
> > a) to b) (on x86 in my setup, the results are not very stable between
> > runs)
> > 1) 25%
> > 2) -3%
> > 3) -0.4%
> > 4) 1.38%
> >
> > Following are the results on an Arm system comparing from a) to b)
> > (results are not pretty stable between runs)
^^^
Correction, on the Arm system, the results *are* stable (copy-paste error)
> > 1) -3.45%
> > 2) 0%
> > 3) -0.03%
> > 4) -0.04%
> >
> > Konstantin, is it possible to run the tests on your setup and look at the
> results?
>
> I did run V5 on my box (SKX 2.1 GHz) with 17 lcores (1 physical core per
> thread).
> Didn't notice any significant fluctuations between runs, output below.
>
> >rcu_qsbr_perf_autotest
> Number of cores provided = 17
> Perf test with all reader threads registered
> --------------------------------------------
>
> Perf Test: 16 Readers/1 Writer('wait' in qsbr_check == true) Total RCU
> updates = 65707232899 Cycles per 1000 updates: 18482 Total RCU checks =
> 20000000 Cycles per 1000 checks: 3794991
>
> Perf Test: 17 Readers
> Total RCU updates = 1700000000
> Cycles per 1000 updates: 2128
>
> Perf test: 17 Writers ('wait' in qsbr_check == false) Total RCU checks =
> 340000000 Cycles per 1000 checks: 10030
>
> Perf test: 1 writer, 17 readers, 1 QSBR variable, 1 QSBR Query, Blocking
> QSBR Check Following numbers include calls to rte_hash functions Cycles
> per 1 update(online/update/offline): 1984696 Cycles per 1 check(start,
> check): 2619002
>
> Perf test: 1 writer, 17 readers, 1 QSBR variable, 1 QSBR Query, Non-
> Blocking QSBR check Following numbers include calls to rte_hash functions
> Cycles per 1 update(online/update/offline): 2028030 Cycles per 1
> check(start, check): 2876667
>
> Perf test with some of reader threads registered
> ------------------------------------------------
>
> Perf Test: 16 Readers/1 Writer('wait' in qsbr_check == true) Total RCU
> updates = 68850073055 Cycles per 1000 updates: 25490 Total RCU checks =
> 20000000 Cycles per 1000 checks: 5484403
>
> Perf Test: 17 Readers
> Total RCU updates = 1700000000
> Cycles per 1000 updates: 2127
>
> Perf test: 17 Writers ('wait' in qsbr_check == false) Total RCU checks =
> 340000000 Cycles per 1000 checks: 10034
>
> Perf test: 1 writer, 17 readers, 1 QSBR variable, 1 QSBR Query, Blocking
> QSBR Check Following numbers include calls to rte_hash functions Cycles
> per 1 update(online/update/offline): 3604489 Cycles per 1 check(start,
> check): 7077372
>
> Perf test: 1 writer, 17 readers, 1 QSBR variable, 1 QSBR Query, Non-
> Blocking QSBR check Following numbers include calls to rte_hash functions
> Cycles per 1 update(online/update/offline): 3936831 Cycles per 1
> check(start, check): 7262738
>
>
> Test OK
Thanks for running the test. From the numbers, the comparison is as follows:
1) -44%
2) 0.03%
3) -170%
4) -152%
The trend is the same between x86 and Arm. However, x86 shows a drastic improvement with the __rcu_qsbr_check_all function.
>
> Konstantin
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-15 17:39 ` Ananyev, Konstantin
2019-04-15 17:39 ` Ananyev, Konstantin
2019-04-15 18:56 ` Honnappa Nagarahalli
@ 2019-04-15 21:26 ` Stephen Hemminger
2019-04-15 21:26 ` Stephen Hemminger
2019-04-16 5:29 ` Honnappa Nagarahalli
2 siblings, 2 replies; 260+ messages in thread
From: Stephen Hemminger @ 2019-04-15 21:26 UTC (permalink / raw)
To: Ananyev, Konstantin
Cc: Honnappa Nagarahalli, paulmck, Kovacevic, Marko, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
On Mon, 15 Apr 2019 17:39:07 +0000
"Ananyev, Konstantin" <konstantin.ananyev@intel.com> wrote:
> > -----Original Message-----
> > From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > Sent: Monday, April 15, 2019 4:39 PM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > Cc: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; paulmck@linux.ibm.com; Kovacevic, Marko
> > <marko.kovacevic@intel.com>; dev@dpdk.org; Gavin Hu (Arm Technology China) <Gavin.Hu@arm.com>; Dharmik Thakkar
> > <Dharmik.Thakkar@arm.com>; Malvika Gupta <Malvika.Gupta@arm.com>; nd <nd@arm.com>
> > Subject: Re: [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
> >
> > On Mon, 15 Apr 2019 12:24:47 +0000
> > "Ananyev, Konstantin" <konstantin.ananyev@intel.com> wrote:
> >
> > > > -----Original Message-----
> > > > From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > > > Sent: Saturday, April 13, 2019 12:06 AM
> > > > To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> > > > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; paulmck@linux.ibm.com; Kovacevic, Marko <marko.kovacevic@intel.com>;
> > > > dev@dpdk.org; Gavin Hu (Arm Technology China) <Gavin.Hu@arm.com>; Dharmik Thakkar <Dharmik.Thakkar@arm.com>; Malvika
> > Gupta
> > > > <Malvika.Gupta@arm.com>; nd <nd@arm.com>
> > > > Subject: Re: [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
> > > >
> > > > On Fri, 12 Apr 2019 22:24:45 +0000
> > > > Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
> > > >
> > > > > >
> > > > > > On Fri, 12 Apr 2019 15:20:37 -0500
> > > > > > Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> wrote:
> > > > > >
> > > > > > > Add RCU library supporting quiescent state based memory reclamation
> > > > > > method.
> > > > > > > This library helps identify the quiescent state of the reader threads
> > > > > > > so that the writers can free the memory associated with the lock less
> > > > > > > data structures.
> > > > > > >
> > > > > > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > > > > > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > > > > > > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > > > > > > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > > > > > > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > > > >
> > > > > > After evaluating long term API/ABI issues, I think you need to get rid of almost
> > > > > > all use of inline and visible structures. Yes it might be marginally slower, but
> > > > > > you thank me the first time you have to fix something.
> > > > > >
> > > > > Agree, I was planning on another version to address this (I am yet to take a look at your patch addressing the ABI).
> > > > > The structure visibility definitely needs to be addressed.
> > > > > For the inline functions, is the plan to convert all the inline functions in DPDK? If yes, I think we need to consider the performance
> > > > difference. May be consider L3-fwd application, change all the inline functions in its path and run a test?
> > > >
> > > > Every function that is not in the direct datapath should not be inline.
> > > > Exceptions or things like rx/tx burst, ring enqueue/dequeue, and packet alloc/free
> > >
> > > Plus synchronization routines: spin/rwlock/barrier, etc.
> > > I think rcu should be one of such exceptions - it is just another synchronization mechanism after all
> > > (just a bit more sophisticated).
> > > Konstantin
> >
> > If you look at the other userspace RCU, you will see that the only inlines
> > are rcu_read_lock, rcu_read_unlock and rcu_reference/rcu_assign_pointer.
> >
> > The synchronization logic is all real functions.
>
> In fact, I think urcu provides both flavors:
> https://github.com/urcu/userspace-rcu/blob/master/include/urcu/static/urcu-qsbr.h
> I still don't understand why we have to treat it differently than, say, spin-lock/ticket-lock or rwlock.
> If we have gone all the way to create our own version of rcu, we probably want it to be as fast as possible
> (I know that the main speedup should come from the fact that readers don't have to wait for the writer to finish, but still...)
>
> Konstantin
>
Having locking functions inline is already a problem in current releases.
The implementation cannot be improved without breaking the ABI (or doing special
workarounds like lock v2).
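Stephen's point can be illustrated with a small, hypothetical sketch (not actual DPDK code): an inline accessor bakes the structure layout into every application binary at compile time, so the library cannot later change that layout without breaking already-built callers.

```c
#include <stdint.h>

/* Hypothetical "v1" public structure -- its layout is visible to every
 * application that includes the header. */
struct counter {
	uint32_t value;	/* this field's offset is compiled into callers */
};

/* Inline accessor: the field offset is resolved when the APPLICATION is
 * compiled.  If a later library release inserts a field before 'value',
 * binaries built against v1 silently read the wrong offset -- an ABI
 * break with no link-time error. */
static inline uint32_t counter_get_inline(const struct counter *c)
{
	return c->value;
}

/* Non-inline accessor: the offset is resolved inside the library, so the
 * layout can change between releases (provided the struct itself is kept
 * opaque to applications). */
uint32_t counter_get(const struct counter *c)
{
	return c->value;
}
```

The cost of the non-inline version is one function call per access, which is exactly the performance trade-off being debated in this thread.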
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism
2019-04-15 17:29 ` [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism Ananyev, Konstantin
2019-04-15 17:29 ` Ananyev, Konstantin
@ 2019-04-16 5:10 ` Honnappa Nagarahalli
2019-04-16 5:10 ` Honnappa Nagarahalli
1 sibling, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-16 5:10 UTC (permalink / raw)
To: Ananyev, Konstantin, stephen, paulmck, Kovacevic, Marko, dev
Cc: Gavin Hu (Arm Technology China), Dharmik Thakkar, Malvika Gupta, nd
<snip>
>
>
> Just to let you know - observed some failures with it for meson.
> Fixed it locally by:
>
> diff --git a/app/test/meson.build b/app/test/meson.build index
> 1a2ee18a5..e3e566bce 100644
> --- a/app/test/meson.build
> +++ b/app/test/meson.build
> @@ -138,7 +138,7 @@ test_deps = ['acl',
> 'reorder',
> 'ring',
> 'stack',
> - 'timer'
> + 'timer',
> 'rcu'
> ]
>
> diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build index
> c009ae4b7..0c2d5a2e0 100644
> --- a/lib/librte_rcu/meson.build
> +++ b/lib/librte_rcu/meson.build
> @@ -1,5 +1,7 @@
> # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2018 Arm Limited
>
> +allow_experimental_apis = true
> +
> sources = files('rte_rcu_qsbr.c')
> headers = files('rte_rcu_qsbr.h')
Thank you. I was able to reproduce the error, and these changes fix it; I will include them in the next version.
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-15 21:26 ` Stephen Hemminger
2019-04-15 21:26 ` Stephen Hemminger
@ 2019-04-16 5:29 ` Honnappa Nagarahalli
2019-04-16 5:29 ` Honnappa Nagarahalli
2019-04-16 14:54 ` Stephen Hemminger
1 sibling, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-16 5:29 UTC (permalink / raw)
To: Stephen Hemminger, Ananyev, Konstantin
Cc: paulmck, Kovacevic, Marko, dev, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
> > > > > > > On Fri, 12 Apr 2019 15:20:37 -0500 Honnappa Nagarahalli
> > > > > > > <honnappa.nagarahalli@arm.com> wrote:
> > > > > > >
> > > > > > > > Add RCU library supporting quiescent state based memory
> > > > > > > > reclamation
> > > > > > > method.
> > > > > > > > This library helps identify the quiescent state of the
> > > > > > > > reader threads so that the writers can free the memory
> > > > > > > > associated with the lock less data structures.
> > > > > > > >
> > > > > > > > Signed-off-by: Honnappa Nagarahalli
> > > > > > > > <honnappa.nagarahalli@arm.com>
> > > > > > > > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > > > > > > > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > > > > > > > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > > > > > > > Acked-by: Konstantin Ananyev
> > > > > > > > <konstantin.ananyev@intel.com>
> > > > > > >
> > > > > > > After evaluating long term API/ABI issues, I think you need
> > > > > > > to get rid of almost all use of inline and visible
> > > > > > > structures. Yes it might be marginally slower, but you thank me
> the first time you have to fix something.
> > > > > > >
> > > > > > Agree, I was planning on another version to address this (I am yet
> to take a look at your patch addressing the ABI).
> > > > > > The structure visibility definitely needs to be addressed.
> > > > > > For the inline functions, is the plan to convert all the
> > > > > > inline functions in DPDK? If yes, I think we need to consider
> > > > > > the performance
> > > > > difference. May be consider L3-fwd application, change all the
> inline functions in its path and run a test?
> > > > >
> > > > > Every function that is not in the direct datapath should not be
> inline.
> > > > > Exceptions or things like rx/tx burst, ring enqueue/dequeue, and
> > > > > packet alloc/free
I do not understand how DPDK can claim ABI compatibility if we have inline functions (unless we freeze any development in these inline functions forever).
> > > >
> > > > Plus synchronization routines: spin/rwlock/barrier, etc.
> > > > I think rcu should be one of such exceptions - it is just another
> > > > synchronization mechanism after all (just a bit more sophisticated).
> > > > Konstantin
> > >
> > > If you look at the other userspace RCU, you wil see that the only
> > > inlines are the rcu_read_lock,rcu_read_unlock and
> rcu_reference/rcu_assign_pointer.
> > >
> > > The synchronization logic is all real functions.
> >
> > In fact, I think urcu provides both flavors:
> > https://github.com/urcu/userspace-
> rcu/blob/master/include/urcu/static/
> > urcu-qsbr.h I still don't understand why we have to treat it
> > differently then let say spin-lock/ticket-lock or rwlock.
> > If we gone all the way to create our own version of rcu, we probably
> > want it to be as fast as possible (I know that main speedup should
> > come from the fact that readers don't have to wait for writer to
> > finish, but still...)
> >
> > Konstantin
> >
>
> Having locking functions inline is already a problem in current releases.
> The implementation can not be improved without breaking ABI (or doing
> special workarounds like lock v2)
I think the ABI and inline function discussion needs to be taken up in a different thread.
Currently, I am looking at hiding the structure contents. I looked at your patch [1]; it is a different case from what I have in this patch. It is a pretty generic use case as well (a similar situation exists in other libraries). I think a generic solution should be agreed upon.
If we have to hide the structure contents, the handle to the QS variable returned to the application needs to be opaque. I suggest using 'void *', behind which any structure can be used.
typedef void * rte_rcu_qsbr_t;
typedef void * rte_hash_t;
But that requires typecasting.
[1] http://patchwork.dpdk.org/cover/52609/
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-16 5:29 ` Honnappa Nagarahalli
2019-04-16 5:29 ` Honnappa Nagarahalli
@ 2019-04-16 14:54 ` Stephen Hemminger
2019-04-16 14:54 ` Stephen Hemminger
2019-04-16 16:56 ` Honnappa Nagarahalli
1 sibling, 2 replies; 260+ messages in thread
From: Stephen Hemminger @ 2019-04-16 14:54 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: Ananyev, Konstantin, paulmck, Kovacevic, Marko, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
On Tue, 16 Apr 2019 05:29:21 +0000
Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
> > > > > > > > On Fri, 12 Apr 2019 15:20:37 -0500 Honnappa Nagarahalli
> > > > > > > > <honnappa.nagarahalli@arm.com> wrote:
> > > > > > > >
> > > > > > > > > Add RCU library supporting quiescent state based memory
> > > > > > > > > reclamation
> > > > > > > > method.
> > > > > > > > > This library helps identify the quiescent state of the
> > > > > > > > > reader threads so that the writers can free the memory
> > > > > > > > > associated with the lock less data structures.
> > > > > > > > >
> > > > > > > > > Signed-off-by: Honnappa Nagarahalli
> > > > > > > > > <honnappa.nagarahalli@arm.com>
> > > > > > > > > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > > > > > > > > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > > > > > > > > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > > > > > > > > Acked-by: Konstantin Ananyev
> > > > > > > > > <konstantin.ananyev@intel.com>
> > > > > > > >
> > > > > > > > After evaluating long term API/ABI issues, I think you need
> > > > > > > > to get rid of almost all use of inline and visible
> > > > > > > > structures. Yes it might be marginally slower, but you thank me
> > the first time you have to fix something.
> > > > > > > >
> > > > > > > Agree, I was planning on another version to address this (I am yet
> > to take a look at your patch addressing the ABI).
> > > > > > > The structure visibility definitely needs to be addressed.
> > > > > > > For the inline functions, is the plan to convert all the
> > > > > > > inline functions in DPDK? If yes, I think we need to consider
> > > > > > > the performance
> > > > > > difference. May be consider L3-fwd application, change all the
> > inline functions in its path and run a test?
> > > > > >
> > > > > > Every function that is not in the direct datapath should not be
> > inline.
> > > > > > Exceptions or things like rx/tx burst, ring enqueue/dequeue, and
> > > > > > packet alloc/free
> I do not understand how DPDK can claim ABI compatibility if we have inline functions (unless we freeze any development in these inline functions forever).
>
> > > > >
> > > > > Plus synchronization routines: spin/rwlock/barrier, etc.
> > > > > I think rcu should be one of such exceptions - it is just another
> > > > > synchronization mechanism after all (just a bit more sophisticated).
> > > > > Konstantin
> > > >
> > > > If you look at the other userspace RCU, you wil see that the only
> > > > inlines are the rcu_read_lock,rcu_read_unlock and
> > rcu_reference/rcu_assign_pointer.
> > > >
> > > > The synchronization logic is all real functions.
> > >
> > > In fact, I think urcu provides both flavors:
> > > https://github.com/urcu/userspace-
> > rcu/blob/master/include/urcu/static/
> > > urcu-qsbr.h I still don't understand why we have to treat it
> > > differently then let say spin-lock/ticket-lock or rwlock.
> > > If we gone all the way to create our own version of rcu, we probably
> > > want it to be as fast as possible (I know that main speedup should
> > > come from the fact that readers don't have to wait for writer to
> > > finish, but still...)
> > >
> > > Konstantin
> > >
> >
> > Having locking functions inline is already a problem in current releases.
> > The implementation can not be improved without breaking ABI (or doing
> > special workarounds like lock v2)
> I think ABI and inline function discussion needs to be taken up in a different thread.
>
> Currently, I am looking to hide the structure visibility. I looked at your patch [1], it is a different case than what I have in this patch. It is a pretty generic use case as well (similar situation exists in other libraries). I think a generic solution should be agreed upon.
>
> If we have to hide the structure content, the handle to QS variable returned to the application needs to be opaque. I suggest using 'void *' behind which any structure can be used.
>
> typedef void * rte_rcu_qsbr_t;
> typedef void * rte_hash_t;
>
> But it requires typecasting.
>
> [1] http://patchwork.dpdk.org/cover/52609/
C allows a structure to be declared without knowing what is in it.
Therefore
typedef struct rte_rcu_qsbr rte_rcu_qsbr_t;
is preferred (or do it without the typedef):
struct rte_rcu_qsbr;
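A minimal sketch of the opaque-handle pattern Stephen describes (the structure name mirrors the discussion, but the layout and the alloc/free functions are hypothetical, not the final DPDK API). The public header exposes only an incomplete type, which keeps the handle type-safe -- unlike a 'void *' typedef -- while still hiding the layout from applications:

```c
#include <stdlib.h>

/* --- public header: forward declaration only, layout hidden --- */
struct rte_rcu_qsbr;	/* incomplete type; applications hold only pointers */

struct rte_rcu_qsbr *qsbr_alloc(unsigned int max_threads);
void qsbr_free(struct rte_rcu_qsbr *v);

/* --- library source: the real definition, invisible to callers --- */
struct rte_rcu_qsbr {
	unsigned int max_threads;
	/* ... token counter, per-thread quiescent-state array, etc. ... */
};

struct rte_rcu_qsbr *qsbr_alloc(unsigned int max_threads)
{
	struct rte_rcu_qsbr *v = calloc(1, sizeof(*v));

	if (v != NULL)
		v->max_threads = max_threads;
	return v;
}

void qsbr_free(struct rte_rcu_qsbr *v)
{
	free(v);
}
```

With this pattern, passing a 'struct rte_hash *' where a 'struct rte_rcu_qsbr *' is expected becomes a compile-time error, which a 'void *' handle would silently accept.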
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-16 14:54 ` Stephen Hemminger
2019-04-16 14:54 ` Stephen Hemminger
@ 2019-04-16 16:56 ` Honnappa Nagarahalli
2019-04-16 16:56 ` Honnappa Nagarahalli
2019-04-16 21:22 ` Stephen Hemminger
1 sibling, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-16 16:56 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Ananyev, Konstantin, paulmck, Kovacevic, Marko, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd, nd
>
> > > > > > > > > On Fri, 12 Apr 2019 15:20:37 -0500 Honnappa Nagarahalli
> > > > > > > > > <honnappa.nagarahalli@arm.com> wrote:
> > > > > > > > >
> > > > > > > > > > Add RCU library supporting quiescent state based
> > > > > > > > > > memory reclamation
> > > > > > > > > method.
> > > > > > > > > > This library helps identify the quiescent state of the
> > > > > > > > > > reader threads so that the writers can free the memory
> > > > > > > > > > associated with the lock less data structures.
> > > > > > > > > >
> > > > > > > > > > Signed-off-by: Honnappa Nagarahalli
> > > > > > > > > > <honnappa.nagarahalli@arm.com>
> > > > > > > > > > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > > > > > > > > > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > > > > > > > > > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > > > > > > > > > Acked-by: Konstantin Ananyev
> > > > > > > > > > <konstantin.ananyev@intel.com>
> > > > > > > > >
> > > > > > > > > After evaluating long term API/ABI issues, I think you
> > > > > > > > > need to get rid of almost all use of inline and visible
> > > > > > > > > structures. Yes it might be marginally slower, but you
> > > > > > > > > will thank me
> > > the first time you have to fix something.
> > > > > > > > >
> > > > > > > > Agree, I was planning on another version to address this
> > > > > > > > (I am yet
> > > to take a look at your patch addressing the ABI).
> > > > > > > > The structure visibility definitely needs to be addressed.
> > > > > > > > For the inline functions, is the plan to convert all the
> > > > > > > > inline functions in DPDK? If yes, I think we need to
> > > > > > > > consider the performance
> > > > > > > difference. Maybe consider the L3-fwd application, change all
> > > > > > > the
> > > inline functions in its path and run a test?
> > > > > > >
> > > > > > > Every function that is not in the direct datapath should not
> > > > > > > be
> > > inline.
> > > > > > > Exceptions are things like rx/tx burst, ring enqueue/dequeue,
> > > > > > > and packet alloc/free
> > I do not understand how DPDK can claim ABI compatibility if we have
> inline functions (unless we freeze any development in these inline functions
> forever).
> >
> > > > > >
> > > > > > Plus synchronization routines: spin/rwlock/barrier, etc.
> > > > > > I think rcu should be one of such exceptions - it is just
> > > > > > another synchronization mechanism after all (just a bit more
> sophisticated).
> > > > > > Konstantin
> > > > >
> > > > > If you look at the other userspace RCU, you will see that the
> > > > > only inlines are the rcu_read_lock,rcu_read_unlock and
> > > rcu_reference/rcu_assign_pointer.
> > > > >
> > > > > The synchronization logic is all real functions.
> > > >
> > > > In fact, I think urcu provides both flavors:
> > > > https://github.com/urcu/userspace-
> > > rcu/blob/master/include/urcu/static/
> > > > urcu-qsbr.h. I still don't understand why we have to treat it
> > > > differently than, say, spin-lock/ticket-lock or rwlock.
> > > > If we have gone all the way to create our own version of rcu, we
> > > > probably want it to be as fast as possible (I know that main
> > > > speedup should come from the fact that readers don't have to wait
> > > > for writer to finish, but still...)
> > > >
> > > > Konstantin
> > > >
> > >
> > > Having locking functions inline is already a problem in current releases.
> > > The implementation can not be improved without breaking ABI (or
> > > doing special workarounds like lock v2)
> > I think ABI and inline function discussion needs to be taken up in a
> different thread.
> >
> > Currently, I am looking to hide the structure visibility. I looked at your
> patch [1], it is a different case than what I have in this patch. It is a pretty
> generic use case as well (similar situation exists in other libraries). I think a
> generic solution should be agreed upon.
> >
> > If we have to hide the structure content, the handle to QS variable
> returned to the application needs to be opaque. I suggest using 'void *'
> behind which any structure can be used.
> >
> > typedef void * rte_rcu_qsbr_t;
> > typedef void * rte_hash_t;
> >
> > But it requires typecasting.
> >
> > [1] http://patchwork.dpdk.org/cover/52609/
>
> C allows a structure to be declared without knowing what is in it; therefore:
>
> typedef struct rte_rcu_qsbr rte_rcu_qsbr_t;
>
> is preferred (or do it without typedef)
>
> struct rte_rcu_qsbr;
I see that the rte_hash library uses the same approach (struct rte_hash in rte_hash.h, though it is marked as internal). But the ABI Laboratory tool [1] seems to be reporting incorrect numbers for this library even though the internal structure has changed.
[1] https://abi-laboratory.pro/index.php?view=compat_report&l=dpdk&v1=19.02&v2=current&obj=66794&kind=abi
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-16 16:56 ` Honnappa Nagarahalli
2019-04-16 16:56 ` Honnappa Nagarahalli
@ 2019-04-16 21:22 ` Stephen Hemminger
2019-04-16 21:22 ` Stephen Hemminger
2019-04-17 1:45 ` Honnappa Nagarahalli
1 sibling, 2 replies; 260+ messages in thread
From: Stephen Hemminger @ 2019-04-16 21:22 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: Ananyev, Konstantin, paulmck, Kovacevic, Marko, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
On Tue, 16 Apr 2019 16:56:32 +0000
Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
> >
> > > > > > > > > > On Fri, 12 Apr 2019 15:20:37 -0500 Honnappa Nagarahalli
> > > > > > > > > > <honnappa.nagarahalli@arm.com> wrote:
> > > > > > > > > >
> > > > > > > > > > > Add RCU library supporting quiescent state based
> > > > > > > > > > > memory reclamation
> > > > > > > > > > method.
> > > > > > > > > > > This library helps identify the quiescent state of the
> > > > > > > > > > > reader threads so that the writers can free the memory
> > > > > > > > > > > associated with the lock less data structures.
> > > > > > > > > > >
> > > > > > > > > > > Signed-off-by: Honnappa Nagarahalli
> > > > > > > > > > > <honnappa.nagarahalli@arm.com>
> > > > > > > > > > > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > > > > > > > > > > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > > > > > > > > > > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > > > > > > > > > > Acked-by: Konstantin Ananyev
> > > > > > > > > > > <konstantin.ananyev@intel.com>
> > > > > > > > > >
> > > > > > > > > > After evaluating long term API/ABI issues, I think you
> > > > > > > > > > need to get rid of almost all use of inline and visible
> > > > > > > > > > structures. Yes it might be marginally slower, but you
> > > > > > > > > > will thank me
> > > > the first time you have to fix something.
> > > > > > > > > >
> > > > > > > > > Agree, I was planning on another version to address this
> > > > > > > > > (I am yet
> > > > to take a look at your patch addressing the ABI).
> > > > > > > > > The structure visibility definitely needs to be addressed.
> > > > > > > > > For the inline functions, is the plan to convert all the
> > > > > > > > > inline functions in DPDK? If yes, I think we need to
> > > > > > > > > consider the performance
> > > > > > > > difference. Maybe consider the L3-fwd application, change
> > > > > > > > the
> > > > inline functions in its path and run a test?
> > > > > > > >
> > > > > > > > Every function that is not in the direct datapath should not
> > > > > > > > be
> > > > inline.
> > > > > > > > Exceptions are things like rx/tx burst, ring enqueue/dequeue,
> > > > > > > > and packet alloc/free
> > > I do not understand how DPDK can claim ABI compatibility if we have
> > inline functions (unless we freeze any development in these inline functions
> > forever).
> > >
> > > > > > >
> > > > > > > Plus synchronization routines: spin/rwlock/barrier, etc.
> > > > > > > I think rcu should be one of such exceptions - it is just
> > > > > > > another synchronization mechanism after all (just a bit more
> > sophisticated).
> > > > > > > Konstantin
> > > > > >
> > > > > > If you look at the other userspace RCU, you will see that the
> > > > > > only inlines are the rcu_read_lock,rcu_read_unlock and
> > > > rcu_reference/rcu_assign_pointer.
> > > > > >
> > > > > > The synchronization logic is all real functions.
> > > > >
> > > > > In fact, I think urcu provides both flavors:
> > > > > https://github.com/urcu/userspace-
> > > > rcu/blob/master/include/urcu/static/
> > > > > urcu-qsbr.h. I still don't understand why we have to treat it
> > > > > differently than, say, spin-lock/ticket-lock or rwlock.
> > > > > If we have gone all the way to create our own version of rcu, we
> > > > > probably want it to be as fast as possible (I know that main
> > > > > speedup should come from the fact that readers don't have to wait
> > > > > for writer to finish, but still...)
> > > > >
> > > > > Konstantin
> > > > >
> > > >
> > > > Having locking functions inline is already a problem in current releases.
> > > > The implementation can not be improved without breaking ABI (or
> > > > doing special workarounds like lock v2)
> > > I think ABI and inline function discussion needs to be taken up in a
> > different thread.
> > >
> > > Currently, I am looking to hide the structure visibility. I looked at your
> > patch [1], it is a different case than what I have in this patch. It is a pretty
> > generic use case as well (similar situation exists in other libraries). I think a
> > generic solution should be agreed upon.
> > >
> > > If we have to hide the structure content, the handle to QS variable
> > returned to the application needs to be opaque. I suggest using 'void *'
> > behind which any structure can be used.
> > >
> > > typedef void * rte_rcu_qsbr_t;
> > > typedef void * rte_hash_t;
> > >
> > > But it requires typecasting.
> > >
> > > [1] http://patchwork.dpdk.org/cover/52609/
> >
> > C allows a structure to be declared without knowing what is in it; therefore:
> >
> > typedef struct rte_rcu_qsbr rte_rcu_qsbr_t;
> >
> > is preferred (or do it without typedef)
> >
> > struct rte_rcu_qsbr;
>
> I see that the rte_hash library uses the same approach (struct rte_hash in rte_hash.h, though it is marked as internal). But the ABI Laboratory tool [1] seems to be reporting incorrect numbers for this library even though the internal structure has changed.
>
> [1] https://abi-laboratory.pro/index.php?view=compat_report&l=dpdk&v1=19.02&v2=current&obj=66794&kind=abi
The problem is that the rte_hash structure is exposed as part of the ABI in rte_cuckoo_hash.h.
This was a mistake.
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-16 21:22 ` Stephen Hemminger
2019-04-16 21:22 ` Stephen Hemminger
@ 2019-04-17 1:45 ` Honnappa Nagarahalli
2019-04-17 1:45 ` Honnappa Nagarahalli
2019-04-17 13:39 ` Ananyev, Konstantin
1 sibling, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-17 1:45 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Ananyev, Konstantin, paulmck, Kovacevic, Marko, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
> > > > > > > > > > >
> > > > > > > > > > > After evaluating long term API/ABI issues, I think
> > > > > > > > > > > you need to get rid of almost all use of inline and
> > > > > > > > > > > visible structures. Yes it might be marginally
> > > > > > > > > > > slower, but you will thank me
> > > > > the first time you have to fix something.
> > > > > > > > > > >
> > > > > > > > > > Agree, I was planning on another version to address
> > > > > > > > > > this (I am yet
> > > > > to take a look at your patch addressing the ABI).
> > > > > > > > > > The structure visibility definitely needs to be addressed.
> > > > > > > > > > For the inline functions, is the plan to convert all
> > > > > > > > > > the inline functions in DPDK? If yes, I think we need
> > > > > > > > > > to consider the performance
> > > > > > > > > difference. Maybe consider the L3-fwd application, change
> > > > > > > > > all the
> > > > > inline functions in its path and run a test?
> > > > > > > > >
> > > > > > > > > Every function that is not in the direct datapath should
> > > > > > > > > not be
> > > > > inline.
> > > > > > > > > Exceptions are things like rx/tx burst, ring
> > > > > > > > > enqueue/dequeue, and packet alloc/free
> > > > I do not understand how DPDK can claim ABI compatibility if we
> > > > have
> > > inline functions (unless we freeze any development in these inline
> > > functions forever).
> > > >
> > > > > > > >
> > > > > > > > Plus synchronization routines: spin/rwlock/barrier, etc.
> > > > > > > > I think rcu should be one of such exceptions - it is just
> > > > > > > > another synchronization mechanism after all (just a bit
> > > > > > > > more
> > > sophisticated).
> > > > > > > > Konstantin
> > > > > > >
> > > > > > > If you look at the other userspace RCU, you will see that the
> > > > > > > only inlines are the rcu_read_lock,rcu_read_unlock and
> > > > > rcu_reference/rcu_assign_pointer.
> > > > > > >
> > > > > > > The synchronization logic is all real functions.
> > > > > >
> > > > > > In fact, I think urcu provides both flavors:
> > > > > > https://github.com/urcu/userspace-
> > > > > rcu/blob/master/include/urcu/static/
> > > > > > urcu-qsbr.h. I still don't understand why we have to treat it
> > > > > > differently than, say, spin-lock/ticket-lock or rwlock.
> > > > > > If we have gone all the way to create our own version of rcu, we
> > > > > > probably want it to be as fast as possible (I know that main
> > > > > > speedup should come from the fact that readers don't have to
> > > > > > wait for writer to finish, but still...)
> > > > > >
> > > > > > Konstantin
> > > > > >
> > > > >
> > > > > Having locking functions inline is already a problem in current
> releases.
> > > > > The implementation can not be improved without breaking ABI (or
> > > > > doing special workarounds like lock v2)
> > > > I think ABI and inline function discussion needs to be taken up in
> > > > a
> > > different thread.
> > > >
> > > > Currently, I am looking to hide the structure visibility. I looked
> > > > at your
> > > patch [1], it is a different case than what I have in this patch. It
> > > is a pretty generic use case as well (similar situation exists in
> > > other libraries). I think a generic solution should be agreed upon.
> > > >
> > > > If we have to hide the structure content, the handle to QS
> > > > variable
> > > returned to the application needs to be opaque. I suggest using 'void *'
> > > behind which any structure can be used.
> > > >
> > > > typedef void * rte_rcu_qsbr_t;
> > > > typedef void * rte_hash_t;
> > > >
> > > > But it requires typecasting.
> > > >
> > > > [1] http://patchwork.dpdk.org/cover/52609/
> > >
> > > C allows a structure to be declared without knowing what is in it;
> > > therefore:
> > >
> > > typedef struct rte_rcu_qsbr rte_rcu_qsbr_t;
> > >
> > > is preferred (or do it without typedef)
> > >
> > > struct rte_rcu_qsbr;
> >
> > I see that the rte_hash library uses the same approach (struct rte_hash in
> > rte_hash.h, though it is marked as internal). But the ABI Laboratory tool
> > [1] seems to be reporting incorrect numbers for this library even though
> > the internal structure has changed.
> >
> > [1]
> > https://abi-
> laboratory.pro/index.php?view=compat_report&l=dpdk&v1=19.0
> > 2&v2=current&obj=66794&kind=abi
>
> The problem is that the rte_hash structure is exposed as part of the ABI in
> rte_cuckoo_hash.h. This was a mistake.
Do you mean due to the use of a structure with the same name? I am wondering if it is just a tools issue. The application is not supposed to include rte_cuckoo_hash.h.
For the RCU library, we either need to go all functions or leave it the way it is. I do not see a point in trying to hide the internal structure while keeping inline functions.
I converted the inline functions to function calls.
Testing on an Arm platform (results *are* repeatable) shows a very minimal drop (0.1% to 0.2%) in performance while using the lock-free rte_hash data structure. But one of the test cases, which just spins, shows a considerable drop (41%).
Testing on x86 (Xeon Gold 6132 CPU @ 2.60GHz, results *are* pretty repeatable) shows performance improvements (7% to 8%) while using the lock-free rte_hash data structure. The test cases which just spin show significant drops (14%, 155%, 231%).
Konstantin, any thoughts on the results?
I will send out v6, which will fix the issues reported so far. The function vs inline question is still open; we need to close it soon.
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-17 1:45 ` Honnappa Nagarahalli
@ 2019-04-17 1:45 ` Honnappa Nagarahalli
2019-04-17 13:39 ` Ananyev, Konstantin
1 sibling, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-17 1:45 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Ananyev, Konstantin, paulmck, Kovacevic, Marko, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
> > > > > > > > > > >
> > > > > > > > > > > After evaluating long term API/ABI issues, I think
> > > > > > > > > > > you need to get rid of almost all use of inline and
> > > > > > > > > > > visible structures. Yes it might be marginally
> > > > > > > > > > > slower, but you thank me
> > > > > the first time you have to fix something.
> > > > > > > > > > >
> > > > > > > > > > Agree, I was planning on another version to address
> > > > > > > > > > this (I am yet
> > > > > to take a look at your patch addressing the ABI).
> > > > > > > > > > The structure visibility definitely needs to be addressed.
> > > > > > > > > > For the inline functions, is the plan to convert all
> > > > > > > > > > the inline functions in DPDK? If yes, I think we need
> > > > > > > > > > to consider the performance
> > > > > > > > > difference. May be consider L3-fwd application, change
> > > > > > > > > all the
> > > > > inline functions in its path and run a test?
> > > > > > > > >
> > > > > > > > > Every function that is not in the direct datapath should
> > > > > > > > > not be
> > > > > inline.
> > > > > > > > > Exceptions or things like rx/tx burst, ring
> > > > > > > > > enqueue/dequeue, and packet alloc/free
> > > > I do not understand how DPDK can claim ABI compatibility if we
> > > > have
> > > inline functions (unless we freeze any development in these inline
> > > functions forever).
> > > >
> > > > > > > >
> > > > > > > > Plus synchronization routines: spin/rwlock/barrier, etc.
> > > > > > > > I think rcu should be one of such exceptions - it is just
> > > > > > > > another synchronization mechanism after all (just a bit
> > > > > > > > more
> > > sophisticated).
> > > > > > > > Konstantin
> > > > > > >
> > > > > > > If you look at the other userspace RCU, you wil see that the
> > > > > > > only inlines are the rcu_read_lock,rcu_read_unlock and
> > > > > rcu_reference/rcu_assign_pointer.
> > > > > > >
> > > > > > > The synchronization logic is all real functions.
> > > > > >
> > > > > > In fact, I think urcu provides both flavors:
> > > > > > https://github.com/urcu/userspace-
> > > > > rcu/blob/master/include/urcu/static/
> > > > > > urcu-qsbr.h I still don't understand why we have to treat it
> > > > > > differently then let say spin-lock/ticket-lock or rwlock.
> > > > > > If we gone all the way to create our own version of rcu, we
> > > > > > probably want it to be as fast as possible (I know that main
> > > > > > speedup should come from the fact that readers don't have to
> > > > > > wait for writer to finish, but still...)
> > > > > >
> > > > > > Konstantin
> > > > > >
> > > > >
> > > > > Having locking functions inline is already a problem in current
> releases.
> > > > > The implementation can not be improved without breaking ABI (or
> > > > > doing special workarounds like lock v2)
> > > > I think ABI and inline function discussion needs to be taken up in
> > > > a
> > > different thread.
> > > >
> > > > Currently, I am looking to hide the structure visibility. I looked
> > > > at your
> > > patch [1], it is a different case than what I have in this patch. It
> > > is a pretty generic use case as well (similar situation exists in
> > > other libraries). I think a generic solution should be agreed upon.
> > > >
> > > > If we have to hide the structure content, the handle to QS
> > > > variable
> > > returned to the application needs to be opaque. I suggest using 'void *'
> > > behind which any structure can be used.
> > > >
> > > > typedef void * rte_rcu_qsbr_t;
> > > > typedef void * rte_hash_t;
> > > >
> > > > But it requires typecasting.
> > > >
> > > > [1] http://patchwork.dpdk.org/cover/52609/
> > >
> > > C allows structure to be defined without knowing what is in it
> therefore.
> > >
> > > typedef struct rte_rcu_qsbr rte_rcu_qsbr_t;
> > >
> > > is preferred (or do it without typedef)
> > >
> > > struct rte_rcu_qsbr;
> >
> > I see that rte_hash library uses the same approach (struct rte_hash in
> rte_hash.h, though it is marking as internal). But the ABI Laboratory tool
> [1] seems to be reporting incorrect numbers for this library even though
> the internal structure is changed.
> >
> > [1]
> > https://abi-
> laboratory.pro/index.php?view=compat_report&l=dpdk&v1=19.0
> > 2&v2=current&obj=66794&kind=abi
>
> The problem is rte_hash structure is exposed as part of ABI in
> rte_cuckoo_hash.h This was a mistake.
Do you mean, due to the use of structure with the same name? I am wondering if it is just a tools issue. The application is not supposed to include rte_cuckoo_hash.h.
For the RCU library, we either need to go all functions or leave it the way it is. I do not see a point in trying to hide the internal structure while having inline functions.
I converted the inline functions to function calls.
Testing on an Arm platform (results *are* repeatable) shows a very minimal drop (0.1% to 0.2%) in performance while using the lock-free rte_hash data structure. But one of the test cases, which is just spinning, shows a significant drop (41%).
Testing on x86 (Xeon Gold 6132 CPU @ 2.60GHz, results *are* pretty repeatable) shows performance improvements (7% to 8%) while using the lock-free rte_hash data structure. The test cases that are just spinning show significant drops (14%, 155%, 231%).
Konstantin, any thoughts on the results?
I will send out V6 which will fix issues reported so far. The function vs inline part is still open, need to close it soon.
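The opaque-handle approach discussed earlier in this thread (a forward declaration like 'struct rte_rcu_qsbr;' with the definition kept private) can be illustrated with a hypothetical counter type; this is a single-file sketch of the pattern, not the DPDK code:

```c
#include <stdlib.h>

/* "Header" view: the tag is declared but never defined for callers,
 * so they can only hold pointers and must go through the API. */
struct counter;
struct counter *counter_create(void);
int counter_bump(struct counter *c);
void counter_destroy(struct counter *c);

/* "Implementation" view: the full definition stays private, so its
 * layout can change without breaking callers (the ABI concern raised
 * about exposing struct rte_hash in rte_cuckoo_hash.h). */
struct counter { int value; };

struct counter *counter_create(void)
{
    return calloc(1, sizeof(struct counter));
}

int counter_bump(struct counter *c) { return ++c->value; }

void counter_destroy(struct counter *c) { free(c); }
```

The trade-off noted above still applies: with the definition hidden, the accessors cannot be inlined into the application, which is the function-call overhead being measured in this thread.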
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v6 0/3] lib/rcu: add RCU library supporting QSBR mechanism
2018-11-22 3:30 [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library Honnappa Nagarahalli
` (10 preceding siblings ...)
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
@ 2019-04-17 4:13 ` Honnappa Nagarahalli
2019-04-17 4:13 ` Honnappa Nagarahalli
` (4 more replies)
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 " Honnappa Nagarahalli
` (2 subsequent siblings)
14 siblings, 5 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-17 4:13 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Lock-less data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
(for ex: real-time applications).
In the following paras, the term 'memory' refers to memory allocated
by typical APIs like malloc or anything that is representative of
memory, for ex: an index of a free element array.
Since these data structures are lock-less, the writers and readers
are accessing the data structures concurrently. Hence, while removing
an element from a data structure, the writers cannot return the memory
to the allocator, without knowing that the readers are not
referencing that element/memory anymore. Hence, it is required to
separate the operation of removing an element into 2 steps:
Delete: in this step, the writer removes the reference to the element from
the data structure but does not return the associated memory to the
allocator. This will ensure that new readers will not get a reference to
the removed element. Removing the reference is an atomic operation.
Free(Reclaim): in this step, the writer returns the memory to the
memory allocator, only after knowing that all the readers have stopped
referencing the deleted element.
This library helps the writer determine when it is safe to free the
memory.
This library makes use of thread Quiescent State (QS). QS can be
defined as 'any point in the thread execution where the thread does
not hold a reference to shared memory'. It is up to the application to
determine its quiescent state. Let us consider the following diagram:
Time -------------------------------------------------->
| |
RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
| |
RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
| |
RT3 $++++****D1****+++***|D2***|++++++**D3*****++++$
| |
|<--->|
Del | Free
|
Cannot free memory
during this period
(Grace Period)
RTx - Reader thread
< and > - Start and end of while(1) loop
***Dx*** - Reader thread is accessing the shared data structure Dx.
i.e. critical section.
+++ - Reader thread is not accessing any shared data structure.
i.e. non critical section or quiescent state.
Del - Point in time when the reference to the entry is removed using
atomic operation.
Free - Point in time when the writer can free the entry.
Grace Period - Time duration between Del and Free, during which memory cannot
be freed.
As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
accessing D2, if the writer has to remove an element from D2, the
writer cannot free the memory associated with that element immediately.
The writer can return the memory to the allocator only after the reader
stops referencing D2. In other words, reader thread RT1 has to enter
a quiescent state.
Similarly, since thread RT3 is also accessing D2, the writer has to wait till
RT3 enters quiescent state as well.
However, the writer does not need to wait for RT2 to enter quiescent state.
Thread RT2 was not accessing D2 when the delete operation happened.
So, RT2 will not get a reference to the deleted entry.
It can be noted that, the critical sections for D2 and D3 are quiescent states
for D1. i.e. for a given data structure Dx, any point in the thread execution
that does not reference Dx is a quiescent state.
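The quiescent-state bookkeeping described above can be modeled in a few lines of plain C. This is a simplified, single-threaded sketch of the semantics (a global token bumped at Del time, plus a per-reader counter updated at each quiescent state), not the DPDK implementation; the names start/check/quiescent are stand-ins:

```c
#include <assert.h>
#include <stdint.h>

#define NUM_READERS 3

static uint64_t token;                    /* bumped by the writer at Del */
static uint64_t reader_cnt[NUM_READERS];  /* last token seen by each reader */

/* Reader side: report a quiescent state by copying the current token. */
static void quiescent(int id) { reader_cnt[id] = token; }

/* Writer side: mark the Del point and get a token to wait on. */
static uint64_t start(void) { return ++token; }

/* Writer side: the grace period for token 't' is over once every reader
 * has reported a quiescent state at or after 't'. */
static int check(uint64_t t)
{
    for (int i = 0; i < NUM_READERS; i++)
        if (reader_cnt[i] < t)
            return 0;
    return 1;
}
```

In this model, the writer removes the element (Del), calls start(), and frees the memory only once check() returns true for the returned token, matching the Del-to-Free window in the diagram.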
Since memory is not freed immediately, there might be a need for
provisioning of additional memory, depending on the application requirements.
It is important to make sure that this library keeps the overhead of
identifying the end of grace period and subsequent freeing of memory,
to a minimum. The following paras explain how grace period and critical
section affect this overhead.
The writer has to poll the readers to identify the end of grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during grace period. Longer grace periods
exacerbate these conditions.
The duration of the grace period is proportional to the length of the
critical sections and the number of reader threads. Keeping the critical
sections smaller will keep the grace period smaller. However, keeping the
critical sections smaller requires additional CPU cycles (due to additional
reporting) in the readers.
Hence, we need the combination of a small grace period and a large critical
section. This library addresses this by allowing the writer to do
other work without having to block till the readers report their quiescent
state.
For DPDK applications, the start and end of while(1) loop (where no
references to shared data structures are kept) act as perfect quiescent
states. This will combine all the shared data structure accesses into a
single, large critical section which helps keep the overhead on the
reader side to a minimum.
DPDK supports pipeline model of packet processing and service cores.
In these use cases, a given data structure may not be used by all the
workers in the application. The writer does not have to wait for all
the workers to report their quiescent state. To provide the required
flexibility, this library has a concept of QS variable. The application
can create one QS variable per data structure to help it track the
end of grace period for each data structure. This helps keep the grace
period to a minimum.
The application has to allocate memory and initialize a QS variable.
The application can call rte_rcu_qsbr_get_memsize to calculate the size
of memory to allocate. This API takes the maximum number of reader threads
that will use this variable as a parameter. Currently, a maximum of 1024 threads
are supported.
Further, the application can initialize a QS variable using the API
rte_rcu_qsbr_init.
Each reader thread is assumed to have a unique thread ID. Currently, the
management of the thread ID (for ex: allocation/free) is left to the
application. The thread ID should be in the range of 0 to
maximum number of threads provided while creating the QS variable.
The application could also use lcore_id as the thread ID where applicable.
rte_rcu_qsbr_thread_register API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
The reader thread must call rte_rcu_qsbr_thread_online API to start reporting
its quiescent state.
Some of the use cases might require the reader threads to make
blocking API calls (for ex: while using eventdev APIs). The writer thread
should not wait for such reader threads to enter quiescent state.
The reader thread must call rte_rcu_qsbr_thread_offline API, before calling
blocking APIs. It can call rte_rcu_qsbr_thread_online API once the blocking
API call returns.
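The offline/online mechanism can be added to the simple token/counter model from above: an offline reader holds no references, so the writer's check simply skips it. This is a self-contained sketch of the described semantics, not the DPDK code; thread_online/thread_offline are stand-in names:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NUM_READERS 2

static uint64_t token;
static uint64_t reader_cnt[NUM_READERS];
static bool online[NUM_READERS];

/* Reader side: start reporting; catch up to the current token first. */
static void thread_online(int id)  { reader_cnt[id] = token; online[id] = true; }

/* Reader side: stop reporting, e.g. before a blocking API call. */
static void thread_offline(int id) { online[id] = false; }

static void quiescent(int id)      { reader_cnt[id] = token; }

static uint64_t start(void) { return ++token; }

/* Writer side: offline readers cannot hold references, so they are
 * not waited for. */
static int check(uint64_t t)
{
    for (int i = 0; i < NUM_READERS; i++)
        if (online[i] && reader_cnt[i] < t)
            return 0;
    return 1;
}
```

With this, a reader that goes offline before blocking in an external API never stalls the writer's grace-period detection.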
The writer thread can trigger the reader threads to report their quiescent
state by calling the API rte_rcu_qsbr_start. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
rte_rcu_qsbr_start returns a token to each caller.
The writer thread has to call rte_rcu_qsbr_check API with the token to get the
current quiescent state status. An option to block till all the reader threads
enter the quiescent state is provided. If this API indicates that all the
reader threads have entered the quiescent state, the application can free the
deleted entry.
The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock free. Hence, they
can be called concurrently from multiple writers even while running
as worker threads.
The separation of triggering the reporting from querying the status provides
the writer threads flexibility to do useful work instead of blocking for the
reader threads to enter the quiescent state or go offline. This reduces the
memory accesses due to continuous polling for the status.
rte_rcu_qsbr_synchronize API combines the functionality of rte_rcu_qsbr_start
and blocking rte_rcu_qsbr_check into a single API. This API triggers the reader
threads to report their quiescent state and polls till all the readers enter
the quiescent state or go offline. This API does not allow the writer to
do useful work while waiting and also introduces additional memory accesses
due to continuous polling.
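The blocking, synchronize-style behavior can be modeled with two threads: the writer bumps the token and then busy-polls until the reader's counter catches up. This is a minimal pthread sketch of the semantics (single reader, stand-in names), not the DPDK implementation:

```c
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

static atomic_uint_fast64_t token;
static atomic_uint_fast64_t reader_cnt;  /* single reader for brevity */
static atomic_bool stop;

/* Writer side: trigger reporting (start) and poll until the reader
 * reports a quiescent state (blocking check). The poll loop is where
 * the extra memory accesses and CPU cycles are spent. */
static void synchronize(void)
{
    uint64_t t = atomic_fetch_add(&token, 1) + 1;
    while (atomic_load(&reader_cnt) < t)
        sched_yield();
}

/* Reader side: keep reporting quiescent states until told to stop. */
static void *reader(void *arg)
{
    (void)arg;
    while (!atomic_load(&stop))
        atomic_store(&reader_cnt, atomic_load(&token));
    return NULL;
}
```

The separated start/check pair described earlier avoids exactly this poll loop by letting the writer do useful work between the two calls.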
The reader thread must call rte_rcu_qsbr_thread_offline and
rte_rcu_qsbr_thread_unregister APIs to remove itself from reporting its
quiescent state. The rte_rcu_qsbr_check API will not wait for this reader
thread to report the quiescent state status anymore.
The reader threads should call the rte_rcu_qsbr_update API to indicate that they
have entered a quiescent state. This API checks if a writer has triggered a
quiescent state query and updates the state accordingly.
Patch v6:
1) Library changes
a) Fixed and tested meson build on Arm and x86 (Konstantin)
b) Moved rte_rcu_qsbr_synchronize API to rte_rcu_qsbr.c
Patch v5:
1) Library changes
a) Removed extra alignment in rte_rcu_qsbr_get_memsize API (Paul)
b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock APIs (Paul)
c) Clarified the need for 64b counters (Paul)
2) Test cases
a) Added additional performance test cases to benchmark
__rcu_qsbr_check_all
b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock calls in various test cases
3) Documentation
a) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock usage description
Patch v4:
1) Library changes
a) Fixed the compilation issue on x86 (Konstantin)
b) Rebased with latest master
Patch v3:
1) Library changes
a) Moved the registered thread ID array to the end of the
structure (Konstantin)
b) Removed the compile time constant RTE_RCU_MAX_THREADS
c) Added code to keep track of registered number of threads
Patch v2:
1) Library changes
a) Corrected the RTE_ASSERT checks (Konstantin)
b) Replaced RTE_ASSERT with 'if' checks for non-datapath APIs (Konstantin)
c) Made rte_rcu_qsbr_thread_register/unregister non-datapath critical APIs
d) Renamed rte_rcu_qsbr_update to rte_rcu_qsbr_quiescent (Ola)
e) Used rte_smp_mb() in rte_rcu_qsbr_thread_online API for x86 (Konstantin)
f) Removed the macro to access the thread QS counters (Konstantin)
2) Test cases
a) Added additional test cases for removing RTE_ASSERT
3) Documentation
a) Changed the figure to make it bigger (Marko)
b) Spelling and format corrections (Marko)
Patch v1:
1) Library changes
a) Changed the maximum number of reader threads to 1024
b) Renamed rte_rcu_qsbr_register/unregister_thread to
rte_rcu_qsbr_thread_register/unregister
c) Added rte_rcu_qsbr_thread_online/offline API. These are optimized
version of rte_rcu_qsbr_thread_register/unregister API. These
also provide the flexibility for performance when the requested
maximum number of threads is higher than the current number of
threads.
d) Corrected memory orderings in rte_rcu_qsbr_update
e) Changed the signature of rte_rcu_qsbr_start API to return the token
f) Changed the signature of rte_rcu_qsbr_start API to not take the
expected number of QS states to wait.
g) Added debug logs
h) Added API and programmer guide documentation.
RFC v3:
1) Library changes
a) Rebased to latest master
b) Added new API rte_rcu_qsbr_get_memsize
c) Add support for memory allocation for QSBR variable (Konstantin)
d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
2) Testcase changes
a) Separated stress tests into a performance test case file
b) Added performance statistics
RFC v2:
1) Cover letter changes
a) Explain the parameters that affect the overhead of using RCU
and their effect
b) Explain how this library addresses these effects to keep
the overhead to minimum
2) Library changes
a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
b) Simplify the code/remove APIs to keep this library inline with
other synchronisation mechanisms like locks (Konstantin)
c) Change the design to support more than 64 threads (Konstantin)
d) Fixed version map to remove static inline functions
3) Testcase changes
a) Add boundary and additional functional test cases
b) Add stress test cases (Paul E. McKenney)
Dharmik Thakkar (1):
test/rcu_qsbr: add API and functional tests
Honnappa Nagarahalli (2):
rcu: add RCU library supporting QSBR mechanism
doc/rcu: add lib_rcu documentation
MAINTAINERS | 5 +
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 703 ++++++++++++
config/common_base | 6 +
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 +++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 7 +
lib/librte_rcu/rte_rcu_qsbr.c | 257 +++++
lib/librte_rcu/rte_rcu_qsbr.h | 629 ++++++++++
lib/librte_rcu/rte_rcu_version.map | 12 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
20 files changed, 3378 insertions(+), 3 deletions(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v6 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 " Honnappa Nagarahalli
2019-04-17 4:13 ` Honnappa Nagarahalli
@ 2019-04-17 4:13 ` Honnappa Nagarahalli
2019-04-17 4:13 ` Honnappa Nagarahalli
2019-04-19 19:19 ` Paul E. McKenney
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
` (2 subsequent siblings)
4 siblings, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-17 4:13 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add RCU library supporting quiescent state based memory reclamation method.
This library helps identify the quiescent state of the reader threads so
that the writers can free the memory associated with the lock less data
structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
MAINTAINERS | 5 +
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 ++
lib/librte_rcu/meson.build | 7 +
lib/librte_rcu/rte_rcu_qsbr.c | 257 ++++++++++++
lib/librte_rcu/rte_rcu_qsbr.h | 629 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 12 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
10 files changed, 943 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index a08583471..ae54f37db 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1274,6 +1274,11 @@ F: examples/bpf/
F: app/test/test_bpf.c
F: doc/guides/prog_guide/bpf_lib.rst
+RCU - EXPERIMENTAL
+M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+F: lib/librte_rcu/
+F: doc/guides/prog_guide/rcu_lib.rst
+
Test Applications
-----------------
diff --git a/config/common_base b/config/common_base
index 7fb0dedb6..f50d26c30 100644
--- a/config/common_base
+++ b/config/common_base
@@ -834,6 +834,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_rcu
+#
+CONFIG_RTE_LIBRTE_RCU=y
+CONFIG_RTE_LIBRTE_RCU_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index 26021d0c0..791e0d991 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
+DEPDIRS-librte_rcu := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
new file mode 100644
index 000000000..6aa677bd1
--- /dev/null
+++ b/lib/librte_rcu/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_rcu.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_rcu_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
new file mode 100644
index 000000000..0c2d5a2e0
--- /dev/null
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+allow_experimental_apis = true
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..466592a42
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,257 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Get the memory size of QSBR variable */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ size_t sz;
+
+ if (max_threads == 0) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid max_threads %u\n",
+ __func__, max_threads);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ /* Add the size of the registered thread ID bitmap array */
+ sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
+
+ return sz;
+}
+
+/* Initialize a quiescent state variable */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
+{
+ size_t sz;
+
+ if (v == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = rte_rcu_qsbr_get_memsize(max_threads);
+ if (sz == 1)
+ return 1;
+
+ /* Set all the threads to offline */
+ memset(v, 0, sz);
+ v->max_threads = max_threads;
+ v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->token = RTE_QSBR_CNT_INIT;
+
+ return 0;
+}
+
+/* Register a reader thread to report its quiescent state
+ * on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already registered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (old_bmap & 1UL << id)
+ return 0;
+
+ do {
+ new_bmap = old_bmap | (1UL << id);
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_add(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (old_bmap & (1UL << id))
+ /* Someone else registered this thread.
+ * Counter should not be incremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Remove a reader thread, from the list of threads reporting their
+ * quiescent state on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already unregistered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (!(old_bmap & (1UL << id)))
+ return 0;
+
+ do {
+ new_bmap = old_bmap & ~(1UL << id);
+ /* Make sure any loads of the shared data structure are
+ * completed before removal of the thread from the list of
+ * reporting threads.
+ */
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_sub(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (!(old_bmap & (1UL << id)))
+ /* Someone else unregistered this thread.
+ * Counter should not be decremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Wait till the reader threads have entered quiescent state. */
+void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread has a read-side critical section,
+ * update its quiescent state status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_quiescent(v, thread_id);
+
+ /* Wait for other readers to enter quiescent state */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t;
+
+ if (v == NULL || f == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %lu\n",
+ rte_rcu_qsbr_get_memsize(v->max_threads));
+ fprintf(f, " Given # max threads = %u\n", v->max_threads);
+ fprintf(f, " Current # threads = %u\n", v->num_threads);
+
+ fprintf(f, " Registered thread ID mask = 0x");
+ for (i = 0; i < v->num_elems; i++)
+ fprintf(f, "%lx", __atomic_load_n(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE));
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %lu\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %lu\n", t,
+ __atomic_load_n(
+ &v->qsbr_cnt[i].cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ return 0;
+}
+
+int rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rcu_log_type = rte_log_register("lib.rcu");
+ if (rcu_log_type >= 0)
+ rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..73fa3354e
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,629 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+#include <rte_atomic.h>
+
+extern int rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define RCU_DP_LOG(level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored as a bitmap in an array of 64b elements.
+ * A given thread ID needs to be converted into an index into the array and
+ * a bit position within that array element.
+ */
+#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
+ RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
+#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
+ ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
+#define RTE_QSBR_THRID_INDEX_SHIFT 6
+#define RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline
+ * 64b counter is used to avoid adding more code to address
+ * counter overflow. Changing this to 32b would require additional
+ * changes to various APIs.
+ */
+ uint32_t lock_cnt;
+ /**< Lock counter. Used when CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled */
+} __rte_cache_aligned;
+
+#define RTE_QSBR_CNT_THR_OFFLINE 0
+#define RTE_QSBR_CNT_INIT 1
+
+/* RTE Quiescent State variable structure.
+ * This structure has two elements that vary in size based on the
+ * 'max_threads' parameter.
+ * 1) Quiescent state counter array
+ * 2) Registered thread ID array
+ */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple concurrent quiescent state queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t num_threads;
+ /**< Number of threads currently using this QS variable */
+ uint32_t max_threads;
+ /**< Maximum number of threads using this QS variable */
+
+ struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
+ /**< Quiescent state counter array of 'max_threads' elements */
+
+ /**< Registered thread IDs are stored in a bitmap array,
+ * after the quiescent state counter array.
+ */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * On success - size of memory in bytes required for this QS variable.
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0 or 'v' is NULL.
+ *
+ */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable. thread_id is a value between 0 and (max_threads - 1).
+ * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread, from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing quiescent state queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread, to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_quiescent. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_online API, after the blocking
+ * function call returns, to ensure that rte_rcu_qsbr_check API
+ * waits for the reader thread to update its quiescent state.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that
+ * the following will not move down after the load of any shared
+ * data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+#ifdef RTE_ARCH_X86_64
+ /* rte_smp_mb() for x86 is lighter */
+ rte_smp_mb();
+#else
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed, i.e. any load of the
+ * data structure cannot move after this store.
+ */
+
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Acquire a lock for accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called before
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, a lock counter is incremented.
+ * Similarly, rte_rcu_qsbr_unlock will decrement the counter. The
+ * rte_rcu_qsbr_check API will verify that this counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_lock(struct rte_rcu_qsbr *v __rte_unused,
+ unsigned int thread_id __rte_unused)
+{
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Increment the lock counter */
+ __atomic_fetch_add(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_ACQUIRE);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Release a lock after accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called after
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, rte_rcu_qsbr_unlock will
+ * decrement a lock counter. rte_rcu_qsbr_check API will verify that this
+ * counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_unlock(struct rte_rcu_qsbr *v __rte_unused,
+ unsigned int thread_id __rte_unused)
+{
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Decrement the lock counter */
+ __atomic_fetch_sub(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_RELEASE);
+
+ if (v->qsbr_cnt[thread_id].lock_cnt)
+ rte_log(RTE_LOG_WARNING, rcu_log_type,
+ "%s(): Lock counter %u. Nested locks?\n",
+ __func__, v->qsbr_cnt[thread_id].lock_cnt);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ /* Validate that the lock counter is 0 */
+ if (v->qsbr_cnt[thread_id].lock_cnt)
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Lock counter %u, should be 0\n",
+ __func__, v->qsbr_cnt[thread_id].lock_cnt);
+#endif
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELEASE);
+
+ RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
+ __func__, t, thread_id);
+}
+
+/* Check the quiescent state counter for registered threads only, assuming
+ * that not all threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t c;
+ uint64_t *reg_thread_id;
+
+ for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
+ i < v->num_elems;
+ i++, reg_thread_id++) {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
+ __func__, t, wait, bmap, id + j);
+ c = __atomic_load_n(
+ &v->qsbr_cnt[id + j].cnt,
+ __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, id+j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(reg_thread_id,
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+ }
+
+ return 1;
+}
+
+/* Check the quiescent state counter for all threads, assuming that
+ * all the threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i;
+ struct rte_rcu_qsbr_cnt *cnt;
+ uint64_t c;
+
+ for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Thread ID = %d",
+ __func__, t, wait, i);
+ while (1) {
+ c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, i);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
+ break;
+
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ }
+ }
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same QS variable, it must update the quiescent state status, before
+ * calling this API.
+ *
+ * 2) In addition, while calling from multiple threads, only
+ * one of those threads can be reporting the quiescent state status
+ * on a given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state referenced by token 't'.
+ * @return
+ * - 0 if all reader threads have NOT passed through the quiescent state
+ * referenced by token 't'.
+ * - 1 if all reader threads have passed through the quiescent state
+ * referenced by token 't'.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ RTE_ASSERT(v != NULL);
+
+ if (likely(v->num_threads == v->max_threads))
+ return __rcu_qsbr_check_all(v, t, wait);
+ else
+ return __rcu_qsbr_check_selective(v, t, wait);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, only one of
+ * those threads can be reporting the quiescent state status on a
+ * given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report quiescent state
+ * on this QS variable (i.e. the calling thread is also part of the
+ * read-side critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - NULL parameters are passed
+ */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..5ea8524db
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,12 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_thread_register;
+ rte_rcu_qsbr_thread_unregister;
+ rte_rcu_qsbr_synchronize;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 595314d7d..67be10659 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'stack', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index abea16d48..ebe6d48a7 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
* [dpdk-dev] [PATCH v6 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 1/3] rcu: " Honnappa Nagarahalli
@ 2019-04-17 4:13 ` Honnappa Nagarahalli
2019-04-19 19:19 ` Paul E. McKenney
1 sibling, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-17 4:13 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add RCU library supporting quiescent state based memory reclamation method.
This library helps identify the quiescent state of the reader threads so
that the writers can free the memory associated with the lock less data
structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
MAINTAINERS | 5 +
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 ++
lib/librte_rcu/meson.build | 7 +
lib/librte_rcu/rte_rcu_qsbr.c | 257 ++++++++++++
lib/librte_rcu/rte_rcu_qsbr.h | 629 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 12 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
10 files changed, 943 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already registered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (old_bmap & (1UL << id))
+ return 0;
+
+ do {
+ new_bmap = old_bmap | (1UL << id);
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_add(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (old_bmap & (1UL << id))
+ /* Someone else registered this thread.
+ * Counter should not be incremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already unregistered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (!(old_bmap & (1UL << id)))
+ return 0;
+
+ do {
+ new_bmap = old_bmap & ~(1UL << id);
+ /* Make sure any loads of the shared data structure are
+ * completed before removal of the thread from the list of
+ * reporting threads.
+ */
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_sub(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (!(old_bmap & (1UL << id)))
+ /* Someone else unregistered this thread.
+ * Counter should not be decremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Wait till the reader threads have entered quiescent state. */
+void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread has a read-side critical section,
+ * update its quiescent state status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_quiescent(v, thread_id);
+
+ /* Wait for other readers to enter quiescent state */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t;
+
+ if (v == NULL || f == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %lu\n",
+ rte_rcu_qsbr_get_memsize(v->max_threads));
+ fprintf(f, " Given # max threads = %u\n", v->max_threads);
+ fprintf(f, " Current # threads = %u\n", v->num_threads);
+
+ fprintf(f, " Registered thread ID mask = 0x");
+ for (i = 0; i < v->num_elems; i++)
+ fprintf(f, "%lx", __atomic_load_n(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE));
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %lu\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %lu\n", t,
+ __atomic_load_n(
+ &v->qsbr_cnt[i].cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ return 0;
+}
+
+int rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rcu_log_type = rte_log_register("lib.rcu");
+ if (rcu_log_type >= 0)
+ rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..73fa3354e
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,629 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+#include <rte_atomic.h>
+
+extern int rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define RCU_DP_LOG(level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored as a bitmap in an array of
+ * 64b elements. A given thread ID is converted to an index into
+ * the array and a bit position within that array element.
+ */
+#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
+ RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
+#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
+ ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
+#define RTE_QSBR_THRID_INDEX_SHIFT 6
+#define RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline
+ * 64b counter is used to avoid adding more code to address
+ * counter overflow. Changing this to 32b would require additional
+ * changes to various APIs.
+ */
+ uint32_t lock_cnt;
+ /**< Lock counter. Used when CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled */
+} __rte_cache_aligned;
+
+#define RTE_QSBR_CNT_THR_OFFLINE 0
+#define RTE_QSBR_CNT_INIT 1
+
+/* RTE Quiescent State variable structure.
+ * This structure has two elements that vary in size based on the
+ * 'max_threads' parameter.
+ * 1) Quiescent state counter array
+ * 2) Registered thread ID array
+ */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple concurrent quiescent state queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t num_threads;
+ /**< Number of threads currently using this QS variable */
+ uint32_t max_threads;
+ /**< Maximum number of threads using this QS variable */
+
+ struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
+ /**< Quiescent state counter array of 'max_threads' elements */
+
+ /**< Registered thread IDs are stored in a bitmap array,
+ * after the quiescent state counter array.
+ */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * On success - size of memory in bytes required for this QS variable.
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0 or 'v' is NULL.
+ *
+ */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable. thread_id is a value between 0 and (max_threads - 1).
+ * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing quiescent state queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_quiescent. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_online API, after the blocking
+ * function call returns, to ensure that rte_rcu_qsbr_check API
+ * waits for the reader thread to update its quiescent state.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that
+ * the following will not move down after the load of any shared
+ * data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+#ifdef RTE_ARCH_X86_64
+ /* rte_smp_mb() for x86 is lighter */
+ rte_smp_mb();
+#else
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed, i.e. any load of the
+ * data structure cannot move after this store.
+ */
+
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Acquire a lock for accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called before
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, a lock counter is incremented.
+ * Similarly, rte_rcu_qsbr_unlock will decrement the counter. The
+ * rte_rcu_qsbr_check API will verify that this counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_lock(struct rte_rcu_qsbr *v __rte_unused,
+ unsigned int thread_id __rte_unused)
+{
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Increment the lock counter */
+ __atomic_fetch_add(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_ACQUIRE);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Release a lock after accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called after
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, rte_rcu_qsbr_unlock will
+ * decrement a lock counter. rte_rcu_qsbr_check API will verify that this
+ * counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_unlock(struct rte_rcu_qsbr *v __rte_unused,
+ unsigned int thread_id __rte_unused)
+{
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ /* Decrement the lock counter */
+ __atomic_fetch_sub(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_RELEASE);
+
+ if (v->qsbr_cnt[thread_id].lock_cnt)
+ rte_log(RTE_LOG_WARNING, rcu_log_type,
+ "%s(): Lock counter %u. Nested locks?\n",
+ __func__, v->qsbr_cnt[thread_id].lock_cnt);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ /* Validate that the lock counter is 0 */
+ if (v->qsbr_cnt[thread_id].lock_cnt)
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Lock counter %u, should be 0\n",
+ __func__, v->qsbr_cnt[thread_id].lock_cnt);
+#endif
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELEASE);
+
+ RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
+ __func__, t, thread_id);
+}
+
+/* Check the quiescent state counter for registered threads only, assuming
+ * that not all threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t c;
+ uint64_t *reg_thread_id;
+
+ for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
+ i < v->num_elems;
+ i++, reg_thread_id++) {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
+ __func__, t, wait, bmap, id + j);
+ c = __atomic_load_n(
+ &v->qsbr_cnt[id + j].cnt,
+ __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, id+j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(reg_thread_id,
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+ }
+
+ return 1;
+}
+
+/* Check the quiescent state counter for all threads, assuming that
+ * all the threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i;
+ struct rte_rcu_qsbr_cnt *cnt;
+ uint64_t c;
+
+ for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Thread ID = %d",
+ __func__, t, wait, i);
+ while (1) {
+ c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, i);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
+ break;
+
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ }
+ }
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same QS variable, it must update the quiescent state status, before
+ * calling this API.
+ *
+ * 2) In addition, while calling from multiple threads, only
+ * one of those threads can be reporting the quiescent state status
+ * on a given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state referenced by token 't'.
+ * @return
+ * - 0 if all reader threads have NOT passed through specified number
+ * of quiescent states.
+ * - 1 if all reader threads have passed through specified number
+ * of quiescent states.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ RTE_ASSERT(v != NULL);
+
+ if (likely(v->num_threads == v->max_threads))
+ return __rcu_qsbr_check_all(v, t, wait);
+ else
+ return __rcu_qsbr_check_selective(v, t, wait);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, only one of
+ * those threads can be reporting the quiescent state status on a
+ * given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report quiescent state
+ * on this QS variable (i.e. the calling thread is also part of the
+ * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - NULL parameters are passed
+ */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..5ea8524db
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,12 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_thread_register;
+ rte_rcu_qsbr_thread_unregister;
+ rte_rcu_qsbr_synchronize;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 595314d7d..67be10659 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'stack', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index abea16d48..ebe6d48a7 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v6 2/3] test/rcu_qsbr: add API and functional tests
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 " Honnappa Nagarahalli
2019-04-17 4:13 ` Honnappa Nagarahalli
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 1/3] rcu: " Honnappa Nagarahalli
@ 2019-04-17 4:13 ` Honnappa Nagarahalli
2019-04-17 4:13 ` Honnappa Nagarahalli
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
2019-04-21 16:40 ` [dpdk-dev] [PATCH v6 0/3] lib/rcu: add RCU library supporting QSBR mechanism Thomas Monjalon
4 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-17 4:13 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 703 +++++++++++++++++++++++
5 files changed, 1737 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index b28bed2d4..10f551ecb 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -217,6 +217,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 72c56e528..fba66045f 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -700,6 +700,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 867cc5863..e3e566bce 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -110,6 +110,8 @@ test_sources = files('commands.c',
'test_timer_perf.c',
'test_timer_racecond.c',
'test_ticketlock.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -136,7 +138,8 @@ test_deps = ['acl',
'reorder',
'ring',
'stack',
- 'timer'
+ 'timer',
+ 'rcu'
]
# All test cases in fast_parallel_test_names list are parallel
@@ -175,6 +178,7 @@ fast_parallel_test_names = [
'ring_autotest',
'ring_pmd_autotest',
'rwlock_autotest',
+ 'rcu_qsbr_autotest',
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
@@ -242,6 +246,7 @@ perf_test_names = [
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
'stack_nb_perf_autotest',
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..b16872de5
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,1014 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+ int i;
+ char name[8];
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++) {
+ snprintf(name, sizeof(name), "rcu%d", i);
+ t[i] = (struct rte_rcu_qsbr *)rte_zmalloc(name, sz,
+ RTE_CACHE_LINE_SIZE);
+ if (t[i] == NULL) {
+ printf("RCU QSBR variable allocation failed\n");
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_get_memsize: Returns the size of memory, in bytes, required
+ * for a QS variable tracking the given number of threads.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
+ printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(0);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1), "Get Memsize for 0 threads");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with cache line size of 64B - 8384
+ * for machines with cache line size of 128B - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_init: Initialize a QSBR variable.
+ */
+static int
+test_rcu_qsbr_init(void)
+{
+ int r;
+
+ printf("\nTest rte_rcu_qsbr_init()\n");
+
+ r = rte_rcu_qsbr_init(NULL, TEST_RCU_MAX_LCORE);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "NULL variable");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ int ret;
+
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register valid thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Valid thread id");
+
+ /* Re-registering should not return error */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already registered thread id");
+
+ /* Register valid thread id - max allowed thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], TEST_RCU_MAX_LCORE - 1);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Max thread id");
+
+ ret = rte_rcu_qsbr_thread_register(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Valid variable, invalid thread id");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ ret = rte_rcu_qsbr_thread_unregister(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Valid variable, invalid thread id");
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "disabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /* Test with enabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "enabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE - 1
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_quiescent(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_quiescent(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the worker threads have entered the
+ * quiescent state 'n' number of times. 'n' is the token provided by the
+ * rte_rcu_qsbr_start API.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait until all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+ /* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread to
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if the online thread, can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread from
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+ /* Check if the online thread, can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ /* Negative tests */
+ rte_rcu_qsbr_dump(NULL, t[0]);
+ rte_rcu_qsbr_dump(stdout, NULL);
+ rte_rcu_qsbr_dump(NULL, NULL);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, lcore_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, lcore_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ snprintf(hash_name[hash_id], sizeof(hash_name[hash_id]), "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Multiple writers, multiple QS variables, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ uint8_t test_cores;
+
+ writer_done = 0;
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+ printf("Test: %d writers, %d QSBR variables, simultaneous QSBR queries\n",
+ test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_init() < 0)
+ goto test_fail;
+
+ if (alloc_rcu() != 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..bb3b8e9b6
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,703 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static volatile uint8_t writer_done;
+static volatile uint8_t all_registered;
+static volatile uint32_t thr_id;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+/* Simple way to allocate thread ids in 0 to TEST_RCU_MAX_LCORE space */
+static inline uint32_t
+alloc_thread_id(void)
+{
+ uint32_t tmp_thr_id;
+
+ tmp_thr_id = __atomic_fetch_add(&thr_id, 1, __ATOMIC_RELAXED);
+ if (tmp_thr_id >= TEST_RCU_MAX_LCORE)
+ printf("Invalid thread id %u\n", tmp_thr_id);
+
+ return tmp_thr_id;
+}
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t thread_id = alloc_thread_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ /* Register for report QS */
+ rte_rcu_qsbr_thread_register(t[0], thread_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], thread_id);
+ /* Unregister before exiting so the writer does not wait on this thread */
+ rte_rcu_qsbr_thread_unregister(t[0], thread_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple Readers, Single QS variable, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: %d Readers/1 Writer ('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores - 1;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Single writer, Multiple readers, Single QS variable
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writers, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i, sz;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* The number of readers does not matter for the QS variable in this
+ * test case, as no reader will be registered.
+ */
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+ /* Wait until all writers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t thread_id = alloc_thread_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, thread_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, thread_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, thread_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, thread_id);
+ rte_rcu_qsbr_thread_offline(temp, thread_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, thread_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Performance test:
+ * Single writer, Single QS variable, Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Blocking QSBR Check\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update (online/update/offline): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check (start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+/*
+ * Performance test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Non-Blocking QSBR check\n", num_cores);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update (online/update/offline): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check (start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ printf("Number of cores provided = %d\n", num_cores);
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+ if (num_cores > TEST_RCU_MAX_LCORE) {
+ printf("Test failed! %d cores supported\n", TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with all reader threads registered\n");
+ printf("--------------------------------------------\n");
+ all_registered = 1;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ /* Make sure the actual number of cores provided is less than
+ * TEST_RCU_MAX_LCORE. This will allow for some threads not
+ * to be registered on the QS variable.
+ */
+ if (num_cores >= TEST_RCU_MAX_LCORE) {
+ printf("Test failed! number of cores provided should be less than %d\n",
+ TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with some of reader threads registered\n");
+ printf("------------------------------------------------\n");
+ all_registered = 0;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ return 0;
+
+test_fail:
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
* [dpdk-dev] [PATCH v6 2/3] test/rcu_qsbr: add API and functional tests
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-04-17 4:13 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-17 4:13 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 app/test/Makefile             |    2 +
 app/test/autotest_data.py     |   12 +
 app/test/meson.build          |    7 +-
 app/test/test_rcu_qsbr.c      | 1014 ++++++++++++++++++++++++++++++++++
 app/test/test_rcu_qsbr_perf.c |  703 ++++++++++++++++++++++++
 5 files changed, 1737 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index b28bed2d4..10f551ecb 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -217,6 +217,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 72c56e528..fba66045f 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -700,6 +700,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 867cc5863..e3e566bce 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -110,6 +110,8 @@ test_sources = files('commands.c',
'test_timer_perf.c',
'test_timer_racecond.c',
'test_ticketlock.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -136,7 +138,8 @@ test_deps = ['acl',
'reorder',
'ring',
'stack',
- 'timer'
+ 'timer',
+ 'rcu'
]
# All test cases in fast_parallel_test_names list are parallel
@@ -175,6 +178,7 @@ fast_parallel_test_names = [
'ring_autotest',
'ring_pmd_autotest',
'rwlock_autotest',
+ 'rcu_qsbr_autotest',
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
@@ -242,6 +246,7 @@ perf_test_names = [
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
'stack_nb_perf_autotest',
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..b16872de5
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,1014 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_get_memsize: Returns the size of memory occupied by a QS
+ * variable.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
+ printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(0);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1), "Get Memsize for 0 threads");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with cache line size of 64B - 8384
+ * for machines with cache line size of 128 - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_init: Initialize a QSBR variable.
+ */
+static int
+test_rcu_qsbr_init(void)
+{
+ int r;
+
+ printf("\nTest rte_rcu_qsbr_init()\n");
+
+ r = rte_rcu_qsbr_init(NULL, TEST_RCU_MAX_LCORE);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "NULL variable");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread, to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ int ret;
+
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register valid thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Valid thread id");
+
+ /* Re-registering should not return error */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already registered thread id");
+
+ /* Register valid thread id - max allowed thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], TEST_RCU_MAX_LCORE - 1);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Max thread id");
+
+ ret = rte_rcu_qsbr_thread_register(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Valid variable, invalid thread id");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread, from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ ret = rte_rcu_qsbr_thread_unregister(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Valid variable, invalid thread id");
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "disabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /* Test with enabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "enabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE - 1
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_quiescent(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_quiescent(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the worker threads have entered the
+ * quiescent state 'n' times. 'n' is provided by the rte_rcu_qsbr_start API.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait until all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+ /* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread, to
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if the online thread, can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread, from
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+ /* Check if the online thread, can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ /* Negative tests */
+ rte_rcu_qsbr_dump(NULL, t[0]);
+ rte_rcu_qsbr_dump(stdout, NULL);
+ rte_rcu_qsbr_dump(NULL, NULL);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, lcore_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, lcore_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Multi writer, Multiple QS variable, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ uint8_t test_cores;
+
+ writer_done = 0;
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+ printf("Test: %d writers, %d QSBR variables, simultaneous QSBR queries\n"
+ , test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_init() < 0)
+ goto test_fail;
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..bb3b8e9b6
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,703 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Maximum number of lcores supported by these tests */
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static volatile uint8_t writer_done;
+static volatile uint8_t all_registered;
+static volatile uint32_t thr_id;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+/* Simple way to allocate thread ids in 0 to TEST_RCU_MAX_LCORE space */
+static inline uint32_t
+alloc_thread_id(void)
+{
+ uint32_t tmp_thr_id;
+
+ tmp_thr_id = __atomic_fetch_add(&thr_id, 1, __ATOMIC_RELAXED);
+ if (tmp_thr_id >= TEST_RCU_MAX_LCORE)
+ printf("Invalid thread id %u\n", tmp_thr_id);
+
+ return tmp_thr_id;
+}
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t thread_id = alloc_thread_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ /* Register for report QS */
+ rte_rcu_qsbr_thread_register(t[0], thread_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], thread_id);
+ /* Unregister before exiting so the writer does not wait on this thread */
+ rte_rcu_qsbr_thread_unregister(t[0], thread_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple Readers, Single QS var, Non-Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: %d Readers/1 Writer ('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores - 1;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Single writer, Multiple readers, Single QS variable
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writer, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i, sz;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* The number of readers does not matter for the QS variable in this
+  * test case, as no readers will be registered.
+ */
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+ /* Wait until all writers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t thread_id = alloc_thread_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, thread_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, thread_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, thread_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, thread_id);
+ rte_rcu_qsbr_thread_offline(temp, thread_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, thread_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Blocking QSBR Check\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(online/update/offline): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Non-Blocking QSBR check\n", num_cores);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(online/update/offline): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ printf("Number of cores provided = %d\n", num_cores);
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+ if (num_cores > TEST_RCU_MAX_LCORE) {
+ printf("Test failed! %d cores supported\n", TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with all reader threads registered\n");
+ printf("--------------------------------------------\n");
+ all_registered = 1;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ /* Make sure the actual number of cores provided is less than
+ * TEST_RCU_MAX_LCORE. This will allow for some threads not
+ * to be registered on the QS variable.
+ */
+ if (num_cores >= TEST_RCU_MAX_LCORE) {
+ printf("Test failed! number of cores provided should be less than %d\n",
+ TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with some of reader threads registered\n");
+ printf("------------------------------------------------\n");
+ all_registered = 0;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ return 0;
+
+test_fail:
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
* [dpdk-dev] [PATCH v6 3/3] doc/rcu: add lib_rcu documentation
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 " Honnappa Nagarahalli
` (2 preceding siblings ...)
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-04-17 4:13 ` Honnappa Nagarahalli
2019-04-17 4:13 ` Honnappa Nagarahalli
2019-04-21 16:40 ` [dpdk-dev] [PATCH v6 0/3] lib/rcu: add RCU library supporting QSBR mechanism Thomas Monjalon
4 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-17 4:13 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++++++
5 files changed, 698 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index de1e215dd..8f0e84de6 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 7722fc3e9..b9896cb63 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..e7ca1dacb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,509 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em;font-weight:bold}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.75001em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st13 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st14 {fill:#000000;font-family:Century Gothic;font-size:1.99999em}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.99999em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-239);marker-start:url(#mrkr5-237);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-248);marker-start:url(#mrkr5-246);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-239);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-237" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-239" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-246" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-248" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(327.227,-946.908)">
+ <title>Sheet.3</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(460.665,-944.869)">
+ <title>Sheet.4</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(519.302,-950.79)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(612.438,-944.869)">
+ <title>Sheet.6</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(664.388,-945.889)">
+ <title>Sheet.7</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(723.025,-951.494)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(814.123,-945.889)">
+ <title>Sheet.9</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-952.759)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="146.259" cy="1169.64" width="292.52" height="36.7136"/>
+ <path d="M292.52 1151.29 L0 1151.29 L0 1188 L292.52 1188 L292.52 1151.29" class="st3"/>
+ <text x="58.76" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(379.176,-863.295)">
+ <title>Sheet.11</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(512.614,-861.255)">
+ <title>Sheet.12</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(561.284,-867.106)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(664.388,-861.255)">
+ <title>Sheet.14</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(716.337,-862.275)">
+ <title>Sheet.15</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(775.009,-867.81)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(866.073,-862.275)">
+ <title>Sheet.17</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(143.348,-873.294)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(474.188,-777.642)">
+ <title>Sheet.19</title>
+ <path d="M0 1143.01 C0 1138.04 4.07 1133.96 9.04 1133.96 L124.46 1133.96 C129.43 1133.96 133.44 1138.04 133.44 1143.01
+ L133.44 1179.01 C133.44 1183.99 129.43 1188 124.46 1188 L9.04 1188 C4.07 1188 0 1183.99 0 1179.01 L0 1143.01
+ Z" class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(608.645,-775.602)">
+ <title>Sheet.20</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(666.862,-781.311)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(760.418,-775.602)">
+ <title>Sheet.22</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(812.367,-776.622)">
+ <title>Sheet.23</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(870.584,-782.015)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(962.103,-776.622)">
+ <title>Sheet.25</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(142.645,-787.5)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(882.826,-574.263)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="21.32" y="1173.77" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(419.545,-660.119)">
+ <title>Sheet.29</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(419.545,-684.783)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L82.7 1187.36 L151.2 1172.07" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(214.454,-663.095)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(571.287,-681.326)">
+ <title>Sheet.33</title>
+ <path d="M0 738.67 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(515.013,-1130.65)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="60.7243" cy="1166.58" width="121.45" height="42.8314"/>
+ <path d="M121.45 1145.17 L0 1145.17 L0 1188 L121.45 1188 L121.45 1145.17" class="st3"/>
+ <text x="26.02" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(434.372,-1096.8)">
+ <title>Sheet.35</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(434.372,-1100.37)">
+ <title>Sheet.36</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(193.5,-1103.76)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(714.3,-675.425)">
+ <title>Sheet.38</title>
+ <path d="M0 732.77 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(795.979,-637.904)">
+ <title>Sheet.39</title>
+ <path d="M0 1112.54 L0 1188 L0 1112.54" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(716.782,-675.425)">
+ <title>Sheet.40</title>
+ <path d="M79.2 1188 L52.71 1187.94 L0 1147.21" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(803.572,-639.285)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="172.421" cy="1152.51" width="344.85" height="70.9752"/>
+ <path d="M344.84 1117.02 L0 1117.02 L0 1188 L344.84 1188 L344.84 1117.02" class="st3"/>
+ <text x="0" y="1133.61" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(680.801,-1130.65)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="42.0169" cy="1166.58" width="84.04" height="42.8314"/>
+ <path d="M84.03 1145.17 L0 1145.17 L0 1188 L84.03 1188 L84.03 1145.17" class="st3"/>
+ <text x="18.89" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(811.005,-1110.05)">
+ <title>Sheet.48</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(658.61,-1083.99)">
+ <title>Sheet.49</title>
+ <path d="M153.05 1149.63 L113.7 1149.57 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(798.359,-1110.46)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="107.799" cy="1167.81" width="215.6" height="40.3845"/>
+ <path d="M215.6 1147.62 L0 1147.62 L0 1188 L215.6 1188 L215.6 1147.62" class="st3"/>
+ <text x="43.79" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(599.196,-662.779)">
+ <title>Sheet.51</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(464.931,-1052.95)">
+ <title>Sheet.52</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(464.931,-1056.52)">
+ <title>Sheet.53</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(225,-1058.76)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(711.244,-662.779)">
+ <title>Sheet.56</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(664.897,-1045.31)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L146.76 1112.94" class="st13"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(619.059,-848.701)">
+ <title>Sheet.58</title>
+ <path d="M432.2 1184.24 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(1038.62,-837.364)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="130" cy="1167.81" width="260.01" height="40.3845"/>
+ <path d="M260 1147.62 L0 1147.62 L0 1188 L260 1188 L260 1147.62" class="st3"/>
+ <text x="52.25" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(621.606,-848.828)">
+ <title>Sheet.60</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(824.31,-849.848)">
+ <title>Sheet.61</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(345.944,-933.143)">
+ <title>Sheet.62</title>
+ <path d="M705.32 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(1038.62,-915.684)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="137.691" cy="1167.81" width="275.39" height="40.3845"/>
+ <path d="M275.38 1147.62 L0 1147.62 L0 1188 L275.38 1188 L275.38 1147.62" class="st3"/>
+ <text x="55.18" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(346.581,-932.442)">
+ <title>Sheet.64</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(621.606,-933.461)">
+ <title>Sheet.65</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(856.905,-934.481)">
+ <title>Sheet.66</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(472.82,-756.389)">
+ <title>Sheet.67</title>
+ <path d="M578.44 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(473.456,-755.688)">
+ <title>Sheet.68</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(1016.87,-757.728)">
+ <title>Sheet.69</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(1060.04,-738.651)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1167.81" width="193.55" height="40.3845"/>
+ <path d="M193.55 1147.62 L0 1147.62 L0 1188 L193.55 1188 L193.55 1147.62" class="st3"/>
+ <text x="31.03" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(190.02,-464.886)">
+ <title>Sheet.71</title>
+ <path d="M0 1151.91 C0 1148.19 3.88 1145.17 8.66 1145.17 L43.29 1145.17 C48.13 1145.17 51.95 1148.19 51.95 1151.91 L51.95
+ 1181.26 C51.95 1185.03 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1185.03 0 1181.26 L0 1151.91 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(259.003,-466.895)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. non critical section or quiescent state.</tspan></text> </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(190.02,-389.169)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="25.9746" cy="1166.58" width="51.95" height="42.8314"/>
+ <path d="M0 1152.31 C0 1148.39 1.43 1145.17 3.16 1145.17 L48.79 1145.17 C50.55 1145.17 51.95 1148.39 51.95 1152.31 L51.95
+ 1180.86 C51.95 1184.83 50.55 1188 48.79 1188 L3.16 1188 C1.43 1188 0 1184.83 0 1180.86 L0 1152.31 Z"
+ class="st2"/>
+ <text x="12.9" y="1173.78" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(259.003,-388.777)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(289.017,-301.151)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="332.491" cy="1160.47" width="664.99" height="55.0626"/>
+ <path d="M664.98 1132.94 L0 1132.94 L0 1188 L664.98 1188 L664.98 1132.94" class="st3"/>
+ <text x="0" y="1153.27" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is removed <tspan
+ x="0" dy="1.2em" class="st9">using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(177.543,-315.596)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(288,-239.327)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1175.01" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted entry.</text> </g>
+ <g id="shape78-217" v:mID="78" v:groupContext="shape" transform="translate(177.543,-240.744)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="34.3786" cy="1166.58" width="68.76" height="42.8314"/>
+ <path d="M68.76 1145.17 L0 1145.17 L0 1188 L68.76 1188 L68.76 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-221" v:mID="79" v:groupContext="shape" transform="translate(289.228,-163.612)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1160.61" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during which <tspan
+ x="0" dy="1.2em" class="st9">memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-226" v:mID="80" v:groupContext="shape" transform="translate(187.999,-162)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="39.5985" cy="1166.58" width="79.2" height="42.8314"/>
+ <path d="M79.2 1145.17 L0 1145.17 L0 1188 L79.2 1188 L79.2 1145.17" class="st3"/>
+ <text x="0" y="1158.96" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-231" v:mID="83" v:groupContext="shape" transform="translate(572.146,-1080.07)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L132.49 1188" class="st16"/>
+ </g>
+ <g id="shape84-240" v:mID="84" v:groupContext="shape" transform="translate(599.196,-1042.14)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L104.28 1188" class="st18"/>
+ </g>
+ <g id="shape85-249" v:mID="85" v:groupContext="shape" transform="translate(980.637,-595.338)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L92.16 1188" class="st20"/>
+ </g>
+ <g id="shape86-254" v:mID="86" v:groupContext="shape" transform="translate(444.835,-603.428)">
+ <title>Sheet.86</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape87-257" v:mID="87" v:groupContext="shape" transform="translate(444.835,-637.489)">
+ <title>Sheet.87</title>
+ <path d="M0 1188 L84.43 1186.61 L154.36 1153.31" class="st7"/>
+ </g>
+ <g id="shape88-260" v:mID="88" v:groupContext="shape" transform="translate(241.369,-607.028)">
+ <title>Sheet.88</title>
+ <desc>Remove reference to entry2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry2</tspan></text> </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 95f5e7964..17df2c563 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -56,6 +56,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..55d44e15d
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,185 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-less data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for example, real-time applications).
+
+In the following sections, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or anything that is representative of
+memory (for example, an index into a free element array).
+
+Since these data structures are lock-less, writers and readers access
+them concurrently. Hence, while removing an element from a data
+structure, a writer cannot return the memory to the allocator without
+knowing that no reader is still referencing that element/memory.
+Therefore, the operation of removing an element must be separated
+into two steps:
+
+Delete: in this step, the writer removes the reference to the element from
+the data structure but does not return the associated memory to the
+allocator. This will ensure that new readers will not get a reference to
+the removed element. Removing the reference is an atomic operation.
+
+Free (Reclaim): in this step, the writer returns the memory to the
+memory allocator, only after knowing that all the readers have stopped
+referencing the deleted element.
+
+This library helps the writer determine when it is safe to free the
+memory.
+
+This library makes use of the thread quiescent state (QS) mechanism.
+
+What is Quiescent State
+-----------------------
+Quiescent State can be defined as 'any point in the thread execution where the
+thread does not hold a reference to shared memory'. It is up to the application
+to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. When it
+is accessing D1, if the writer has to remove an element from D1, the
+writer cannot free the memory associated with that element
+immediately. The writer can return the memory to the allocator only
+after reader thread 1 stops referencing D1. In other words, reader
+thread 1 has to enter a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, the writer has
+to wait until reader thread 2 enters a quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+a quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened. So, reader thread 3 will not have a reference to
+the deleted entry.
+
+Note that a critical section for D2 is a quiescent state for D1. In
+other words, for a given data structure Dx, any point in the thread
+execution that does not reference Dx is a quiescent state for Dx.
+
+Since memory is not freed immediately, there might be a need to
+provision additional memory, depending on the application requirements.
+
+Factors affecting RCU mechanism
+---------------------------------
+
+It is important that this library keeps the overhead of identifying
+the end of the grace period, and of the subsequent freeing of memory,
+to a minimum. The following paragraphs explain how the grace period
+and critical sections affect this overhead.
+
+The writer has to poll the readers to identify the end of the grace period.
+Polling introduces memory accesses and wastes CPU cycles. In addition, the
+memory is not available for reuse during the grace period. Longer grace
+periods exacerbate these conditions.
+
+The duration of the grace period is proportional to the length of the
+critical sections and the number of reader threads. Keeping the critical
+sections smaller will keep the grace period smaller. However, smaller
+critical sections require additional CPU cycles (due to additional
+reporting) in the readers.
+
+Hence, we need the combination of a small grace period and large critical
+sections. This library addresses this by allowing the writer to do
+other work without having to block till the readers report their quiescent
+state.
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the start and end of the while(1) loop (where no
+references to shared data structures are kept) act as perfect quiescent
+states. This combines all the shared data structure accesses into a
+single, large critical section, which helps keep the overhead on the
+reader side to a minimum.
+
+DPDK supports a pipeline model of packet processing, as well as service cores.
+In these use cases, a given data structure may not be used by all the
+workers in the application. The writer does not have to wait for all
+the workers to report their quiescent state. To provide the required
+flexibility, this library has a concept of QS variable. The application
+can create one QS variable per data structure to help it track the
+end of grace period for each data structure. This helps keep the grace
+period to a minimum.
+
+How to use this library
+-----------------------
+
+The application must allocate memory and initialize a QS variable.
+
+The application can call ``rte_rcu_qsbr_get_memsize`` to calculate the size
+of memory to allocate. This API takes, as a parameter, the maximum number of
+reader threads that will use this variable. Currently, a maximum of 1024
+threads is supported.
+
+Further, the application can initialize a QS variable using the API
+``rte_rcu_qsbr_init``.
+
+Each reader thread is assumed to have a unique thread ID. Currently, the
+management of thread IDs (for ex: allocation/free) is left to the
+application. The thread ID should be in the range of 0 to (maximum number
+of threads - 1) provided while creating the QS variable.
+The application could also use lcore_id as the thread ID where applicable.
+
+The ``rte_rcu_qsbr_thread_register`` API registers a reader thread that
+will report its quiescent state. This can be called from a reader thread.
+A control plane thread can also call this on behalf of a reader thread.
+The reader thread must call the ``rte_rcu_qsbr_thread_online`` API to start
+reporting its quiescent state.
+
+Some of the use cases might require the reader threads to make
+blocking API calls (for ex: while using eventdev APIs). The writer thread
+should not wait for such reader threads to enter quiescent state.
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` API, before calling
+blocking APIs. It can call ``rte_rcu_qsbr_thread_online`` API once the blocking
+API call returns.
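The effect of going offline can be illustrated by extending the earlier counter model with an online flag per reader: the grace-period check simply skips offline threads. This is a hypothetical single-threaded sketch (the ``qsbr_*`` names only mirror the semantics of the ``rte_rcu_qsbr_thread_online``/``offline`` APIs, not the library's internals).

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_READERS 2

/* Hypothetical QSBR bookkeeping with an online flag per reader. */
struct qsbr {
	uint64_t token;
	uint64_t cnt[MAX_READERS];
	bool online[MAX_READERS];
};

static void qsbr_thread_online(struct qsbr *v, unsigned int id)
{
	v->cnt[id] = v->token; /* catch up before touching shared data */
	v->online[id] = true;
}

static void qsbr_thread_offline(struct qsbr *v, unsigned int id)
{
	v->online[id] = false; /* e.g. before a blocking API call */
}

static uint64_t qsbr_start(struct qsbr *v)
{
	return ++v->token;
}

static void qsbr_quiescent(struct qsbr *v, unsigned int id)
{
	v->cnt[id] = v->token;
}

/* Offline readers are not waited upon. */
static bool qsbr_check(const struct qsbr *v, uint64_t token)
{
	for (unsigned int i = 0; i < MAX_READERS; i++)
		if (v->online[i] && v->cnt[i] < token)
			return false;
	return true;
}
```

In this model, a reader that has gone offline before the writer starts a query never delays the end of the grace period.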
+
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the API ``rte_rcu_qsbr_start``. It is possible for multiple
+writer threads to query the quiescent state status simultaneously. Hence,
+``rte_rcu_qsbr_start`` returns a token to each caller.
+
+The writer thread must call the ``rte_rcu_qsbr_check`` API with the token to
+get the current quiescent state status. An option to block till all the reader
+threads enter the quiescent state is provided. If this API indicates that
+all the reader threads have entered the quiescent state, the application
+can free the deleted entry.
+
+The APIs ``rte_rcu_qsbr_start`` and ``rte_rcu_qsbr_check`` are lock-free.
+Hence, they can be called concurrently from multiple writers even while
+running as worker threads.
+
+The separation of triggering the reporting from querying the status provides
+the writer threads flexibility to do useful work instead of blocking till the
+reader threads enter the quiescent state or go offline. This reduces the
+memory accesses due to continuous polling for the status.
+
+``rte_rcu_qsbr_synchronize`` API combines the functionality of
+``rte_rcu_qsbr_start`` and blocking ``rte_rcu_qsbr_check`` into a single API.
+This API triggers the reader threads to report their quiescent state and
+polls till all the readers enter the quiescent state or go offline. This
+API does not allow the writer to do useful work while waiting and
+introduces additional memory accesses due to continuous polling.
+
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` and
+``rte_rcu_qsbr_thread_unregister`` APIs to remove itself from reporting its
+quiescent state. The ``rte_rcu_qsbr_check`` API will not wait for this reader
+thread to report the quiescent state status anymore.
+
+The reader threads should call the ``rte_rcu_qsbr_quiescent`` API to indicate
+that they have entered a quiescent state. This API checks if a writer has
+triggered a quiescent state query and updates the state accordingly.
+
+The ``rte_rcu_qsbr_lock`` and ``rte_rcu_qsbr_unlock`` APIs are empty
+functions. However, when ``CONFIG_RTE_LIBRTE_RCU_DEBUG`` is enabled, these
+APIs aid in debugging issues. One can mark the access to shared data
+structures on the reader side using these APIs. The ``rte_rcu_qsbr_quiescent``
+API will then check that all the locks have been unlocked.
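The check described above can be modeled as a nesting counter: each lock call increments it, each unlock decrements it, and reporting a quiescent state while the counter is non-zero indicates a bug. The ``dbg_rcu_*`` names below are illustrative only, not the library's internals (which keep the counter per thread).

```c
/* Hypothetical model of the RCU debug lock counter: a lock/unlock
 * pair brackets access to a shared data structure, and reporting a
 * quiescent state while any "lock" is still held is an error. */
static int lock_cnt; /* kept per thread in the real library */

static void dbg_rcu_lock(void)   { lock_cnt++; }
static void dbg_rcu_unlock(void) { lock_cnt--; }

/* Returns 0 on success, -1 if called inside a marked critical section. */
static int dbg_rcu_quiescent(void)
{
	return lock_cnt == 0 ? 0 : -1;
}
```

Because the production builds compile the lock/unlock calls away, marking critical sections this way costs nothing outside of debug builds.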
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v6 3/3] doc/rcu: add lib_rcu documentation
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
@ 2019-04-17 4:13 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-17 4:13 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++++++
5 files changed, 698 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index de1e215dd..8f0e84de6 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 7722fc3e9..b9896cb63 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..e7ca1dacb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,509 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em;font-weight:bold}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.75001em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st13 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st14 {fill:#000000;font-family:Century Gothic;font-size:1.99999em}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.99999em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-239);marker-start:url(#mrkr5-237);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-248);marker-start:url(#mrkr5-246);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-239);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-237" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-239" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-246" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-248" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(327.227,-946.908)">
+ <title>Sheet.3</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(460.665,-944.869)">
+ <title>Sheet.4</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(519.302,-950.79)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(612.438,-944.869)">
+ <title>Sheet.6</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(664.388,-945.889)">
+ <title>Sheet.7</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(723.025,-951.494)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(814.123,-945.889)">
+ <title>Sheet.9</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-952.759)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="146.259" cy="1169.64" width="292.52" height="36.7136"/>
+ <path d="M292.52 1151.29 L0 1151.29 L0 1188 L292.52 1188 L292.52 1151.29" class="st3"/>
+ <text x="58.76" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(379.176,-863.295)">
+ <title>Sheet.11</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(512.614,-861.255)">
+ <title>Sheet.12</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(561.284,-867.106)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(664.388,-861.255)">
+ <title>Sheet.14</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(716.337,-862.275)">
+ <title>Sheet.15</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(775.009,-867.81)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(866.073,-862.275)">
+ <title>Sheet.17</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(143.348,-873.294)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(474.188,-777.642)">
+ <title>Sheet.19</title>
+ <path d="M0 1143.01 C0 1138.04 4.07 1133.96 9.04 1133.96 L124.46 1133.96 C129.43 1133.96 133.44 1138.04 133.44 1143.01
+ L133.44 1179.01 C133.44 1183.99 129.43 1188 124.46 1188 L9.04 1188 C4.07 1188 0 1183.99 0 1179.01 L0 1143.01
+ Z" class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(608.645,-775.602)">
+ <title>Sheet.20</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(666.862,-781.311)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(760.418,-775.602)">
+ <title>Sheet.22</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(812.367,-776.622)">
+ <title>Sheet.23</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(870.584,-782.015)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(962.103,-776.622)">
+ <title>Sheet.25</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(142.645,-787.5)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(882.826,-574.263)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="21.32" y="1173.77" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(419.545,-660.119)">
+ <title>Sheet.29</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(419.545,-684.783)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L82.7 1187.36 L151.2 1172.07" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(214.454,-663.095)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(571.287,-681.326)">
+ <title>Sheet.33</title>
+ <path d="M0 738.67 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(515.013,-1130.65)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="60.7243" cy="1166.58" width="121.45" height="42.8314"/>
+ <path d="M121.45 1145.17 L0 1145.17 L0 1188 L121.45 1188 L121.45 1145.17" class="st3"/>
+ <text x="26.02" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(434.372,-1096.8)">
+ <title>Sheet.35</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(434.372,-1100.37)">
+ <title>Sheet.36</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(193.5,-1103.76)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(714.3,-675.425)">
+ <title>Sheet.38</title>
+ <path d="M0 732.77 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(795.979,-637.904)">
+ <title>Sheet.39</title>
+ <path d="M0 1112.54 L0 1188 L0 1112.54" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(716.782,-675.425)">
+ <title>Sheet.40</title>
+ <path d="M79.2 1188 L52.71 1187.94 L0 1147.21" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(803.572,-639.285)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="172.421" cy="1152.51" width="344.85" height="70.9752"/>
+ <path d="M344.84 1117.02 L0 1117.02 L0 1188 L344.84 1188 L344.84 1117.02" class="st3"/>
+ <text x="0" y="1133.61" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(680.801,-1130.65)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="42.0169" cy="1166.58" width="84.04" height="42.8314"/>
+ <path d="M84.03 1145.17 L0 1145.17 L0 1188 L84.03 1188 L84.03 1145.17" class="st3"/>
+ <text x="18.89" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(811.005,-1110.05)">
+ <title>Sheet.48</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(658.61,-1083.99)">
+ <title>Sheet.49</title>
+ <path d="M153.05 1149.63 L113.7 1149.57 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(798.359,-1110.46)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="107.799" cy="1167.81" width="215.6" height="40.3845"/>
+ <path d="M215.6 1147.62 L0 1147.62 L0 1188 L215.6 1188 L215.6 1147.62" class="st3"/>
+ <text x="43.79" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(599.196,-662.779)">
+ <title>Sheet.51</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(464.931,-1052.95)">
+ <title>Sheet.52</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(464.931,-1056.52)">
+ <title>Sheet.53</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(225,-1058.76)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(711.244,-662.779)">
+ <title>Sheet.56</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(664.897,-1045.31)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L146.76 1112.94" class="st13"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(619.059,-848.701)">
+ <title>Sheet.58</title>
+ <path d="M432.2 1184.24 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(1038.62,-837.364)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="130" cy="1167.81" width="260.01" height="40.3845"/>
+ <path d="M260 1147.62 L0 1147.62 L0 1188 L260 1188 L260 1147.62" class="st3"/>
+ <text x="52.25" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(621.606,-848.828)">
+ <title>Sheet.60</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(824.31,-849.848)">
+ <title>Sheet.61</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(345.944,-933.143)">
+ <title>Sheet.62</title>
+ <path d="M705.32 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(1038.62,-915.684)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="137.691" cy="1167.81" width="275.39" height="40.3845"/>
+ <path d="M275.38 1147.62 L0 1147.62 L0 1188 L275.38 1188 L275.38 1147.62" class="st3"/>
+ <text x="55.18" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(346.581,-932.442)">
+ <title>Sheet.64</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(621.606,-933.461)">
+ <title>Sheet.65</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(856.905,-934.481)">
+ <title>Sheet.66</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(472.82,-756.389)">
+ <title>Sheet.67</title>
+ <path d="M578.44 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(473.456,-755.688)">
+ <title>Sheet.68</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(1016.87,-757.728)">
+ <title>Sheet.69</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(1060.04,-738.651)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1167.81" width="193.55" height="40.3845"/>
+ <path d="M193.55 1147.62 L0 1147.62 L0 1188 L193.55 1188 L193.55 1147.62" class="st3"/>
+ <text x="31.03" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(190.02,-464.886)">
+ <title>Sheet.71</title>
+ <path d="M0 1151.91 C0 1148.19 3.88 1145.17 8.66 1145.17 L43.29 1145.17 C48.13 1145.17 51.95 1148.19 51.95 1151.91 L51.95
+ 1181.26 C51.95 1185.03 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1185.03 0 1181.26 L0 1151.91 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(259.003,-466.895)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. non critical section or quiescent state.</tspan></text> </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(190.02,-389.169)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="25.9746" cy="1166.58" width="51.95" height="42.8314"/>
+ <path d="M0 1152.31 C0 1148.39 1.43 1145.17 3.16 1145.17 L48.79 1145.17 C50.55 1145.17 51.95 1148.39 51.95 1152.31 L51.95
+ 1180.86 C51.95 1184.83 50.55 1188 48.79 1188 L3.16 1188 C1.43 1188 0 1184.83 0 1180.86 L0 1152.31 Z"
+ class="st2"/>
+ <text x="12.9" y="1173.78" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(259.003,-388.777)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(289.017,-301.151)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="332.491" cy="1160.47" width="664.99" height="55.0626"/>
+ <path d="M664.98 1132.94 L0 1132.94 L0 1188 L664.98 1188 L664.98 1132.94" class="st3"/>
+ <text x="0" y="1153.27" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is removed <tspan
+ x="0" dy="1.2em" class="st9">using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(177.543,-315.596)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(288,-239.327)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1175.01" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted entry.</text> </g>
+ <g id="shape78-217" v:mID="78" v:groupContext="shape" transform="translate(177.543,-240.744)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="34.3786" cy="1166.58" width="68.76" height="42.8314"/>
+ <path d="M68.76 1145.17 L0 1145.17 L0 1188 L68.76 1188 L68.76 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-221" v:mID="79" v:groupContext="shape" transform="translate(289.228,-163.612)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1160.61" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during which <tspan
+ x="0" dy="1.2em" class="st9">memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-226" v:mID="80" v:groupContext="shape" transform="translate(187.999,-162)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="39.5985" cy="1166.58" width="79.2" height="42.8314"/>
+ <path d="M79.2 1145.17 L0 1145.17 L0 1188 L79.2 1188 L79.2 1145.17" class="st3"/>
+ <text x="0" y="1158.96" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-231" v:mID="83" v:groupContext="shape" transform="translate(572.146,-1080.07)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L132.49 1188" class="st16"/>
+ </g>
+ <g id="shape84-240" v:mID="84" v:groupContext="shape" transform="translate(599.196,-1042.14)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L104.28 1188" class="st18"/>
+ </g>
+ <g id="shape85-249" v:mID="85" v:groupContext="shape" transform="translate(980.637,-595.338)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L92.16 1188" class="st20"/>
+ </g>
+ <g id="shape86-254" v:mID="86" v:groupContext="shape" transform="translate(444.835,-603.428)">
+ <title>Sheet.86</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape87-257" v:mID="87" v:groupContext="shape" transform="translate(444.835,-637.489)">
+ <title>Sheet.87</title>
+ <path d="M0 1188 L84.43 1186.61 L154.36 1153.31" class="st7"/>
+ </g>
+ <g id="shape88-260" v:mID="88" v:groupContext="shape" transform="translate(241.369,-607.028)">
+ <title>Sheet.88</title>
+ <desc>Remove reference to entry2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry2</tspan></text> </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 95f5e7964..17df2c563 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -56,6 +56,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..55d44e15d
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,185 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-less data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for ex: real-time applications).
+
+In the following sections, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or anything that is representative of
+memory, for ex: an index into a free element array.
+
+Since these data structures are lock-less, writers and readers access
+them concurrently. Hence, while removing an element from a data
+structure, the writer cannot return the memory to the allocator without
+knowing that the readers are no longer referencing that element/memory.
+Hence, it is required to separate the operation of removing an element
+into 2 steps:
+
+Delete: in this step, the writer removes the reference to the element from
+the data structure but does not return the associated memory to the
+allocator. This will ensure that new readers will not get a reference to
+the removed element. Removing the reference is an atomic operation.
+
+Free (Reclaim): in this step, the writer returns the memory to the
+memory allocator, only after knowing that all the readers have stopped
+referencing the deleted element.
+
+This library helps the writer determine when it is safe to free the
+memory.
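
The two-step removal described above can be sketched in plain C11. This is a hedged, minimal model: the single-slot "data structure" and the function names are illustrative only, not part of this library's API.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

/* A minimal shared "data structure": a single pointer slot that
 * readers dereference and the writer updates. */
static _Atomic(int *) slot;

/* Step 1 -- Delete: atomically remove the reference so that *new*
 * readers can no longer obtain it. The memory is NOT freed here,
 * because pre-existing readers may still hold the old pointer. */
static int *delete_entry(void)
{
    return atomic_exchange(&slot, NULL);
}

/* Step 2 -- Free (Reclaim): return the memory to the allocator.
 * In a real application this must only run after the grace period,
 * i.e. once all pre-existing readers have stopped referencing the
 * deleted entry. */
static void free_entry(int *removed)
{
    free(removed);
}
```

A writer would call ``delete_entry`` first, wait for the grace period to end, and only then call ``free_entry``.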
+
+This library makes use of the thread Quiescent State (QS).
+
+What is Quiescent State
+-----------------------
+Quiescent State can be defined as 'any point in the thread execution where the
+thread does not hold a reference to shared memory'. It is up to the application
+to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. When it is
+accessing D1, if the writer has to remove an element from D1, the
+writer cannot free the memory associated with that element immediately.
+The writer can return the memory to the allocator only after the reader
+stops referencing D1. In other words, reader thread RT1 has to enter
+a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, writer has to
+wait till thread 2 enters quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+a quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened. So, reader thread 3 cannot have a reference to the
+deleted entry.
+
+Note that the critical section for D2 is a quiescent state for D1.
+i.e. for a given data structure Dx, any point in the thread's execution
+that does not reference Dx is a quiescent state for Dx.
+
+Since memory is not freed immediately, there might be a need for
+provisioning of additional memory, depending on the application requirements.
+
+Factors affecting the RCU mechanism
+-----------------------------------
+
+It is important to make sure that this library keeps the overhead of
+identifying the end of grace period and subsequent freeing of memory,
+to a minimum. The following paras explain how grace period and critical
+section affect this overhead.
+
+The writer has to poll the readers to identify the end of the grace
+period. Polling introduces memory accesses and wastes CPU cycles. The
+memory is not available for reuse during the grace period. Longer grace
+periods exacerbate these conditions.
+
+The duration of the grace period is proportional to the length of the
+critical sections and the number of reader threads. Keeping the critical
+sections smaller will keep the grace period smaller. However, keeping the
+critical sections smaller requires additional CPU cycles (due to additional
+reporting) in the readers.
+
+Hence, we need the combination of a small grace period and large critical
+sections. This library addresses this by allowing the writer to do
+other work without having to block until the readers report their
+quiescent state.
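
This start/check split can be modeled with a global token and one counter per reader. The following is a simplified, self-contained sketch of the QSBR idea with illustrative names; it is not this library's actual implementation.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_READERS 4

/* Global token: bumped by the writer to start a new grace period. */
static _Atomic uint64_t token = 1;
/* Per-reader counters: a reader copies the token when it quiesces. */
static _Atomic uint64_t reader_cnt[MAX_READERS];

/* Writer: trigger the readers to report; cheap and non-blocking. */
static uint64_t qsbr_start(void)
{
    return atomic_fetch_add(&token, 1) + 1;
}

/* Reader: report a quiescent state, e.g. once per while(1) iteration. */
static void qsbr_quiescent(unsigned int id)
{
    atomic_store(&reader_cnt[id], atomic_load(&token));
}

/* Writer: has every reader quiesced since qsbr_start() returned t?
 * Between calls the writer is free to do other work instead of
 * blocking, which is the overhead reduction described above. */
static bool qsbr_check(uint64_t t)
{
    for (unsigned int i = 0; i < MAX_READERS; i++)
        if (atomic_load(&reader_cnt[i]) < t)
            return false;
    return true;
}
```

A writer would call ``qsbr_start`` after the Delete step, retry ``qsbr_check`` while doing useful work, and perform the Free step once the check succeeds.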
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the start and end of while(1) loop (where no
+references to shared data structures are kept) act as perfect quiescent
+states. This will combine all the shared data structure accesses into a
+single, large critical section which helps keep the overhead on the
+reader side to a minimum.
+
+DPDK supports a pipeline model of packet processing and service cores.
+In these use cases, a given data structure may not be used by all the
+workers in the application, so the writer does not have to wait for all
+the workers to report their quiescent state. To provide the required
+flexibility, this library has the concept of a QS variable. The application
+can create one QS variable per data structure to help it track the
+end of the grace period for each data structure. This helps keep the grace
+period to a minimum.
+
+How to use this library
+-----------------------
+
+The application must allocate memory and initialize a QS variable.
+
+The application can call ``rte_rcu_qsbr_get_memsize`` to calculate the size
+of memory to allocate. This API takes the maximum number of reader threads
+that will use this variable as a parameter. Currently, a maximum of 1024
+threads is supported.
+
+Further, the application can initialize a QS variable using the API
+``rte_rcu_qsbr_init``.
+
+Each reader thread is assumed to have a unique thread ID. Currently, the
+management of thread IDs (for ex: allocation/free) is left to the
+application. The thread ID should be in the range of 0 to the maximum
+number of threads provided while creating the QS variable.
+The application could also use lcore_id as the thread ID where applicable.
+
+``rte_rcu_qsbr_thread_register`` API will register a reader thread
+to report its quiescent state. This can be called from a reader thread.
+A control plane thread can also call this on behalf of a reader thread.
+The reader thread must call ``rte_rcu_qsbr_thread_online`` API to start
+reporting its quiescent state.
+
+Some of the use cases might require the reader threads to make
+blocking API calls (for ex: while using eventdev APIs). The writer thread
+should not wait for such reader threads to enter quiescent state.
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` API, before calling
+blocking APIs. It can call ``rte_rcu_qsbr_thread_online`` API once the blocking
+API call returns.
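
One way to model this offline/online behaviour is to reserve a sentinel counter value meaning "offline, do not wait for me". The sketch below is self-contained and hedged: the sentinel choice and all names are assumptions for illustration, not this library's implementation (in the library, ``rte_rcu_qsbr_thread_offline`` and ``rte_rcu_qsbr_thread_online`` play this role).

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_READERS 2
#define OFFLINE 0 /* sentinel: reader is offline, skip it in checks */

static _Atomic uint64_t grace_token = 1;
/* Counters start at OFFLINE: a reader only participates after it
 * goes online. */
static _Atomic uint64_t reader_state[MAX_READERS];

/* Reader: call before a blocking API call; the writer stops waiting. */
static void reader_offline(unsigned int id)
{
    atomic_store(&reader_state[id], OFFLINE);
}

/* Reader: call after the blocking call returns, to resume reporting.
 * Copying the current token counts as an immediate quiescent state. */
static void reader_online(unsigned int id)
{
    atomic_store(&reader_state[id], atomic_load(&grace_token));
}

/* Writer: offline readers cannot hold references, so they are not
 * waited on when checking for the end of the grace period. */
static bool grace_period_over(uint64_t t)
{
    for (unsigned int i = 0; i < MAX_READERS; i++) {
        uint64_t c = atomic_load(&reader_state[i]);
        if (c != OFFLINE && c < t)
            return false;
    }
    return true;
}
```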
+
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the API ``rte_rcu_qsbr_start``. It is possible for multiple
+writer threads to query the quiescent state status simultaneously. Hence,
+``rte_rcu_qsbr_start`` returns a token to each caller.
+
+The writer thread must call ``rte_rcu_qsbr_check`` API with the token to
+get the current quiescent state status. Option to block till all the reader
+threads enter the quiescent state is provided. If this API indicates that
+all the reader threads have entered the quiescent state, the application
+can free the deleted entry.
+
+The APIs ``rte_rcu_qsbr_start`` and ``rte_rcu_qsbr_check`` are lock free.
+Hence, they can be called concurrently from multiple writers even while
+running as worker threads.
+
+The separation of triggering the reporting from querying the status provides
+the writer threads flexibility to do useful work instead of blocking for the
+reader threads to enter the quiescent state or go offline. This reduces the
+memory accesses due to continuous polling for the status.
+
+``rte_rcu_qsbr_synchronize`` API combines the functionality of
+``rte_rcu_qsbr_start`` and blocking ``rte_rcu_qsbr_check`` into a single API.
+This API triggers the reader threads to report their quiescent state and
+polls till all the readers enter the quiescent state or go offline. This
+API does not allow the writer to do useful work while waiting and
+introduces additional memory accesses due to continuous polling.
+
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` and
+``rte_rcu_qsbr_thread_unregister`` APIs to remove itself from reporting its
+quiescent state. The ``rte_rcu_qsbr_check`` API will not wait for this reader
+thread to report the quiescent state status anymore.
+
+The reader threads should call the ``rte_rcu_qsbr_quiescent`` API to indicate
+that they have entered a quiescent state. This API checks if a writer has
+triggered a quiescent state query and updates the state accordingly.
+
+The ``rte_rcu_qsbr_lock`` and ``rte_rcu_qsbr_unlock`` APIs are empty
+functions. However, when ``CONFIG_RTE_LIBRTE_RCU_DEBUG`` is enabled, these
+APIs aid in debugging issues: one can mark the access to shared data
+structures on the reader side using these APIs, and ``rte_rcu_qsbr_quiescent``
+will check that all such locks have been unlocked.
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-17 1:45 ` Honnappa Nagarahalli
2019-04-17 1:45 ` Honnappa Nagarahalli
@ 2019-04-17 13:39 ` Ananyev, Konstantin
2019-04-17 13:39 ` Ananyev, Konstantin
` (2 more replies)
1 sibling, 3 replies; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-04-17 13:39 UTC (permalink / raw)
To: Honnappa Nagarahalli, Stephen Hemminger
Cc: paulmck, Kovacevic, Marko, dev, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd, nd
> > > > > > > > > > > >
> > > > > > > > > > > > After evaluating long term API/ABI issues, I think
> > > > > > > > > > > > you need to get rid of almost all use of inline and
> > > > > > > > > > > > visible structures. Yes it might be marginally
> > > > > > > > > > > > slower, but you thank me
> > > > > > the first time you have to fix something.
> > > > > > > > > > > >
> > > > > > > > > > > Agree, I was planning on another version to address
> > > > > > > > > > > this (I am yet
> > > > > > to take a look at your patch addressing the ABI).
> > > > > > > > > > > The structure visibility definitely needs to be addressed.
> > > > > > > > > > > For the inline functions, is the plan to convert all
> > > > > > > > > > > the inline functions in DPDK? If yes, I think we need
> > > > > > > > > > > to consider the performance
> > > > > > > > > > difference. May be consider L3-fwd application, change
> > > > > > > > > > all the
> > > > > > inline functions in its path and run a test?
> > > > > > > > > >
> > > > > > > > > > Every function that is not in the direct datapath should
> > > > > > > > > > not be
> > > > > > inline.
> > > > > > > > > > Exceptions or things like rx/tx burst, ring
> > > > > > > > > > enqueue/dequeue, and packet alloc/free
> > > > > I do not understand how DPDK can claim ABI compatibility if we
> > > > > have
> > > > inline functions (unless we freeze any development in these inline
> > > > functions forever).
> > > > >
> > > > > > > > >
> > > > > > > > > Plus synchronization routines: spin/rwlock/barrier, etc.
> > > > > > > > > I think rcu should be one of such exceptions - it is just
> > > > > > > > > another synchronization mechanism after all (just a bit
> > > > > > > > > more
> > > > sophisticated).
> > > > > > > > > Konstantin
> > > > > > > >
> > > > > > > > If you look at the other userspace RCU, you will see that the
> > > > > > > > only inlines are the rcu_read_lock,rcu_read_unlock and
> > > > > > rcu_reference/rcu_assign_pointer.
> > > > > > > >
> > > > > > > > The synchronization logic is all real functions.
> > > > > > >
> > > > > > > In fact, I think urcu provides both flavors:
> > > > > > > https://github.com/urcu/userspace-
> > > > > > rcu/blob/master/include/urcu/static/
> > > > > > > urcu-qsbr.h I still don't understand why we have to treat it
> > > > > > > differently than, let's say, spin-lock/ticket-lock or rwlock.
> > > > > > > If we gone all the way to create our own version of rcu, we
> > > > > > > probably want it to be as fast as possible (I know that main
> > > > > > > speedup should come from the fact that readers don't have to
> > > > > > > wait for writer to finish, but still...)
> > > > > > >
> > > > > > > Konstantin
> > > > > > >
> > > > > >
> > > > > > Having locking functions inline is already a problem in current
> > releases.
> > > > > > The implementation can not be improved without breaking ABI (or
> > > > > > doing special workarounds like lock v2)
> > > > > I think ABI and inline function discussion needs to be taken up in
> > > > > a
> > > > different thread.
> > > > >
> > > > > Currently, I am looking to hide the structure visibility. I looked
> > > > > at your
> > > > patch [1], it is a different case than what I have in this patch. It
> > > > is a pretty generic use case as well (similar situation exists in
> > > > other libraries). I think a generic solution should be agreed upon.
> > > > >
> > > > > If we have to hide the structure content, the handle to QS
> > > > > variable
> > > > returned to the application needs to be opaque. I suggest using 'void *'
> > > > behind which any structure can be used.
> > > > >
> > > > > typedef void * rte_rcu_qsbr_t;
> > > > > typedef void * rte_hash_t;
> > > > >
> > > > > But it requires typecasting.
> > > > >
> > > > > [1] http://patchwork.dpdk.org/cover/52609/
> > > >
> > > > C allows structure to be defined without knowing what is in it
> > therefore.
> > > >
> > > > typedef struct rte_rcu_qsbr rte_rcu_qsbr_t;
> > > >
> > > > is preferred (or do it without typedef)
> > > >
> > > > struct rte_rcu_qsbr;
> > >
> > > I see that rte_hash library uses the same approach (struct rte_hash in
> > rte_hash.h, though it is marking as internal). But the ABI Laboratory tool
> > [1] seems to be reporting incorrect numbers for this library even though
> > the internal structure is changed.
> > >
> > > [1]
> > > https://abi-
> > laboratory.pro/index.php?view=compat_report&l=dpdk&v1=19.0
> > > 2&v2=current&obj=66794&kind=abi
> >
> > The problem is rte_hash structure is exposed as part of ABI in
> > rte_cuckoo_hash.h This was a mistake.
> Do you mean, due to the use of structure with the same name? I am wondering if it is just a tools issue. The application is not supposed to
> include rte_cuckoo_hash.h.
>
> For the RCU library, we either need to go all functions or leave it the way it is. I do not see a point in trying to hide the internal structure
> while having inline functions.
>
> I converted the inline functions to function calls.
>
> Testing on Arm platform (results *are* repeatable) shows very minimal drop (0.1% to 0.2%) in performance while using lock-free rte_hash
> data structure. But one of the test cases, which just spins, shows a good amount of drop (41%).
>
> Testing on x86 (Xeon Gold 6132 CPU @ 2.60GHz, results *are* pretty repeatable) shows performance improvements (7% to 8%) while using
> lock-free rte_hash data structure. The test cases which just spin show significant drops (14%, 155%, 231%).
> Konstantin, any thoughts on the results?
The fact that functions show better results than inline (even for hash) is sort of a surprise to me.
I don't have any good explanation off-hand, but the actual numbers for the hash test are huge by themselves...
In general, I still think sync primitives are better off staying inlined - there is not much point in creating ones
and then finding out that no one uses them because they are too slow.
Though if there is no real perf difference between inlined and normal - there is no point in keeping them inlined.
About the RCU lib, my thought is to have the inlined version for 19.05 and do further perf testing with it
(as I remember there were suggestions about using it in l3fwd for guarding the routing table or so).
If we find there is no real difference - move it to the not-inlined version in 19.08.
It is experimental for now - so it could be changed without formal ABI breakage.
Konstantin
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-17 13:39 ` Ananyev, Konstantin
2019-04-17 13:39 ` Ananyev, Konstantin
@ 2019-04-17 14:02 ` Honnappa Nagarahalli
2019-04-17 14:02 ` Honnappa Nagarahalli
2019-04-17 14:18 ` Thomas Monjalon
2 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-17 14:02 UTC (permalink / raw)
To: Ananyev, Konstantin, Stephen Hemminger
Cc: paulmck, Kovacevic, Marko, dev, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
> > > > > > > > > > > > >
> > > > > > > > > > > > > After evaluating long term API/ABI issues, I
> > > > > > > > > > > > > think you need to get rid of almost all use of
> > > > > > > > > > > > > inline and visible structures. Yes it might be
> > > > > > > > > > > > > marginally slower, but you thank me
> > > > > > > the first time you have to fix something.
> > > > > > > > > > > > >
> > > > > > > > > > > > Agree, I was planning on another version to
> > > > > > > > > > > > address this (I am yet
> > > > > > > to take a look at your patch addressing the ABI).
> > > > > > > > > > > > The structure visibility definitely needs to be
> addressed.
> > > > > > > > > > > > For the inline functions, is the plan to convert
> > > > > > > > > > > > all the inline functions in DPDK? If yes, I think
> > > > > > > > > > > > we need to consider the performance
> > > > > > > > > > > difference. May be consider L3-fwd application,
> > > > > > > > > > > change all the
> > > > > > > inline functions in its path and run a test?
> > > > > > > > > > >
> > > > > > > > > > > Every function that is not in the direct datapath
> > > > > > > > > > > should not be
> > > > > > > inline.
> > > > > > > > > > > Exceptions or things like rx/tx burst, ring
> > > > > > > > > > > enqueue/dequeue, and packet alloc/free
> > > > > > I do not understand how DPDK can claim ABI compatibility if we
> > > > > > have
> > > > > inline functions (unless we freeze any development in these
> > > > > inline functions forever).
> > > > > >
> > > > > > > > > >
> > > > > > > > > > Plus synchronization routines: spin/rwlock/barrier, etc.
> > > > > > > > > > I think rcu should be one of such exceptions - it is
> > > > > > > > > > just another synchronization mechanism after all (just
> > > > > > > > > > a bit more
> > > > > sophisticated).
> > > > > > > > > > Konstantin
> > > > > > > > >
> > > > > > > > > If you look at the other userspace RCU, you will see that
> > > > > > > > > the only inlines are the rcu_read_lock,rcu_read_unlock
> > > > > > > > > and
> > > > > > > rcu_reference/rcu_assign_pointer.
> > > > > > > > >
> > > > > > > > > The synchronization logic is all real functions.
> > > > > > > >
> > > > > > > > In fact, I think urcu provides both flavors:
> > > > > > > > https://github.com/urcu/userspace-
> > > > > > > rcu/blob/master/include/urcu/static/
> > > > > > > > urcu-qsbr.h I still don't understand why we have to treat
> > > > > > > > it differently than, say, spin-lock/ticket-lock or rwlock.
> > > > > > > > If we've gone all the way to creating our own version of rcu,
> > > > > > > > we probably want it to be as fast as possible (I know that
> > > > > > > > main speedup should come from the fact that readers don't
> > > > > > > > have to wait for writer to finish, but still...)
> > > > > > > >
> > > > > > > > Konstantin
> > > > > > > >
> > > > > > >
> > > > > > > Having locking functions inline is already a problem in
> > > > > > > current
> > > releases.
> > > > > > > The implementation cannot be improved without breaking the ABI
> > > > > > > (or doing special workarounds like lock v2)
> > > > > > I think ABI and inline function discussion needs to be taken
> > > > > > up in a
> > > > > different thread.
> > > > > >
> > > > > > Currently, I am looking to hide the structure visibility. I
> > > > > > looked at your
> > > > > patch [1], it is a different case than what I have in this
> > > > > patch. It is a pretty generic use case as well (similar
> > > > > situation exists in other libraries). I think a generic solution should
> be agreed upon.
> > > > > >
> > > > > > If we have to hide the structure content, the handle to QS
> > > > > > variable
> > > > > returned to the application needs to be opaque. I suggest using
> 'void *'
> > > > > behind which any structure can be used.
> > > > > >
> > > > > > typedef void * rte_rcu_qsbr_t; typedef void * rte_hash_t;
> > > > > >
> > > > > > But it requires typecasting.
> > > > > >
> > > > > > [1] http://patchwork.dpdk.org/cover/52609/
> > > > >
> > > > > C allows a structure to be declared without knowing what is in it,
> > > therefore:
> > > > >
> > > > > typedef struct rte_rcu_qsbr rte_rcu_qsbr_t;
> > > > >
> > > > > is preferred (or do it without typedef)
> > > > >
> > > > > struct rte_rcu_qsbr;
> > > >
> > > > I see that rte_hash library uses the same approach (struct
> > > > rte_hash in
> > > rte_hash.h, though it is marked as internal). But the ABI
> > > Laboratory tool [1] seems to be reporting incorrect numbers for this
> > > library even though the internal structure is changed.
> > > >
> > > > [1]
> > > > https://abi-
> > > laboratory.pro/index.php?view=compat_report&l=dpdk&v1=19.0
> > > > 2&v2=current&obj=66794&kind=abi
> > >
> > > The problem is rte_hash structure is exposed as part of ABI in
> > > rte_cuckoo_hash.h This was a mistake.
> > Do you mean, due to the use of structure with the same name? I am
> > wondering if it is just a tools issue. The application is not supposed to
> include rte_cuckoo_hash.h.
> >
> > For the RCU library, we either need to go all functions or leave it
> > the way it is. I do not see a point in trying to hide the internal structure
> while having inline functions.
> >
> > I converted the inline functions to function calls.
> >
> > Testing on Arm platform (results *are* repeatable) shows very minimal
> > drop (0.1% to 0.2%) in performance while using lock-free rte_hash data
> structure. But one of the test cases which is just spinning shows good
> amount of drop (41%).
> >
> > Testing on x86 (Xeon Gold 6132 CPU @ 2.60GHz, results *are* pretty
> > repeatable) shows performance improvements (7% to 8%) while using
> lock-free rte_hash data structure. The test cases which are just spinning
> show significant drop (14%, 155%, 231%).
> > Konstantin, any thoughts on the results?
>
> The fact that functions show better results than inline (even for hash) is sort
> of a surprise to me.
It was a surprise to me too and counter-intuitive to my understanding.
> Don't have any good explanation off-hand, but the actual numbers for
> hash test are huge by itself...
>
> In general, I still think sync primitives are better off staying inlined - there is
> not much point in creating ones only to find that no one uses them
> because they are too slow.
> Though if there is no real perf difference between inlined and normal - no
> point in keeping them inlined.
> About the RCU lib, my thought is to have the inlined version for 19.05 and do
> further perf testing with it (as I remember, there were suggestions about
> using it in l3fwd for guarding the routing table or so).
Yes, there is more work planned to integrate the library better, which might provide more insight.
> If we find there is no real difference - move it to the not-inlined version in
> 19.08.
+1.
> It is experimental for now - so could be changed without formal ABI
> breakage.
>
> Konstantin
>
>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-17 13:39 ` Ananyev, Konstantin
2019-04-17 13:39 ` Ananyev, Konstantin
2019-04-17 14:02 ` Honnappa Nagarahalli
@ 2019-04-17 14:18 ` Thomas Monjalon
2019-04-17 14:18 ` Thomas Monjalon
2 siblings, 1 reply; 260+ messages in thread
From: Thomas Monjalon @ 2019-04-17 14:18 UTC (permalink / raw)
To: Ananyev, Konstantin, Honnappa Nagarahalli, Stephen Hemminger
Cc: dev, paulmck, Kovacevic, Marko, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
17/04/2019 15:39, Ananyev, Konstantin:
> In general, I still think sync primitives are better off staying inlined - there is not much point
> in creating ones only to find that no one uses them because they are too slow.
> Though if there is no real perf difference between inlined and normal - no point in keeping them inlined.
> About the RCU lib, my thought is to have the inlined version for 19.05 and do further perf testing with it
> (as I remember, there were suggestions about using it in l3fwd for guarding the routing table or so).
> If we find there is no real difference - move it to the not-inlined version in 19.08.
> It is experimental for now - so could be changed without formal ABI breakage.
I agree, it looks reasonable to take v6 of the RCU patches
as an experimental implementation.
Then we can run some tests and discuss inlining or not
before promoting it to a stable API.
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 1/3] rcu: " Honnappa Nagarahalli
2019-04-17 4:13 ` Honnappa Nagarahalli
@ 2019-04-19 19:19 ` Paul E. McKenney
2019-04-19 19:19 ` Paul E. McKenney
2019-04-23 1:08 ` Honnappa Nagarahalli
1 sibling, 2 replies; 260+ messages in thread
From: Paul E. McKenney @ 2019-04-19 19:19 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: konstantin.ananyev, stephen, marko.kovacevic, dev, gavin.hu,
dharmik.thakkar, malvika.gupta
On Tue, Apr 16, 2019 at 11:13:57PM -0500, Honnappa Nagarahalli wrote:
> Add RCU library supporting quiescent state based memory reclamation method.
> This library helps identify the quiescent state of the reader threads so
> that the writers can free the memory associated with the lock less data
> structures.
>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Steve Capper <steve.capper@arm.com>
> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Looks much better!
One more suggestion below, on rte_rcu_qsbr_thread_offline().
Thanx, Paul
> ---
> MAINTAINERS | 5 +
> config/common_base | 6 +
> lib/Makefile | 2 +
> lib/librte_rcu/Makefile | 23 ++
> lib/librte_rcu/meson.build | 7 +
> lib/librte_rcu/rte_rcu_qsbr.c | 257 ++++++++++++
> lib/librte_rcu/rte_rcu_qsbr.h | 629 +++++++++++++++++++++++++++++
> lib/librte_rcu/rte_rcu_version.map | 12 +
> lib/meson.build | 2 +-
> mk/rte.app.mk | 1 +
> 10 files changed, 943 insertions(+), 1 deletion(-)
> create mode 100644 lib/librte_rcu/Makefile
> create mode 100644 lib/librte_rcu/meson.build
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
> create mode 100644 lib/librte_rcu/rte_rcu_version.map
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index a08583471..ae54f37db 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1274,6 +1274,11 @@ F: examples/bpf/
> F: app/test/test_bpf.c
> F: doc/guides/prog_guide/bpf_lib.rst
>
> +RCU - EXPERIMENTAL
> +M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> +F: lib/librte_rcu/
> +F: doc/guides/prog_guide/rcu_lib.rst
> +
>
> Test Applications
> -----------------
> diff --git a/config/common_base b/config/common_base
> index 7fb0dedb6..f50d26c30 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -834,6 +834,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
> #
> CONFIG_RTE_LIBRTE_TELEMETRY=n
>
> +#
> +# Compile librte_rcu
> +#
> +CONFIG_RTE_LIBRTE_RCU=y
> +CONFIG_RTE_LIBRTE_RCU_DEBUG=n
> +
> #
> # Compile librte_lpm
> #
> diff --git a/lib/Makefile b/lib/Makefile
> index 26021d0c0..791e0d991 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
> DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
> DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
> DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
> +DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
> +DEPDIRS-librte_rcu := librte_eal
>
> ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
> diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
> new file mode 100644
> index 000000000..6aa677bd1
> --- /dev/null
> +++ b/lib/librte_rcu/Makefile
> @@ -0,0 +1,23 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Arm Limited
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +# library name
> +LIB = librte_rcu.a
> +
> +CFLAGS += -DALLOW_EXPERIMENTAL_API
> +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
> +LDLIBS += -lrte_eal
> +
> +EXPORT_MAP := rte_rcu_version.map
> +
> +LIBABIVER := 1
> +
> +# all source are stored in SRCS-y
> +SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
> +
> +# install includes
> +SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
> new file mode 100644
> index 000000000..0c2d5a2e0
> --- /dev/null
> +++ b/lib/librte_rcu/meson.build
> @@ -0,0 +1,7 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Arm Limited
> +
> +allow_experimental_apis = true
> +
> +sources = files('rte_rcu_qsbr.c')
> +headers = files('rte_rcu_qsbr.h')
> diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
> new file mode 100644
> index 000000000..466592a42
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_qsbr.c
> @@ -0,0 +1,257 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + *
> + * Copyright (c) 2018 Arm Limited
> + */
> +
> +#include <stdio.h>
> +#include <string.h>
> +#include <stdint.h>
> +#include <errno.h>
> +
> +#include <rte_common.h>
> +#include <rte_log.h>
> +#include <rte_memory.h>
> +#include <rte_malloc.h>
> +#include <rte_eal.h>
> +#include <rte_eal_memconfig.h>
> +#include <rte_atomic.h>
> +#include <rte_per_lcore.h>
> +#include <rte_lcore.h>
> +#include <rte_errno.h>
> +
> +#include "rte_rcu_qsbr.h"
> +
> +/* Get the memory size of QSBR variable */
> +size_t __rte_experimental
> +rte_rcu_qsbr_get_memsize(uint32_t max_threads)
> +{
> + size_t sz;
> +
> + if (max_threads == 0) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid max_threads %u\n",
> + __func__, max_threads);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + sz = sizeof(struct rte_rcu_qsbr);
> +
> + /* Add the size of quiescent state counter array */
> + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> +
> + /* Add the size of the registered thread ID bitmap array */
> + sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
> +
> + return sz;
> +}
> +
> +/* Initialize a quiescent state variable */
> +int __rte_experimental
> +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
> +{
> + size_t sz;
> +
> + if (v == NULL) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + sz = rte_rcu_qsbr_get_memsize(max_threads);
> + if (sz == 1)
> + return 1;
> +
> + /* Set all the threads to offline */
> + memset(v, 0, sz);
> + v->max_threads = max_threads;
> + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> + v->token = RTE_QSBR_CNT_INIT;
> +
> + return 0;
> +}
> +
> +/* Register a reader thread to report its quiescent state
> + * on a QS variable.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id, success;
> + uint64_t old_bmap, new_bmap;
> +
> + if (v == NULL || thread_id >= v->max_threads) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Make sure that the counter for registered threads does not
> + * go out of sync. Hence, additional checks are required.
> + */
> + /* Check if the thread is already registered */
> + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_RELAXED);
> + if (old_bmap & 1UL << id)
> + return 0;
> +
> + do {
> + new_bmap = old_bmap | (1UL << id);
> + success = __atomic_compare_exchange(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + &old_bmap, &new_bmap, 0,
> + __ATOMIC_RELEASE, __ATOMIC_RELAXED);
> +
> + if (success)
> + __atomic_fetch_add(&v->num_threads,
> + 1, __ATOMIC_RELAXED);
> + else if (old_bmap & (1UL << id))
> + /* Someone else registered this thread.
> + * Counter should not be incremented.
> + */
> + return 0;
> + } while (success == 0);
> +
> + return 0;
> +}
> +
> +/* Remove a reader thread, from the list of threads reporting their
> + * quiescent state on a QS variable.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id, success;
> + uint64_t old_bmap, new_bmap;
> +
> + if (v == NULL || thread_id >= v->max_threads) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Make sure that the counter for registered threads does not
> + * go out of sync. Hence, additional checks are required.
> + */
> + /* Check if the thread is already unregistered */
> + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_RELAXED);
> + if (old_bmap & ~(1UL << id))
> + return 0;
> +
> + do {
> + new_bmap = old_bmap & ~(1UL << id);
> + /* Make sure any loads of the shared data structure are
> + * completed before removal of the thread from the list of
> + * reporting threads.
> + */
> + success = __atomic_compare_exchange(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + &old_bmap, &new_bmap, 0,
> + __ATOMIC_RELEASE, __ATOMIC_RELAXED);
> +
> + if (success)
> + __atomic_fetch_sub(&v->num_threads,
> + 1, __ATOMIC_RELAXED);
> + else if (old_bmap & ~(1UL << id))
> + /* Someone else unregistered this thread.
> + * Counter should not be incremented.
> + */
> + return 0;
> + } while (success == 0);
> +
> + return 0;
> +}
> +
> +/* Wait till the reader threads have entered quiescent state. */
> +void __rte_experimental
> +rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL);
> +
> + t = rte_rcu_qsbr_start(v);
> +
> + /* If the current thread has readside critical section,
> + * update its quiescent state status.
> + */
> + if (thread_id != RTE_QSBR_THRID_INVALID)
> + rte_rcu_qsbr_quiescent(v, thread_id);
> +
> + /* Wait for other readers to enter quiescent state */
> + rte_rcu_qsbr_check(v, t, true);
> +}
> +
> +/* Dump the details of a single quiescent state variable to a file. */
> +int __rte_experimental
> +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
> +{
> + uint64_t bmap;
> + uint32_t i, t;
> +
> + if (v == NULL || f == NULL) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + fprintf(f, "\nQuiescent State Variable @%p\n", v);
> +
> + fprintf(f, " QS variable memory size = %lu\n",
> + rte_rcu_qsbr_get_memsize(v->max_threads));
> + fprintf(f, " Given # max threads = %u\n", v->max_threads);
> + fprintf(f, " Current # threads = %u\n", v->num_threads);
> +
> + fprintf(f, " Registered thread ID mask = 0x");
> + for (i = 0; i < v->num_elems; i++)
> + fprintf(f, "%lx", __atomic_load_n(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_ACQUIRE));
> + fprintf(f, "\n");
> +
> + fprintf(f, " Token = %lu\n",
> + __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
> +
> + fprintf(f, "Quiescent State Counts for readers:\n");
> + for (i = 0; i < v->num_elems; i++) {
> + bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_ACQUIRE);
> + while (bmap) {
> + t = __builtin_ctzl(bmap);
> + fprintf(f, "thread ID = %d, count = %lu\n", t,
> + __atomic_load_n(
> + &v->qsbr_cnt[i].cnt,
> + __ATOMIC_RELAXED));
> + bmap &= ~(1UL << t);
> + }
> + }
> +
> + return 0;
> +}
> +
> +int rcu_log_type;
> +
> +RTE_INIT(rte_rcu_register)
> +{
> + rcu_log_type = rte_log_register("lib.rcu");
> + if (rcu_log_type >= 0)
> + rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
> +}
> diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
> new file mode 100644
> index 000000000..73fa3354e
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> @@ -0,0 +1,629 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (c) 2018 Arm Limited
> + */
> +
> +#ifndef _RTE_RCU_QSBR_H_
> +#define _RTE_RCU_QSBR_H_
> +
> +/**
> + * @file
> + * RTE Quiescent State Based Reclamation (QSBR)
> + *
> + * Quiescent State (QS) is any point in the thread execution
> + * where the thread does not hold a reference to a data structure
> + * in shared memory. While using lock-less data structures, the writer
> + * can safely free memory once all the reader threads have entered
> + * quiescent state.
> + *
> + * This library provides the ability for the readers to report quiescent
> + * state and for the writers to identify when all the readers have
> + * entered quiescent state.
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <errno.h>
> +#include <rte_common.h>
> +#include <rte_memory.h>
> +#include <rte_lcore.h>
> +#include <rte_debug.h>
> +#include <rte_atomic.h>
> +
> +extern int rcu_log_type;
> +
> +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
> +#define RCU_DP_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> + "%s(): " fmt "\n", __func__, ## args)
> +#else
> +#define RCU_DP_LOG(level, fmt, args...)
> +#endif
> +
> +/* Registered thread IDs are stored as a bitmap of 64b element array.
> + * Given thread id needs to be converted to index into the array and
> + * the id within the array element.
> + */
> +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> +#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
> + RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
> +#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
> + ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
> +#define RTE_QSBR_THRID_INDEX_SHIFT 6
> +#define RTE_QSBR_THRID_MASK 0x3f
> +#define RTE_QSBR_THRID_INVALID 0xffffffff
> +
> +/* Worker thread counter */
> +struct rte_rcu_qsbr_cnt {
> + uint64_t cnt;
> + /**< Quiescent state counter. Value 0 indicates the thread is offline
> + * 64b counter is used to avoid adding more code to address
> + * counter overflow. Changing this to 32b would require additional
> + * changes to various APIs.
> + */
> + uint32_t lock_cnt;
> + /**< Lock counter. Used when CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled */
> +} __rte_cache_aligned;
> +
> +#define RTE_QSBR_CNT_THR_OFFLINE 0
> +#define RTE_QSBR_CNT_INIT 1
> +
> +/* RTE Quiescent State variable structure.
> + * This structure has two elements that vary in size based on the
> + * 'max_threads' parameter.
> + * 1) Quiescent state counter array
> + * 2) Register thread ID array
> + */
> +struct rte_rcu_qsbr {
> + uint64_t token __rte_cache_aligned;
> + /**< Counter to allow for multiple concurrent quiescent state queries */
> +
> + uint32_t num_elems __rte_cache_aligned;
> + /**< Number of elements in the thread ID array */
> + uint32_t num_threads;
> + /**< Number of threads currently using this QS variable */
> + uint32_t max_threads;
> + /**< Maximum number of threads using this QS variable */
> +
> + struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
> + /**< Quiescent state counter array of 'max_threads' elements */
> +
> + /**< Registered thread IDs are stored in a bitmap array,
> + * after the quiescent state counter array.
> + */
> +} __rte_cache_aligned;
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Return the size of the memory occupied by a Quiescent State variable.
> + *
> + * @param max_threads
> + * Maximum number of threads reporting quiescent state on this variable.
> + * @return
> + * On success - size of memory in bytes required for this QS variable.
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - max_threads is 0
> + */
> +size_t __rte_experimental
> +rte_rcu_qsbr_get_memsize(uint32_t max_threads);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Initialize a Quiescent State (QS) variable.
> + *
> + * @param v
> + * QS variable
> + * @param max_threads
> + * Maximum number of threads reporting quiescent state on this variable.
> + * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
> + * @return
> + * On success - 0
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - max_threads is 0 or 'v' is NULL.
> + *
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Register a reader thread to report its quiescent state
> + * on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + * Any reader thread that wants to report its quiescent state must
> + * call this API. This can be called during initialization or as part
> + * of the packet processing loop.
> + *
> + * Note that rte_rcu_qsbr_thread_online must be called before the
> + * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will report its quiescent state on
> + * the QS variable. thread_id is a value between 0 and (max_threads - 1).
> + * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
> + * @return
> + * On success - 0
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - 'v' is NULL or thread_id is not less than max_threads
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Remove a reader thread from the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * This API can be called from the reader threads during shutdown.
> + * Ongoing quiescent state queries will stop waiting for the status from this
> + * unregistered reader thread.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will stop reporting its quiescent
> + * state on the QS variable.
> + * @return
> + * On success - 0
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - 'v' is NULL or thread_id is not less than max_threads
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Add a registered reader thread to the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * Any registered reader thread that wants to report its quiescent state must
> + * call this API before calling rte_rcu_qsbr_quiescent. This can be called
> + * during initialization or as part of the packet processing loop.
> + *
> + * The reader thread must call rte_rcu_qsbr_thread_offline API before
> + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> + * API does not wait indefinitely for the reader thread to update its QS.
> + *
> + * The reader thread must call rte_rcu_qsbr_thread_online API after the
> + * blocking function call returns, to ensure that rte_rcu_qsbr_check API
> + * waits for the reader thread to update its quiescent state.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will report its quiescent state on
> + * the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + /* Copy the current value of token.
> + * The fence at the end of the function will ensure that
> + * this load and the following store are not reordered below
> + * any subsequent load of the shared data structure.
> + */
> + t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
> +
> + /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> + * 'cnt' (64b) is accessed atomically.
> + */
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + t, __ATOMIC_RELAXED);
> +
> + /* The subsequent load of the data structure should not
> + * move above the store. Hence a store-load barrier
> + * is required.
> + * If the load of the data structure moves above the store,
> + * writer might not see that the reader is online, even though
> + * the reader is referencing the shared data structure.
> + */
> +#ifdef RTE_ARCH_X86_64
> + /* rte_smp_mb() for x86 is lighter */
> + rte_smp_mb();
> +#else
> + __atomic_thread_fence(__ATOMIC_SEQ_CST);
> +#endif
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Remove a registered reader thread from the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * This can be called during initialization or as part of the packet
> + * processing loop.
> + *
> + * The reader thread must call rte_rcu_qsbr_thread_offline API before
> + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> + * API does not wait indefinitely for the reader thread to update its QS.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * rte_rcu_qsbr_check API will not wait for the reader thread with
> + * this thread ID to report its quiescent state on the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
I suggest adding an assertion that v->qsbr_cnt[thread_id].lock_cnt
is equal to zero. This makes it easier to find a misplaced
rte_rcu_qsbr_thread_offline() call, similar to the assertion
you added to rte_rcu_qsbr_quiescent().
> +
> + /* The reader can go offline only after the load of the
> + * data structure is completed, i.e. no load of the
> + * data structure can move after this store.
> + */
> +
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Acquire a lock for accessing a shared data structure.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * This API is provided to aid debugging. This should be called before
> + * accessing a shared data structure.
> + *
> + * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, a lock counter is
> + * incremented. Similarly, rte_rcu_qsbr_unlock will decrement the
> + * counter. The rte_rcu_qsbr_check API will verify that this counter is 0.
> + *
> + * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread id
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_lock(struct rte_rcu_qsbr *v __rte_unused,
> + unsigned int thread_id __rte_unused)
> +{
> +#if defined(RTE_LIBRTE_RCU_DEBUG)
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + /* Increment the lock counter */
> + __atomic_fetch_add(&v->qsbr_cnt[thread_id].lock_cnt,
> + 1, __ATOMIC_ACQUIRE);
> +#endif
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Release a lock after accessing a shared data structure.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * This API is provided to aid debugging. This should be called after
> + * accessing a shared data structure.
> + *
> + * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, rte_rcu_qsbr_unlock will
> + * decrement a lock counter. rte_rcu_qsbr_check API will verify that this
> + * counter is 0.
> + *
> + * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread id
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_unlock(struct rte_rcu_qsbr *v __rte_unused,
> + unsigned int thread_id __rte_unused)
> +{
> +#if defined(RTE_LIBRTE_RCU_DEBUG)
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + /* Decrement the lock counter */
> + __atomic_fetch_sub(&v->qsbr_cnt[thread_id].lock_cnt,
> + 1, __ATOMIC_RELEASE);
> +
> + if (v->qsbr_cnt[thread_id].lock_cnt)
> + rte_log(RTE_LOG_WARNING, rcu_log_type,
> + "%s(): Lock counter %u. Nested locks?\n",
> + __func__, v->qsbr_cnt[thread_id].lock_cnt);
> +#endif
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Ask the reader threads to report the quiescent state
> + * status.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe and can be called from worker threads.
> + *
> + * @param v
> + * QS variable
> + * @return
> + * - This is the token for this call of the API. This should be
> + * passed to rte_rcu_qsbr_check API.
> + */
> +static __rte_always_inline uint64_t __rte_experimental
> +rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL);
> +
> + /* Release the changes to the shared data structure.
> + * This store release will ensure that changes to any data
> + * structure are visible to the workers before the token
> + * update is visible.
> + */
> + t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
> +
> + return t;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Update quiescent state for a reader thread.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * All the reader threads registered to report their quiescent state
> + * on the QS variable must call this API.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Update the quiescent state for the reader with this thread ID.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> +#if defined(RTE_LIBRTE_RCU_DEBUG)
> + /* Validate that the lock counter is 0 */
> + if (v->qsbr_cnt[thread_id].lock_cnt)
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Lock counter %u, should be 0\n",
> + __func__, v->qsbr_cnt[thread_id].lock_cnt);
> +#endif
> +
> + /* Acquire the changes to the shared data structure released
> + * by rte_rcu_qsbr_start.
> + * Later loads of the shared data structure should not move
> + * above this load. Hence, use load-acquire.
> + */
> + t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
> +
> + /* Inform the writer that updates are visible to this reader.
> + * Prior loads of the shared data structure should not move
> + * beyond this store. Hence use store-release.
> + */
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + t, __ATOMIC_RELEASE);
> +
> + RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
> + __func__, t, thread_id);
> +}
> +
> +/* Check the quiescent state counter for registered threads only, assuming
> + * that not all threads have registered.
> + */
> +static __rte_always_inline int
> +__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> +{
> + uint32_t i, j, id;
> + uint64_t bmap;
> + uint64_t c;
> + uint64_t *reg_thread_id;
> +
> + for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
> + i < v->num_elems;
> + i++, reg_thread_id++) {
> + /* Load the current registered thread bit map before
> + * loading the reader thread quiescent state counters.
> + */
> + bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
> + id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + while (bmap) {
> + j = __builtin_ctzl(bmap);
> + RCU_DP_LOG(DEBUG,
> + "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
> + __func__, t, wait, bmap, id + j);
> + c = __atomic_load_n(
> + &v->qsbr_cnt[id + j].cnt,
> + __ATOMIC_ACQUIRE);
> + RCU_DP_LOG(DEBUG,
> + "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> + __func__, t, wait, c, id+j);
> + /* Counter is not checked for wrap-around condition
> + * as it is a 64b counter.
> + */
> + if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
> + /* This thread is not in quiescent state */
> + if (!wait)
> + return 0;
> +
> + rte_pause();
> + /* This thread might have unregistered.
> + * Re-read the bitmap.
> + */
> + bmap = __atomic_load_n(reg_thread_id,
> + __ATOMIC_ACQUIRE);
> +
> + continue;
> + }
> +
> + bmap &= ~(1UL << j);
> + }
> + }
> +
> + return 1;
> +}
> +
> +/* Check the quiescent state counter for all threads, assuming that
> + * all the threads have registered.
> + */
> +static __rte_always_inline int
> +__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> +{
> + uint32_t i;
> + struct rte_rcu_qsbr_cnt *cnt;
> + uint64_t c;
> +
> + for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
> + RCU_DP_LOG(DEBUG,
> + "%s: check: token = %lu, wait = %d, Thread ID = %d",
> + __func__, t, wait, i);
> + while (1) {
> + c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
> + RCU_DP_LOG(DEBUG,
> + "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> + __func__, t, wait, c, i);
> + /* Counter is not checked for wrap-around condition
> + * as it is a 64b counter.
> + */
> + if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
> + break;
> +
> + /* This thread is not in quiescent state */
> + if (!wait)
> + return 0;
> +
> + rte_pause();
> + }
> + }
> +
> + return 1;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Checks if all the reader threads have entered the quiescent state
> + * referenced by token.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe and can be called from the worker threads as well.
> + *
> + * If this API is called with 'wait' set to true, the following
> + * factors must be considered:
> + *
> + * 1) If the calling thread is also reporting the status on the
> + * same QS variable, it must update the quiescent state status, before
> + * calling this API.
> + *
> + * 2) In addition, while calling from multiple threads, only
> + * one of those threads can be reporting the quiescent state status
> + * on a given QS variable.
> + *
> + * @param v
> + * QS variable
> + * @param t
> + * Token returned by rte_rcu_qsbr_start API
> + * @param wait
> + * If true, block till all the reader threads have completed entering
> + * the quiescent state referenced by token 't'.
> + * @return
> + * - 0 if all reader threads have NOT passed through specified number
> + * of quiescent states.
> + * - 1 if all reader threads have passed through specified number
> + * of quiescent states.
> + */
> +static __rte_always_inline int __rte_experimental
> +rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> +{
> + RTE_ASSERT(v != NULL);
> +
> + if (likely(v->num_threads == v->max_threads))
> + return __rcu_qsbr_check_all(v, t, wait);
> + else
> + return __rcu_qsbr_check_selective(v, t, wait);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Wait till the reader threads have entered quiescent state.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
> + * rte_rcu_qsbr_check APIs.
> + *
> + * If this API is called from multiple threads, only one of
> + * those threads can be reporting the quiescent state status on a
> + * given QS variable.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Thread ID of the caller if it is registered to report quiescent state
> + * on this QS variable (i.e. the calling thread is also part of the
> + * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
> + */
> +void __rte_experimental
> +rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Dump the details of a single QS variable to a file.
> + *
> + * It is NOT multi-thread safe.
> + *
> + * @param f
> + * A pointer to a file for output
> + * @param v
> + * QS variable
> + * @return
> + * On success - 0
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - NULL parameters are passed
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_RCU_QSBR_H_ */
> diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
> new file mode 100644
> index 000000000..5ea8524db
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_version.map
> @@ -0,0 +1,12 @@
> +EXPERIMENTAL {
> + global:
> +
> + rte_rcu_qsbr_get_memsize;
> + rte_rcu_qsbr_init;
> + rte_rcu_qsbr_thread_register;
> + rte_rcu_qsbr_thread_unregister;
> + rte_rcu_qsbr_synchronize;
> + rte_rcu_qsbr_dump;
> +
> + local: *;
> +};
> diff --git a/lib/meson.build b/lib/meson.build
> index 595314d7d..67be10659 100644
> --- a/lib/meson.build
> +++ b/lib/meson.build
> @@ -22,7 +22,7 @@ libraries = [
> 'gro', 'gso', 'ip_frag', 'jobstats',
> 'kni', 'latencystats', 'lpm', 'member',
> 'power', 'pdump', 'rawdev',
> - 'reorder', 'sched', 'security', 'stack', 'vhost',
> + 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
> #ipsec lib depends on crypto and security
> 'ipsec',
> # add pkt framework libs which use other libs from above
> diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> index abea16d48..ebe6d48a7 100644
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
> _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
> _LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
> _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
>
> ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> _LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
> --
> 2.17.1
>
* Re: [dpdk-dev] [PATCH v6 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-19 19:19 ` Paul E. McKenney
@ 2019-04-19 19:19 ` Paul E. McKenney
2019-04-23 1:08 ` Honnappa Nagarahalli
1 sibling, 0 replies; 260+ messages in thread
From: Paul E. McKenney @ 2019-04-19 19:19 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: konstantin.ananyev, stephen, marko.kovacevic, dev, gavin.hu,
dharmik.thakkar, malvika.gupta
On Tue, Apr 16, 2019 at 11:13:57PM -0500, Honnappa Nagarahalli wrote:
> Add RCU library supporting quiescent state based memory reclamation method.
> This library helps identify the quiescent state of the reader threads so
> that the writers can free the memory associated with the lock less data
> structures.
>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Steve Capper <steve.capper@arm.com>
> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Looks much better!
One more suggestion below, on rte_rcu_qsbr_thread_offline().
Thanx, Paul
> ---
> MAINTAINERS | 5 +
> config/common_base | 6 +
> lib/Makefile | 2 +
> lib/librte_rcu/Makefile | 23 ++
> lib/librte_rcu/meson.build | 7 +
> lib/librte_rcu/rte_rcu_qsbr.c | 257 ++++++++++++
> lib/librte_rcu/rte_rcu_qsbr.h | 629 +++++++++++++++++++++++++++++
> lib/librte_rcu/rte_rcu_version.map | 12 +
> lib/meson.build | 2 +-
> mk/rte.app.mk | 1 +
> 10 files changed, 943 insertions(+), 1 deletion(-)
> create mode 100644 lib/librte_rcu/Makefile
> create mode 100644 lib/librte_rcu/meson.build
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
> create mode 100644 lib/librte_rcu/rte_rcu_version.map
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index a08583471..ae54f37db 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1274,6 +1274,11 @@ F: examples/bpf/
> F: app/test/test_bpf.c
> F: doc/guides/prog_guide/bpf_lib.rst
>
> +RCU - EXPERIMENTAL
> +M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> +F: lib/librte_rcu/
> +F: doc/guides/prog_guide/rcu_lib.rst
> +
>
> Test Applications
> -----------------
> diff --git a/config/common_base b/config/common_base
> index 7fb0dedb6..f50d26c30 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -834,6 +834,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
> #
> CONFIG_RTE_LIBRTE_TELEMETRY=n
>
> +#
> +# Compile librte_rcu
> +#
> +CONFIG_RTE_LIBRTE_RCU=y
> +CONFIG_RTE_LIBRTE_RCU_DEBUG=n
> +
> #
> # Compile librte_lpm
> #
> diff --git a/lib/Makefile b/lib/Makefile
> index 26021d0c0..791e0d991 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
> DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
> DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
> DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
> +DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
> +DEPDIRS-librte_rcu := librte_eal
>
> ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
> diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
> new file mode 100644
> index 000000000..6aa677bd1
> --- /dev/null
> +++ b/lib/librte_rcu/Makefile
> @@ -0,0 +1,23 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Arm Limited
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +# library name
> +LIB = librte_rcu.a
> +
> +CFLAGS += -DALLOW_EXPERIMENTAL_API
> +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
> +LDLIBS += -lrte_eal
> +
> +EXPORT_MAP := rte_rcu_version.map
> +
> +LIBABIVER := 1
> +
> +# all source are stored in SRCS-y
> +SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
> +
> +# install includes
> +SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
> new file mode 100644
> index 000000000..0c2d5a2e0
> --- /dev/null
> +++ b/lib/librte_rcu/meson.build
> @@ -0,0 +1,7 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Arm Limited
> +
> +allow_experimental_apis = true
> +
> +sources = files('rte_rcu_qsbr.c')
> +headers = files('rte_rcu_qsbr.h')
> diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
> new file mode 100644
> index 000000000..466592a42
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_qsbr.c
> @@ -0,0 +1,257 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + *
> + * Copyright (c) 2018 Arm Limited
> + */
> +
> +#include <stdio.h>
> +#include <string.h>
> +#include <stdint.h>
> +#include <errno.h>
> +
> +#include <rte_common.h>
> +#include <rte_log.h>
> +#include <rte_memory.h>
> +#include <rte_malloc.h>
> +#include <rte_eal.h>
> +#include <rte_eal_memconfig.h>
> +#include <rte_atomic.h>
> +#include <rte_per_lcore.h>
> +#include <rte_lcore.h>
> +#include <rte_errno.h>
> +
> +#include "rte_rcu_qsbr.h"
> +
> +/* Get the memory size of QSBR variable */
> +size_t __rte_experimental
> +rte_rcu_qsbr_get_memsize(uint32_t max_threads)
> +{
> + size_t sz;
> +
> + if (max_threads == 0) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid max_threads %u\n",
> + __func__, max_threads);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + sz = sizeof(struct rte_rcu_qsbr);
> +
> + /* Add the size of quiescent state counter array */
> + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> +
> + /* Add the size of the registered thread ID bitmap array */
> + sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
> +
> + return sz;
> +}
> +
> +/* Initialize a quiescent state variable */
> +int __rte_experimental
> +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
> +{
> + size_t sz;
> +
> + if (v == NULL) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + sz = rte_rcu_qsbr_get_memsize(max_threads);
> + if (sz == 1)
> + return 1;
> +
> + /* Set all the threads to offline */
> + memset(v, 0, sz);
> + v->max_threads = max_threads;
> + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> + v->token = RTE_QSBR_CNT_INIT;
> +
> + return 0;
> +}
> +
> +/* Register a reader thread to report its quiescent state
> + * on a QS variable.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id, success;
> + uint64_t old_bmap, new_bmap;
> +
> + if (v == NULL || thread_id >= v->max_threads) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Make sure that the counter for registered threads does not
> + * go out of sync. Hence, additional checks are required.
> + */
> + /* Check if the thread is already registered */
> + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_RELAXED);
> + if (old_bmap & 1UL << id)
> + return 0;
> +
> + do {
> + new_bmap = old_bmap | (1UL << id);
> + success = __atomic_compare_exchange(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + &old_bmap, &new_bmap, 0,
> + __ATOMIC_RELEASE, __ATOMIC_RELAXED);
> +
> + if (success)
> + __atomic_fetch_add(&v->num_threads,
> + 1, __ATOMIC_RELAXED);
> + else if (old_bmap & (1UL << id))
> + /* Someone else registered this thread.
> + * Counter should not be incremented.
> + */
> + return 0;
> + } while (success == 0);
> +
> + return 0;
> +}
> +
> +/* Remove a reader thread, from the list of threads reporting their
> + * quiescent state on a QS variable.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id, success;
> + uint64_t old_bmap, new_bmap;
> +
> + if (v == NULL || thread_id >= v->max_threads) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Make sure that the counter for registered threads does not
> + * go out of sync. Hence, additional checks are required.
> + */
> + /* Check if the thread is already unregistered */
> + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_RELAXED);
> + if (!(old_bmap & (1UL << id)))
> + return 0;
> +
> + do {
> + new_bmap = old_bmap & ~(1UL << id);
> + /* Make sure any loads of the shared data structure are
> + * completed before removal of the thread from the list of
> + * reporting threads.
> + */
> + success = __atomic_compare_exchange(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + &old_bmap, &new_bmap, 0,
> + __ATOMIC_RELEASE, __ATOMIC_RELAXED);
> +
> + if (success)
> + __atomic_fetch_sub(&v->num_threads,
> + 1, __ATOMIC_RELAXED);
> + else if (!(old_bmap & (1UL << id)))
> + /* Someone else unregistered this thread.
> + * Counter should not be decremented.
> + */
> + return 0;
> + } while (success == 0);
> +
> + return 0;
> +}
> +
> +/* Wait till the reader threads have entered quiescent state. */
> +void __rte_experimental
> +rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL);
> +
> + t = rte_rcu_qsbr_start(v);
> +
> + /* If the current thread has readside critical section,
> + * update its quiescent state status.
> + */
> + if (thread_id != RTE_QSBR_THRID_INVALID)
> + rte_rcu_qsbr_quiescent(v, thread_id);
> +
> + /* Wait for other readers to enter quiescent state */
> + rte_rcu_qsbr_check(v, t, true);
> +}
> +
> +/* Dump the details of a single quiescent state variable to a file. */
> +int __rte_experimental
> +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
> +{
> + uint64_t bmap;
> + uint32_t i, t;
> +
> + if (v == NULL || f == NULL) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + fprintf(f, "\nQuiescent State Variable @%p\n", v);
> +
> + fprintf(f, " QS variable memory size = %lu\n",
> + rte_rcu_qsbr_get_memsize(v->max_threads));
> + fprintf(f, " Given # max threads = %u\n", v->max_threads);
> + fprintf(f, " Current # threads = %u\n", v->num_threads);
> +
> + fprintf(f, " Registered thread ID mask = 0x");
> + for (i = 0; i < v->num_elems; i++)
> + fprintf(f, "%lx", __atomic_load_n(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_ACQUIRE));
> + fprintf(f, "\n");
> +
> + fprintf(f, " Token = %lu\n",
> + __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
> +
> + fprintf(f, "Quiescent State Counts for readers:\n");
> + for (i = 0; i < v->num_elems; i++) {
> + bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_ACQUIRE);
> + while (bmap) {
> + t = __builtin_ctzl(bmap);
> + fprintf(f, "thread ID = %d, count = %lu\n",
> + (i << RTE_QSBR_THRID_INDEX_SHIFT) + t,
> + __atomic_load_n(
> + &v->qsbr_cnt[(i << RTE_QSBR_THRID_INDEX_SHIFT) + t].cnt,
> + __ATOMIC_RELAXED));
> + bmap &= ~(1UL << t);
> + }
> + }
> +
> + return 0;
> +}
> +
> +int rcu_log_type;
> +
> +RTE_INIT(rte_rcu_register)
> +{
> + rcu_log_type = rte_log_register("lib.rcu");
> + if (rcu_log_type >= 0)
> + rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
> +}
> diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
> new file mode 100644
> index 000000000..73fa3354e
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> @@ -0,0 +1,629 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (c) 2018 Arm Limited
> + */
> +
> +#ifndef _RTE_RCU_QSBR_H_
> +#define _RTE_RCU_QSBR_H_
> +
> +/**
> + * @file
> + * RTE Quiescent State Based Reclamation (QSBR)
> + *
> + * Quiescent State (QS) is any point in the thread execution
> + * where the thread does not hold a reference to a data structure
> + * in shared memory. While using lock-less data structures, the writer
> + * can safely free memory once all the reader threads have entered
> + * quiescent state.
> + *
> + * This library provides the ability for the readers to report quiescent
> + * state and for the writers to identify when all the readers have
> + * entered quiescent state.
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <errno.h>
> +#include <rte_common.h>
> +#include <rte_memory.h>
> +#include <rte_lcore.h>
> +#include <rte_debug.h>
> +#include <rte_atomic.h>
> +
> +extern int rcu_log_type;
> +
> +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
> +#define RCU_DP_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> + "%s(): " fmt "\n", __func__, ## args)
> +#else
> +#define RCU_DP_LOG(level, fmt, args...)
> +#endif
> +
> +/* Registered thread IDs are stored as a bitmap of 64b element array.
> + * Given thread id needs to be converted to index into the array and
> + * the id within the array element.
> + */
> +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> +#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
> + RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
> +#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
> + ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
> +#define RTE_QSBR_THRID_INDEX_SHIFT 6
> +#define RTE_QSBR_THRID_MASK 0x3f
> +#define RTE_QSBR_THRID_INVALID 0xffffffff
> +
> +/* Worker thread counter */
> +struct rte_rcu_qsbr_cnt {
> + uint64_t cnt;
> + /**< Quiescent state counter. Value 0 indicates the thread is offline.
> + * A 64b counter is used to avoid adding more code to address
> + * counter overflow. Changing this to 32b would require additional
> + * changes to various APIs.
> + */
> + uint32_t lock_cnt;
> + /**< Lock counter. Used when CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled */
> +} __rte_cache_aligned;
> +
> +#define RTE_QSBR_CNT_THR_OFFLINE 0
> +#define RTE_QSBR_CNT_INIT 1
> +
> +/* RTE Quiescent State variable structure.
> + * This structure has two elements that vary in size based on the
> + * 'max_threads' parameter.
> + * 1) Quiescent state counter array
> + * 2) Register thread ID array
> + */
> +struct rte_rcu_qsbr {
> + uint64_t token __rte_cache_aligned;
> + /**< Counter to allow for multiple concurrent quiescent state queries */
> +
> + uint32_t num_elems __rte_cache_aligned;
> + /**< Number of elements in the thread ID array */
> + uint32_t num_threads;
> + /**< Number of threads currently using this QS variable */
> + uint32_t max_threads;
> + /**< Maximum number of threads using this QS variable */
> +
> + struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
> + /**< Quiescent state counter array of 'max_threads' elements */
> +
> + /**< Registered thread IDs are stored in a bitmap array,
> + * after the quiescent state counter array.
> + */
> +} __rte_cache_aligned;
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Return the size of the memory occupied by a Quiescent State variable.
> + *
> + * @param max_threads
> + * Maximum number of threads reporting quiescent state on this variable.
> + * @return
> + * On success - size of memory in bytes required for this QS variable.
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - max_threads is 0
> + */
> +size_t __rte_experimental
> +rte_rcu_qsbr_get_memsize(uint32_t max_threads);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Initialize a Quiescent State (QS) variable.
> + *
> + * @param v
> + * QS variable
> + * @param max_threads
> + * Maximum number of threads reporting quiescent state on this variable.
> + * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
> + * @return
> + * On success - 0
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - max_threads is 0 or 'v' is NULL.
> + *
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Register a reader thread to report its quiescent state
> + * on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + * Any reader thread that wants to report its quiescent state must
> + * call this API. This can be called during initialization or as part
> + * of the packet processing loop.
> + *
> + * Note that rte_rcu_qsbr_thread_online must be called before the
> + * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will report its quiescent state on
> + * the QS variable. thread_id is a value between 0 and (max_threads - 1).
> + * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Remove a reader thread from the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * This API can be called from the reader threads during shutdown.
> + * Ongoing quiescent state queries will stop waiting for the status from this
> + * unregistered reader thread.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will stop reporting its quiescent
> + * state on the QS variable.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Add a registered reader thread to the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * Any registered reader thread that wants to report its quiescent state must
> + * call this API before calling rte_rcu_qsbr_quiescent. This can be called
> + * during initialization or as part of the packet processing loop.
> + *
> + * The reader thread must call rte_rcu_thread_offline API, before
> + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> + * API does not wait indefinitely for the reader thread to update its QS.
> + *
> + * The reader thread must call rte_rcu_thread_online API, after the blocking
> + * function call returns, to ensure that rte_rcu_qsbr_check API
> + * waits for the reader thread to update its quiescent state.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will report its quiescent state on
> + * the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + /* Copy the current value of token.
> + * The fence at the end of the function will ensure that
> + * the following will not move down after the load of any shared
> + * data structure.
> + */
> + t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
> +
> + /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> + * 'cnt' (64b) is accessed atomically.
> + */
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + t, __ATOMIC_RELAXED);
> +
> + /* The subsequent load of the data structure should not
> + * move above the store. Hence a store-load barrier
> + * is required.
> + * If the load of the data structure moves above the store,
> + * writer might not see that the reader is online, even though
> + * the reader is referencing the shared data structure.
> + */
> +#ifdef RTE_ARCH_X86_64
> + /* rte_smp_mb() for x86 is lighter */
> + rte_smp_mb();
> +#else
> + __atomic_thread_fence(__ATOMIC_SEQ_CST);
> +#endif
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Remove a registered reader thread from the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * This can be called during initialization or as part of the packet
> + * processing loop.
> + *
> + * The reader thread must call rte_rcu_thread_offline API, before
> + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> + * API does not wait indefinitely for the reader thread to update its QS.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * rte_rcu_qsbr_check API will not wait for the reader thread with
> + * this thread ID to report its quiescent state on the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
I suggest adding an assertion that v->qsbr_cnt[thread_id].lock_cnt
is equal to zero. This makes it easier to find a misplaced
rte_rcu_qsbr_thread_offline(). Similar situation as the assertion
that you added to rte_rcu_qsbr_quiescent().
> +
> + /* The reader can go offline only after the load of the
> + * data structure is completed. i.e. no load of the
> + * data structure can move after this store.
> + */
> +
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Acquire a lock for accessing a shared data structure.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * This API is provided to aid debugging. This should be called before
> + * accessing a shared data structure.
> + *
> + * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled a lock counter is incremented.
> + * Similarly rte_rcu_qsbr_unlock will decrement the counter. The
> + * rte_rcu_qsbr_check API will verify that this counter is 0.
> + *
> + * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread id
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_lock(struct rte_rcu_qsbr *v __rte_unused,
> + unsigned int thread_id __rte_unused)
> +{
> +#if defined(RTE_LIBRTE_RCU_DEBUG)
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + /* Increment the lock counter */
> + __atomic_fetch_add(&v->qsbr_cnt[thread_id].lock_cnt,
> + 1, __ATOMIC_ACQUIRE);
> +#endif
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Release a lock after accessing a shared data structure.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * This API is provided to aid debugging. This should be called after
> + * accessing a shared data structure.
> + *
> + * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, rte_rcu_qsbr_unlock will
> + * decrement a lock counter. rte_rcu_qsbr_check API will verify that this
> + * counter is 0.
> + *
> + * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread id
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_unlock(struct rte_rcu_qsbr *v __rte_unused,
> + unsigned int thread_id __rte_unused)
> +{
> +#if defined(RTE_LIBRTE_RCU_DEBUG)
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + /* Decrement the lock counter */
> + __atomic_fetch_sub(&v->qsbr_cnt[thread_id].lock_cnt,
> + 1, __ATOMIC_RELEASE);
> +
> + if (v->qsbr_cnt[thread_id].lock_cnt)
> + rte_log(RTE_LOG_WARNING, rcu_log_type,
> + "%s(): Lock counter %u. Nested locks?\n",
> + __func__, v->qsbr_cnt[thread_id].lock_cnt);
> +#endif
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Ask the reader threads to report the quiescent state
> + * status.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe and can be called from worker threads.
> + *
> + * @param v
> + * QS variable
> + * @return
> + * - This is the token for this call of the API. This should be
> + * passed to rte_rcu_qsbr_check API.
> + */
> +static __rte_always_inline uint64_t __rte_experimental
> +rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL);
> +
> + /* Release the changes to the shared data structure.
> + * This store release will ensure that changes to any data
> + * structure are visible to the workers before the token
> + * update is visible.
> + */
> + t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
> +
> + return t;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Update quiescent state for a reader thread.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * All the reader threads registered to report their quiescent state
> + * on the QS variable must call this API.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Update the quiescent state for the reader with this thread ID.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> +#if defined(RTE_LIBRTE_RCU_DEBUG)
> + /* Validate that the lock counter is 0 */
> + if (v->qsbr_cnt[thread_id].lock_cnt)
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Lock counter %u, should be 0\n",
> + __func__, v->qsbr_cnt[thread_id].lock_cnt);
> +#endif
> +
> + /* Acquire the changes to the shared data structure released
> + * by rte_rcu_qsbr_start.
> + * Later loads of the shared data structure should not move
> + * above this load. Hence, use load-acquire.
> + */
> + t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
> +
> + /* Inform the writer that updates are visible to this reader.
> + * Prior loads of the shared data structure should not move
> + * beyond this store. Hence use store-release.
> + */
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + t, __ATOMIC_RELEASE);
> +
> + RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
> + __func__, t, thread_id);
> +}
> +
> +/* Check the quiescent state counter for registered threads only, assuming
> + * that not all threads have registered.
> + */
> +static __rte_always_inline int
> +__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> +{
> + uint32_t i, j, id;
> + uint64_t bmap;
> + uint64_t c;
> + uint64_t *reg_thread_id;
> +
> + for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
> + i < v->num_elems;
> + i++, reg_thread_id++) {
> + /* Load the current registered thread bit map before
> + * loading the reader thread quiescent state counters.
> + */
> + bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
> + id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + while (bmap) {
> + j = __builtin_ctzl(bmap);
> + RCU_DP_LOG(DEBUG,
> + "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
> + __func__, t, wait, bmap, id + j);
> + c = __atomic_load_n(
> + &v->qsbr_cnt[id + j].cnt,
> + __ATOMIC_ACQUIRE);
> + RCU_DP_LOG(DEBUG,
> + "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> + __func__, t, wait, c, id+j);
> + /* Counter is not checked for wrap-around condition
> + * as it is a 64b counter.
> + */
> + if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
> + /* This thread is not in quiescent state */
> + if (!wait)
> + return 0;
> +
> + rte_pause();
> + /* This thread might have unregistered.
> + * Re-read the bitmap.
> + */
> + bmap = __atomic_load_n(reg_thread_id,
> + __ATOMIC_ACQUIRE);
> +
> + continue;
> + }
> +
> + bmap &= ~(1UL << j);
> + }
> + }
> +
> + return 1;
> +}
> +
> +/* Check the quiescent state counter for all threads, assuming that
> + * all the threads have registered.
> + */
> +static __rte_always_inline int
> +__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> +{
> + uint32_t i;
> + struct rte_rcu_qsbr_cnt *cnt;
> + uint64_t c;
> +
> + for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
> + RCU_DP_LOG(DEBUG,
> + "%s: check: token = %lu, wait = %d, Thread ID = %d",
> + __func__, t, wait, i);
> + while (1) {
> + c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
> + RCU_DP_LOG(DEBUG,
> + "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> + __func__, t, wait, c, i);
> + /* Counter is not checked for wrap-around condition
> + * as it is a 64b counter.
> + */
> + if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
> + break;
> +
> + /* This thread is not in quiescent state */
> + if (!wait)
> + return 0;
> +
> + rte_pause();
> + }
> + }
> +
> + return 1;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Checks if all the reader threads have entered the quiescent state
> + * referenced by token.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe and can be called from the worker threads as well.
> + *
> + * If this API is called with 'wait' set to true, the following
> + * factors must be considered:
> + *
> + * 1) If the calling thread is also reporting the status on the
> + * same QS variable, it must update the quiescent state status, before
> + * calling this API.
> + *
> + * 2) In addition, while calling from multiple threads, only
> + * one of those threads can be reporting the quiescent state status
> + * on a given QS variable.
> + *
> + * @param v
> + * QS variable
> + * @param t
> + * Token returned by rte_rcu_qsbr_start API
> + * @param wait
> + * If true, block till all the reader threads have completed entering
> + * the quiescent state referenced by token 't'.
> + * @return
> + * - 0 if all reader threads have NOT passed through specified number
> + * of quiescent states.
> + * - 1 if all reader threads have passed through specified number
> + * of quiescent states.
> + */
> +static __rte_always_inline int __rte_experimental
> +rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> +{
> + RTE_ASSERT(v != NULL);
> +
> + if (likely(v->num_threads == v->max_threads))
> + return __rcu_qsbr_check_all(v, t, wait);
> + else
> + return __rcu_qsbr_check_selective(v, t, wait);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Wait till the reader threads have entered quiescent state.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
> + * rte_rcu_qsbr_check APIs.
> + *
> + * If this API is called from multiple threads, only one of
> + * those threads can be reporting the quiescent state status on a
> + * given QS variable.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Thread ID of the caller if it is registered to report quiescent state
> + * on this QS variable (i.e. the calling thread is also part of the
> + * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
> + */
> +void __rte_experimental
> +rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Dump the details of a single QS variable to a file.
> + *
> + * It is NOT multi-thread safe.
> + *
> + * @param f
> + * A pointer to a file for output
> + * @param v
> + * QS variable
> + * @return
> + * On success - 0
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - NULL parameters are passed
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_RCU_QSBR_H_ */
> diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
> new file mode 100644
> index 000000000..5ea8524db
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_version.map
> @@ -0,0 +1,12 @@
> +EXPERIMENTAL {
> + global:
> +
> + rte_rcu_qsbr_get_memsize;
> + rte_rcu_qsbr_init;
> + rte_rcu_qsbr_thread_register;
> + rte_rcu_qsbr_thread_unregister;
> + rte_rcu_qsbr_synchronize;
> + rte_rcu_qsbr_dump;
> +
> + local: *;
> +};
> diff --git a/lib/meson.build b/lib/meson.build
> index 595314d7d..67be10659 100644
> --- a/lib/meson.build
> +++ b/lib/meson.build
> @@ -22,7 +22,7 @@ libraries = [
> 'gro', 'gso', 'ip_frag', 'jobstats',
> 'kni', 'latencystats', 'lpm', 'member',
> 'power', 'pdump', 'rawdev',
> - 'reorder', 'sched', 'security', 'stack', 'vhost',
> + 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
> #ipsec lib depends on crypto and security
> 'ipsec',
> # add pkt framework libs which use other libs from above
> diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> index abea16d48..ebe6d48a7 100644
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
> _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
> _LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
> _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
>
> ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> _LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
> --
> 2.17.1
>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v6 0/3] lib/rcu: add RCU library supporting QSBR mechanism
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 " Honnappa Nagarahalli
` (3 preceding siblings ...)
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
@ 2019-04-21 16:40 ` Thomas Monjalon
2019-04-21 16:40 ` Thomas Monjalon
2019-04-25 14:18 ` Honnappa Nagarahalli
4 siblings, 2 replies; 260+ messages in thread
From: Thomas Monjalon @ 2019-04-21 16:40 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: dev, konstantin.ananyev, stephen, paulmck, marko.kovacevic,
gavin.hu, dharmik.thakkar, malvika.gupta
17/04/2019 06:13, Honnappa Nagarahalli:
> Dharmik Thakkar (1):
> test/rcu_qsbr: add API and functional tests
>
> Honnappa Nagarahalli (2):
> rcu: add RCU library supporting QSBR mechanism
> doc/rcu: add lib_rcu documentation
Sorry I cannot merge this library in DPDK 19.05-rc2
because of several issues:
- 32-bit compilation is broken because of %lx/%lu instead of PRI?64
- shared link is broken because of rcu_log_type not exported
- some public symbols (variable, macros, functions) are not prefixed with rte
I am not sure about getting it later in 19.05,
it may be too late to merge a new library.
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-19 19:19 ` Paul E. McKenney
2019-04-19 19:19 ` Paul E. McKenney
@ 2019-04-23 1:08 ` Honnappa Nagarahalli
2019-04-23 1:08 ` Honnappa Nagarahalli
1 sibling, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-23 1:08 UTC (permalink / raw)
To: paulmck
Cc: konstantin.ananyev, stephen, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
>
> On Tue, Apr 16, 2019 at 11:13:57PM -0500, Honnappa Nagarahalli wrote:
> > Add RCU library supporting quiescent state based memory reclamation
> method.
> > This library helps identify the quiescent state of the reader threads
> > so that the writers can free the memory associated with the lock less
> > data structures.
> >
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>
> Looks much better!
>
> One more suggestion below, on rte_rcu_qsbr_thread_offline().
>
> Thanx, Paul
>
<snip>
> > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > b/lib/librte_rcu/rte_rcu_qsbr.h new file mode 100644 index
> > 000000000..73fa3354e
> > --- /dev/null
> > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > @@ -0,0 +1,629 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright (c) 2018 Arm Limited
> > + */
> > +
> > +#ifndef _RTE_RCU_QSBR_H_
> > +#define _RTE_RCU_QSBR_H_
> > +
<snip>
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Add a registered reader thread, to the list of threads reporting
> > +their
> > + * quiescent state on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe.
> > + *
> > + * Any registered reader thread that wants to report its quiescent
> > +state must
> > + * call this API before calling rte_rcu_qsbr_quiescent. This can be
> > +called
> > + * during initialization or as part of the packet processing loop.
> > + *
> > + * The reader thread must call rte_rcu_thread_offline API, before
> > + * calling any functions that block, to ensure that
> > +rte_rcu_qsbr_check
> > + * API does not wait indefinitely for the reader thread to update its QS.
> > + *
> > + * The reader thread must call rte_rcu_thread_online API, after the
> > +blocking
> > + * function call returns, to ensure that rte_rcu_qsbr_check API
> > + * waits for the reader thread to update its quiescent state.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Reader thread with this thread ID will report its quiescent state on
> > + * the QS variable.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int
> > +thread_id) {
> > + uint64_t t;
> > +
> > + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> > +
> > + /* Copy the current value of token.
> > + * The fence at the end of the function will ensure that
> > + * the following will not move down after the load of any shared
> > + * data structure.
> > + */
> > + t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
> > +
> > + /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> > + * 'cnt' (64b) is accessed atomically.
> > + */
> > + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> > + t, __ATOMIC_RELAXED);
> > +
> > + /* The subsequent load of the data structure should not
> > + * move above the store. Hence a store-load barrier
> > + * is required.
> > + * If the load of the data structure moves above the store,
> > + * writer might not see that the reader is online, even though
> > + * the reader is referencing the shared data structure.
> > + */
> > +#ifdef RTE_ARCH_X86_64
> > + /* rte_smp_mb() for x86 is lighter */
> > + rte_smp_mb();
> > +#else
> > + __atomic_thread_fence(__ATOMIC_SEQ_CST);
> > +#endif
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Remove a registered reader thread from the list of threads
> > +reporting their
> > + * quiescent state on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe.
> > + *
> > + * This can be called during initialization or as part of the packet
> > + * processing loop.
> > + *
> > + * The reader thread must call rte_rcu_thread_offline API, before
> > + * calling any functions that block, to ensure that
> > +rte_rcu_qsbr_check
> > + * API does not wait indefinitely for the reader thread to update its QS.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * rte_rcu_qsbr_check API will not wait for the reader thread with
> > + * this thread ID to report its quiescent state on the QS variable.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int
> > +thread_id) {
> > + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
>
> I suggest adding an assertion that v->qsbr_cnt[thread_id].lock_cnt is equal to
> zero. This makes it easier to find a misplaced rte_rcu_qsbr_thread_offline().
> Similar situation as the assertion that you added to rte_rcu_qsbr_quiescent().
>
Agree, will add that. I think there is value in adding a similar check to rte_rcu_qsbr_thread_online and rte_rcu_qsbr_thread_unregister as well.
Adding a check to rte_rcu_qsbr_thread_register does not hurt.
> > +
> > + /* The reader can go offline only after the load of the
> > + * data structure is completed. i.e. no load of the
> > + * data structure can move after this store.
> > + */
> > +
> > + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> > + RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE); }
> > +
<snip>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-23 1:08 ` Honnappa Nagarahalli
@ 2019-04-23 1:08 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-23 1:08 UTC (permalink / raw)
To: paulmck
Cc: konstantin.ananyev, stephen, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
>
> On Tue, Apr 16, 2019 at 11:13:57PM -0500, Honnappa Nagarahalli wrote:
> > Add RCU library supporting quiescent state based memory reclamation
> method.
> > This library helps identify the quiescent state of the reader threads
> > so that the writers can free the memory associated with the lock less
> > data structures.
> >
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>
> Looks much better!
>
> One more suggestion below, on rte_rcu_qsbr_thread_offline().
>
> Thanx, Paul
>
<snip>
> > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > b/lib/librte_rcu/rte_rcu_qsbr.h new file mode 100644 index
> > 000000000..73fa3354e
> > --- /dev/null
> > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > @@ -0,0 +1,629 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright (c) 2018 Arm Limited
> > + */
> > +
> > +#ifndef _RTE_RCU_QSBR_H_
> > +#define _RTE_RCU_QSBR_H_
> > +
<snip>
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Add a registered reader thread, to the list of threads reporting
> > +their
> > + * quiescent state on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe.
> > + *
> > + * Any registered reader thread that wants to report its quiescent
> > +state must
> > + * call this API before calling rte_rcu_qsbr_quiescent. This can be
> > +called
> > + * during initialization or as part of the packet processing loop.
> > + *
> > + * The reader thread must call rte_rcu_thread_offline API, before
> > + * calling any functions that block, to ensure that
> > +rte_rcu_qsbr_check
> > + * API does not wait indefinitely for the reader thread to update its QS.
> > + *
> > + * The reader thread must call rte_rcu_thread_online API, after the
> > +blocking
> > + * function call returns, to ensure that rte_rcu_qsbr_check API
> > + * waits for the reader thread to update its quiescent state.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * Reader thread with this thread ID will report its quiescent state on
> > + * the QS variable.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int
> > +thread_id) {
> > + uint64_t t;
> > +
> > + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> > +
> > + /* Copy the current value of token.
> > + * The fence at the end of the function will ensure that
> > + * the following will not move down after the load of any shared
> > + * data structure.
> > + */
> > + t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
> > +
> > + /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> > + * 'cnt' (64b) is accessed atomically.
> > + */
> > + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> > + t, __ATOMIC_RELAXED);
> > +
> > + /* The subsequent load of the data structure should not
> > + * move above the store. Hence a store-load barrier
> > + * is required.
> > + * If the load of the data structure moves above the store,
> > + * writer might not see that the reader is online, even though
> > + * the reader is referencing the shared data structure.
> > + */
> > +#ifdef RTE_ARCH_X86_64
> > + /* rte_smp_mb() for x86 is lighter */
> > + rte_smp_mb();
> > +#else
> > + __atomic_thread_fence(__ATOMIC_SEQ_CST);
> > +#endif
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Remove a registered reader thread from the list of threads reporting
> > + * their quiescent state on a QS variable.
> > + *
> > + * This is implemented as a lock-free function. It is multi-thread
> > + * safe.
> > + *
> > + * This can be called during initialization or as part of the packet
> > + * processing loop.
> > + *
> > + * The reader thread must call rte_rcu_qsbr_thread_offline API, before
> > + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> > + * API does not wait indefinitely for the reader thread to update its QS.
> > + *
> > + * @param v
> > + * QS variable
> > + * @param thread_id
> > + * rte_rcu_qsbr_check API will not wait for the reader thread with
> > + * this thread ID to report its quiescent state on the QS variable.
> > + */
> > +static __rte_always_inline void __rte_experimental
> > +rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
> > +{
> > + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
>
> I suggest adding an assertion that v->qsbr_cnt[thread_id].lock_cnt is equal to
> zero. This makes it easier to find a misplaced rte_rcu_qsbr_thread_offline().
> Similar situation as the assertion that you added to rte_rcu_qsbr_quiescent().
>
Agree, will add that. I think there is value in adding a similar check to rte_rcu_qsbr_thread_online and rte_rcu_qsbr_thread_unregister as well.
Adding a check to rte_rcu_qsbr_thread_register does not hurt either.
> > +
> > + /* The reader can go offline only after the load of the
> > + * data structure is completed, i.e. any load of the
> > + * data structure cannot move after this store.
> > + */
> > + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> > + RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
> > +}
> > +
<snip>
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v7 0/3] lib/rcu: add RCU library supporting QSBR mechanism
2018-11-22 3:30 [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library Honnappa Nagarahalli
` (11 preceding siblings ...)
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 " Honnappa Nagarahalli
@ 2019-04-23 4:31 ` Honnappa Nagarahalli
2019-04-23 4:31 ` Honnappa Nagarahalli
` (3 more replies)
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 0/4] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 " Honnappa Nagarahalli
14 siblings, 4 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-23 4:31 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Lock-less data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
(for ex: real-time applications).
In the following paragraphs, the term 'memory' refers to memory allocated
by typical APIs like malloc or anything that is representative of
memory, for ex: an index of a free element array.
Since these data structures are lock-less, the writers and readers
access the data structures concurrently. Hence, while removing
an element from a data structure, the writers cannot return the memory
to the allocator, without knowing that the readers are not
referencing that element/memory anymore. Hence, it is required to
separate the operation of removing an element into 2 steps:
Delete: in this step, the writer removes the reference to the element from
the data structure but does not return the associated memory to the
allocator. This will ensure that new readers will not get a reference to
the removed element. Removing the reference is an atomic operation.
Free(Reclaim): in this step, the writer returns the memory to the
memory allocator, only after knowing that all the readers have stopped
referencing the deleted element.
This library helps the writer determine when it is safe to free the
memory.
This library makes use of thread Quiescent State (QS). QS can be
defined as 'any point in the thread execution where the thread does
not hold a reference to shared memory'. It is up to the application to
determine its quiescent state. Let us consider the following diagram:
Time -------------------------------------------------->
| |
RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
| |
RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
| |
RT3 $++++****D1****+++***|D2***|++++++**D2*****++++$
| |
|<--->|
Del | Free
|
Cannot free memory
during this period
(Grace Period)
RTx - Reader thread
< and > - Start and end of while(1) loop
***Dx*** - Reader thread is accessing the shared data structure Dx.
i.e. critical section.
+++ - Reader thread is not accessing any shared data structure.
i.e. non critical section or quiescent state.
Del - Point in time when the reference to the entry is removed using
atomic operation.
Free - Point in time when the writer can free the entry.
Grace Period - Time duration between Del and Free, during which memory cannot
be freed.
As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
accessing D2, if the writer has to remove an element from D2, the
writer cannot free the memory associated with that element immediately.
The writer can return the memory to the allocator only after the reader
stops referencing D2. In other words, reader thread RT1 has to enter
a quiescent state.
Similarly, since thread RT3 is also accessing D2, the writer has to wait till
RT3 enters a quiescent state as well.
However, the writer does not need to wait for RT2 to enter quiescent state.
Thread RT2 was not accessing D2 when the delete operation happened.
So, RT2 will not get a reference to the deleted entry.
Note that the critical sections for D2 and D3 are quiescent states
for D1, i.e. for a given data structure Dx, any point in the thread execution
that does not reference Dx is a quiescent state.
Since memory is not freed immediately, there might be a need for
provisioning of additional memory, depending on the application requirements.
It is important to make sure that this library keeps the overhead of
identifying the end of grace period and subsequent freeing of memory,
to a minimum. The following paragraphs explain how the grace period and critical
section affect this overhead.
The writer has to poll the readers to identify the end of grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during grace period. Longer grace periods
exacerbate these conditions.
The length of the critical sections and the number of reader threads
are proportional to the duration of the grace period. Keeping the critical
sections smaller will keep the grace period smaller. However, keeping the
critical sections smaller requires additional CPU cycles (due to additional
reporting) in the readers.
Hence, we need the combination of a short grace period and large critical
sections. This library addresses this by allowing the writer to do
other work without having to block till the readers report their quiescent
state.
For DPDK applications, the start and end of the while(1) loop (where no
references to shared data structures are kept) act as perfect quiescent
states. This will combine all the shared data structure accesses into a
single, large critical section which helps keep the overhead on the
reader side to a minimum.
DPDK supports the pipeline model of packet processing and service cores.
In these use cases, a given data structure may not be used by all the
workers in the application. The writer does not have to wait for all
the workers to report their quiescent state. To provide the required
flexibility, this library has a concept of QS variable. The application
can create one QS variable per data structure to help it track the
end of grace period for each data structure. This helps keep the grace
period to a minimum.
The application has to allocate memory and initialize a QS variable.
The application can call rte_rcu_qsbr_get_memsize to calculate the size
of memory to allocate. This API takes the maximum number of reader threads
that will use this variable as a parameter. Currently, a maximum of 1024
threads are supported.
Further, the application can initialize a QS variable using the API
rte_rcu_qsbr_init.
Each reader thread is assumed to have a unique thread ID. Currently, the
management of the thread ID (for ex: allocation/free) is left to the
application. The thread ID should be in the range of 0 to (maximum number
of threads - 1) provided while creating the QS variable.
The application could also use lcore_id as the thread ID where applicable.
rte_rcu_qsbr_thread_register API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
The reader thread must call rte_rcu_qsbr_thread_online API to start reporting
its quiescent state.
Some of the use cases might require the reader threads to make
blocking API calls (for ex: while using eventdev APIs). The writer thread
should not wait for such reader threads to enter quiescent state.
The reader thread must call rte_rcu_qsbr_thread_offline API, before calling
blocking APIs. It can call rte_rcu_qsbr_thread_online API once the blocking
API call returns.
The writer thread can trigger the reader threads to report their quiescent
state by calling the API rte_rcu_qsbr_start. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
rte_rcu_qsbr_start returns a token to each caller.
The writer thread has to call rte_rcu_qsbr_check API with the token to get the
current quiescent state status. Option to block till all the reader threads
enter the quiescent state is provided. If this API indicates that all the
reader threads have entered the quiescent state, the application can free the
deleted entry.
The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock free. Hence, they
can be called concurrently from multiple writers even while running
as worker threads.
The separation of triggering the reporting from querying the status provides
the writer threads flexibility to do useful work instead of blocking for the
reader threads to enter the quiescent state or go offline. This reduces the
memory accesses due to continuous polling for the status.
rte_rcu_qsbr_synchronize API combines the functionality of rte_rcu_qsbr_start
and blocking rte_rcu_qsbr_check into a single API. This API triggers the reader
threads to report their quiescent state and polls till all the readers enter
the quiescent state or go offline. This API does not allow the writer to
do useful work while waiting and also introduces additional memory accesses
due to continuous polling.
The reader thread must call rte_rcu_qsbr_thread_offline and
rte_rcu_qsbr_thread_unregister APIs to remove itself from reporting its
quiescent state. The rte_rcu_qsbr_check API will not wait for this reader
thread to report the quiescent state status anymore.
The reader threads should call the rte_rcu_qsbr_quiescent API to indicate that
they entered a quiescent state. This API checks if a writer has triggered a
quiescent state query and updates the state accordingly.
Patch v7:
1) Library changes
a) Added macro RCU_IS_LOCK_CNT_ZERO
b) Added lock counter validation to rte_rcu_qsbr_thread_online/
rte_rcu_qsbr_thread_offline/rte_rcu_qsbr_thread_register/
rte_rcu_qsbr_thread_unregister APIs (Paul)
Patch v6:
1) Library changes
a) Fixed and tested meson build on Arm and x86 (Konstantin)
b) Moved rte_rcu_qsbr_synchronize API to rte_rcu_qsbr.c
Patch v5:
1) Library changes
a) Removed extra alignment in rte_rcu_qsbr_get_memsize API (Paul)
b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock APIs (Paul)
c) Clarified the need for 64b counters (Paul)
2) Test cases
a) Added additional performance test cases to benchmark
__rcu_qsbr_check_all
b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock calls in various test cases
3) Documentation
a) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock usage description
Patch v4:
1) Library changes
a) Fixed the compilation issue on x86 (Konstantin)
b) Rebased with latest master
Patch v3:
1) Library changes
a) Moved the registered thread ID array to the end of the
structure (Konstantin)
b) Removed the compile time constant RTE_RCU_MAX_THREADS
c) Added code to keep track of registered number of threads
Patch v2:
1) Library changes
a) Corrected the RTE_ASSERT checks (Konstantin)
b) Replaced RTE_ASSERT with 'if' checks for non-datapath APIs (Konstantin)
c) Made rte_rcu_qsbr_thread_register/unregister non-datapath critical APIs
d) Renamed rte_rcu_qsbr_update to rte_rcu_qsbr_quiescent (Ola)
e) Used rte_smp_mb() in rte_rcu_qsbr_thread_online API for x86 (Konstantin)
f) Removed the macro to access the thread QS counters (Konstantin)
2) Test cases
a) Added additional test cases for removing RTE_ASSERT
3) Documentation
a) Changed the figure to make it bigger (Marko)
b) Spelling and format corrections (Marko)
Patch v1:
1) Library changes
a) Changed the maximum number of reader threads to 1024
b) Renamed rte_rcu_qsbr_register/unregister_thread to
rte_rcu_qsbr_thread_register/unregister
c) Added rte_rcu_qsbr_thread_online/offline API. These are optimized
version of rte_rcu_qsbr_thread_register/unregister API. These
also provide the flexibility for performance when the requested
maximum number of threads is higher than the current number of
threads.
d) Corrected memory orderings in rte_rcu_qsbr_update
e) Changed the signature of rte_rcu_qsbr_start API to return the token
f) Changed the signature of rte_rcu_qsbr_start API to not take the
expected number of QS states to wait.
g) Added debug logs
h) Added API and programmer guide documentation.
RFC v3:
1) Library changes
a) Rebased to latest master
b) Added new API rte_rcu_qsbr_get_memsize
c) Add support for memory allocation for QSBR variable (Konstantin)
d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
2) Testcase changes
a) Separated stress tests into a performance test case file
b) Added performance statistics
RFC v2:
1) Cover letter changes
a) Explain the parameters that affect the overhead of using RCU
and their effect
b) Explain how this library addresses these effects to keep
the overhead to a minimum
2) Library changes
a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
b) Simplify the code/remove APIs to keep this library inline with
other synchronisation mechanisms like locks (Konstantin)
c) Change the design to support more than 64 threads (Konstantin)
d) Fixed version map to remove static inline functions
3) Testcase changes
a) Add boundary and additional functional test cases
b) Add stress test cases (Paul E. McKenney)
Dharmik Thakkar (1):
test/rcu_qsbr: add API and functional tests
Honnappa Nagarahalli (2):
rcu: add RCU library supporting QSBR mechanism
doc/rcu: add lib_rcu documentation
MAINTAINERS | 5 +
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 703 ++++++++++++
config/common_base | 6 +
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 +++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 7 +
lib/librte_rcu/rte_rcu_qsbr.c | 268 +++++
lib/librte_rcu/rte_rcu_qsbr.h | 639 +++++++++++
lib/librte_rcu/rte_rcu_version.map | 12 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
20 files changed, 3399 insertions(+), 3 deletions(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v7 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 " Honnappa Nagarahalli
2019-04-23 4:31 ` Honnappa Nagarahalli
@ 2019-04-23 4:31 ` Honnappa Nagarahalli
2019-04-23 4:31 ` Honnappa Nagarahalli
` (2 more replies)
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
3 siblings, 3 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-23 4:31 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add RCU library supporting quiescent state based memory reclamation method.
This library helps identify the quiescent state of the reader threads so
that the writers can free the memory associated with the lock less data
structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
MAINTAINERS | 5 +
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 ++
lib/librte_rcu/meson.build | 7 +
lib/librte_rcu/rte_rcu_qsbr.c | 268 ++++++++++++
lib/librte_rcu/rte_rcu_qsbr.h | 639 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 12 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
10 files changed, 964 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index a08583471..ae54f37db 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1274,6 +1274,11 @@ F: examples/bpf/
F: app/test/test_bpf.c
F: doc/guides/prog_guide/bpf_lib.rst
+RCU - EXPERIMENTAL
+M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+F: lib/librte_rcu/
+F: doc/guides/prog_guide/rcu_lib.rst
+
Test Applications
-----------------
diff --git a/config/common_base b/config/common_base
index 7fb0dedb6..f50d26c30 100644
--- a/config/common_base
+++ b/config/common_base
@@ -834,6 +834,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_rcu
+#
+CONFIG_RTE_LIBRTE_RCU=y
+CONFIG_RTE_LIBRTE_RCU_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index 26021d0c0..791e0d991 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
+DEPDIRS-librte_rcu := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
new file mode 100644
index 000000000..6aa677bd1
--- /dev/null
+++ b/lib/librte_rcu/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_rcu.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_rcu_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
new file mode 100644
index 000000000..0c2d5a2e0
--- /dev/null
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+allow_experimental_apis = true
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..50ef0ba9a
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,268 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Get the memory size of QSBR variable */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ size_t sz;
+
+ if (max_threads == 0) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid max_threads %u\n",
+ __func__, max_threads);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ /* Add the size of the registered thread ID bitmap array */
+ sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
+
+ return sz;
+}
+
+/* Initialize a quiescent state variable */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
+{
+ size_t sz;
+
+ if (v == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = rte_rcu_qsbr_get_memsize(max_threads);
+ if (sz == 1)
+ return 1;
+
+ /* Set all the threads to offline */
+ memset(v, 0, sz);
+ v->max_threads = max_threads;
+ v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->token = RTE_QSBR_CNT_INIT;
+
+ return 0;
+}
+
+/* Register a reader thread to report its quiescent state
+ * on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already registered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (old_bmap & (1UL << id))
+ return 0;
+
+ do {
+ new_bmap = old_bmap | (1UL << id);
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_add(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (old_bmap & (1UL << id))
+ /* Someone else registered this thread.
+ * Counter should not be incremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already unregistered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (!(old_bmap & (1UL << id)))
+ return 0;
+
+ do {
+ new_bmap = old_bmap & ~(1UL << id);
+ /* Make sure any loads of the shared data structure are
+ * completed before removal of the thread from the list of
+ * reporting threads.
+ */
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_sub(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (!(old_bmap & (1UL << id)))
+ /* Someone else unregistered this thread.
+ * Counter should not be decremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Wait till the reader threads have entered quiescent state. */
+void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread has a read-side critical section,
+ * update its quiescent state status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_quiescent(v, thread_id);
+
+ /* Wait for other readers to enter quiescent state */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t, id;
+
+ if (v == NULL || f == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %lu\n",
+ rte_rcu_qsbr_get_memsize(v->max_threads));
+ fprintf(f, " Given # max threads = %u\n", v->max_threads);
+ fprintf(f, " Current # threads = %u\n", v->num_threads);
+
+ fprintf(f, " Registered thread ID mask = 0x");
+ for (i = 0; i < v->num_elems; i++)
+ fprintf(f, "%lx", __atomic_load_n(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE));
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %lu\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %lu, lock count = %u\n",
+ id + t,
+ __atomic_load_n(
+ &v->qsbr_cnt[id + t].cnt,
+ __ATOMIC_RELAXED),
+ __atomic_load_n(
+ &v->qsbr_cnt[id + t].lock_cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ return 0;
+}
+
+int rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rcu_log_type = rte_log_register("lib.rcu");
+ if (rcu_log_type >= 0)
+ rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..58fb95ed0
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,639 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+#include <rte_atomic.h>
+
+extern int rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define RCU_DP_LOG(level, fmt, args...)
+#endif
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+#define RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do {\
+ if (v->qsbr_cnt[thread_id].lock_cnt) \
+ rte_log(RTE_LOG_ ## level, rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args); \
+} while (0)
+#else
+#define RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored as a bitmap in an array of 64b elements.
+ * A given thread id needs to be converted to an index into the array and
+ * a bit position within that array element.
+ */
+#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
+ RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
+#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
+ ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
+#define RTE_QSBR_THRID_INDEX_SHIFT 6
+#define RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline.
+ * A 64b counter is used to avoid adding more code to address
+ * counter overflow. Changing this to 32b would require additional
+ * changes to various APIs.
+ */
+ uint32_t lock_cnt;
+ /**< Lock counter. Used when CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled */
+} __rte_cache_aligned;
+
+#define RTE_QSBR_CNT_THR_OFFLINE 0
+#define RTE_QSBR_CNT_INIT 1
+
+/* RTE Quiescent State variable structure.
+ * This structure has two elements that vary in size based on the
+ * 'max_threads' parameter.
+ * 1) Quiescent state counter array
+ * 2) Register thread ID array
+ */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple concurrent quiescent state queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t num_threads;
+ /**< Number of threads currently using this QS variable */
+ uint32_t max_threads;
+ /**< Maximum number of threads using this QS variable */
+
+ struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
+ /**< Quiescent state counter array of 'max_threads' elements */
+
+ /**< Registered thread IDs are stored in a bitmap array,
+ * after the quiescent state counter array.
+ */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * On success - size of memory in bytes required for this QS variable.
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0 or 'v' is NULL.
+ *
+ */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable. thread_id is a value between 0 and (max_threads - 1).
+ * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing quiescent state queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_quiescent. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_online API, after the blocking
+ * function call returns, to ensure that rte_rcu_qsbr_check API
+ * waits for the reader thread to update its quiescent state.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that
+ * the following will not move down after the load of any shared
+ * data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+#ifdef RTE_ARCH_X86_64
+ /* rte_smp_mb() for x86 is lighter */
+ rte_smp_mb();
+#else
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed, i.e. any load of the
+ * data structure cannot move after this store.
+ */
+
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Acquire a lock for accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called before
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, a lock counter is incremented.
+ * Similarly, rte_rcu_qsbr_unlock will decrement the counter. The
+ * rte_rcu_qsbr_check API will verify that this counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_lock(struct rte_rcu_qsbr *v __rte_unused,
+ unsigned int thread_id __rte_unused)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ /* Increment the lock counter */
+ __atomic_fetch_add(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_ACQUIRE);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Release a lock after accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called after
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, rte_rcu_qsbr_unlock will
+ * decrement a lock counter. rte_rcu_qsbr_check API will verify that this
+ * counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_unlock(struct rte_rcu_qsbr *v __rte_unused,
+ unsigned int thread_id __rte_unused)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ /* Decrement the lock counter */
+ __atomic_fetch_sub(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_RELEASE);
+
+ RCU_IS_LOCK_CNT_ZERO(v, thread_id, WARNING,
+ "Lock counter %u. Nested locks?\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELEASE);
+
+ RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
+ __func__, t, thread_id);
+}
+
+/* Check the quiescent state counter for registered threads only, assuming
+ * that not all threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t c;
+ uint64_t *reg_thread_id;
+
+ for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
+ i < v->num_elems;
+ i++, reg_thread_id++) {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
+ __func__, t, wait, bmap, id + j);
+ c = __atomic_load_n(
+ &v->qsbr_cnt[id + j].cnt,
+ __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, id+j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(reg_thread_id,
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+ }
+
+ return 1;
+}
+
+/* Check the quiescent state counter for all threads, assuming that
+ * all the threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i;
+ struct rte_rcu_qsbr_cnt *cnt;
+ uint64_t c;
+
+ for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Thread ID = %d",
+ __func__, t, wait, i);
+ while (1) {
+ c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, i);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
+ break;
+
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ }
+ }
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same QS variable, it must update the quiescent state status, before
+ * calling this API.
+ *
+ * 2) In addition, while calling from multiple threads, only
+ * one of those threads can be reporting the quiescent state status
+ * on a given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state referenced by token 't'.
+ * @return
+ * - 0 if all reader threads have NOT passed through specified number
+ * of quiescent states.
+ * - 1 if all reader threads have passed through specified number
+ * of quiescent states.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ RTE_ASSERT(v != NULL);
+
+ if (likely(v->num_threads == v->max_threads))
+ return __rcu_qsbr_check_all(v, t, wait);
+ else
+ return __rcu_qsbr_check_selective(v, t, wait);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, only one of
+ * those threads can be reporting the quiescent state status on a
+ * given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report quiescent state
+ * on this QS variable (i.e. the calling thread is also part of the
+ * read-side critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - NULL parameters are passed
+ */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..5ea8524db
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,12 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_thread_register;
+ rte_rcu_qsbr_thread_unregister;
+ rte_rcu_qsbr_synchronize;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 595314d7d..67be10659 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'stack', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index abea16d48..ebe6d48a7 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v7 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 1/3] rcu: " Honnappa Nagarahalli
@ 2019-04-23 4:31 ` Honnappa Nagarahalli
2019-04-23 8:10 ` Paul E. McKenney
2019-04-24 10:03 ` Ruifeng Wang (Arm Technology China)
2 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-23 4:31 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+allow_experimental_apis = true
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..50ef0ba9a
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,268 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Get the memory size of QSBR variable */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ size_t sz;
+
+ if (max_threads == 0) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid max_threads %u\n",
+ __func__, max_threads);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ /* Add the size of the registered thread ID bitmap array */
+ sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
+
+ return sz;
+}
+
+/* Initialize a quiescent state variable */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
+{
+ size_t sz;
+
+ if (v == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = rte_rcu_qsbr_get_memsize(max_threads);
+ if (sz == 1)
+ return 1;
+
+ /* Set all the threads to offline */
+ memset(v, 0, sz);
+ v->max_threads = max_threads;
+ v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->token = RTE_QSBR_CNT_INIT;
+
+ return 0;
+}
+
+/* Register a reader thread to report its quiescent state
+ * on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already registered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (old_bmap & 1UL << id)
+ return 0;
+
+ do {
+ new_bmap = old_bmap | (1UL << id);
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_add(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (old_bmap & (1UL << id))
+ /* Someone else registered this thread.
+ * Counter should not be incremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ id = thread_id & RTE_QSBR_THRID_MASK;
+ i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already unregistered */
+ old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (!(old_bmap & (1UL << id)))
+ return 0;
+
+ do {
+ new_bmap = old_bmap & ~(1UL << id);
+ /* Make sure any loads of the shared data structure are
+ * completed before removal of the thread from the list of
+ * reporting threads.
+ */
+ success = __atomic_compare_exchange(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_sub(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (!(old_bmap & (1UL << id)))
+ /* Someone else unregistered this thread.
+ * Counter should not be decremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Wait till the reader threads have entered quiescent state. */
+void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread has a read-side critical section,
+ * update its quiescent state status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_quiescent(v, thread_id);
+
+ /* Wait for other readers to enter quiescent state */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t, id;
+
+ if (v == NULL || f == NULL) {
+ rte_log(RTE_LOG_ERR, rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %lu\n",
+ rte_rcu_qsbr_get_memsize(v->max_threads));
+ fprintf(f, " Given # max threads = %u\n", v->max_threads);
+ fprintf(f, " Current # threads = %u\n", v->num_threads);
+
+ fprintf(f, " Registered thread ID mask = 0x");
+ for (i = 0; i < v->num_elems; i++)
+ fprintf(f, "%lx", __atomic_load_n(
+ RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE));
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %lu\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %lu, lock count = %u\n",
+ id + t,
+ __atomic_load_n(
+ &v->qsbr_cnt[id + t].cnt,
+ __ATOMIC_RELAXED),
+ __atomic_load_n(
+ &v->qsbr_cnt[id + t].lock_cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ return 0;
+}
+
+int rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rcu_log_type = rte_log_register("lib.rcu");
+ if (rcu_log_type >= 0)
+ rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..58fb95ed0
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,639 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+#include <rte_atomic.h>
+
+extern int rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define RCU_DP_LOG(level, fmt, args...)
+#endif
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+#define RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do {\
+ if (v->qsbr_cnt[thread_id].lock_cnt) \
+ rte_log(RTE_LOG_ ## level, rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args); \
+} while (0)
+#else
+#define RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored as a bitmap in an array of 64b
+ * elements. A given thread id needs to be converted to an index into
+ * the array and a bit position within that array element.
+ */
+#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
+ RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
+ RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
+#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
+ ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
+#define RTE_QSBR_THRID_INDEX_SHIFT 6
+#define RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline.
+ * A 64b counter is used to avoid adding more code to address
+ * counter overflow. Changing this to 32b would require additional
+ * changes to various APIs.
+ */
+ uint32_t lock_cnt;
+ /**< Lock counter. Used when CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled */
+} __rte_cache_aligned;
+
+#define RTE_QSBR_CNT_THR_OFFLINE 0
+#define RTE_QSBR_CNT_INIT 1
+
+/* RTE Quiescent State variable structure.
+ * This structure has two elements that vary in size based on the
+ * 'max_threads' parameter.
+ * 1) Quiescent state counter array
+ * 2) Registered thread ID array
+ */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple concurrent quiescent state queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t num_threads;
+ /**< Number of threads currently using this QS variable */
+ uint32_t max_threads;
+ /**< Maximum number of threads using this QS variable */
+
+ struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
+ /**< Quiescent state counter array of 'max_threads' elements */
+
+ /**< Registered thread IDs are stored in a bitmap array,
+ * after the quiescent state counter array.
+ */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * On success - size of memory in bytes required for this QS variable.
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0 or 'v' is NULL.
+ *
+ */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable. thread_id is a value between 0 and (max_threads - 1).
+ * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing quiescent state queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread, to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_quiescent. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_online API, after the blocking
+ * function call returns, to ensure that rte_rcu_qsbr_check API
+ * waits for the reader thread to update its quiescent state.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that
+ * the following will not move down after the load of any shared
+ * data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+#ifdef RTE_ARCH_X86_64
+ /* rte_smp_mb() for x86 is lighter */
+ rte_smp_mb();
+#else
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed, i.e. any load of the
+ * data structure cannot move after this store.
+ */
+
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Acquire a lock for accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called before
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, a lock counter is incremented.
+ * Similarly, rte_rcu_qsbr_unlock will decrement the counter. The
+ * rte_rcu_qsbr_check API will verify that this counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_lock(struct rte_rcu_qsbr *v __rte_unused,
+ unsigned int thread_id __rte_unused)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ /* Increment the lock counter */
+ __atomic_fetch_add(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_ACQUIRE);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Release a lock after accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called after
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, rte_rcu_qsbr_unlock will
+ * decrement a lock counter. rte_rcu_qsbr_check API will verify that this
+ * counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_unlock(struct rte_rcu_qsbr *v __rte_unused,
+ unsigned int thread_id __rte_unused)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ /* Decrement the lock counter */
+ __atomic_fetch_sub(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_RELEASE);
+
+ RCU_IS_LOCK_CNT_ZERO(v, thread_id, WARNING,
+ "Lock counter %u. Nested locks?\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELEASE);
+
+ RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
+ __func__, t, thread_id);
+}
+
+/* Check the quiescent state counter for registered threads only, assuming
+ * that not all threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t c;
+ uint64_t *reg_thread_id;
+
+ for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
+ i < v->num_elems;
+ i++, reg_thread_id++) {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
+ id = i << RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
+ __func__, t, wait, bmap, id + j);
+ c = __atomic_load_n(
+ &v->qsbr_cnt[id + j].cnt,
+ __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, id+j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(reg_thread_id,
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+ }
+
+ return 1;
+}
+
+/* Check the quiescent state counter for all threads, assuming that
+ * all the threads have registered.
+ */
+static __rte_always_inline int
+__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i;
+ struct rte_rcu_qsbr_cnt *cnt;
+ uint64_t c;
+
+ for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
+ RCU_DP_LOG(DEBUG,
+ "%s: check: token = %lu, wait = %d, Thread ID = %d",
+ __func__, t, wait, i);
+ while (1) {
+ c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
+ RCU_DP_LOG(DEBUG,
+ "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
+ __func__, t, wait, c, i);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
+ break;
+
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ }
+ }
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same QS variable, it must update the quiescent state status, before
+ * calling this API.
+ *
+ * 2) In addition, while calling from multiple threads, only
+ * one of those threads can be reporting the quiescent state status
+ * on a given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state referenced by token 't'.
+ * @return
+ * - 0 if all reader threads have NOT entered the quiescent state
+ * referenced by token 't'.
+ * - 1 if all reader threads have entered the quiescent state
+ * referenced by token 't'.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ RTE_ASSERT(v != NULL);
+
+ if (likely(v->num_threads == v->max_threads))
+ return __rcu_qsbr_check_all(v, t, wait);
+ else
+ return __rcu_qsbr_check_selective(v, t, wait);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, only one of
+ * those threads can be reporting the quiescent state status on a
+ * given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report quiescent state
+ * on this QS variable (i.e. the calling thread is also part of the
+ * read-side critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - NULL parameters are passed
+ */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..5ea8524db
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,12 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_thread_register;
+ rte_rcu_qsbr_thread_unregister;
+ rte_rcu_qsbr_synchronize;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 595314d7d..67be10659 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'stack', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index abea16d48..ebe6d48a7 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v7 2/3] test/rcu_qsbr: add API and functional tests
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 " Honnappa Nagarahalli
2019-04-23 4:31 ` Honnappa Nagarahalli
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 1/3] rcu: " Honnappa Nagarahalli
@ 2019-04-23 4:31 ` Honnappa Nagarahalli
2019-04-23 4:31 ` Honnappa Nagarahalli
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
3 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-23 4:31 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 703 +++++++++++++++++++++++
5 files changed, 1737 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index b28bed2d4..10f551ecb 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -217,6 +217,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 72c56e528..fba66045f 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -700,6 +700,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 867cc5863..e3e566bce 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -110,6 +110,8 @@ test_sources = files('commands.c',
'test_timer_perf.c',
'test_timer_racecond.c',
'test_ticketlock.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -136,7 +138,8 @@ test_deps = ['acl',
'reorder',
'ring',
'stack',
- 'timer'
+ 'timer',
+ 'rcu'
]
# All test cases in fast_parallel_test_names list are parallel
@@ -175,6 +178,7 @@ fast_parallel_test_names = [
'ring_autotest',
'ring_pmd_autotest',
'rwlock_autotest',
+ 'rcu_qsbr_autotest',
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
@@ -242,6 +246,7 @@ perf_test_names = [
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
'stack_nb_perf_autotest',
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..b16872de5
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,1014 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_get_memsize: Get the memory size, in bytes, required for a
+ * QS variable tracking the given maximum number of reader threads.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
+ printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(0);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1), "Get Memsize for 0 threads");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with cache line size of 64B - 8384
+ * for machines with cache line size of 128B - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_init: Initialize a QSBR variable.
+ */
+static int
+test_rcu_qsbr_init(void)
+{
+ int r;
+
+ printf("\nTest rte_rcu_qsbr_init()\n");
+
+ r = rte_rcu_qsbr_init(NULL, TEST_RCU_MAX_LCORE);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "NULL variable");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ int ret;
+
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register valid thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Valid thread id");
+
+ /* Re-registering should not return error */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already registered thread id");
+
+ /* Register valid thread id - max allowed thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], TEST_RCU_MAX_LCORE - 1);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Max thread id");
+
+ ret = rte_rcu_qsbr_thread_register(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Valid variable, invalid thread id");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ ret = rte_rcu_qsbr_thread_unregister(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Valid variable, invalid thread id");
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "disabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /* Test with enabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "enabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE - 1
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_quiescent(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_quiescent(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Check if all the worker threads have entered the
+ * quiescent state 'n' times. 'n' is provided in the rte_rcu_qsbr_start API.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait till all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+ /* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread to
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread from
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ /* Negative tests */
+ rte_rcu_qsbr_dump(NULL, t[0]);
+ rte_rcu_qsbr_dump(stdout, NULL);
+ rte_rcu_qsbr_dump(NULL, NULL);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, lcore_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, lcore_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Multi writer, Multiple QS variable, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ uint8_t test_cores;
+
+ writer_done = 0;
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+ printf("Test: %d writers, %d QSBR variables, simultaneous QSBR queries\n"
+ , test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_init() < 0)
+ goto test_fail;
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..bb3b8e9b6
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,703 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static volatile uint8_t writer_done;
+static volatile uint8_t all_registered;
+static volatile uint32_t thr_id;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+/* Simple way to allocate thread ids in 0 to TEST_RCU_MAX_LCORE space */
+static inline uint32_t
+alloc_thread_id(void)
+{
+ uint32_t tmp_thr_id;
+
+ tmp_thr_id = __atomic_fetch_add(&thr_id, 1, __ATOMIC_RELAXED);
+ if (tmp_thr_id >= TEST_RCU_MAX_LCORE)
+ printf("Invalid thread id %u\n", tmp_thr_id);
+
+ return tmp_thr_id;
+}
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t thread_id = alloc_thread_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ /* Register for report QS */
+ rte_rcu_qsbr_thread_register(t[0], thread_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], thread_id);
+ /* Unregister before exiting so the writer does not wait on this thread */
+ rte_rcu_qsbr_thread_unregister(t[0], thread_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple Readers, Single QS var, Non-Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: %d Readers/1 Writer ('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores - 1;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Single writer, Multiple readers, Single QS variable
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writer, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i, sz;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* Number of readers does not matter for QS variable in this test
+ * case as no reader will be registered.
+ */
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+ /* Wait until all writers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t thread_id = alloc_thread_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, thread_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, thread_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, thread_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, thread_id);
+ rte_rcu_qsbr_thread_offline(temp, thread_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, thread_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ snprintf(hash_name[hash_id], 8, "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Performance test:
+ * Single writer, Single QS variable, Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Blocking QSBR Check\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(online/update/offline): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+/*
+ * Performance test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Non-Blocking QSBR check\n", num_cores);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(online/update/offline): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ printf("Number of cores provided = %d\n", num_cores);
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+ if (num_cores > TEST_RCU_MAX_LCORE) {
+ printf("Test failed! %d cores supported\n", TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with all reader threads registered\n");
+ printf("--------------------------------------------\n");
+ all_registered = 1;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ /* Make sure the actual number of cores provided is less than
+ * TEST_RCU_MAX_LCORE. This leaves room for some reader threads
+ * to remain unregistered on the QS variable.
+ */
+ if (num_cores >= TEST_RCU_MAX_LCORE) {
+ printf("Test failed! number of cores provided should be less than %d\n",
+ TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with some of reader threads registered\n");
+ printf("------------------------------------------------\n");
+ all_registered = 0;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ return 0;
+
+test_fail:
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
* [dpdk-dev] [PATCH v7 2/3] test/rcu_qsbr: add API and functional tests
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-04-23 4:31 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-23 4:31 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 703 +++++++++++++++++++++++
5 files changed, 1737 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index b28bed2d4..10f551ecb 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -217,6 +217,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 72c56e528..fba66045f 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -700,6 +700,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 867cc5863..e3e566bce 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -110,6 +110,8 @@ test_sources = files('commands.c',
'test_timer_perf.c',
'test_timer_racecond.c',
'test_ticketlock.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -136,7 +138,8 @@ test_deps = ['acl',
'reorder',
'ring',
'stack',
- 'timer'
+ 'timer',
+ 'rcu'
]
# All test cases in fast_parallel_test_names list are parallel
@@ -175,6 +178,7 @@ fast_parallel_test_names = [
'ring_autotest',
'ring_pmd_autotest',
'rwlock_autotest',
+ 'rcu_qsbr_autotest',
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
@@ -242,6 +246,7 @@ perf_test_names = [
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
'stack_nb_perf_autotest',
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..b16872de5
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,1014 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[1] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu1", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[2] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu2", sz,
+ RTE_CACHE_LINE_SIZE);
+ t[3] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu3", sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_get_memsize: Return the size of memory, in bytes, required
+ * for a QSBR variable supporting the given maximum number of threads.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
+ printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(0);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1), "Get Memsize for 0 threads");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with a cache line size of 64B - 8384
+ * for machines with a cache line size of 128B - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_init: Initialize a QSBR variable.
+ */
+static int
+test_rcu_qsbr_init(void)
+{
+ int r;
+
+ printf("\nTest rte_rcu_qsbr_init()\n");
+
+ r = rte_rcu_qsbr_init(NULL, TEST_RCU_MAX_LCORE);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "NULL variable");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread, to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ int ret;
+
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register valid thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Valid thread id");
+
+ /* Re-registering should not return error */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already registered thread id");
+
+ /* Register valid thread id - max allowed thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], TEST_RCU_MAX_LCORE - 1);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Max thread id");
+
+ ret = rte_rcu_qsbr_thread_register(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Invalid thread id");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread, from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ ret = rte_rcu_qsbr_thread_unregister(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Invalid thread id");
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "disabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /* Test with enabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "enabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_quiescent(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_quiescent(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the worker threads have entered the
+ * quiescent state 'n' number of times. 'n' is provided in rte_rcu_qsbr_start.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (RTE_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait until all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+ /* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread, to
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if the online thread, can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread from
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ /* Negative tests */
+ rte_rcu_qsbr_dump(NULL, t[0]);
+ rte_rcu_qsbr_dump(stdout, NULL);
+ rte_rcu_qsbr_dump(NULL, NULL);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, lcore_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, lcore_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Multiple writers, Multiple QS variables, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ uint8_t test_cores;
+
+ writer_done = 0;
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+ printf("Test: %d writers, %d QSBR variable, simultaneous QSBR queries\n"
+ , test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_init() < 0)
+ goto test_fail;
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..bb3b8e9b6
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,703 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Maximum number of lcores supported by these tests. */
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static volatile uint8_t writer_done;
+static volatile uint8_t all_registered;
+static volatile uint32_t thr_id;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+/* Simple way to allocate thread ids in 0 to TEST_RCU_MAX_LCORE space */
+static inline uint32_t
+alloc_thread_id(void)
+{
+ uint32_t tmp_thr_id;
+
+ tmp_thr_id = __atomic_fetch_add(&thr_id, 1, __ATOMIC_RELAXED);
+ if (tmp_thr_id >= TEST_RCU_MAX_LCORE)
+ printf("Invalid thread id %u\n", tmp_thr_id);
+
+ return tmp_thr_id;
+}
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t thread_id = alloc_thread_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ /* Register to report QS */
+ rte_rcu_qsbr_thread_register(t[0], thread_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], thread_id);
+ /* Unregister before exiting so the writer does not wait on this thread */
+ rte_rcu_qsbr_thread_unregister(t[0], thread_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple Readers, Single QS var, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: %d Readers/1 Writer('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores - 1;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Single writer, Multiple readers, Single QS variable
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %ld\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writers, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i, sz;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* The number of readers does not matter for the QS variable in this
+ * test case, as no reader will be registered.
+ */
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+ /* Wait until all writers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %ld\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %lu\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t thread_id = alloc_thread_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, thread_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, thread_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, thread_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, thread_id);
+ rte_rcu_qsbr_thread_offline(temp, thread_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, thread_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Blocking QSBR Check\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(online/update/offline): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Non-Blocking QSBR check\n", num_cores);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(online/update/offline): %lu\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %lu\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ printf("Number of cores provided = %d\n", num_cores);
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+ if (num_cores > TEST_RCU_MAX_LCORE) {
+ printf("Test failed! %d cores supported\n", TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with all reader threads registered\n");
+ printf("--------------------------------------------\n");
+ all_registered = 1;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ /* Make sure the actual number of cores provided is less than
+ * TEST_RCU_MAX_LCORE. This will allow for some threads not
+ * to be registered on the QS variable.
+ */
+ if (num_cores >= TEST_RCU_MAX_LCORE) {
+ printf("Test failed! number of cores provided should be less than %d\n",
+ TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with some of reader threads registered\n");
+ printf("------------------------------------------------\n");
+ all_registered = 0;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ return 0;
+
+test_fail:
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
* [dpdk-dev] [PATCH v7 3/3] doc/rcu: add lib_rcu documentation
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 " Honnappa Nagarahalli
` (2 preceding siblings ...)
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-04-23 4:31 ` Honnappa Nagarahalli
2019-04-23 4:31 ` Honnappa Nagarahalli
2019-04-24 10:12 ` Ruifeng Wang (Arm Technology China)
3 siblings, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-23 4:31 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++++++
5 files changed, 698 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index de1e215dd..8f0e84de6 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 7722fc3e9..b9896cb63 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..e7ca1dacb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,509 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em;font-weight:bold}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.75001em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st13 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st14 {fill:#000000;font-family:Century Gothic;font-size:1.99999em}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.99999em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-239);marker-start:url(#mrkr5-237);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-248);marker-start:url(#mrkr5-246);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-239);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-237" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-239" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-246" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-248" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(327.227,-946.908)">
+ <title>Sheet.3</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(460.665,-944.869)">
+ <title>Sheet.4</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(519.302,-950.79)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(612.438,-944.869)">
+ <title>Sheet.6</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(664.388,-945.889)">
+ <title>Sheet.7</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(723.025,-951.494)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(814.123,-945.889)">
+ <title>Sheet.9</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-952.759)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="146.259" cy="1169.64" width="292.52" height="36.7136"/>
+ <path d="M292.52 1151.29 L0 1151.29 L0 1188 L292.52 1188 L292.52 1151.29" class="st3"/>
+ <text x="58.76" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(379.176,-863.295)">
+ <title>Sheet.11</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(512.614,-861.255)">
+ <title>Sheet.12</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(561.284,-867.106)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(664.388,-861.255)">
+ <title>Sheet.14</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(716.337,-862.275)">
+ <title>Sheet.15</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(775.009,-867.81)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(866.073,-862.275)">
+ <title>Sheet.17</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(143.348,-873.294)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(474.188,-777.642)">
+ <title>Sheet.19</title>
+ <path d="M0 1143.01 C0 1138.04 4.07 1133.96 9.04 1133.96 L124.46 1133.96 C129.43 1133.96 133.44 1138.04 133.44 1143.01
+ L133.44 1179.01 C133.44 1183.99 129.43 1188 124.46 1188 L9.04 1188 C4.07 1188 0 1183.99 0 1179.01 L0 1143.01
+ Z" class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(608.645,-775.602)">
+ <title>Sheet.20</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(666.862,-781.311)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(760.418,-775.602)">
+ <title>Sheet.22</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(812.367,-776.622)">
+ <title>Sheet.23</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(870.584,-782.015)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(962.103,-776.622)">
+ <title>Sheet.25</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(142.645,-787.5)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(882.826,-574.263)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="21.32" y="1173.77" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(419.545,-660.119)">
+ <title>Sheet.29</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(419.545,-684.783)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L82.7 1187.36 L151.2 1172.07" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(214.454,-663.095)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(571.287,-681.326)">
+ <title>Sheet.33</title>
+ <path d="M0 738.67 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(515.013,-1130.65)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="60.7243" cy="1166.58" width="121.45" height="42.8314"/>
+ <path d="M121.45 1145.17 L0 1145.17 L0 1188 L121.45 1188 L121.45 1145.17" class="st3"/>
+ <text x="26.02" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(434.372,-1096.8)">
+ <title>Sheet.35</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(434.372,-1100.37)">
+ <title>Sheet.36</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(193.5,-1103.76)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(714.3,-675.425)">
+ <title>Sheet.38</title>
+ <path d="M0 732.77 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(795.979,-637.904)">
+ <title>Sheet.39</title>
+ <path d="M0 1112.54 L0 1188 L0 1112.54" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(716.782,-675.425)">
+ <title>Sheet.40</title>
+ <path d="M79.2 1188 L52.71 1187.94 L0 1147.21" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(803.572,-639.285)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="172.421" cy="1152.51" width="344.85" height="70.9752"/>
+ <path d="M344.84 1117.02 L0 1117.02 L0 1188 L344.84 1188 L344.84 1117.02" class="st3"/>
+ <text x="0" y="1133.61" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(680.801,-1130.65)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="42.0169" cy="1166.58" width="84.04" height="42.8314"/>
+ <path d="M84.03 1145.17 L0 1145.17 L0 1188 L84.03 1188 L84.03 1145.17" class="st3"/>
+ <text x="18.89" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(811.005,-1110.05)">
+ <title>Sheet.48</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(658.61,-1083.99)">
+ <title>Sheet.49</title>
+ <path d="M153.05 1149.63 L113.7 1149.57 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(798.359,-1110.46)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="107.799" cy="1167.81" width="215.6" height="40.3845"/>
+ <path d="M215.6 1147.62 L0 1147.62 L0 1188 L215.6 1188 L215.6 1147.62" class="st3"/>
+ <text x="43.79" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(599.196,-662.779)">
+ <title>Sheet.51</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(464.931,-1052.95)">
+ <title>Sheet.52</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(464.931,-1056.52)">
+ <title>Sheet.53</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(225,-1058.76)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(711.244,-662.779)">
+ <title>Sheet.56</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(664.897,-1045.31)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L146.76 1112.94" class="st13"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(619.059,-848.701)">
+ <title>Sheet.58</title>
+ <path d="M432.2 1184.24 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(1038.62,-837.364)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="130" cy="1167.81" width="260.01" height="40.3845"/>
+ <path d="M260 1147.62 L0 1147.62 L0 1188 L260 1188 L260 1147.62" class="st3"/>
+ <text x="52.25" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(621.606,-848.828)">
+ <title>Sheet.60</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(824.31,-849.848)">
+ <title>Sheet.61</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(345.944,-933.143)">
+ <title>Sheet.62</title>
+ <path d="M705.32 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(1038.62,-915.684)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="137.691" cy="1167.81" width="275.39" height="40.3845"/>
+ <path d="M275.38 1147.62 L0 1147.62 L0 1188 L275.38 1188 L275.38 1147.62" class="st3"/>
+ <text x="55.18" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(346.581,-932.442)">
+ <title>Sheet.64</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(621.606,-933.461)">
+ <title>Sheet.65</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(856.905,-934.481)">
+ <title>Sheet.66</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(472.82,-756.389)">
+ <title>Sheet.67</title>
+ <path d="M578.44 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(473.456,-755.688)">
+ <title>Sheet.68</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(1016.87,-757.728)">
+ <title>Sheet.69</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(1060.04,-738.651)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1167.81" width="193.55" height="40.3845"/>
+ <path d="M193.55 1147.62 L0 1147.62 L0 1188 L193.55 1188 L193.55 1147.62" class="st3"/>
+ <text x="31.03" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(190.02,-464.886)">
+ <title>Sheet.71</title>
+ <path d="M0 1151.91 C0 1148.19 3.88 1145.17 8.66 1145.17 L43.29 1145.17 C48.13 1145.17 51.95 1148.19 51.95 1151.91 L51.95
+ 1181.26 C51.95 1185.03 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1185.03 0 1181.26 L0 1151.91 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(259.003,-466.895)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. non critical section or quiescent state.</tspan></text> </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(190.02,-389.169)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="25.9746" cy="1166.58" width="51.95" height="42.8314"/>
+ <path d="M0 1152.31 C0 1148.39 1.43 1145.17 3.16 1145.17 L48.79 1145.17 C50.55 1145.17 51.95 1148.39 51.95 1152.31 L51.95
+ 1180.86 C51.95 1184.83 50.55 1188 48.79 1188 L3.16 1188 C1.43 1188 0 1184.83 0 1180.86 L0 1152.31 Z"
+ class="st2"/>
+ <text x="12.9" y="1173.78" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(259.003,-388.777)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(289.017,-301.151)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="332.491" cy="1160.47" width="664.99" height="55.0626"/>
+ <path d="M664.98 1132.94 L0 1132.94 L0 1188 L664.98 1188 L664.98 1132.94" class="st3"/>
+ <text x="0" y="1153.27" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is removed <tspan
+ x="0" dy="1.2em" class="st9">using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(177.543,-315.596)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(288,-239.327)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1175.01" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted entry.</text> </g>
+ <g id="shape78-217" v:mID="78" v:groupContext="shape" transform="translate(177.543,-240.744)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="34.3786" cy="1166.58" width="68.76" height="42.8314"/>
+ <path d="M68.76 1145.17 L0 1145.17 L0 1188 L68.76 1188 L68.76 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-221" v:mID="79" v:groupContext="shape" transform="translate(289.228,-163.612)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1160.61" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during which <tspan
+ x="0" dy="1.2em" class="st9">memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-226" v:mID="80" v:groupContext="shape" transform="translate(187.999,-162)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="39.5985" cy="1166.58" width="79.2" height="42.8314"/>
+ <path d="M79.2 1145.17 L0 1145.17 L0 1188 L79.2 1188 L79.2 1145.17" class="st3"/>
+ <text x="0" y="1158.96" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-231" v:mID="83" v:groupContext="shape" transform="translate(572.146,-1080.07)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L132.49 1188" class="st16"/>
+ </g>
+ <g id="shape84-240" v:mID="84" v:groupContext="shape" transform="translate(599.196,-1042.14)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L104.28 1188" class="st18"/>
+ </g>
+ <g id="shape85-249" v:mID="85" v:groupContext="shape" transform="translate(980.637,-595.338)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L92.16 1188" class="st20"/>
+ </g>
+ <g id="shape86-254" v:mID="86" v:groupContext="shape" transform="translate(444.835,-603.428)">
+ <title>Sheet.86</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape87-257" v:mID="87" v:groupContext="shape" transform="translate(444.835,-637.489)">
+ <title>Sheet.87</title>
+ <path d="M0 1188 L84.43 1186.61 L154.36 1153.31" class="st7"/>
+ </g>
+ <g id="shape88-260" v:mID="88" v:groupContext="shape" transform="translate(241.369,-607.028)">
+ <title>Sheet.88</title>
+ <desc>Remove reference to entry2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry2</tspan></text> </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 95f5e7964..17df2c563 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -56,6 +56,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..55d44e15d
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,185 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-less data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for example, real-time applications).
+
+In the following paragraphs, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or anything that is representative of
+memory (for example, an index into an array of free elements).
+
+Since these data structures are lock-less, writers and readers
+access them concurrently. Hence, while removing
+an element from a data structure, the writers cannot return the memory
+to the allocator without knowing that the readers are no longer
+referencing that element/memory. The operation of removing an element
+must therefore be separated into 2 steps:
+
+Delete: in this step, the writer removes the reference to the element from
+the data structure but does not return the associated memory to the
+allocator. This will ensure that new readers will not get a reference to
+the removed element. Removing the reference is an atomic operation.
+
+Free(Reclaim): in this step, the writer returns the memory to the
+memory allocator, only after knowing that all the readers have stopped
+referencing the deleted element.
+
+This library helps the writer determine when it is safe to free the
+memory.
+
+This library makes use of the thread Quiescent State (QS) mechanism.
+
+What is Quiescent State
+-----------------------
+Quiescent State can be defined as 'any point in the thread execution where the
+thread does not hold a reference to shared memory'. It is up to the application
+to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. When it is
+accessing D1, if the writer has to remove an element from D1, the
+writer cannot free the memory associated with that element immediately.
+The writer can return the memory to the allocator only after the reader
+stops referencing D1. In other words, reader thread RT1 has to enter
+a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, the writer has to
+wait till reader thread 2 enters a quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened. So, reader thread 3 will not have a reference to the
+deleted entry.
+
+Note that the critical section for D2 is a quiescent state
+for D1. That is, for a given data structure Dx, any point in the thread
+execution that does not reference Dx is a quiescent state.
+
+Since memory is not freed immediately, there might be a need for
+provisioning of additional memory, depending on the application requirements.
+
+Factors affecting RCU mechanism
+---------------------------------
+
+It is important to make sure that this library keeps the overhead of
+identifying the end of the grace period, and the subsequent freeing of
+memory, to a minimum. The following paragraphs explain how the grace
+period and critical section affect this overhead.
+
+The writer has to poll the readers to identify the end of grace period.
+Polling introduces memory accesses and wastes CPU cycles. The memory
+is not available for reuse during the grace period. Longer grace periods
+exacerbate these conditions.
+
+The duration of the grace period is proportional to the length of the
+critical section and the number of reader threads. Keeping the critical
+sections smaller will keep the grace period smaller. However, smaller
+critical sections require additional CPU cycles (due to additional
+reporting) in the readers.
+
+Hence, the ideal combination is a short grace period together with a large
+critical section. This library achieves this by allowing the writer to do
+other work without having to block till the readers report their quiescent
+state.
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the start and end of the while(1) loop (where no
+references to shared data structures are kept) act as perfect quiescent
+states. This will combine all the shared data structure accesses into a
+single, large critical section which helps keep the overhead on the
+reader side to a minimum.
+
+DPDK supports a pipeline model of packet processing, as well as service cores.
+In these use cases, a given data structure may not be used by all the
+workers in the application. The writer does not have to wait for all
+the workers to report their quiescent state. To provide the required
+flexibility, this library has a concept of QS variable. The application
+can create one QS variable per data structure to help it track the
+end of grace period for each data structure. This helps keep the grace
+period to a minimum.
+
+How to use this library
+-----------------------
+
+The application must allocate memory and initialize a QS variable.
+
+The application can call ``rte_rcu_qsbr_get_memsize`` to calculate the size
+of memory to allocate. This API takes the maximum number of reader threads
+that will use this QS variable as a parameter. Currently, a maximum of 1024
+threads is supported.
+
+Further, the application can initialize a QS variable using the API
+``rte_rcu_qsbr_init``.
+
+Each reader thread is assumed to have a unique thread ID. Currently, the
+management of the thread ID (for ex: allocation/free) is left to the
+application. The thread ID should be in the range of 0 to
+maximum number of threads provided while creating the QS variable.
+The application could also use lcore_id as the thread ID where applicable.
+
+``rte_rcu_qsbr_thread_register`` API will register a reader thread
+to report its quiescent state. This can be called from a reader thread.
+A control plane thread can also call this on behalf of a reader thread.
+The reader thread must call ``rte_rcu_qsbr_thread_online`` API to start
+reporting its quiescent state.
+
+Some of the use cases might require the reader threads to make
+blocking API calls (for ex: while using eventdev APIs). The writer thread
+should not wait for such reader threads to enter quiescent state.
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` API, before calling
+blocking APIs. It can call ``rte_rcu_qsbr_thread_online`` API once the blocking
+API call returns.
+
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the API ``rte_rcu_qsbr_start``. It is possible for multiple
+writer threads to query the quiescent state status simultaneously. Hence,
+``rte_rcu_qsbr_start`` returns a token to each caller.
+
+The writer thread must call ``rte_rcu_qsbr_check`` API with the token to
+get the current quiescent state status. An option to block till all the
+reader threads enter the quiescent state is provided. If this API indicates that
+all the reader threads have entered the quiescent state, the application
+can free the deleted entry.
+
+The APIs ``rte_rcu_qsbr_start`` and ``rte_rcu_qsbr_check`` are lock free.
+Hence, they can be called concurrently from multiple writers even while
+running as worker threads.
+
+The separation of triggering the reporting from querying the status provides
+the writer threads flexibility to do useful work instead of blocking for the
+reader threads to enter the quiescent state or go offline. This reduces the
+memory accesses due to continuous polling for the status.
+
+``rte_rcu_qsbr_synchronize`` API combines the functionality of
+``rte_rcu_qsbr_start`` and blocking ``rte_rcu_qsbr_check`` into a single API.
+This API triggers the reader threads to report their quiescent state and
+polls till all the readers enter the quiescent state or go offline. This
+API does not allow the writer to do useful work while waiting and
+introduces additional memory accesses due to continuous polling.
+
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` and
+``rte_rcu_qsbr_thread_unregister`` APIs to remove itself from reporting its
+quiescent state. The ``rte_rcu_qsbr_check`` API will not wait for this reader
+thread to report the quiescent state status anymore.
+
+The reader threads should call ``rte_rcu_qsbr_quiescent`` API to indicate that
+they entered a quiescent state. This API checks if a writer has triggered a
+quiescent state query and updates the state accordingly.
+
+The ``rte_rcu_qsbr_lock`` and ``rte_rcu_qsbr_unlock`` APIs are empty functions.
+However, when ``CONFIG_RTE_LIBRTE_RCU_DEBUG`` is enabled, these APIs aid
+in debugging issues. One can mark the access to shared data structures on the
+reader side using these APIs. The ``rte_rcu_qsbr_quiescent`` will check if
+all the locks are unlocked.
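Putting the steps above together, a usage sketch (not compilable standalone: it assumes the DPDK headers; the helpers ``process_packets``, ``delete_entry``, ``free_entry``, ``blocking_eventdev_call`` and the constant ``MAX_READER_THREADS`` are illustrative, and the exact parameter lists should be checked against ``rte_rcu_qsbr.h``):

```c
/* Writer setup: allocate and initialize one QS variable. */
size_t sz = rte_rcu_qsbr_get_memsize(MAX_READER_THREADS);
struct rte_rcu_qsbr *v = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
rte_rcu_qsbr_init(v, MAX_READER_THREADS);

/* Reader thread 'tid': register, go online, report once per loop. */
rte_rcu_qsbr_thread_register(v, tid);
rte_rcu_qsbr_thread_online(v, tid);
while (1) {
    process_packets();                  /* critical section */
    rte_rcu_qsbr_quiescent(v, tid);     /* quiescent state at loop end */

    rte_rcu_qsbr_thread_offline(v, tid); /* before a blocking API call */
    blocking_eventdev_call();
    rte_rcu_qsbr_thread_online(v, tid);
}

/* Writer: delete an entry, do other work, then reclaim. */
delete_entry(entry);
uint64_t token = rte_rcu_qsbr_start(v);
/* ... other useful work while readers report ... */
rte_rcu_qsbr_check(v, token, true);     /* true: block till grace period ends */
free_entry(entry);
```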
--
2.17.1
* [dpdk-dev] [PATCH v7 3/3] doc/rcu: add lib_rcu documentation
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
@ 2019-04-23 4:31 ` Honnappa Nagarahalli
2019-04-24 10:12 ` Ruifeng Wang (Arm Technology China)
1 sibling, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-23 4:31 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++++++
5 files changed, 698 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index de1e215dd..8f0e84de6 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 7722fc3e9..b9896cb63 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..e7ca1dacb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,509 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em;font-weight:bold}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.75001em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st13 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st14 {fill:#000000;font-family:Century Gothic;font-size:1.99999em}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.99999em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-239);marker-start:url(#mrkr5-237);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-248);marker-start:url(#mrkr5-246);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-239);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-237" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-239" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-246" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-248" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(327.227,-946.908)">
+ <title>Sheet.3</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(460.665,-944.869)">
+ <title>Sheet.4</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(519.302,-950.79)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(612.438,-944.869)">
+ <title>Sheet.6</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(664.388,-945.889)">
+ <title>Sheet.7</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(723.025,-951.494)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(814.123,-945.889)">
+ <title>Sheet.9</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-952.759)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="146.259" cy="1169.64" width="292.52" height="36.7136"/>
+ <path d="M292.52 1151.29 L0 1151.29 L0 1188 L292.52 1188 L292.52 1151.29" class="st3"/>
+ <text x="58.76" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(379.176,-863.295)">
+ <title>Sheet.11</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(512.614,-861.255)">
+ <title>Sheet.12</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(561.284,-867.106)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(664.388,-861.255)">
+ <title>Sheet.14</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(716.337,-862.275)">
+ <title>Sheet.15</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(775.009,-867.81)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(866.073,-862.275)">
+ <title>Sheet.17</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(143.348,-873.294)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(474.188,-777.642)">
+ <title>Sheet.19</title>
+ <path d="M0 1143.01 C0 1138.04 4.07 1133.96 9.04 1133.96 L124.46 1133.96 C129.43 1133.96 133.44 1138.04 133.44 1143.01
+ L133.44 1179.01 C133.44 1183.99 129.43 1188 124.46 1188 L9.04 1188 C4.07 1188 0 1183.99 0 1179.01 L0 1143.01
+ Z" class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(608.645,-775.602)">
+ <title>Sheet.20</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(666.862,-781.311)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(760.418,-775.602)">
+ <title>Sheet.22</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(812.367,-776.622)">
+ <title>Sheet.23</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(870.584,-782.015)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(962.103,-776.622)">
+ <title>Sheet.25</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(142.645,-787.5)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(882.826,-574.263)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="21.32" y="1173.77" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(419.545,-660.119)">
+ <title>Sheet.29</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(419.545,-684.783)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L82.7 1187.36 L151.2 1172.07" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(214.454,-663.095)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(571.287,-681.326)">
+ <title>Sheet.33</title>
+ <path d="M0 738.67 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(515.013,-1130.65)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="60.7243" cy="1166.58" width="121.45" height="42.8314"/>
+ <path d="M121.45 1145.17 L0 1145.17 L0 1188 L121.45 1188 L121.45 1145.17" class="st3"/>
+ <text x="26.02" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(434.372,-1096.8)">
+ <title>Sheet.35</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(434.372,-1100.37)">
+ <title>Sheet.36</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(193.5,-1103.76)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(714.3,-675.425)">
+ <title>Sheet.38</title>
+ <path d="M0 732.77 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(795.979,-637.904)">
+ <title>Sheet.39</title>
+ <path d="M0 1112.54 L0 1188 L0 1112.54" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(716.782,-675.425)">
+ <title>Sheet.40</title>
+ <path d="M79.2 1188 L52.71 1187.94 L0 1147.21" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(803.572,-639.285)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="172.421" cy="1152.51" width="344.85" height="70.9752"/>
+ <path d="M344.84 1117.02 L0 1117.02 L0 1188 L344.84 1188 L344.84 1117.02" class="st3"/>
+ <text x="0" y="1133.61" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(680.801,-1130.65)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="42.0169" cy="1166.58" width="84.04" height="42.8314"/>
+ <path d="M84.03 1145.17 L0 1145.17 L0 1188 L84.03 1188 L84.03 1145.17" class="st3"/>
+ <text x="18.89" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(811.005,-1110.05)">
+ <title>Sheet.48</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(658.61,-1083.99)">
+ <title>Sheet.49</title>
+ <path d="M153.05 1149.63 L113.7 1149.57 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(798.359,-1110.46)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="107.799" cy="1167.81" width="215.6" height="40.3845"/>
+ <path d="M215.6 1147.62 L0 1147.62 L0 1188 L215.6 1188 L215.6 1147.62" class="st3"/>
+ <text x="43.79" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(599.196,-662.779)">
+ <title>Sheet.51</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(464.931,-1052.95)">
+ <title>Sheet.52</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(464.931,-1056.52)">
+ <title>Sheet.53</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(225,-1058.76)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(711.244,-662.779)">
+ <title>Sheet.56</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(664.897,-1045.31)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L146.76 1112.94" class="st13"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(619.059,-848.701)">
+ <title>Sheet.58</title>
+ <path d="M432.2 1184.24 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(1038.62,-837.364)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="130" cy="1167.81" width="260.01" height="40.3845"/>
+ <path d="M260 1147.62 L0 1147.62 L0 1188 L260 1188 L260 1147.62" class="st3"/>
+ <text x="52.25" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(621.606,-848.828)">
+ <title>Sheet.60</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(824.31,-849.848)">
+ <title>Sheet.61</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(345.944,-933.143)">
+ <title>Sheet.62</title>
+ <path d="M705.32 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(1038.62,-915.684)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="137.691" cy="1167.81" width="275.39" height="40.3845"/>
+ <path d="M275.38 1147.62 L0 1147.62 L0 1188 L275.38 1188 L275.38 1147.62" class="st3"/>
+ <text x="55.18" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(346.581,-932.442)">
+ <title>Sheet.64</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(621.606,-933.461)">
+ <title>Sheet.65</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(856.905,-934.481)">
+ <title>Sheet.66</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(472.82,-756.389)">
+ <title>Sheet.67</title>
+ <path d="M578.44 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(473.456,-755.688)">
+ <title>Sheet.68</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(1016.87,-757.728)">
+ <title>Sheet.69</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(1060.04,-738.651)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1167.81" width="193.55" height="40.3845"/>
+ <path d="M193.55 1147.62 L0 1147.62 L0 1188 L193.55 1188 L193.55 1147.62" class="st3"/>
+ <text x="31.03" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(190.02,-464.886)">
+ <title>Sheet.71</title>
+ <path d="M0 1151.91 C0 1148.19 3.88 1145.17 8.66 1145.17 L43.29 1145.17 C48.13 1145.17 51.95 1148.19 51.95 1151.91 L51.95
+ 1181.26 C51.95 1185.03 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1185.03 0 1181.26 L0 1151.91 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(259.003,-466.895)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. non critical section or quiescent state.</tspan></text> </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(190.02,-389.169)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="25.9746" cy="1166.58" width="51.95" height="42.8314"/>
+ <path d="M0 1152.31 C0 1148.39 1.43 1145.17 3.16 1145.17 L48.79 1145.17 C50.55 1145.17 51.95 1148.39 51.95 1152.31 L51.95
+ 1180.86 C51.95 1184.83 50.55 1188 48.79 1188 L3.16 1188 C1.43 1188 0 1184.83 0 1180.86 L0 1152.31 Z"
+ class="st2"/>
+ <text x="12.9" y="1173.78" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(259.003,-388.777)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(289.017,-301.151)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="332.491" cy="1160.47" width="664.99" height="55.0626"/>
+ <path d="M664.98 1132.94 L0 1132.94 L0 1188 L664.98 1188 L664.98 1132.94" class="st3"/>
+ <text x="0" y="1153.27" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is removed <tspan
+ x="0" dy="1.2em" class="st9">using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(177.543,-315.596)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(288,-239.327)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1175.01" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted entry.</text> </g>
+ <g id="shape78-217" v:mID="78" v:groupContext="shape" transform="translate(177.543,-240.744)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="34.3786" cy="1166.58" width="68.76" height="42.8314"/>
+ <path d="M68.76 1145.17 L0 1145.17 L0 1188 L68.76 1188 L68.76 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-221" v:mID="79" v:groupContext="shape" transform="translate(289.228,-163.612)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1160.61" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during which <tspan
+ x="0" dy="1.2em" class="st9">memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-226" v:mID="80" v:groupContext="shape" transform="translate(187.999,-162)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="39.5985" cy="1166.58" width="79.2" height="42.8314"/>
+ <path d="M79.2 1145.17 L0 1145.17 L0 1188 L79.2 1188 L79.2 1145.17" class="st3"/>
+ <text x="0" y="1158.96" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-231" v:mID="83" v:groupContext="shape" transform="translate(572.146,-1080.07)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L132.49 1188" class="st16"/>
+ </g>
+ <g id="shape84-240" v:mID="84" v:groupContext="shape" transform="translate(599.196,-1042.14)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L104.28 1188" class="st18"/>
+ </g>
+ <g id="shape85-249" v:mID="85" v:groupContext="shape" transform="translate(980.637,-595.338)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L92.16 1188" class="st20"/>
+ </g>
+ <g id="shape86-254" v:mID="86" v:groupContext="shape" transform="translate(444.835,-603.428)">
+ <title>Sheet.86</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape87-257" v:mID="87" v:groupContext="shape" transform="translate(444.835,-637.489)">
+ <title>Sheet.87</title>
+ <path d="M0 1188 L84.43 1186.61 L154.36 1153.31" class="st7"/>
+ </g>
+ <g id="shape88-260" v:mID="88" v:groupContext="shape" transform="translate(241.369,-607.028)">
+ <title>Sheet.88</title>
+ <desc>Remove reference to entry2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry2</tspan></text> </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 95f5e7964..17df2c563 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -56,6 +56,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..55d44e15d
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,185 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-less data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for ex: real-time applications).
+
+In the following sections, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or anything that is representative of
+memory, for ex: an index into a free element array.
+
+Since these data structures are lock-less, writers and readers access
+them concurrently. Hence, while removing an element from a data
+structure, a writer cannot return the associated memory to the
+allocator without knowing that no reader is still referencing that
+element/memory. Hence, the operation of removing an element must be
+separated into two steps:
+
+Delete: in this step, the writer removes the reference to the element from
+the data structure but does not return the associated memory to the
+allocator. This will ensure that new readers will not get a reference to
+the removed element. Removing the reference is an atomic operation.
+
+Free(Reclaim): in this step, the writer returns the memory to the
+memory allocator, only after knowing that all the readers have stopped
+referencing the deleted element.
+
+This library helps the writer determine when it is safe to free the
+memory.
+
+This library makes use of thread Quiescent State (QS).
+
+What is Quiescent State
+-----------------------
+Quiescent State can be defined as 'any point in the thread execution where the
+thread does not hold a reference to shared memory'. It is up to the application
+to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. When it is
+accessing D1, if the writer has to remove an element from D1, the
+writer cannot free the memory associated with that element immediately.
+The writer can return the memory to the allocator only after the reader
+stops referencing D1. In other words, reader thread RT1 has to enter
+a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, the writer has to
+wait until reader thread 2 enters a quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+a quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened, so reader thread 3 cannot have a reference to the
+deleted entry.
+
+Note that the critical section for D2 is a quiescent state
+for D1, i.e. for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state.
+
+Since memory is not freed immediately, there might be a need for
+provisioning of additional memory, depending on the application requirements.
+
+Factors affecting RCU mechanism
+---------------------------------
+
+It is important to make sure that this library keeps the overhead of
+identifying the end of the grace period, and the subsequent freeing of
+memory, to a minimum. The following paragraphs explain how the grace
+period and critical sections affect this overhead.
+
+The writer has to poll the readers to identify the end of the grace
+period. Polling introduces memory accesses and wastes CPU cycles. The
+memory is not available for reuse during the grace period. Longer grace
+periods exacerbate these conditions.
+
+The duration of the grace period is proportional to the length of the
+critical sections and the number of reader threads. Keeping the critical
+sections smaller will keep the grace period smaller. However, smaller
+critical sections require additional CPU cycles (due to additional
+reporting) in the readers.
+
+Hence, we need the characteristics of a small grace period and a large
+critical section. This library addresses this by allowing the writer to
+do other work without having to block until the readers report their
+quiescent state.
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the start and end of the ``while(1)`` loop (where no
+references to shared data structures are kept) act as perfect quiescent
+states. This will combine all the shared data structure accesses into a
+single, large critical section, which helps keep the reader-side overhead
+to a minimum.
+
+DPDK supports a pipeline model of packet processing and service cores.
+In these use cases, a given data structure may not be used by all the
+workers in the application. The writer does not have to wait for all
+the workers to report their quiescent state. To provide the required
+flexibility, this library has the concept of a QS variable. The application
+can create one QS variable per data structure to help it track the
+end of the grace period for each data structure. This helps keep the grace
+period to a minimum.
+
+How to use this library
+-----------------------
+
+The application must allocate memory and initialize a QS variable.
+
+The application can call ``rte_rcu_qsbr_get_memsize`` to calculate the
+size of memory to allocate. This API takes the maximum number of reader
+threads that will use this QS variable as a parameter. Currently, a
+maximum of 1024 threads is supported.
+
+Further, the application can initialize a QS variable using the API
+``rte_rcu_qsbr_init``.
+
+Each reader thread is assumed to have a unique thread ID. Currently, the
+management of the thread ID (for ex: allocation/free) is left to the
+application. The thread ID should be in the range of 0 to one less than
+the maximum number of threads provided while creating the QS variable.
+The application could also use lcore_id as the thread ID where applicable.
+
+``rte_rcu_qsbr_thread_register`` API will register a reader thread
+to report its quiescent state. This can be called from a reader thread.
+A control plane thread can also call this on behalf of a reader thread.
+The reader thread must call ``rte_rcu_qsbr_thread_online`` API to start
+reporting its quiescent state.
+
+Some of the use cases might require the reader threads to make
+blocking API calls (for ex: while using eventdev APIs). The writer thread
+should not wait for such reader threads to enter quiescent state.
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` API, before calling
+blocking APIs. It can call ``rte_rcu_qsbr_thread_online`` API once the blocking
+API call returns.
+
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the API ``rte_rcu_qsbr_start``. It is possible for multiple
+writer threads to query the quiescent state status simultaneously. Hence,
+``rte_rcu_qsbr_start`` returns a token to each caller.
+
+The writer thread must call the ``rte_rcu_qsbr_check`` API with the token to
+get the current quiescent state status. An option to block until all the
+reader threads enter the quiescent state is provided. If this API indicates
+that all the reader threads have entered the quiescent state, the application
+can free the deleted entry.
+
+The APIs ``rte_rcu_qsbr_start`` and ``rte_rcu_qsbr_check`` are lock-free.
+Hence, they can be called concurrently from multiple writers, even while
+running as worker threads.
+
+The separation of triggering the reporting from querying the status provides
+the writer threads flexibility to do useful work instead of blocking for the
+reader threads to enter the quiescent state or go offline. This reduces the
+memory accesses due to continuous polling for the status.
+
+``rte_rcu_qsbr_synchronize`` API combines the functionality of
+``rte_rcu_qsbr_start`` and blocking ``rte_rcu_qsbr_check`` into a single API.
+This API triggers the reader threads to report their quiescent state and
+polls till all the readers enter the quiescent state or go offline. This
+API does not allow the writer to do useful work while waiting and
+introduces additional memory accesses due to continuous polling.
+
+The reader thread must call the ``rte_rcu_qsbr_thread_offline`` and
+``rte_rcu_qsbr_thread_unregister`` APIs to stop reporting its quiescent
+state. The ``rte_rcu_qsbr_check`` API will then no longer wait for this
+reader thread to report its quiescent state.
+
+The reader threads should call the ``rte_rcu_qsbr_quiescent`` API to indicate
+that they have entered a quiescent state. This API checks if a writer has
+triggered a quiescent state query and updates the state accordingly.
+
+The ``rte_rcu_qsbr_lock`` and ``rte_rcu_qsbr_unlock`` APIs are empty
+functions. However, when ``CONFIG_RTE_LIBRTE_RCU_DEBUG`` is enabled, these
+APIs aid in debugging issues. One can mark the access to shared data
+structures on the reader side using these APIs. The ``rte_rcu_qsbr_quiescent``
+API will check that all such locks have been unlocked.
--
2.17.1
* Re: [dpdk-dev] [PATCH v7 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 1/3] rcu: " Honnappa Nagarahalli
2019-04-23 4:31 ` Honnappa Nagarahalli
@ 2019-04-23 8:10 ` Paul E. McKenney
2019-04-23 8:10 ` Paul E. McKenney
2019-04-23 21:23 ` Honnappa Nagarahalli
2019-04-24 10:03 ` Ruifeng Wang (Arm Technology China)
2 siblings, 2 replies; 260+ messages in thread
From: Paul E. McKenney @ 2019-04-23 8:10 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: konstantin.ananyev, stephen, marko.kovacevic, dev, gavin.hu,
dharmik.thakkar, malvika.gupta
On Mon, Apr 22, 2019 at 11:31:28PM -0500, Honnappa Nagarahalli wrote:
> Add RCU library supporting quiescent state based memory reclamation method.
> This library helps identify the quiescent state of the reader threads so
> that the writers can free the memory associated with the lock less data
> structures.
>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Steve Capper <steve.capper@arm.com>
> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Much better!
Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
> ---
> MAINTAINERS | 5 +
> config/common_base | 6 +
> lib/Makefile | 2 +
> lib/librte_rcu/Makefile | 23 ++
> lib/librte_rcu/meson.build | 7 +
> lib/librte_rcu/rte_rcu_qsbr.c | 268 ++++++++++++
> lib/librte_rcu/rte_rcu_qsbr.h | 639 +++++++++++++++++++++++++++++
> lib/librte_rcu/rte_rcu_version.map | 12 +
> lib/meson.build | 2 +-
> mk/rte.app.mk | 1 +
> 10 files changed, 964 insertions(+), 1 deletion(-)
> create mode 100644 lib/librte_rcu/Makefile
> create mode 100644 lib/librte_rcu/meson.build
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
> create mode 100644 lib/librte_rcu/rte_rcu_version.map
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index a08583471..ae54f37db 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1274,6 +1274,11 @@ F: examples/bpf/
> F: app/test/test_bpf.c
> F: doc/guides/prog_guide/bpf_lib.rst
>
> +RCU - EXPERIMENTAL
> +M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> +F: lib/librte_rcu/
> +F: doc/guides/prog_guide/rcu_lib.rst
> +
>
> Test Applications
> -----------------
> diff --git a/config/common_base b/config/common_base
> index 7fb0dedb6..f50d26c30 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -834,6 +834,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
> #
> CONFIG_RTE_LIBRTE_TELEMETRY=n
>
> +#
> +# Compile librte_rcu
> +#
> +CONFIG_RTE_LIBRTE_RCU=y
> +CONFIG_RTE_LIBRTE_RCU_DEBUG=n
> +
> #
> # Compile librte_lpm
> #
> diff --git a/lib/Makefile b/lib/Makefile
> index 26021d0c0..791e0d991 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
> DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
> DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
> DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
> +DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
> +DEPDIRS-librte_rcu := librte_eal
>
> ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
> diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
> new file mode 100644
> index 000000000..6aa677bd1
> --- /dev/null
> +++ b/lib/librte_rcu/Makefile
> @@ -0,0 +1,23 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Arm Limited
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +# library name
> +LIB = librte_rcu.a
> +
> +CFLAGS += -DALLOW_EXPERIMENTAL_API
> +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
> +LDLIBS += -lrte_eal
> +
> +EXPORT_MAP := rte_rcu_version.map
> +
> +LIBABIVER := 1
> +
> +# all source are stored in SRCS-y
> +SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
> +
> +# install includes
> +SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
> new file mode 100644
> index 000000000..0c2d5a2e0
> --- /dev/null
> +++ b/lib/librte_rcu/meson.build
> @@ -0,0 +1,7 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Arm Limited
> +
> +allow_experimental_apis = true
> +
> +sources = files('rte_rcu_qsbr.c')
> +headers = files('rte_rcu_qsbr.h')
> diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
> new file mode 100644
> index 000000000..50ef0ba9a
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_qsbr.c
> @@ -0,0 +1,268 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + *
> + * Copyright (c) 2018 Arm Limited
> + */
> +
> +#include <stdio.h>
> +#include <string.h>
> +#include <stdint.h>
> +#include <errno.h>
> +
> +#include <rte_common.h>
> +#include <rte_log.h>
> +#include <rte_memory.h>
> +#include <rte_malloc.h>
> +#include <rte_eal.h>
> +#include <rte_eal_memconfig.h>
> +#include <rte_atomic.h>
> +#include <rte_per_lcore.h>
> +#include <rte_lcore.h>
> +#include <rte_errno.h>
> +
> +#include "rte_rcu_qsbr.h"
> +
> +/* Get the memory size of QSBR variable */
> +size_t __rte_experimental
> +rte_rcu_qsbr_get_memsize(uint32_t max_threads)
> +{
> + size_t sz;
> +
> + if (max_threads == 0) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid max_threads %u\n",
> + __func__, max_threads);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + sz = sizeof(struct rte_rcu_qsbr);
> +
> + /* Add the size of quiescent state counter array */
> + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> +
> + /* Add the size of the registered thread ID bitmap array */
> + sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
> +
> + return sz;
> +}
> +
> +/* Initialize a quiescent state variable */
> +int __rte_experimental
> +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
> +{
> + size_t sz;
> +
> + if (v == NULL) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + sz = rte_rcu_qsbr_get_memsize(max_threads);
> + if (sz == 1)
> + return 1;
> +
> + /* Set all the threads to offline */
> + memset(v, 0, sz);
> + v->max_threads = max_threads;
> + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> + v->token = RTE_QSBR_CNT_INIT;
> +
> + return 0;
> +}
> +
> +/* Register a reader thread to report its quiescent state
> + * on a QS variable.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id, success;
> + uint64_t old_bmap, new_bmap;
> +
> + if (v == NULL || thread_id >= v->max_threads) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
> + v->qsbr_cnt[thread_id].lock_cnt);
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Make sure that the counter for registered threads does not
> + * go out of sync. Hence, additional checks are required.
> + */
> + /* Check if the thread is already registered */
> + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_RELAXED);
> + if (old_bmap & 1UL << id)
> + return 0;
> +
> + do {
> + new_bmap = old_bmap | (1UL << id);
> + success = __atomic_compare_exchange(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + &old_bmap, &new_bmap, 0,
> + __ATOMIC_RELEASE, __ATOMIC_RELAXED);
> +
> + if (success)
> + __atomic_fetch_add(&v->num_threads,
> + 1, __ATOMIC_RELAXED);
> + else if (old_bmap & (1UL << id))
> + /* Someone else registered this thread.
> + * Counter should not be incremented.
> + */
> + return 0;
> + } while (success == 0);
> +
> + return 0;
> +}
> +
> +/* Remove a reader thread, from the list of threads reporting their
> + * quiescent state on a QS variable.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id, success;
> + uint64_t old_bmap, new_bmap;
> +
> + if (v == NULL || thread_id >= v->max_threads) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
> + v->qsbr_cnt[thread_id].lock_cnt);
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Make sure that the counter for registered threads does not
> + * go out of sync. Hence, additional checks are required.
> + */
> + /* Check if the thread is already unregistered */
> + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_RELAXED);
> + if (!(old_bmap & (1UL << id)))
> + return 0;
> +
> + do {
> + new_bmap = old_bmap & ~(1UL << id);
> + /* Make sure any loads of the shared data structure are
> + * completed before removal of the thread from the list of
> + * reporting threads.
> + */
> + success = __atomic_compare_exchange(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + &old_bmap, &new_bmap, 0,
> + __ATOMIC_RELEASE, __ATOMIC_RELAXED);
> +
> + if (success)
> + __atomic_fetch_sub(&v->num_threads,
> + 1, __ATOMIC_RELAXED);
> + else if (!(old_bmap & (1UL << id)))
> + /* Someone else unregistered this thread.
> + * Counter should not be decremented.
> + */
> + return 0;
> + } while (success == 0);
> +
> + return 0;
> +}
> +
> +/* Wait till the reader threads have entered quiescent state. */
> +void __rte_experimental
> +rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL);
> +
> + t = rte_rcu_qsbr_start(v);
> +
> + /* If the current thread has readside critical section,
> + * update its quiescent state status.
> + */
> + if (thread_id != RTE_QSBR_THRID_INVALID)
> + rte_rcu_qsbr_quiescent(v, thread_id);
> +
> + /* Wait for other readers to enter quiescent state */
> + rte_rcu_qsbr_check(v, t, true);
> +}
> +
> +/* Dump the details of a single quiescent state variable to a file. */
> +int __rte_experimental
> +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
> +{
> + uint64_t bmap;
> + uint32_t i, t, id;
> +
> + if (v == NULL || f == NULL) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + fprintf(f, "\nQuiescent State Variable @%p\n", v);
> +
> + fprintf(f, " QS variable memory size = %lu\n",
> + rte_rcu_qsbr_get_memsize(v->max_threads));
> + fprintf(f, " Given # max threads = %u\n", v->max_threads);
> + fprintf(f, " Current # threads = %u\n", v->num_threads);
> +
> + fprintf(f, " Registered thread ID mask = 0x");
> + for (i = 0; i < v->num_elems; i++)
> + fprintf(f, "%lx", __atomic_load_n(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_ACQUIRE));
> + fprintf(f, "\n");
> +
> + fprintf(f, " Token = %lu\n",
> + __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
> +
> + fprintf(f, "Quiescent State Counts for readers:\n");
> + for (i = 0; i < v->num_elems; i++) {
> + bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_ACQUIRE);
> + id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> + while (bmap) {
> + t = __builtin_ctzl(bmap);
> + fprintf(f, "thread ID = %d, count = %lu, lock count = %u\n",
> + id + t,
> + __atomic_load_n(
> + &v->qsbr_cnt[id + t].cnt,
> + __ATOMIC_RELAXED),
> + __atomic_load_n(
> + &v->qsbr_cnt[id + t].lock_cnt,
> + __ATOMIC_RELAXED));
> + bmap &= ~(1UL << t);
> + }
> + }
> +
> + return 0;
> +}
> +
> +int rcu_log_type;
> +
> +RTE_INIT(rte_rcu_register)
> +{
> + rcu_log_type = rte_log_register("lib.rcu");
> + if (rcu_log_type >= 0)
> + rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
> +}
> diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
> new file mode 100644
> index 000000000..58fb95ed0
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> @@ -0,0 +1,639 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (c) 2018 Arm Limited
> + */
> +
> +#ifndef _RTE_RCU_QSBR_H_
> +#define _RTE_RCU_QSBR_H_
> +
> +/**
> + * @file
> + * RTE Quiescent State Based Reclamation (QSBR)
> + *
> + * Quiescent State (QS) is any point in the thread execution
> + * where the thread does not hold a reference to a data structure
> + * in shared memory. While using lock-less data structures, the writer
> + * can safely free memory once all the reader threads have entered
> + * quiescent state.
> + *
> + * This library provides the ability for the readers to report quiescent
> + * state and for the writers to identify when all the readers have
> + * entered quiescent state.
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <errno.h>
> +#include <rte_common.h>
> +#include <rte_memory.h>
> +#include <rte_lcore.h>
> +#include <rte_debug.h>
> +#include <rte_atomic.h>
> +
> +extern int rcu_log_type;
> +
> +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
> +#define RCU_DP_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> + "%s(): " fmt "\n", __func__, ## args)
> +#else
> +#define RCU_DP_LOG(level, fmt, args...)
> +#endif
> +
> +#if defined(RTE_LIBRTE_RCU_DEBUG)
> +#define RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do {\
> + if (v->qsbr_cnt[thread_id].lock_cnt) \
> + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> + "%s(): " fmt "\n", __func__, ## args); \
> +} while (0)
> +#else
> +#define RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...)
> +#endif
> +
> +/* Registered thread IDs are stored as a bitmap in an array of 64b elements.
> + * A given thread ID needs to be converted to an index into the array and
> + * to a bit position within that array element.
> + */
> +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> +#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
> + RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
> +#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
> + ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
> +#define RTE_QSBR_THRID_INDEX_SHIFT 6
> +#define RTE_QSBR_THRID_MASK 0x3f
> +#define RTE_QSBR_THRID_INVALID 0xffffffff
> +
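[Editor's note: as a concrete illustration of the macros above, a thread ID splits into an array index (upper bits) and a bit position (lower 6 bits). The sketch below mirrors RTE_QSBR_THRID_INDEX_SHIFT and RTE_QSBR_THRID_MASK with local constants and hypothetical helper names; it is not part of the patch.]

```c
#include <assert.h>

/* Local mirrors of the patch's constants: 64 thread IDs per
 * uint64_t bitmap element, so shift by 6 and mask with 0x3f. */
#define THRID_INDEX_SHIFT 6
#define THRID_MASK 0x3fU

/* Index of the array element holding this thread's bit */
static unsigned int thrid_elem(unsigned int thread_id)
{
	return thread_id >> THRID_INDEX_SHIFT;
}

/* Bit position of this thread within that element */
static unsigned int thrid_bit(unsigned int thread_id)
{
	return thread_id & THRID_MASK;
}
```

For example, thread ID 77 lands in element 1 at bit 13 (77 = 64 + 13).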
> +/* Worker thread counter */
> +struct rte_rcu_qsbr_cnt {
> + uint64_t cnt;
> + /**< Quiescent state counter. Value 0 indicates the thread is offline.
> + * A 64b counter is used to avoid adding more code to address
> + * counter overflow. Changing this to 32b would require additional
> + * changes to various APIs.
> + */
> + uint32_t lock_cnt;
> + /**< Lock counter. Used when CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled */
> +} __rte_cache_aligned;
> +
> +#define RTE_QSBR_CNT_THR_OFFLINE 0
> +#define RTE_QSBR_CNT_INIT 1
> +
> +/* RTE Quiescent State variable structure.
> + * This structure has two elements that vary in size based on the
> + * 'max_threads' parameter.
> + * 1) Quiescent state counter array
> + * 2) Register thread ID array
> + */
> +struct rte_rcu_qsbr {
> + uint64_t token __rte_cache_aligned;
> + /**< Counter to allow for multiple concurrent quiescent state queries */
> +
> + uint32_t num_elems __rte_cache_aligned;
> + /**< Number of elements in the thread ID array */
> + uint32_t num_threads;
> + /**< Number of threads currently using this QS variable */
> + uint32_t max_threads;
> + /**< Maximum number of threads using this QS variable */
> +
> + struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
> + /**< Quiescent state counter array of 'max_threads' elements */
> +
> + /**< Registered thread IDs are stored in a bitmap array,
> + * after the quiescent state counter array.
> + */
> +} __rte_cache_aligned;
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Return the size of the memory occupied by a Quiescent State variable.
> + *
> + * @param max_threads
> + * Maximum number of threads reporting quiescent state on this variable.
> + * @return
> + * On success - size of memory in bytes required for this QS variable.
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - max_threads is 0
> + */
> +size_t __rte_experimental
> +rte_rcu_qsbr_get_memsize(uint32_t max_threads);
> +
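[Editor's note: the size returned grows with max_threads in two parts: a per-thread counter array and a registered-thread bitmap, the latter rounded up to whole 64-bit words and then to a cache line. The helper below reproduces only the bitmap term (RTE_QSBR_THRID_ARRAY_SIZE), assuming a 64-byte cache line; it is a sketch for intuition, not the library's code.]

```c
#include <stddef.h>

#define CACHE_LINE_SIZE 64 /* assumed value of RTE_CACHE_LINE_SIZE */

/* Bytes for the registered-thread bitmap: one bit per thread,
 * rounded up to whole 8-byte (uint64_t) words, then to a cache line. */
static size_t thrid_array_size(unsigned int max_threads)
{
	size_t words = (max_threads + 63) / 64;
	size_t bytes = words * 8;

	return (bytes + CACHE_LINE_SIZE - 1) & ~(size_t)(CACHE_LINE_SIZE - 1);
}
```

So anything up to 512 threads fits in one cache line of bitmap; 513 threads needs 9 words (72 bytes), which rounds up to two cache lines.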
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Initialize a Quiescent State (QS) variable.
> + *
> + * @param v
> + * QS variable
> + * @param max_threads
> + * Maximum number of threads reporting quiescent state on this variable.
> + * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
> + * @return
> + * On success - 0
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - max_threads is 0 or 'v' is NULL.
> + *
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Register a reader thread to report its quiescent state
> + * on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + * Any reader thread that wants to report its quiescent state must
> + * call this API. This can be called during initialization or as part
> + * of the packet processing loop.
> + *
> + * Note that rte_rcu_qsbr_thread_online must be called before the
> + * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will report its quiescent state on
> + * the QS variable. thread_id is a value between 0 and (max_threads - 1).
> + * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Remove a reader thread from the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * This API can be called from the reader threads during shutdown.
> + * Ongoing quiescent state queries will stop waiting for the status from this
> + * unregistered reader thread.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will stop reporting its quiescent
> + * state on the QS variable.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Add a registered reader thread, to the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * Any registered reader thread that wants to report its quiescent state must
> + * call this API before calling rte_rcu_qsbr_quiescent. This can be called
> + * during initialization or as part of the packet processing loop.
> + *
> + * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
> + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> + * API does not wait indefinitely for the reader thread to update its QS.
> + *
> + * The reader thread must call the rte_rcu_qsbr_thread_online API, after the
> + * blocking function call returns, to ensure that the rte_rcu_qsbr_check API
> + * waits for the reader thread to update its quiescent state.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will report its quiescent state on
> + * the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
> + v->qsbr_cnt[thread_id].lock_cnt);
> +
> + /* Copy the current value of token.
> + * The fence at the end of the function will ensure that
> + * the following will not move down after the load of any shared
> + * data structure.
> + */
> + t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
> +
> + /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> + * 'cnt' (64b) is accessed atomically.
> + */
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + t, __ATOMIC_RELAXED);
> +
> + /* The subsequent load of the data structure should not
> + * move above the store. Hence a store-load barrier
> + * is required.
> + * If the load of the data structure moves above the store,
> + * writer might not see that the reader is online, even though
> + * the reader is referencing the shared data structure.
> + */
> +#ifdef RTE_ARCH_X86_64
> + /* rte_smp_mb() for x86 is lighter */
> + rte_smp_mb();
> +#else
> + __atomic_thread_fence(__ATOMIC_SEQ_CST);
> +#endif
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Remove a registered reader thread from the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * This can be called during initialization or as part of the packet
> + * processing loop.
> + *
> + * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
> + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> + * API does not wait indefinitely for the reader thread to update its QS.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * rte_rcu_qsbr_check API will not wait for the reader thread with
> + * this thread ID to report its quiescent state on the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
> + v->qsbr_cnt[thread_id].lock_cnt);
> +
> + /* The reader can go offline only after the load of the
> + * data structure is completed, i.e. any load of the
> + * data structure cannot move after this store.
> + */
> +
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Acquire a lock for accessing a shared data structure.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * This API is provided to aid debugging. This should be called before
> + * accessing a shared data structure.
> + *
> + * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, a lock counter is incremented.
> + * Similarly, rte_rcu_qsbr_unlock will decrement the counter. The
> + * rte_rcu_qsbr_check API will verify that this counter is 0.
> + *
> + * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread id
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_lock(struct rte_rcu_qsbr *v __rte_unused,
> + unsigned int thread_id __rte_unused)
> +{
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> +#if defined(RTE_LIBRTE_RCU_DEBUG)
> + /* Increment the lock counter */
> + __atomic_fetch_add(&v->qsbr_cnt[thread_id].lock_cnt,
> + 1, __ATOMIC_ACQUIRE);
> +#endif
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Release a lock after accessing a shared data structure.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * This API is provided to aid debugging. This should be called after
> + * accessing a shared data structure.
> + *
> + * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, rte_rcu_qsbr_unlock will
> + * decrement a lock counter. rte_rcu_qsbr_check API will verify that this
> + * counter is 0.
> + *
> + * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread id
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_unlock(struct rte_rcu_qsbr *v __rte_unused,
> + unsigned int thread_id __rte_unused)
> +{
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> +#if defined(RTE_LIBRTE_RCU_DEBUG)
> + /* Decrement the lock counter */
> + __atomic_fetch_sub(&v->qsbr_cnt[thread_id].lock_cnt,
> + 1, __ATOMIC_RELEASE);
> +
> + RCU_IS_LOCK_CNT_ZERO(v, thread_id, WARNING,
> + "Lock counter %u. Nested locks?\n",
> + v->qsbr_cnt[thread_id].lock_cnt);
> +#endif
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Ask the reader threads to report the quiescent state
> + * status.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe and can be called from worker threads.
> + *
> + * @param v
> + * QS variable
> + * @return
> + * - This is the token for this call of the API. This should be
> + * passed to rte_rcu_qsbr_check API.
> + */
> +static __rte_always_inline uint64_t __rte_experimental
> +rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL);
> +
> + /* Release the changes to the shared data structure.
> + * This store release will ensure that changes to any data
> + * structure are visible to the workers before the token
> + * update is visible.
> + */
> + t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
> +
> + return t;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Update quiescent state for a reader thread.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * All the reader threads registered to report their quiescent state
> + * on the QS variable must call this API.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Update the quiescent state for the reader with this thread ID.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
> + v->qsbr_cnt[thread_id].lock_cnt);
> +
> + /* Acquire the changes to the shared data structure released
> + * by rte_rcu_qsbr_start.
> + * Later loads of the shared data structure should not move
> + * above this load. Hence, use load-acquire.
> + */
> + t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
> +
> + /* Inform the writer that updates are visible to this reader.
> + * Prior loads of the shared data structure should not move
> + * beyond this store. Hence use store-release.
> + */
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + t, __ATOMIC_RELEASE);
> +
> + RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
> + __func__, t, thread_id);
> +}
> +
> +/* Check the quiescent state counter for registered threads only, assuming
> + * that not all threads have registered.
> + */
> +static __rte_always_inline int
> +__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> +{
> + uint32_t i, j, id;
> + uint64_t bmap;
> + uint64_t c;
> + uint64_t *reg_thread_id;
> +
> + for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
> + i < v->num_elems;
> + i++, reg_thread_id++) {
> + /* Load the current registered thread bit map before
> + * loading the reader thread quiescent state counters.
> + */
> + bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
> + id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + while (bmap) {
> + j = __builtin_ctzl(bmap);
> + RCU_DP_LOG(DEBUG,
> + "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
> + __func__, t, wait, bmap, id + j);
> + c = __atomic_load_n(
> + &v->qsbr_cnt[id + j].cnt,
> + __ATOMIC_ACQUIRE);
> + RCU_DP_LOG(DEBUG,
> + "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> + __func__, t, wait, c, id+j);
> + /* Counter is not checked for wrap-around condition
> + * as it is a 64b counter.
> + */
> + if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
> + /* This thread is not in quiescent state */
> + if (!wait)
> + return 0;
> +
> + rte_pause();
> + /* This thread might have unregistered.
> + * Re-read the bitmap.
> + */
> + bmap = __atomic_load_n(reg_thread_id,
> + __ATOMIC_ACQUIRE);
> +
> + continue;
> + }
> +
> + bmap &= ~(1UL << j);
> + }
> + }
> +
> + return 1;
> +}
> +
> +/* Check the quiescent state counter for all threads, assuming that
> + * all the threads have registered.
> + */
> +static __rte_always_inline int
> +__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> +{
> + uint32_t i;
> + struct rte_rcu_qsbr_cnt *cnt;
> + uint64_t c;
> +
> + for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
> + RCU_DP_LOG(DEBUG,
> + "%s: check: token = %lu, wait = %d, Thread ID = %d",
> + __func__, t, wait, i);
> + while (1) {
> + c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
> + RCU_DP_LOG(DEBUG,
> + "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> + __func__, t, wait, c, i);
> + /* Counter is not checked for wrap-around condition
> + * as it is a 64b counter.
> + */
> + if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
> + break;
> +
> + /* This thread is not in quiescent state */
> + if (!wait)
> + return 0;
> +
> + rte_pause();
> + }
> + }
> +
> + return 1;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Checks if all the reader threads have entered the quiescent state
> + * referenced by token.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe and can be called from the worker threads as well.
> + *
> + * If this API is called with 'wait' set to true, the following
> + * factors must be considered:
> + *
> + * 1) If the calling thread is also reporting the status on the
> + * same QS variable, it must update its quiescent state status before
> + * calling this API.
> + *
> + * 2) In addition, while calling from multiple threads, only
> + * one of those threads can be reporting the quiescent state status
> + * on a given QS variable.
> + *
> + * @param v
> + * QS variable
> + * @param t
> + * Token returned by rte_rcu_qsbr_start API
> + * @param wait
> + * If true, block till all the reader threads have completed entering
> + * the quiescent state referenced by token 't'.
> + * @return
> + * - 0 if all reader threads have NOT passed through specified number
> + * of quiescent states.
> + * - 1 if all reader threads have passed through specified number
> + * of quiescent states.
> + */
> +static __rte_always_inline int __rte_experimental
> +rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> +{
> + RTE_ASSERT(v != NULL);
> +
> + if (likely(v->num_threads == v->max_threads))
> + return __rcu_qsbr_check_all(v, t, wait);
> + else
> + return __rcu_qsbr_check_selective(v, t, wait);
> +}
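[Editor's note: ignoring atomics and multiple readers, the token protocol that rte_rcu_qsbr_start/quiescent/check implement reduces to plain counter comparisons. The single-threaded model below is an illustration under that simplification, not the library code; it shows why a reader whose counter has reached the token, or an offline reader (counter 0), does not block the writer.]

```c
#include <stdbool.h>
#include <stdint.h>

#define CNT_THR_OFFLINE 0 /* mirrors RTE_QSBR_CNT_THR_OFFLINE */
#define CNT_INIT 1        /* mirrors RTE_QSBR_CNT_INIT */

struct qsbr_model {
	uint64_t token; /* writer-side grace-period counter */
	uint64_t cnt;   /* one reader's reported state */
};

/* rte_rcu_qsbr_start: bump the token and hand it to the writer */
static uint64_t model_start(struct qsbr_model *v)
{
	return ++v->token;
}

/* rte_rcu_qsbr_quiescent: the reader copies the current token */
static void model_quiescent(struct qsbr_model *v)
{
	v->cnt = v->token;
}

/* rte_rcu_qsbr_check: offline readers and readers that reported a
 * state at or past 't' do not block the writer. */
static bool model_check(const struct qsbr_model *v, uint64_t t)
{
	return v->cnt == CNT_THR_OFFLINE || v->cnt >= t;
}
```

A writer would call model_start after deleting an element, then free it only once model_check returns true for every reader.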
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Wait till the reader threads have entered quiescent state.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
> + * rte_rcu_qsbr_check APIs.
> + *
> + * If this API is called from multiple threads, only one of
> + * those threads can be reporting the quiescent state status on a
> + * given QS variable.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Thread ID of the caller if it is registered to report quiescent state
> + * on this QS variable (i.e. the calling thread is also part of the
> + * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
> + */
> +void __rte_experimental
> +rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Dump the details of a single QS variable to a file.
> + *
> + * It is NOT multi-thread safe.
> + *
> + * @param f
> + * A pointer to a file for output
> + * @param v
> + * QS variable
> + * @return
> + * On success - 0
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - NULL parameters are passed
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_RCU_QSBR_H_ */
> diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
> new file mode 100644
> index 000000000..5ea8524db
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_version.map
> @@ -0,0 +1,12 @@
> +EXPERIMENTAL {
> + global:
> +
> + rte_rcu_qsbr_get_memsize;
> + rte_rcu_qsbr_init;
> + rte_rcu_qsbr_thread_register;
> + rte_rcu_qsbr_thread_unregister;
> + rte_rcu_qsbr_synchronize;
> + rte_rcu_qsbr_dump;
> +
> + local: *;
> +};
> diff --git a/lib/meson.build b/lib/meson.build
> index 595314d7d..67be10659 100644
> --- a/lib/meson.build
> +++ b/lib/meson.build
> @@ -22,7 +22,7 @@ libraries = [
> 'gro', 'gso', 'ip_frag', 'jobstats',
> 'kni', 'latencystats', 'lpm', 'member',
> 'power', 'pdump', 'rawdev',
> - 'reorder', 'sched', 'security', 'stack', 'vhost',
> + 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
> #ipsec lib depends on crypto and security
> 'ipsec',
> # add pkt framework libs which use other libs from above
> diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> index abea16d48..ebe6d48a7 100644
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
> _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
> _LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
> _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
>
> ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> _LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
> --
> 2.17.1
>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v7 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-23 8:10 ` Paul E. McKenney
@ 2019-04-23 8:10 ` Paul E. McKenney
2019-04-23 21:23 ` Honnappa Nagarahalli
1 sibling, 0 replies; 260+ messages in thread
From: Paul E. McKenney @ 2019-04-23 8:10 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: konstantin.ananyev, stephen, marko.kovacevic, dev, gavin.hu,
dharmik.thakkar, malvika.gupta
On Mon, Apr 22, 2019 at 11:31:28PM -0500, Honnappa Nagarahalli wrote:
> Add RCU library supporting quiescent state based memory reclamation method.
> This library helps identify the quiescent state of the reader threads so
> that the writers can free the memory associated with the lock less data
> structures.
>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Steve Capper <steve.capper@arm.com>
> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Much better!
Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
> ---
> MAINTAINERS | 5 +
> config/common_base | 6 +
> lib/Makefile | 2 +
> lib/librte_rcu/Makefile | 23 ++
> lib/librte_rcu/meson.build | 7 +
> lib/librte_rcu/rte_rcu_qsbr.c | 268 ++++++++++++
> lib/librte_rcu/rte_rcu_qsbr.h | 639 +++++++++++++++++++++++++++++
> lib/librte_rcu/rte_rcu_version.map | 12 +
> lib/meson.build | 2 +-
> mk/rte.app.mk | 1 +
> 10 files changed, 964 insertions(+), 1 deletion(-)
> create mode 100644 lib/librte_rcu/Makefile
> create mode 100644 lib/librte_rcu/meson.build
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
> create mode 100644 lib/librte_rcu/rte_rcu_version.map
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index a08583471..ae54f37db 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1274,6 +1274,11 @@ F: examples/bpf/
> F: app/test/test_bpf.c
> F: doc/guides/prog_guide/bpf_lib.rst
>
> +RCU - EXPERIMENTAL
> +M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> +F: lib/librte_rcu/
> +F: doc/guides/prog_guide/rcu_lib.rst
> +
>
> Test Applications
> -----------------
> diff --git a/config/common_base b/config/common_base
> index 7fb0dedb6..f50d26c30 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -834,6 +834,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
> #
> CONFIG_RTE_LIBRTE_TELEMETRY=n
>
> +#
> +# Compile librte_rcu
> +#
> +CONFIG_RTE_LIBRTE_RCU=y
> +CONFIG_RTE_LIBRTE_RCU_DEBUG=n
> +
> #
> # Compile librte_lpm
> #
> diff --git a/lib/Makefile b/lib/Makefile
> index 26021d0c0..791e0d991 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
> DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
> DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
> DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
> +DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
> +DEPDIRS-librte_rcu := librte_eal
>
> ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
> diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
> new file mode 100644
> index 000000000..6aa677bd1
> --- /dev/null
> +++ b/lib/librte_rcu/Makefile
> @@ -0,0 +1,23 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Arm Limited
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +# library name
> +LIB = librte_rcu.a
> +
> +CFLAGS += -DALLOW_EXPERIMENTAL_API
> +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
> +LDLIBS += -lrte_eal
> +
> +EXPORT_MAP := rte_rcu_version.map
> +
> +LIBABIVER := 1
> +
> +# all source are stored in SRCS-y
> +SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
> +
> +# install includes
> +SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
> new file mode 100644
> index 000000000..0c2d5a2e0
> --- /dev/null
> +++ b/lib/librte_rcu/meson.build
> @@ -0,0 +1,7 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Arm Limited
> +
> +allow_experimental_apis = true
> +
> +sources = files('rte_rcu_qsbr.c')
> +headers = files('rte_rcu_qsbr.h')
> diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
> new file mode 100644
> index 000000000..50ef0ba9a
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_qsbr.c
> @@ -0,0 +1,268 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + *
> + * Copyright (c) 2018 Arm Limited
> + */
> +
> +#include <stdio.h>
> +#include <string.h>
> +#include <stdint.h>
> +#include <errno.h>
> +
> +#include <rte_common.h>
> +#include <rte_log.h>
> +#include <rte_memory.h>
> +#include <rte_malloc.h>
> +#include <rte_eal.h>
> +#include <rte_eal_memconfig.h>
> +#include <rte_atomic.h>
> +#include <rte_per_lcore.h>
> +#include <rte_lcore.h>
> +#include <rte_errno.h>
> +
> +#include "rte_rcu_qsbr.h"
> +
> +/* Get the memory size of QSBR variable */
> +size_t __rte_experimental
> +rte_rcu_qsbr_get_memsize(uint32_t max_threads)
> +{
> + size_t sz;
> +
> + if (max_threads == 0) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid max_threads %u\n",
> + __func__, max_threads);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + sz = sizeof(struct rte_rcu_qsbr);
> +
> + /* Add the size of quiescent state counter array */
> + sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
> +
> + /* Add the size of the registered thread ID bitmap array */
> + sz += RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
> +
> + return sz;
> +}
> +
> +/* Initialize a quiescent state variable */
> +int __rte_experimental
> +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
> +{
> + size_t sz;
> +
> + if (v == NULL) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + sz = rte_rcu_qsbr_get_memsize(max_threads);
> + if (sz == 1)
> + return 1;
> +
> + /* Set all the threads to offline */
> + memset(v, 0, sz);
> + v->max_threads = max_threads;
> + v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE;
> + v->token = RTE_QSBR_CNT_INIT;
> +
> + return 0;
> +}
> +
> +/* Register a reader thread to report its quiescent state
> + * on a QS variable.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id, success;
> + uint64_t old_bmap, new_bmap;
> +
> + if (v == NULL || thread_id >= v->max_threads) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
> + v->qsbr_cnt[thread_id].lock_cnt);
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Make sure that the counter for registered threads does not
> + * go out of sync. Hence, additional checks are required.
> + */
> + /* Check if the thread is already registered */
> + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_RELAXED);
> + if (old_bmap & 1UL << id)
> + return 0;
> +
> + do {
> + new_bmap = old_bmap | (1UL << id);
> + success = __atomic_compare_exchange(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + &old_bmap, &new_bmap, 0,
> + __ATOMIC_RELEASE, __ATOMIC_RELAXED);
> +
> + if (success)
> + __atomic_fetch_add(&v->num_threads,
> + 1, __ATOMIC_RELAXED);
> + else if (old_bmap & (1UL << id))
> + /* Someone else registered this thread.
> + * Counter should not be incremented.
> + */
> + return 0;
> + } while (success == 0);
> +
> + return 0;
> +}
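[Editor's note: the register path above is an idempotent set-bit with a compare-exchange retry loop. A standalone reduction of that pattern (GCC/Clang __atomic builtins; the helper name is illustrative):]

```c
#include <stdint.h>

/* Atomically set bit 'id' in *bmap. Returns 1 if this caller set the
 * bit, 0 if it was already set (someone else registered the thread),
 * mirroring the duplicate-registration handling above so the thread
 * counter is bumped exactly once. */
static int bitmap_set_once(uint64_t *bmap, unsigned int id)
{
	uint64_t old = __atomic_load_n(bmap, __ATOMIC_RELAXED);

	do {
		if (old & (1UL << id))
			return 0; /* already registered */
	} while (!__atomic_compare_exchange_n(bmap, &old,
			old | (1UL << id), 0,
			__ATOMIC_RELEASE, __ATOMIC_RELAXED));
	return 1;
}
```

On CAS failure, __atomic_compare_exchange_n refreshes 'old' with the current bitmap, so a concurrent registration of the same ID is detected on the next loop iteration.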
> +
> +/* Remove a reader thread from the list of threads reporting their
> + * quiescent state on a QS variable.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + unsigned int i, id, success;
> + uint64_t old_bmap, new_bmap;
> +
> + if (v == NULL || thread_id >= v->max_threads) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
> + v->qsbr_cnt[thread_id].lock_cnt);
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Make sure that the counter for registered threads does not
> + * go out of sync. Hence, additional checks are required.
> + */
> + /* Check if the thread is already unregistered */
> + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_RELAXED);
> + if (!(old_bmap & (1UL << id)))
> + return 0;
> +
> + do {
> + new_bmap = old_bmap & ~(1UL << id);
> + /* Make sure any loads of the shared data structure are
> + * completed before removal of the thread from the list of
> + * reporting threads.
> + */
> + success = __atomic_compare_exchange(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + &old_bmap, &new_bmap, 0,
> + __ATOMIC_RELEASE, __ATOMIC_RELAXED);
> +
> + if (success)
> + __atomic_fetch_sub(&v->num_threads,
> + 1, __ATOMIC_RELAXED);
> + else if (!(old_bmap & (1UL << id)))
> + /* Someone else unregistered this thread.
> + * Counter should not be decremented.
> + */
> + return 0;
> + } while (success == 0);
> +
> + return 0;
> +}
> +
> +/* Wait till the reader threads have entered quiescent state. */
> +void __rte_experimental
> +rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL);
> +
> + t = rte_rcu_qsbr_start(v);
> +
> + /* If the current thread has readside critical section,
> + * update its quiescent state status.
> + */
> + if (thread_id != RTE_QSBR_THRID_INVALID)
> + rte_rcu_qsbr_quiescent(v, thread_id);
> +
> + /* Wait for other readers to enter quiescent state */
> + rte_rcu_qsbr_check(v, t, true);
> +}
> +
> +/* Dump the details of a single quiescent state variable to a file. */
> +int __rte_experimental
> +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
> +{
> + uint64_t bmap;
> + uint32_t i, t, id;
> +
> + if (v == NULL || f == NULL) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + fprintf(f, "\nQuiescent State Variable @%p\n", v);
> +
> + fprintf(f, " QS variable memory size = %lu\n",
> + rte_rcu_qsbr_get_memsize(v->max_threads));
> + fprintf(f, " Given # max threads = %u\n", v->max_threads);
> + fprintf(f, " Current # threads = %u\n", v->num_threads);
> +
> + fprintf(f, " Registered thread ID mask = 0x");
> + for (i = 0; i < v->num_elems; i++)
> + fprintf(f, "%lx", __atomic_load_n(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_ACQUIRE));
> + fprintf(f, "\n");
> +
> + fprintf(f, " Token = %lu\n",
> + __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
> +
> + fprintf(f, "Quiescent State Counts for readers:\n");
> + for (i = 0; i < v->num_elems; i++) {
> + bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_ACQUIRE);
> + id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> + while (bmap) {
> + t = __builtin_ctzl(bmap);
> + fprintf(f, "thread ID = %d, count = %lu, lock count = %u\n",
> + id + t,
> + __atomic_load_n(
> + &v->qsbr_cnt[id + t].cnt,
> + __ATOMIC_RELAXED),
> + __atomic_load_n(
> + &v->qsbr_cnt[id + t].lock_cnt,
> + __ATOMIC_RELAXED));
> + bmap &= ~(1UL << t);
> + }
> + }
> +
> + return 0;
> +}
> +
> +int rcu_log_type;
> +
> +RTE_INIT(rte_rcu_register)
> +{
> + rcu_log_type = rte_log_register("lib.rcu");
> + if (rcu_log_type >= 0)
> + rte_log_set_level(rcu_log_type, RTE_LOG_ERR);
> +}
> diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
> new file mode 100644
> index 000000000..58fb95ed0
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> @@ -0,0 +1,639 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (c) 2018 Arm Limited
> + */
> +
> +#ifndef _RTE_RCU_QSBR_H_
> +#define _RTE_RCU_QSBR_H_
> +
> +/**
> + * @file
> + * RTE Quiescent State Based Reclamation (QSBR)
> + *
> + * Quiescent State (QS) is any point in the thread execution
> + * where the thread does not hold a reference to a data structure
> + * in shared memory. While using lock-less data structures, the writer
> + * can safely free memory once all the reader threads have entered
> + * quiescent state.
> + *
> + * This library provides the ability for the readers to report quiescent
> + * state and for the writers to identify when all the readers have
> + * entered quiescent state.
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <errno.h>
> +#include <rte_common.h>
> +#include <rte_memory.h>
> +#include <rte_lcore.h>
> +#include <rte_debug.h>
> +#include <rte_atomic.h>
> +
> +extern int rcu_log_type;
> +
> +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
> +#define RCU_DP_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> + "%s(): " fmt "\n", __func__, ## args)
> +#else
> +#define RCU_DP_LOG(level, fmt, args...)
> +#endif
> +
> +#if defined(RTE_LIBRTE_RCU_DEBUG)
> +#define RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do {\
> + if (v->qsbr_cnt[thread_id].lock_cnt) \
> + rte_log(RTE_LOG_ ## level, rcu_log_type, \
> + "%s(): " fmt "\n", __func__, ## args); \
> +} while (0)
> +#else
> +#define RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...)
> +#endif
> +
> +/* Registered thread IDs are stored as a bitmap in an array of 64b elements.
> + * A given thread ID needs to be converted to an index into the array and
> + * an offset within the array element.
> + */
> +#define RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> +#define RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
> + RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
> + RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
> +#define RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
> + ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
> +#define RTE_QSBR_THRID_INDEX_SHIFT 6
> +#define RTE_QSBR_THRID_MASK 0x3f
> +#define RTE_QSBR_THRID_INVALID 0xffffffff
> +
> +/* Worker thread counter */
> +struct rte_rcu_qsbr_cnt {
> + uint64_t cnt;
> + /**< Quiescent state counter. Value 0 indicates the thread is offline.
> + * A 64b counter is used to avoid adding more code to address
> + * counter overflow. Changing this to 32b would require additional
> + * changes to various APIs.
> + */
> + uint32_t lock_cnt;
> + /**< Lock counter. Used when CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled */
> +} __rte_cache_aligned;
> +
> +#define RTE_QSBR_CNT_THR_OFFLINE 0
> +#define RTE_QSBR_CNT_INIT 1
> +
> +/* RTE Quiescent State variable structure.
> + * This structure has two elements that vary in size based on the
> + * 'max_threads' parameter.
> + * 1) Quiescent state counter array
> + * 2) Register thread ID array
> + */
> +struct rte_rcu_qsbr {
> + uint64_t token __rte_cache_aligned;
> + /**< Counter to allow for multiple concurrent quiescent state queries */
> +
> + uint32_t num_elems __rte_cache_aligned;
> + /**< Number of elements in the thread ID array */
> + uint32_t num_threads;
> + /**< Number of threads currently using this QS variable */
> + uint32_t max_threads;
> + /**< Maximum number of threads using this QS variable */
> +
> + struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
> + /**< Quiescent state counter array of 'max_threads' elements */
> +
> + /**< Registered thread IDs are stored in a bitmap array,
> + * after the quiescent state counter array.
> + */
> +} __rte_cache_aligned;
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Return the size of the memory occupied by a Quiescent State variable.
> + *
> + * @param max_threads
> + * Maximum number of threads reporting quiescent state on this variable.
> + * @return
> + * On success - size of memory in bytes required for this QS variable.
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - max_threads is 0
> + */
> +size_t __rte_experimental
> +rte_rcu_qsbr_get_memsize(uint32_t max_threads);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Initialize a Quiescent State (QS) variable.
> + *
> + * @param v
> + * QS variable
> + * @param max_threads
> + * Maximum number of threads reporting quiescent state on this variable.
> + * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
> + * @return
> + * On success - 0
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - max_threads is 0 or 'v' is NULL.
> + *
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Register a reader thread to report its quiescent state
> + * on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + * Any reader thread that wants to report its quiescent state must
> + * call this API. This can be called during initialization or as part
> + * of the packet processing loop.
> + *
> + * Note that rte_rcu_qsbr_thread_online must be called before the
> + * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will report its quiescent state on
> + * the QS variable. thread_id is a value between 0 and (max_threads - 1).
> + * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Remove a reader thread from the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * This API can be called from the reader threads during shutdown.
> + * Ongoing quiescent state queries will stop waiting for the status from this
> + * unregistered reader thread.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will stop reporting its quiescent
> + * state on the QS variable.
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Add a registered reader thread to the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * Any registered reader thread that wants to report its quiescent state must
> + * call this API before calling rte_rcu_qsbr_quiescent. This can be called
> + * during initialization or as part of the packet processing loop.
> + *
> + * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
> + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> + * API does not wait indefinitely for the reader thread to update its QS.
> + *
> + * The reader thread must call the rte_rcu_qsbr_thread_online API, after the blocking
> + * function call returns, to ensure that rte_rcu_qsbr_check API
> + * waits for the reader thread to update its quiescent state.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread with this thread ID will report its quiescent state on
> + * the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
> + v->qsbr_cnt[thread_id].lock_cnt);
> +
> + /* Copy the current value of token.
> + * The fence at the end of the function will ensure that
> + * the following will not move down after the load of any shared
> + * data structure.
> + */
> + t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
> +
> + /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
> + * 'cnt' (64b) is accessed atomically.
> + */
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + t, __ATOMIC_RELAXED);
> +
> + /* The subsequent load of the data structure should not
> + * move above the store. Hence a store-load barrier
> + * is required.
> + * If the load of the data structure moves above the store,
> + * writer might not see that the reader is online, even though
> + * the reader is referencing the shared data structure.
> + */
> +#ifdef RTE_ARCH_X86_64
> + /* rte_smp_mb() for x86 is lighter */
> + rte_smp_mb();
> +#else
> + __atomic_thread_fence(__ATOMIC_SEQ_CST);
> +#endif
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Remove a registered reader thread from the list of threads reporting their
> + * quiescent state on a QS variable.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * This can be called during initialization or as part of the packet
> + * processing loop.
> + *
> + * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
> + * calling any functions that block, to ensure that rte_rcu_qsbr_check
> + * API does not wait indefinitely for the reader thread to update its QS.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * rte_rcu_qsbr_check API will not wait for the reader thread with
> + * this thread ID to report its quiescent state on the QS variable.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
> + v->qsbr_cnt[thread_id].lock_cnt);
> +
> + /* The reader can go offline only after the load of the
> + * data structure is completed, i.e. any load of the
> + * data structure cannot move after this store.
> + */
> +
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Acquire a lock for accessing a shared data structure.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * This API is provided to aid debugging. This should be called before
> + * accessing a shared data structure.
> + *
> + * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, a lock counter is incremented.
> + * Similarly, rte_rcu_qsbr_unlock will decrement the counter. The
> + * rte_rcu_qsbr_check API will verify that this counter is 0.
> + *
> + * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread id
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_lock(struct rte_rcu_qsbr *v __rte_unused,
> + unsigned int thread_id __rte_unused)
> +{
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> +#if defined(RTE_LIBRTE_RCU_DEBUG)
> + /* Increment the lock counter */
> + __atomic_fetch_add(&v->qsbr_cnt[thread_id].lock_cnt,
> + 1, __ATOMIC_ACQUIRE);
> +#endif
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Release a lock after accessing a shared data structure.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe.
> + *
> + * This API is provided to aid debugging. This should be called after
> + * accessing a shared data structure.
> + *
> + * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, rte_rcu_qsbr_unlock will
> + * decrement a lock counter. rte_rcu_qsbr_check API will verify that this
> + * counter is 0.
> + *
> + * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Reader thread id
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_unlock(struct rte_rcu_qsbr *v __rte_unused,
> + unsigned int thread_id __rte_unused)
> +{
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> +#if defined(RTE_LIBRTE_RCU_DEBUG)
> + /* Decrement the lock counter */
> + __atomic_fetch_sub(&v->qsbr_cnt[thread_id].lock_cnt,
> + 1, __ATOMIC_RELEASE);
> +
> + RCU_IS_LOCK_CNT_ZERO(v, thread_id, WARNING,
> + "Lock counter %u. Nested locks?\n",
> + v->qsbr_cnt[thread_id].lock_cnt);
> +#endif
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Ask the reader threads to report the quiescent state
> + * status.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe and can be called from worker threads.
> + *
> + * @param v
> + * QS variable
> + * @return
> + * - This is the token for this call of the API. This should be
> + * passed to rte_rcu_qsbr_check API.
> + */
> +static __rte_always_inline uint64_t __rte_experimental
> +rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL);
> +
> + /* Release the changes to the shared data structure.
> + * This store release will ensure that changes to any data
> + * structure are visible to the workers before the token
> + * update is visible.
> + */
> + t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
> +
> + return t;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Update quiescent state for a reader thread.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * All the reader threads registered to report their quiescent state
> + * on the QS variable must call this API.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Update the quiescent state for the reader with this thread ID.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
> +{
> + uint64_t t;
> +
> + RTE_ASSERT(v != NULL && thread_id < v->max_threads);
> +
> + RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
> + v->qsbr_cnt[thread_id].lock_cnt);
> +
> + /* Acquire the changes to the shared data structure released
> + * by rte_rcu_qsbr_start.
> + * Later loads of the shared data structure should not move
> + * above this load. Hence, use load-acquire.
> + */
> + t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
> +
> + /* Inform the writer that updates are visible to this reader.
> + * Prior loads of the shared data structure should not move
> + * beyond this store. Hence use store-release.
> + */
> + __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
> + t, __ATOMIC_RELEASE);
> +
> + RCU_DP_LOG(DEBUG, "%s: update: token = %lu, Thread ID = %d",
> + __func__, t, thread_id);
> +}
> +
> +/* Check the quiescent state counter for registered threads only, assuming
> + * that not all threads have registered.
> + */
> +static __rte_always_inline int
> +__rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> +{
> + uint32_t i, j, id;
> + uint64_t bmap;
> + uint64_t c;
> + uint64_t *reg_thread_id;
> +
> + for (i = 0, reg_thread_id = RTE_QSBR_THRID_ARRAY_ELM(v, 0);
> + i < v->num_elems;
> + i++, reg_thread_id++) {
> + /* Load the current registered thread bit map before
> + * loading the reader thread quiescent state counters.
> + */
> + bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
> + id = i << RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + while (bmap) {
> + j = __builtin_ctzl(bmap);
> + RCU_DP_LOG(DEBUG,
> + "%s: check: token = %lu, wait = %d, Bit Map = 0x%lx, Thread ID = %d",
> + __func__, t, wait, bmap, id + j);
> + c = __atomic_load_n(
> + &v->qsbr_cnt[id + j].cnt,
> + __ATOMIC_ACQUIRE);
> + RCU_DP_LOG(DEBUG,
> + "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> + __func__, t, wait, c, id+j);
> + /* Counter is not checked for wrap-around condition
> + * as it is a 64b counter.
> + */
> + if (unlikely(c != RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
> + /* This thread is not in quiescent state */
> + if (!wait)
> + return 0;
> +
> + rte_pause();
> + /* This thread might have unregistered.
> + * Re-read the bitmap.
> + */
> + bmap = __atomic_load_n(reg_thread_id,
> + __ATOMIC_ACQUIRE);
> +
> + continue;
> + }
> +
> + bmap &= ~(1UL << j);
> + }
> + }
> +
> + return 1;
> +}
> +
> +/* Check the quiescent state counter for all threads, assuming that
> + * all the threads have registered.
> + */
> +static __rte_always_inline int
> +__rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> +{
> + uint32_t i;
> + struct rte_rcu_qsbr_cnt *cnt;
> + uint64_t c;
> +
> + for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
> + RCU_DP_LOG(DEBUG,
> + "%s: check: token = %lu, wait = %d, Thread ID = %d",
> + __func__, t, wait, i);
> + while (1) {
> + c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
> + RCU_DP_LOG(DEBUG,
> + "%s: status: token = %lu, wait = %d, Thread QS cnt = %lu, Thread ID = %d",
> + __func__, t, wait, c, i);
> + /* Counter is not checked for wrap-around condition
> + * as it is a 64b counter.
> + */
> + if (likely(c == RTE_QSBR_CNT_THR_OFFLINE || c >= t))
> + break;
> +
> + /* This thread is not in quiescent state */
> + if (!wait)
> + return 0;
> +
> + rte_pause();
> + }
> + }
> +
> + return 1;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Checks if all the reader threads have entered the quiescent state
> + * referenced by token.
> + *
> + * This is implemented as a lock-free function. It is multi-thread
> + * safe and can be called from the worker threads as well.
> + *
> + * If this API is called with 'wait' set to true, the following
> + * factors must be considered:
> + *
> + * 1) If the calling thread is also reporting the status on the
> + * same QS variable, it must update the quiescent state status, before
> + * calling this API.
> + *
> + * 2) In addition, while calling from multiple threads, only
> + * one of those threads can be reporting the quiescent state status
> + * on a given QS variable.
> + *
> + * @param v
> + * QS variable
> + * @param t
> + * Token returned by rte_rcu_qsbr_start API
> + * @param wait
> + * If true, block till all the reader threads have completed entering
> + * the quiescent state referenced by token 't'.
> + * @return
> + * - 0 if all reader threads have NOT passed through specified number
> + * of quiescent states.
> + * - 1 if all reader threads have passed through specified number
> + * of quiescent states.
> + */
> +static __rte_always_inline int __rte_experimental
> +rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
> +{
> + RTE_ASSERT(v != NULL);
> +
> + if (likely(v->num_threads == v->max_threads))
> + return __rcu_qsbr_check_all(v, t, wait);
> + else
> + return __rcu_qsbr_check_selective(v, t, wait);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Wait till the reader threads have entered quiescent state.
> + *
> + * This is implemented as a lock-free function. It is multi-thread safe.
> + * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
> + * rte_rcu_qsbr_check APIs.
> + *
> + * If this API is called from multiple threads, only one of
> + * those threads can be reporting the quiescent state status on a
> + * given QS variable.
> + *
> + * @param v
> + * QS variable
> + * @param thread_id
> + * Thread ID of the caller if it is registered to report quiescent state
> + * on this QS variable (i.e. the calling thread is also part of the
> + * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
> + */
> +void __rte_experimental
> +rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Dump the details of a single QS variable to a file.
> + *
> + * It is NOT multi-thread safe.
> + *
> + * @param f
> + * A pointer to a file for output
> + * @param v
> + * QS variable
> + * @return
> + * On success - 0
> + * On error - 1 with error code set in rte_errno.
> + * Possible rte_errno codes are:
> + * - EINVAL - NULL parameters are passed
> + */
> +int __rte_experimental
> +rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_RCU_QSBR_H_ */
> diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
> new file mode 100644
> index 000000000..5ea8524db
> --- /dev/null
> +++ b/lib/librte_rcu/rte_rcu_version.map
> @@ -0,0 +1,12 @@
> +EXPERIMENTAL {
> + global:
> +
> + rte_rcu_qsbr_get_memsize;
> + rte_rcu_qsbr_init;
> + rte_rcu_qsbr_thread_register;
> + rte_rcu_qsbr_thread_unregister;
> + rte_rcu_qsbr_synchronize;
> + rte_rcu_qsbr_dump;
> +
> + local: *;
> +};
> diff --git a/lib/meson.build b/lib/meson.build
> index 595314d7d..67be10659 100644
> --- a/lib/meson.build
> +++ b/lib/meson.build
> @@ -22,7 +22,7 @@ libraries = [
> 'gro', 'gso', 'ip_frag', 'jobstats',
> 'kni', 'latencystats', 'lpm', 'member',
> 'power', 'pdump', 'rawdev',
> - 'reorder', 'sched', 'security', 'stack', 'vhost',
> + 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
> #ipsec lib depends on crypto and security
> 'ipsec',
> # add pkt framework libs which use other libs from above
> diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> index abea16d48..ebe6d48a7 100644
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
> _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
> _LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
> _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
>
> ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
> _LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
> --
> 2.17.1
>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v7 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-23 8:10 ` Paul E. McKenney
2019-04-23 8:10 ` Paul E. McKenney
@ 2019-04-23 21:23 ` Honnappa Nagarahalli
2019-04-23 21:23 ` Honnappa Nagarahalli
2019-04-24 20:02 ` Jerin Jacob Kollanukkaran
1 sibling, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-23 21:23 UTC (permalink / raw)
To: paulmck
Cc: konstantin.ananyev, stephen, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli,
bruce.richardson, nd, thomas, nd
>
> On Mon, Apr 22, 2019 at 11:31:28PM -0500, Honnappa Nagarahalli wrote:
> > Add RCU library supporting quiescent state based memory reclamation
> method.
> > This library helps identify the quiescent state of the reader threads
> > so that the writers can free the memory associated with the lock less
> > data structures.
> >
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>
> Much better!
>
> Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
>
Thanks a lot, appreciate your feedback.
Any views from maintainers on including this library into RC3? IMO, this library is independent and should not affect existing code.
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v7 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 1/3] rcu: " Honnappa Nagarahalli
2019-04-23 4:31 ` Honnappa Nagarahalli
2019-04-23 8:10 ` Paul E. McKenney
@ 2019-04-24 10:03 ` Ruifeng Wang (Arm Technology China)
2019-04-24 10:03 ` Ruifeng Wang (Arm Technology China)
2 siblings, 1 reply; 260+ messages in thread
From: Ruifeng Wang (Arm Technology China) @ 2019-04-24 10:03 UTC (permalink / raw)
To: Honnappa Nagarahalli, konstantin.ananyev, stephen, paulmck,
marko.kovacevic, dev
Cc: Honnappa Nagarahalli, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
Hi Honnappa,
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Honnappa Nagarahalli
> Sent: Tuesday, April 23, 2019 12:31
> To: konstantin.ananyev@intel.com; stephen@networkplumber.org;
> paulmck@linux.ibm.com; marko.kovacevic@intel.com; dev@dpdk.org
> Cc: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Gavin Hu (Arm
> Technology China) <Gavin.Hu@arm.com>; Dharmik Thakkar
> <Dharmik.Thakkar@arm.com>; Malvika Gupta <Malvika.Gupta@arm.com>
> Subject: [dpdk-dev] [PATCH v7 1/3] rcu: add RCU library supporting QSBR
> mechanism
>
*snip*
> +int __rte_experimental
> +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int
> +thread_id) {
> + unsigned int i, id, success;
> + uint64_t old_bmap, new_bmap;
> +
> + if (v == NULL || thread_id >= v->max_threads) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
> + v->qsbr_cnt[thread_id].lock_cnt);
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Make sure that the counter for registered threads does not
> + * go out of sync. Hence, additional checks are required.
> + */
> + /* Check if the thread is already unregistered */
> + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_RELAXED);
> + if (old_bmap & ~(1UL << id))
If I understand correctly, here should be (!(old_bmap & 1UL << id))
Can you please check?
> + return 0;
> +
> + do {
> + new_bmap = old_bmap & ~(1UL << id);
> + /* Make sure any loads of the shared data structure are
> + * completed before removal of the thread from the list of
> + * reporting threads.
> + */
> + success = __atomic_compare_exchange(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + &old_bmap, &new_bmap, 0,
> + __ATOMIC_RELEASE,
> __ATOMIC_RELAXED);
> +
> + if (success)
> + __atomic_fetch_sub(&v->num_threads,
> + 1, __ATOMIC_RELAXED);
> + else if (old_bmap & ~(1UL << id))
Same comment as previous.
> + /* Someone else unregistered this thread.
> + * Counter should not be incremented.
> + */
> + return 0;
> + } while (success == 0);
> +
> + return 0;
> +}
*snip*
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v7 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-24 10:03 ` Ruifeng Wang (Arm Technology China)
@ 2019-04-24 10:03 ` Ruifeng Wang (Arm Technology China)
0 siblings, 0 replies; 260+ messages in thread
From: Ruifeng Wang (Arm Technology China) @ 2019-04-24 10:03 UTC (permalink / raw)
To: Honnappa Nagarahalli, konstantin.ananyev, stephen, paulmck,
marko.kovacevic, dev
Cc: Honnappa Nagarahalli, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
Hi Honnappa,
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Honnappa Nagarahalli
> Sent: Tuesday, April 23, 2019 12:31
> To: konstantin.ananyev@intel.com; stephen@networkplumber.org;
> paulmck@linux.ibm.com; marko.kovacevic@intel.com; dev@dpdk.org
> Cc: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Gavin Hu (Arm
> Technology China) <Gavin.Hu@arm.com>; Dharmik Thakkar
> <Dharmik.Thakkar@arm.com>; Malvika Gupta <Malvika.Gupta@arm.com>
> Subject: [dpdk-dev] [PATCH v7 1/3] rcu: add RCU library supporting QSBR
> mechanism
>
*snip*
> +int __rte_experimental
> +rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int
> +thread_id) {
> + unsigned int i, id, success;
> + uint64_t old_bmap, new_bmap;
> +
> + if (v == NULL || thread_id >= v->max_threads) {
> + rte_log(RTE_LOG_ERR, rcu_log_type,
> + "%s(): Invalid input parameter\n", __func__);
> + rte_errno = EINVAL;
> +
> + return 1;
> + }
> +
> + RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
> + v->qsbr_cnt[thread_id].lock_cnt);
> +
> + id = thread_id & RTE_QSBR_THRID_MASK;
> + i = thread_id >> RTE_QSBR_THRID_INDEX_SHIFT;
> +
> + /* Make sure that the counter for registered threads does not
> + * go out of sync. Hence, additional checks are required.
> + */
> + /* Check if the thread is already unregistered */
> + old_bmap = __atomic_load_n(RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + __ATOMIC_RELAXED);
> + if (old_bmap & ~(1UL << id))
If I understand correctly, the condition here should be (!(old_bmap & (1UL << id))),
i.e. a test of whether this thread's bit is already clear. Can you please check?
> + return 0;
> +
> + do {
> + new_bmap = old_bmap & ~(1UL << id);
> + /* Make sure any loads of the shared data structure are
> + * completed before removal of the thread from the list of
> + * reporting threads.
> + */
> + success = __atomic_compare_exchange(
> + RTE_QSBR_THRID_ARRAY_ELM(v, i),
> + &old_bmap, &new_bmap, 0,
> + __ATOMIC_RELEASE,
> __ATOMIC_RELAXED);
> +
> + if (success)
> + __atomic_fetch_sub(&v->num_threads,
> + 1, __ATOMIC_RELAXED);
> + else if (old_bmap & ~(1UL << id))
Same comment as previous.
> + /* Someone else unregistered this thread.
> + * Counter should not be incremented.
> + */
> + return 0;
> + } while (success == 0);
> +
> + return 0;
> +}
*snip*
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v7 3/3] doc/rcu: add lib_rcu documentation
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
2019-04-23 4:31 ` Honnappa Nagarahalli
@ 2019-04-24 10:12 ` Ruifeng Wang (Arm Technology China)
2019-04-24 10:12 ` Ruifeng Wang (Arm Technology China)
1 sibling, 1 reply; 260+ messages in thread
From: Ruifeng Wang (Arm Technology China) @ 2019-04-24 10:12 UTC (permalink / raw)
To: Honnappa Nagarahalli, konstantin.ananyev, stephen, paulmck,
marko.kovacevic, dev
Cc: Honnappa Nagarahalli, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
Hi Honnappa,
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Honnappa Nagarahalli
> Sent: Tuesday, April 23, 2019 12:32
> To: konstantin.ananyev@intel.com; stephen@networkplumber.org;
> paulmck@linux.ibm.com; marko.kovacevic@intel.com; dev@dpdk.org
> Cc: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Gavin Hu (Arm
> Technology China) <Gavin.Hu@arm.com>; Dharmik Thakkar
> <Dharmik.Thakkar@arm.com>; Malvika Gupta <Malvika.Gupta@arm.com>
> Subject: [dpdk-dev] [PATCH v7 3/3] doc/rcu: add lib_rcu documentation
>
*snip*
> +Let us consider the following diagram:
> +
> +.. figure:: img/rcu_general_info.*
> +
> +
> +As shown, reader thread 1 accesses data structures D1 and D2. When it
> +is accessing D1, if the writer has to remove an element from D1, the
> +writer cannot free the memory associated with that element immediately.
> +The writer can return the memory to the allocator only after the reader
> +stops referencing D1. In other words, reader thread RT1 has to enter a
> +quiescent state.
> +
> +Similarly, since reader thread 2 is also accessing D1, writer has to
> +wait till thread 2 enters quiescent state as well.
> +
> +However, the writer does not need to wait for reader thread 3 to enter
> +quiescent state. Reader thread 3 was not accessing D1 when the delete
> +operation happened. So, reader thread 1 will not have a reference to
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ thread 3 ?
> +the deleted entry.
> +
*snip*
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v7 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-23 21:23 ` Honnappa Nagarahalli
2019-04-23 21:23 ` Honnappa Nagarahalli
@ 2019-04-24 20:02 ` Jerin Jacob Kollanukkaran
2019-04-24 20:02 ` Jerin Jacob Kollanukkaran
2019-04-25 5:15 ` Honnappa Nagarahalli
1 sibling, 2 replies; 260+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-04-24 20:02 UTC (permalink / raw)
To: Honnappa Nagarahalli, paulmck
Cc: konstantin.ananyev, stephen, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, bruce.richardson, nd, thomas, nd
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Honnappa Nagarahalli
> Sent: Wednesday, April 24, 2019 2:53 AM
> To: paulmck@linux.ibm.com
> Cc: konstantin.ananyev@intel.com; stephen@networkplumber.org;
> marko.kovacevic@intel.com; dev@dpdk.org; Gavin Hu (Arm Technology China)
> <Gavin.Hu@arm.com>; Dharmik Thakkar <Dharmik.Thakkar@arm.com>;
> Malvika Gupta <Malvika.Gupta@arm.com>; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; bruce.richardson@intel.com; nd
> <nd@arm.com>; thomas@monjalon.net; nd <nd@arm.com>
> Subject: Re: [dpdk-dev] [PATCH v7 1/3] rcu: add RCU library supporting QSBR
> mechanism
>
> >
> > On Mon, Apr 22, 2019 at 11:31:28PM -0500, Honnappa Nagarahalli wrote:
> > > Add RCU library supporting quiescent state based memory reclamation
> > method.
> > > This library helps identify the quiescent state of the reader
> > > threads so that the writers can free the memory associated with the
> > > lock less data structures.
> > >
> > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> >
> > Much better!
> >
> > Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
> >
> Thanks a lot, appreciate your feedback.
>
> Any views from maintainers on including this library into RC3? IMO, this library is
> independent and should not affect existing code.
Tested rcu_qsbr_autotest and rcu_qsbr_perf_autotest UT on an armv8.2 machine (octeontx2).
Found rcu_qsbr_perf_autotest() runs successfully on 24 cores.
There is some issue with rcu_qsbr_autotest on 24 cores. It works fine up to 20 cores.
Please find below the success log, failure log and core dump.
[master][dpdk.org] $ echo "rcu_qsbr_autotest" | sudo ./build/app/test -c 0xfffff0
EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
APP: HPET is not enabled, using TSC as default timer
RTE>>rcu_qsbr_autotest
Test rte_rcu_qsbr_thread_register()
rte_rcu_qsbr_get_memsize(): Invalid max_threads 0
Test rte_rcu_qsbr_init()
rte_rcu_qsbr_init(): Invalid input parameter
Test rte_rcu_qsbr_thread_register()
rte_rcu_qsbr_thread_register(): Invalid input parameter
rte_rcu_qsbr_thread_register(): Invalid input parameter
rte_rcu_qsbr_thread_register(): Invalid input parameter
Test rte_rcu_qsbr_thread_unregister()
rte_rcu_qsbr_thread_unregister(): Invalid input parameter
rte_rcu_qsbr_thread_unregister(): Invalid input parameter
rte_rcu_qsbr_thread_unregister(): Invalid input parameter
Test rte_rcu_qsbr_start()
Test rte_rcu_qsbr_check()
Test rte_rcu_qsbr_synchronize()
Test rte_rcu_qsbr_dump()
rte_rcu_qsbr_dump(): Invalid input parameter
rte_rcu_qsbr_dump(): Invalid input parameter
rte_rcu_qsbr_dump(): Invalid input parameter
Quiescent State Variable @0x13ff94100
QS variable memory size = 16768
Given # max threads = 128
Current # threads = 0
Registered thread ID mask = 0x00
Token = 1
Quiescent State Counts for readers:
Quiescent State Variable @0x13ff94100
QS variable memory size = 16768
Given # max threads = 128
Current # threads = 1
Registered thread ID mask = 0x200
Token = 1
Quiescent State Counts for readers:
thread ID = 5, count = 0, lock count = 0
Quiescent State Variable @0x13ff8ff00
QS variable memory size = 16768
Given # max threads = 128
Current # threads = 2
Registered thread ID mask = 0xc00
Token = 1
Quiescent State Counts for readers:
thread ID = 6, count = 0, lock count = 0
thread ID = 7, count = 0, lock count = 0
Test rte_rcu_qsbr_thread_online()
Test rte_rcu_qsbr_thread_offline()
Functional tests
Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries
Test: 8 writers, 4 QSBR variable, simultaneous QSBR queries
Test OK
[master] [dpdk.org] $ echo "rcu_qsbr_autotest" | sudo ./build/app/test -c 0xffffff
EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
APP: HPET is not enabled, using TSC as default timer
RTE>>rcu_qsbr_autotest
Test rte_rcu_qsbr_thread_register()
rte_rcu_qsbr_get_memsize(): Invalid max_threads 0
Test rte_rcu_qsbr_init()
rte_rcu_qsbr_init(): Invalid input parameter
Test rte_rcu_qsbr_thread_register()
rte_rcu_qsbr_thread_register(): Invalid input parameter
rte_rcu_qsbr_thread_register(): Invalid input parameter
rte_rcu_qsbr_thread_register(): Invalid input parameter
Test rte_rcu_qsbr_thread_unregister()
rte_rcu_qsbr_thread_unregister(): Invalid input parameter
rte_rcu_qsbr_thread_unregister(): Invalid input parameter
rte_rcu_qsbr_thread_unregister(): Invalid input parameter
Test rte_rcu_qsbr_start()
Test rte_rcu_qsbr_check()
Test rte_rcu_qsbr_synchronize()
Test rte_rcu_qsbr_dump()
rte_rcu_qsbr_dump(): Invalid input parameter
rte_rcu_qsbr_dump(): Invalid input parameter
rte_rcu_qsbr_dump(): Invalid input parameter
Quiescent State Variable @0x13ff94100
QS variable memory size = 16768
Given # max threads = 128
Current # threads = 0
Registered thread ID mask = 0x00
Token = 1
Quiescent State Counts for readers:
Quiescent State Variable @0x13ff94100
QS variable memory size = 16768
Given # max threads = 128
Current # threads = 1
Registered thread ID mask = 0x20
Token = 1
Quiescent State Counts for readers:
thread ID = 1, count = 0, lock count = 0
Quiescent State Variable @0x13ff8ff00
QS variable memory size = 16768
Given # max threads = 128
Current # threads = 2
Registered thread ID mask = 0xc0
Token = 1
Quiescent State Counts for readers:
thread ID = 2, count = 0, lock count = 0
thread ID = 3, count = 0, lock count = 0
Test rte_rcu_qsbr_thread_online()
Test rte_rcu_qsbr_thread_offline()
Functional tests
Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries
Test: 10 writers, 5 QSBR variable, simultaneous QSBR queries
rte_rcu_qsbr_init(): Invalid input parameter
rte_rcu_qsbr_thread_register(): Invalid input parameter
rte_rcu_qsbr_thread_register(): Invalid input parameter
Segmentation fault
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
Core was generated by `./build/app/test -c 0xffffff'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 rte_rcu_qsbr_thread_online (thread_id=<optimized out>, v=0x0) at /home/jerin/dpdk.org/build/include/rte_rcu_qsbr.h:238
/home/jerin/dpdk.org/lib/librte_rcu/rte_rcu_qsbr.h:238:7712:beg:0x564df4
[Current thread is 1 (Thread 0xffff9c06d900 (LWP 1938))]
(gdb) bt
#0 rte_rcu_qsbr_thread_online (thread_id=<optimized out>, v=0x0) at /home/jerin/dpdk.org/build/include/rte_rcu_qsbr.h:238
#1 test_rcu_qsbr_reader (arg=<optimized out>) at /home/jerin/dpdk.org/app/test/test_rcu_qsbr.c:641
#2 0x0000000000652430 in eal_thread_loop (arg=<optimized out>) at /home/jerin/dpdk.org/lib/librte_eal/linux/eal/eal_thread.c:153
#3 0x0000ffffa13a756c in start_thread () from /usr/lib/libpthread.so.0
#4 0x0000ffffa11e301c in thread_start () from /usr/lib/libc.so.6
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v7 1/3] rcu: add RCU library supporting QSBR mechanism
2019-04-24 20:02 ` Jerin Jacob Kollanukkaran
2019-04-24 20:02 ` Jerin Jacob Kollanukkaran
@ 2019-04-25 5:15 ` Honnappa Nagarahalli
2019-04-25 5:15 ` Honnappa Nagarahalli
1 sibling, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-25 5:15 UTC (permalink / raw)
To: jerinj, paulmck
Cc: konstantin.ananyev, stephen, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, bruce.richardson, nd, thomas, nd
> > >
> > > On Mon, Apr 22, 2019 at 11:31:28PM -0500, Honnappa Nagarahalli
> wrote:
> > > > Add RCU library supporting quiescent state based memory
> > > > reclamation
> > > method.
> > > > This library helps identify the quiescent state of the reader
> > > > threads so that the writers can free the memory associated with
> > > > the lock less data structures.
> > > >
> > > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > > Reviewed-by: Steve Capper <steve.capper@arm.com>
> > > > Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> > > > Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > > > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > >
> > > Much better!
> > >
> > > Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
> > >
> > Thanks a lot, appreciate your feedback.
> >
> > Any views from maintainers on including this library into RC3? IMO,
> > this library is independent and should not affect existing code.
>
> Tested rcu_qsbr_autotest and rcu_qsbr_perf_autotest UT on an armv8.2
> machine (octeontx2).
> Found rcu_qsbr_perf_autotest() runs successfully on 24 cores.
> There is some issue with rcu_qsbr_autotest on 24 cores. It works fine up to 20
> cores.
Thanks Jerin for running the test cases. I could reproduce the issue. It is a test case issue, will fix in next version.
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v6 0/3] lib/rcu: add RCU library supporting QSBR mechanism
2019-04-21 16:40 ` [dpdk-dev] [PATCH v6 0/3] lib/rcu: add RCU library supporting QSBR mechanism Thomas Monjalon
2019-04-21 16:40 ` Thomas Monjalon
@ 2019-04-25 14:18 ` Honnappa Nagarahalli
2019-04-25 14:18 ` Honnappa Nagarahalli
` (2 more replies)
1 sibling, 3 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-25 14:18 UTC (permalink / raw)
To: thomas
Cc: dev, konstantin.ananyev, stephen, paulmck, marko.kovacevic,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
>
> 17/04/2019 06:13, Honnappa Nagarahalli:
> > Dharmik Thakkar (1):
> > test/rcu_qsbr: add API and functional tests
> >
> > Honnappa Nagarahalli (2):
> > rcu: add RCU library supporting QSBR mechanism
> > doc/rcu: add lib_rcu documentation
>
> Sorry I cannot merge this library in DPDK 19.05-rc2 because of several issues:
Apologies, we will improve our internal CI to add some of these compilations.
> - 32-bit compilation is broken because of %lx/%lu instead of PRI?64
I am able to reproduce this issue. However, I am not able to run the application. Following is the log on x86, does anyone know what is happening?
EAL: Detected 28 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Some devices want iova as va but pa will be used because..
EAL: vfio-noiommu mode configured
EAL: few device bound to UIO
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Cannot get a virtual area: Cannot allocate memory
EAL: Cannot reserve memory
EAL: Cannot allocate VA space for memseg list, retrying with different page size
EAL: Cannot allocate VA space on socket 0
EAL: FATAL: Cannot init memory
EAL: Cannot init memory
> - shared link is broken because of rcu_log_type not exported
I am not sure what this is. Is this shared library compilation? Can you please let me know how to reproduce this?
> - some public symbols (variable, macros, functions) are not prefixed with rte
For my understanding, are you referring to the following symbols?
rcu_log_type
RCU_DP_LOG - This is internal to the library, will change it to __RTE_RCU_DP_LOG
RCU_IS_LOCK_CNT_ZERO - Same as above
__rcu_qsbr_check_selective
__rcu_qsbr_check_all
>
> I am not sure about getting it later in 19.05, it may be too late to merge a
> new library.
>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v6 0/3] lib/rcu: add RCU library supporting QSBR mechanism
2019-04-25 14:18 ` Honnappa Nagarahalli
2019-04-25 14:18 ` Honnappa Nagarahalli
@ 2019-04-25 14:27 ` Honnappa Nagarahalli
2019-04-25 14:27 ` Honnappa Nagarahalli
2019-04-25 14:38 ` David Marchand
2 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-25 14:27 UTC (permalink / raw)
To: thomas
Cc: dev, konstantin.ananyev, stephen, paulmck, marko.kovacevic,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
> >
> > 17/04/2019 06:13, Honnappa Nagarahalli:
> > > Dharmik Thakkar (1):
> > > test/rcu_qsbr: add API and functional tests
> > >
> > > Honnappa Nagarahalli (2):
> > > rcu: add RCU library supporting QSBR mechanism
> > > doc/rcu: add lib_rcu documentation
> >
> > Sorry I cannot merge this library in DPDK 19.05-rc2 because of several
> issues:
> Apologies, we will improve our internal CI to add some of these compilations.
>
> > - 32-bit compilation is broken because of %lx/%lu instead of PRI?64
> I am able to reproduce this issue. However, I am not able to run the
> application. Following is the log on x86, does anyone know what is happening?
>
> EAL: Detected 28 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Some devices want iova as va but pa will be used because.. EAL: vfio-
> noiommu mode configured
> EAL: few device bound to UIO
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: Cannot get a virtual area: Cannot allocate memory
> EAL: Cannot reserve memory
> EAL: Cannot allocate VA space for memseg list, retrying with different page
> size
> EAL: Cannot allocate VA space on socket 0
> EAL: FATAL: Cannot init memory
> EAL: Cannot init memory
>
This is resolved. It needed '--legacy-mem' option (thanks Maxime).
> > - shared link is broken because of rcu_log_type not exported
> I am not sure what this is. Is this shared library compilation? Can you please
> let me know how to reproduce this?
>
> > - some public symbols (variable, macros, functions) are not prefixed
> > with rte
> For my understanding, are you referring to the following symbols?
> rcu_log_type
> RCU_DP_LOG - This is internal to the library, will change it to
> __RTE_RCU_DP_LOG RCU_IS_LOCK_CNT_ZERO - Same as above
> __rcu_qsbr_check_selective __rcu_qsbr_check_all
>
> >
> > I am not sure about getting it later in 19.05, it may be too late to
> > merge a new library.
> >
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v6 0/3] lib/rcu: add RCU library supporting QSBR mechanism
2019-04-25 14:18 ` Honnappa Nagarahalli
2019-04-25 14:18 ` Honnappa Nagarahalli
2019-04-25 14:27 ` Honnappa Nagarahalli
@ 2019-04-25 14:38 ` David Marchand
2019-04-25 14:38 ` David Marchand
2 siblings, 1 reply; 260+ messages in thread
From: David Marchand @ 2019-04-25 14:38 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: thomas, dev, konstantin.ananyev, stephen, paulmck,
marko.kovacevic, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
On Thu, Apr 25, 2019 at 4:19 PM Honnappa Nagarahalli <
Honnappa.Nagarahalli@arm.com> wrote:
> > - shared link is broken because of rcu_log_type not exported
> I am not sure what this is. Is this shared library compilation? Can you
> please let me know how to reproduce this?
>
> > - some public symbols (variable, macros, functions) are not prefixed
> with rte
> For my understanding, are you referring to the following symbols?
> rcu_log_type
> RCU_DP_LOG - This is internal to the library, will change it to
> __RTE_RCU_DP_LOG
> RCU_IS_LOCK_CNT_ZERO - Same as above
> __rcu_qsbr_check_selective
> __rcu_qsbr_check_all
>
Afaiu, those are exposed via exported symbols, so they are part of the api.
You need to prefix them.
Had a quick look at the code, I suppose enabling rcu debug and shared
library will trigger the link issue Thomas reported.
--
David Marchand
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v8 0/4] lib/rcu: add RCU library supporting QSBR mechanism
2018-11-22 3:30 [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library Honnappa Nagarahalli
` (12 preceding siblings ...)
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 " Honnappa Nagarahalli
@ 2019-04-26 4:39 ` Honnappa Nagarahalli
2019-04-26 4:39 ` Honnappa Nagarahalli
` (5 more replies)
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 " Honnappa Nagarahalli
14 siblings, 6 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-26 4:39 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Lock-less data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
(for ex: real-time applications).
In the following paragraphs, the term 'memory' refers to memory allocated
by typical APIs like malloc, or anything that is representative of
memory, for example an index into a free element array.
Since these data structures are lock-less, writers and readers
access them concurrently. Hence, while removing
an element from a data structure, a writer cannot return the memory
to the allocator without knowing that no reader is still
referencing that element/memory. Hence, it is required to
separate the operation of removing an element into 2 steps:
Delete: in this step, the writer removes the reference to the element from
the data structure but does not return the associated memory to the
allocator. This will ensure that new readers will not get a reference to
the removed element. Removing the reference is an atomic operation.
Free(Reclaim): in this step, the writer returns the memory to the
memory allocator, only after knowing that all the readers have stopped
referencing the deleted element.
This library helps the writer determine when it is safe to free the
memory.
This library makes use of thread Quiescent State (QS). QS can be
defined as 'any point in the thread execution where the thread does
not hold a reference to shared memory'. It is up to the application to
determine its quiescent state. Let us consider the following diagram:
Time -------------------------------------------------->
| |
RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
| |
RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
| |
RT3 $++++****D1****+++***|D2***|++++++**D2*****++++$
| |
|<--->|
Del | Free
|
Cannot free memory
during this period
(Grace Period)
RTx - Reader thread
< and > - Start and end of while(1) loop
***Dx*** - Reader thread is accessing the shared data structure Dx.
i.e. critical section.
+++ - Reader thread is not accessing any shared data structure.
i.e. non critical section or quiescent state.
Del - Point in time when the reference to the entry is removed using
atomic operation.
Free - Point in time when the writer can free the entry.
Grace Period - Time duration between Del and Free, during which memory cannot
be freed.
As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
accessing D2, if the writer has to remove an element from D2, the
writer cannot free the memory associated with that element immediately.
The writer can return the memory to the allocator only after the reader
stops referencing D2. In other words, reader thread RT1 has to enter
a quiescent state.
Similarly, since thread RT3 is also accessing D2, the writer has to wait
till RT3 enters a quiescent state as well.
However, the writer does not need to wait for RT2 to enter quiescent state.
Thread RT2 was not accessing D2 when the delete operation happened.
So, RT2 will not get a reference to the deleted entry.
It can be noted that the critical sections for D2 and D3 are quiescent states
for D1; i.e. for a given data structure Dx, any point in the thread execution
that does not reference Dx is a quiescent state.
Since memory is not freed immediately, there might be a need for
provisioning of additional memory, depending on the application requirements.
It is important to make sure that this library keeps the overhead of
identifying the end of the grace period, and the subsequent freeing of
memory, to a minimum. The following paragraphs explain how the grace period
and critical section affect this overhead.
The writer has to poll the readers to identify the end of the grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during the grace period. Longer grace periods
exacerbate these conditions.
The length of the critical section and the number of reader threads
are proportional to the duration of the grace period. Keeping the critical
sections smaller will keep the grace period smaller. However, keeping the
critical sections smaller requires additional CPU cycles (due to additional
reporting) in the readers.
Hence, we need the combination of a small grace period and a large critical
section. This library addresses this by allowing the writer to do
other work without having to block till the readers report their quiescent
state.
For DPDK applications, the start and end of the while(1) loop (where no
references to shared data structures are kept) act as perfect quiescent
states. This will combine all the shared data structure accesses into a
single, large critical section which helps keep the overhead on the
reader side to a minimum.
DPDK supports the pipeline model of packet processing and service cores.
In these use cases, a given data structure may not be used by all the
workers in the application. The writer does not have to wait for all
the workers to report their quiescent state. To provide the required
flexibility, this library has a concept of QS variable. The application
can create one QS variable per data structure to help it track the
end of grace period for each data structure. This helps keep the grace
period to a minimum.
The application has to allocate memory and initialize a QS variable.
The application can call rte_rcu_qsbr_get_memsize to calculate the size
of memory to allocate. This API takes, as a parameter, the maximum number
of reader threads that will use this variable. Currently, a maximum of
1024 threads are supported.
Further, the application can initialize a QS variable using the API
rte_rcu_qsbr_init.
Each reader thread is assumed to have a unique thread ID. Currently, the
management of the thread ID (for ex: allocation/free) is left to the
application. The thread ID should be in the range of 0 to the
maximum number of threads provided while creating the QS variable.
The application could also use lcore_id as the thread ID where applicable.
rte_rcu_qsbr_thread_register API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
The reader thread must call rte_rcu_qsbr_thread_online API to start reporting
its quiescent state.
Some of the use cases might require the reader threads to make
blocking API calls (for ex: while using eventdev APIs). The writer thread
should not wait for such reader threads to enter quiescent state.
The reader thread must call rte_rcu_qsbr_thread_offline API, before calling
blocking APIs. It can call rte_rcu_qsbr_thread_online API once the blocking
API call returns.
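Putting the setup and reader-side calls above together, a usage outline
might look like the fragment below. This is a hedged sketch rather than
compilable code: it assumes a DPDK environment (rte_malloc, EAL), uses
RTE_MAX_LCORE as an illustrative thread count, and omits error handling.

```c
/* Control plane: size, allocate and initialize a QS variable. */
size_t sz = rte_rcu_qsbr_get_memsize(RTE_MAX_LCORE);
struct rte_rcu_qsbr *v = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
rte_rcu_qsbr_init(v, RTE_MAX_LCORE);

/* Reader thread (thread_id could simply be the lcore_id): */
rte_rcu_qsbr_thread_register(v, thread_id);
rte_rcu_qsbr_thread_online(v, thread_id);

/* ... packet processing loop, reporting quiescent states ... */

rte_rcu_qsbr_thread_offline(v, thread_id);  /* before a blocking call */
/* blocking API call, e.g. an eventdev dequeue */
rte_rcu_qsbr_thread_online(v, thread_id);   /* after it returns */
```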
The writer thread can trigger the reader threads to report their quiescent
state by calling the API rte_rcu_qsbr_start. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
rte_rcu_qsbr_start returns a token to each caller.
The writer thread has to call rte_rcu_qsbr_check API with the token to get the
current quiescent state status. Option to block till all the reader threads
enter the quiescent state is provided. If this API indicates that all the
reader threads have entered the quiescent state, the application can free the
deleted entry.
The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock free. Hence, they
can be called concurrently from multiple writers even while running
as worker threads.
The separation of triggering the reporting from querying the status provides
the writer threads flexibility to do useful work instead of blocking for the
reader threads to enter the quiescent state or go offline. This reduces the
memory accesses due to continuous polling for the status.
rte_rcu_qsbr_synchronize API combines the functionality of rte_rcu_qsbr_start
and blocking rte_rcu_qsbr_check into a single API. This API triggers the reader
threads to report their quiescent state and polls till all the readers enter
the quiescent state or go offline. This API does not allow the writer to
do useful work while waiting and also introduces additional memory accesses
due to continuous polling.
The reader thread must call rte_rcu_qsbr_thread_offline and
rte_rcu_qsbr_thread_unregister APIs to remove itself from reporting its
quiescent state. The rte_rcu_qsbr_check API will not wait for this reader
thread to report the quiescent state status anymore.
The reader threads should call rte_rcu_qsbr_update API to indicate that they
entered a quiescent state. This API checks if a writer has triggered a
quiescent state query and updates the state accordingly.
Patch v8:
1) Library changes
a) Symbols prefixed with '__RTE' or 'rte_' as required (Thomas)
b) Used PRI?64 macros to support 32b compilation (Thomas)
c) Fixed shared library compilation (Thomas)
2) Test cases
a) Fixed segmentation fault when more than 20 cores are used for testing (Jerin)
b) Used PRI?64 macros to support 32b compilation (Thomas)
c) Testing done on x86, ThunderX2, Octeon TX, BlueField for 32b(x86 only)/64b,
debug/non-debug, shared/static linking, meson/makefile with various
number of cores
Patch v7:
1) Library changes
a) Added macro RCU_IS_LOCK_CNT_ZERO
b) Added lock counter validation to rte_rcu_qsbr_thread_online/
rte_rcu_qsbr_thread_offline/rte_rcu_qsbr_thread_register/
rte_rcu_qsbr_thread_unregister APIs (Paul)
Patch v6:
1) Library changes
a) Fixed and tested meson build on Arm and x86 (Konstantin)
b) Moved rte_rcu_qsbr_synchronize API to rte_rcu_qsbr.c
Patch v5:
1) Library changes
a) Removed extra alignment in rte_rcu_qsbr_get_memsize API (Paul)
b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock APIs (Paul)
c) Clarified the need for 64b counters (Paul)
2) Test cases
a) Added additional performance test cases to benchmark
__rcu_qsbr_check_all
b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock calls in various test cases
3) Documentation
a) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock usage description
Patch v4:
1) Library changes
a) Fixed the compilation issue on x86 (Konstantin)
b) Rebased with latest master
Patch v3:
1) Library changes
a) Moved the registered thread ID array to the end of the
structure (Konstantin)
b) Removed the compile time constant RTE_RCU_MAX_THREADS
c) Added code to keep track of registered number of threads
Patch v2:
1) Library changes
a) Corrected the RTE_ASSERT checks (Konstantin)
b) Replaced RTE_ASSERT with 'if' checks for non-datapath APIs (Konstantin)
c) Made rte_rcu_qsbr_thread_register/unregister non-datapath critical APIs
d) Renamed rte_rcu_qsbr_update to rte_rcu_qsbr_quiescent (Ola)
e) Used rte_smp_mb() in rte_rcu_qsbr_thread_online API for x86 (Konstantin)
f) Removed the macro to access the thread QS counters (Konstantin)
2) Test cases
a) Added additional test cases for removing RTE_ASSERT
3) Documentation
a) Changed the figure to make it bigger (Marko)
b) Spelling and format corrections (Marko)
Patch v1:
1) Library changes
a) Changed the maximum number of reader threads to 1024
b) Renamed rte_rcu_qsbr_register/unregister_thread to
rte_rcu_qsbr_thread_register/unregister
c) Added rte_rcu_qsbr_thread_online/offline API. These are optimized
version of rte_rcu_qsbr_thread_register/unregister API. These
also provide the flexibility for performance when the requested
maximum number of threads is higher than the current number of
threads.
d) Corrected memory orderings in rte_rcu_qsbr_update
e) Changed the signature of rte_rcu_qsbr_start API to return the token
f) Changed the signature of rte_rcu_qsbr_start API to not take the
expected number of QS states to wait.
g) Added debug logs
h) Added API and programmer guide documentation.
RFC v3:
1) Library changes
a) Rebased to latest master
b) Added new API rte_rcu_qsbr_get_memsize
c) Add support for memory allocation for QSBR variable (Konstantin)
d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
2) Testcase changes
a) Separated stress tests into a performance test case file
b) Added performance statistics
RFC v2:
1) Cover letter changes
a) Explain the parameters that affect the overhead of using RCU
and their effect
b) Explain how this library addresses these effects to keep
the overhead to a minimum
2) Library changes
a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
b) Simplify the code/remove APIs to keep this library inline with
other synchronisation mechanisms like locks (Konstantin)
c) Change the design to support more than 64 threads (Konstantin)
d) Fixed version map to remove static inline functions
3) Testcase changes
a) Add boundary and additional functional test cases
b) Add stress test cases (Paul E. McKenney)
Dharmik Thakkar (1):
test/rcu_qsbr: add API and functional tests
Honnappa Nagarahalli (3):
rcu: add RCU library supporting QSBR mechanism
doc/rcu: add lib_rcu documentation
doc: added RCU to the release notes
MAINTAINERS | 5 +
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 704 ++++++++++++
config/common_base | 6 +
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 +++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++
doc/guides/rel_notes/release_19_05.rst | 8 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 7 +
lib/librte_rcu/rte_rcu_qsbr.c | 277 +++++
lib/librte_rcu/rte_rcu_qsbr.h | 641 +++++++++++
lib/librte_rcu/rte_rcu_version.map | 13 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
21 files changed, 3420 insertions(+), 3 deletions(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v8 0/4] lib/rcu: add RCU library supporting QSBR mechanism
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 0/4] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
@ 2019-04-26 4:39 ` Honnappa Nagarahalli
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 1/4] rcu: " Honnappa Nagarahalli
` (4 subsequent siblings)
5 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-26 4:39 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Lock-less data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
(for ex: real-time applications).
In the following paras, the term 'memory' refers to memory allocated
by typical APIs like malloc or anything that is representative of
memory, for ex: an index of a free element array.
Since these data structures are lock less, the writers and readers
are accessing the data structures concurrently. Hence, while removing
an element from a data structure, the writers cannot return the memory
to the allocator, without knowing that the readers are not
referencing that element/memory anymore. Hence, it is required to
separate the operation of removing an element into 2 steps:
Delete: in this step, the writer removes the reference to the element from
the data structure but does not return the associated memory to the
allocator. This will ensure that new readers will not get a reference to
the removed element. Removing the reference is an atomic operation.
Free(Reclaim): in this step, the writer returns the memory to the
memory allocator, only after knowing that all the readers have stopped
referencing the deleted element.
This library helps the writer determine when it is safe to free the
memory.
This library makes use of thread Quiescent State (QS). QS can be
defined as 'any point in the thread execution where the thread does
not hold a reference to shared memory'. It is upto the application to
determine its quiescent state. Let us consider the following diagram:
Time -------------------------------------------------->
| |
RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
| |
RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
| |
RT3 $++++****D1****+++***|D2***|++++++**D2*****++++$
| |
|<--->|
Del | Free
|
Cannot free memory
during this period
(Grace Period)
RTx - Reader thread
< and > - Start and end of while(1) loop
***Dx*** - Reader thread is accessing the shared data structure Dx.
i.e. critical section.
+++ - Reader thread is not accessing any shared data structure.
i.e. non critical section or quiescent state.
Del - Point in time when the reference to the entry is removed using
atomic operation.
Free - Point in time when the writer can free the entry.
Grace Period - Time duration between Del and Free, during which memory cannot
be freed.
As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
accessing D2, if the writer has to remove an element from D2, the
writer cannot free the memory associated with that element immediately.
The writer can return the memory to the allocator only after the reader
stops referencing D2. In other words, reader thread RT1 has to enter
a quiescent state.
Similarly, since thread RT3 is also accessing D2, writer has to wait till
RT3 enters quiescent state as well.
However, the writer does not need to wait for RT2 to enter quiescent state.
Thread RT2 was not accessing D2 when the delete operation happened.
So, RT2 will not get a reference to the deleted entry.
It can be noted that, the critical sections for D2 and D3 are quiescent states
for D1. i.e. for a given data structure Dx, any point in the thread execution
that does not reference Dx is a quiescent state.
Since memory is not freed immediately, there might be a need for
provisioning of additional memory, depending on the application requirements.
It is important to make sure that this library keeps the overhead of
identifying the end of grace period and subsequent freeing of memory,
to a minimum. The following paras explain how grace period and critical
section affect this overhead.
The writer has to poll the readers to identify the end of grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during grace period. Longer grace periods
exasperate these conditions.
The length of the critical section and the number of reader threads
is proportional to the duration of the grace period. Keeping the critical
sections smaller will keep the grace period smaller. However, keeping the
critical sections smaller requires additional CPU cycles(due to additional
reporting) in the readers.
Hence, we need the characteristics of small grace period and large critical
section. This library addresses this by allowing the writer to do
other work without having to block till the readers report their quiescent
state.
For DPDK applications, the start and end of while(1) loop (where no
references to shared data structures are kept) act as perfect quiescent
states. This will combine all the shared data structure accesses into a
single, large critical section which helps keep the overhead on the
reader side to a minimum.
DPDK supports pipeline model of packet processing and service cores.
In these use cases, a given data structure may not be used by all the
workers in the application. The writer does not have to wait for all
the workers to report their quiescent state. To provide the required
flexibility, this library has a concept of QS variable. The application
can create one QS variable per data structure to help it track the
end of grace period for each data structure. This helps keep the grace
period to a minimum.
The application has to allocate memory and initialize a QS variable.
Application can call rte_rcu_qsbr_get_memsize to calculate the size
of memory to allocate. This API takes maximum number of reader threads,
using this variable, as a parameter. Currently, a maximum of 1024 threads
are supported.
Further, the application can initialize a QS variable using the API
rte_rcu_qsbr_init.
Each reader thread is assumed to have a unique thread ID. Currently, the
management of the thread ID (for ex: allocation/free) is left to the
application. The thread ID should be in the range of 0 to
maximum number of threads provided while creating the QS variable.
The application could also use lcore_id as the thread ID where applicable.
rte_rcu_qsbr_thread_register API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
The reader thread must call rte_rcu_qsbr_thread_online API to start reporting
its quiescent state.
Some of the use cases might require the reader threads to make
blocking API calls (for ex: while using eventdev APIs). The writer thread
should not wait for such reader threads to enter quiescent state.
The reader thread must call rte_rcu_qsbr_thread_offline API, before calling
blocking APIs. It can call rte_rcu_qsbr_thread_online API once the blocking
API call returns.
The writer thread can trigger the reader threads to report their quiescent
state by calling the API rte_rcu_qsbr_start. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
rte_rcu_qsbr_start returns a token to each caller.
The writer thread has to call rte_rcu_qsbr_check API with the token to get the
current quiescent state status. Option to block till all the reader threads
enter the quiescent state is provided. If this API indicates that all the
reader threads have entered the quiescent state, the application can free the
deleted entry.
The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock free. Hence, they
can be called concurrently from multiple writers even while running
as worker threads.
The separation of triggering the reporting from querying the status provides
the writer threads flexibility to do useful work instead of blocking for the
reader threads to enter the quiescent state or go offline. This reduces the
memory accesses due to continuous polling for the status.
rte_rcu_qsbr_synchronize API combines the functionality of rte_rcu_qsbr_start
and blocking rte_rcu_qsbr_check into a single API. This API triggers the reader
threads to report their quiescent state and polls till all the readers enter
the quiescent state or go offline. This API does not allow the writer to
do useful work while waiting and also introduces additional memory accesses
due to continuous polling.
The reader thread must call rte_rcu_qsbr_thread_offline and
rte_rcu_qsbr_thread_unregister APIs to remove itself from reporting its
quiescent state. The rte_rcu_qsbr_check API will not wait for this reader
thread to report the quiescent state status anymore.
The reader threads should call rte_rcu_qsbr_update API to indicate that they
entered a quiescent state. This API checks if a writer has triggered a
quiescent state query and update the state accordingly.
Patch v8:
1) Library changes
a) Symbols prefixed with '__RTE' or 'rte_' as required (Thomas)
b) Used PRI?64 macros to support 32b compilation (Thomas)
c) Fixed shared library compilation (Thomas)
2) Test cases
a) Fixed segmentation fault when more than 20 cores are used for testing (Jerin)
b) Used PRI?64 macros to support 32b compilation (Thomas)
c) Testing done on x86, ThunderX2, Octeon TX, BlueField for 32b(x86 only)/64b,
debug/non-debug, shared/static linking, meson/makefile with various
number of cores
Patch v7:
1) Library changes
a) Added macro RCU_IS_LOCK_CNT_ZERO
b) Added lock counter validation to rte_rcu_qsbr_thread_online/
rte_rcu_qsbr_thread_offline/rte_rcu_qsbr_thread_register/
rte_rcu_qsbr_thread_unregister APIs (Paul)
Patch v6:
1) Library changes
a) Fixed and tested meson build on Arm and x86 (Konstantin)
b) Moved rte_rcu_qsbr_synchronize API to rte_rcu_qsbr.c
Patch v5:
1) Library changes
a) Removed extra alignment in rte_rcu_qsbr_get_memsize API (Paul)
b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock APIs (Paul)
c) Clarified the need for 64b counters (Paul)
2) Test cases
a) Added additional performance test cases to benchmark
__rcu_qsbr_check_all
b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock calls in various test cases
3) Documentation
a) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock usage description
Patch v4:
1) Library changes
a) Fixed the compilation issue on x86 (Konstantin)
b) Rebased with latest master
Patch v3:
1) Library changes
a) Moved the registered thread ID array to the end of the
structure (Konstantin)
b) Removed the compile time constant RTE_RCU_MAX_THREADS
c) Added code to keep track of registered number of threads
Patch v2:
1) Library changes
a) Corrected the RTE_ASSERT checks (Konstantin)
b) Replaced RTE_ASSERT with 'if' checks for non-datapath APIs (Konstantin)
c) Made rte_rcu_qsbr_thread_register/unregister non-datapath critical APIs
d) Renamed rte_rcu_qsbr_update to rte_rcu_qsbr_quiescent (Ola)
e) Used rte_smp_mb() in rte_rcu_qsbr_thread_online API for x86 (Konstantin)
f) Removed the macro to access the thread QS counters (Konstantin)
2) Test cases
a) Added additional test cases for removing RTE_ASSERT
3) Documentation
a) Changed the figure to make it bigger (Marko)
b) Spelling and format corrections (Marko)
Patch v1:
1) Library changes
a) Changed the maximum number of reader threads to 1024
b) Renamed rte_rcu_qsbr_register/unregister_thread to
rte_rcu_qsbr_thread_register/unregister
c) Added rte_rcu_qsbr_thread_online/offline API. These are optimized
version of rte_rcu_qsbr_thread_register/unregister API. These
also provide the flexibility for performance when the requested
maximum number of threads is higher than the current number of
threads.
d) Corrected memory orderings in rte_rcu_qsbr_update
e) Changed the signature of rte_rcu_qsbr_start API to return the token
f) Changed the signature of rte_rcu_qsbr_start API to not take the
expected number of QS states to wait.
g) Added debug logs
h) Added API and programmer guide documentation.
RFC v3:
1) Library changes
a) Rebased to latest master
b) Added new API rte_rcu_qsbr_get_memsize
c) Added support for memory allocation for QSBR variable (Konstantin)
d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
2) Testcase changes
a) Separated stress tests into a performance test case file
b) Added performance statistics
RFC v2:
1) Cover letter changes
a) Explain the parameters that affect the overhead of using RCU
and their effect
b) Explain how this library addresses these effects to keep
the overhead to a minimum
2) Library changes
a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
b) Simplify the code/remove APIs to keep this library in line with
other synchronisation mechanisms like locks (Konstantin)
c) Change the design to support more than 64 threads (Konstantin)
d) Fixed version map to remove static inline functions
3) Testcase changes
a) Add boundary and additional functional test cases
b) Add stress test cases (Paul E. McKenney)
Dharmik Thakkar (1):
test/rcu_qsbr: add API and functional tests
Honnappa Nagarahalli (3):
rcu: add RCU library supporting QSBR mechanism
doc/rcu: add lib_rcu documentation
doc: added RCU to the release notes
MAINTAINERS | 5 +
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 704 ++++++++++++
config/common_base | 6 +
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 +++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++
doc/guides/rel_notes/release_19_05.rst | 8 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 7 +
lib/librte_rcu/rte_rcu_qsbr.c | 277 +++++
lib/librte_rcu/rte_rcu_qsbr.h | 641 +++++++++++
lib/librte_rcu/rte_rcu_version.map | 13 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
21 files changed, 3420 insertions(+), 3 deletions(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
--
2.17.1
* [dpdk-dev] [PATCH v8 1/4] rcu: add RCU library supporting QSBR mechanism
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 0/4] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-04-26 4:39 ` Honnappa Nagarahalli
@ 2019-04-26 4:39 ` Honnappa Nagarahalli
2019-04-26 4:39 ` Honnappa Nagarahalli
` (2 more replies)
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 2/4] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
` (3 subsequent siblings)
5 siblings, 3 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-26 4:39 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add an RCU library supporting the quiescent state based memory reclamation
method. This library helps identify the quiescent state of the reader
threads so that the writers can free the memory associated with the
lock-less data structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
---
MAINTAINERS | 5 +
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 ++
lib/librte_rcu/meson.build | 7 +
lib/librte_rcu/rte_rcu_qsbr.c | 277 +++++++++++++
lib/librte_rcu/rte_rcu_qsbr.h | 641 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 13 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
10 files changed, 976 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 4493aa636..5d25b21f0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1281,6 +1281,11 @@ F: examples/bpf/
F: app/test/test_bpf.c
F: doc/guides/prog_guide/bpf_lib.rst
+RCU - EXPERIMENTAL
+M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+F: lib/librte_rcu/
+F: doc/guides/prog_guide/rcu_lib.rst
+
Test Applications
-----------------
diff --git a/config/common_base b/config/common_base
index 4236c2a67..6b96e0e80 100644
--- a/config/common_base
+++ b/config/common_base
@@ -838,6 +838,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_rcu
+#
+CONFIG_RTE_LIBRTE_RCU=y
+CONFIG_RTE_LIBRTE_RCU_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index 26021d0c0..791e0d991 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
+DEPDIRS-librte_rcu := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
new file mode 100644
index 000000000..6aa677bd1
--- /dev/null
+++ b/lib/librte_rcu/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_rcu.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_rcu_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
new file mode 100644
index 000000000..0c2d5a2e0
--- /dev/null
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+allow_experimental_apis = true
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..b4ed01045
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,277 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Get the memory size of QSBR variable */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ size_t sz;
+
+ if (max_threads == 0) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid max_threads %u\n",
+ __func__, max_threads);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ /* Add the size of the registered thread ID bitmap array */
+ sz += __RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
+
+ return sz;
+}
+
+/* Initialize a quiescent state variable */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
+{
+ size_t sz;
+
+ if (v == NULL) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = rte_rcu_qsbr_get_memsize(max_threads);
+ if (sz == 1)
+ return 1;
+
+ /* Set all the threads to offline */
+ memset(v, 0, sz);
+ v->max_threads = max_threads;
+ v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
+ __RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
+ __RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->token = __RTE_QSBR_CNT_INIT;
+
+ return 0;
+}
+
+/* Register a reader thread to report its quiescent state
+ * on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ id = thread_id & __RTE_QSBR_THRID_MASK;
+ i = thread_id >> __RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already registered */
+ old_bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (old_bmap & (1UL << id))
+ return 0;
+
+ do {
+ new_bmap = old_bmap | (1UL << id);
+ success = __atomic_compare_exchange(
+ __RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_add(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (old_bmap & (1UL << id))
+ /* Someone else registered this thread.
+ * Counter should not be incremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ id = thread_id & __RTE_QSBR_THRID_MASK;
+ i = thread_id >> __RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already unregistered */
+ old_bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (!(old_bmap & (1UL << id)))
+ return 0;
+
+ do {
+ new_bmap = old_bmap & ~(1UL << id);
+ /* Make sure any loads of the shared data structure are
+ * completed before removal of the thread from the list of
+ * reporting threads.
+ */
+ success = __atomic_compare_exchange(
+ __RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_sub(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (!(old_bmap & (1UL << id)))
+ /* Someone else unregistered this thread.
+ * Counter should not be decremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Wait till the reader threads have entered quiescent state. */
+void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread has a read-side critical section,
+ * update its quiescent state status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_quiescent(v, thread_id);
+
+ /* Wait for other readers to enter quiescent state */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t, id;
+
+ if (v == NULL || f == NULL) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %zu\n",
+ rte_rcu_qsbr_get_memsize(v->max_threads));
+ fprintf(f, " Given # max threads = %u\n", v->max_threads);
+ fprintf(f, " Current # threads = %u\n", v->num_threads);
+
+ fprintf(f, " Registered thread IDs = ");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ id = i << __RTE_QSBR_THRID_INDEX_SHIFT;
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "%d ", id + t);
+
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %"PRIu64"\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ id = i << __RTE_QSBR_THRID_INDEX_SHIFT;
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %"PRIu64", lock count = %u\n",
+ id + t,
+ __atomic_load_n(
+ &v->qsbr_cnt[id + t].cnt,
+ __ATOMIC_RELAXED),
+ __atomic_load_n(
+ &v->qsbr_cnt[id + t].lock_cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ return 0;
+}
+
+int rte_rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rte_rcu_log_type = rte_log_register("lib.rcu");
+ if (rte_rcu_log_type >= 0)
+ rte_log_set_level(rte_rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..9727f4922
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,641 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+#include <rte_atomic.h>
+
+extern int rte_rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define __RTE_RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define __RTE_RCU_DP_LOG(level, fmt, args...)
+#endif
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do {\
+ if (v->qsbr_cnt[thread_id].lock_cnt) \
+ rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args); \
+} while (0)
+#else
+#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored as a bitmap in an array of 64b elements.
+ * A given thread ID needs to be converted to an index into the array and
+ * a bit position within that array element.
+ */
+#define __RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define __RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
+ RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
+ __RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
+#define __RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
+ ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
+#define __RTE_QSBR_THRID_INDEX_SHIFT 6
+#define __RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline.
+ * A 64b counter is used to avoid adding more code to address
+ * counter overflow. Changing this to 32b would require additional
+ * changes to various APIs.
+ */
+ uint32_t lock_cnt;
+ /**< Lock counter. Used when CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled */
+} __rte_cache_aligned;
+
+#define __RTE_QSBR_CNT_THR_OFFLINE 0
+#define __RTE_QSBR_CNT_INIT 1
+
+/* RTE Quiescent State variable structure.
+ * This structure has two elements that vary in size based on the
+ * 'max_threads' parameter.
+ * 1) Quiescent state counter array
+ * 2) Register thread ID array
+ */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple concurrent quiescent state queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t num_threads;
+ /**< Number of threads currently using this QS variable */
+ uint32_t max_threads;
+ /**< Maximum number of threads using this QS variable */
+
+ struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
+ /**< Quiescent state counter array of 'max_threads' elements */
+
+ /**< Registered thread IDs are stored in a bitmap array,
+ * after the quiescent state counter array.
+ */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * On success - size of memory in bytes required for this QS variable.
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0 or 'v' is NULL.
+ *
+ */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable. thread_id is a value between 0 and (max_threads - 1).
+ * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing quiescent state queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_quiescent. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that the rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_online API, after the
+ * blocking function call returns, to ensure that the rte_rcu_qsbr_check API
+ * waits for the reader thread to update its quiescent state.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that
+ * the following will not move down after the load of any shared
+ * data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+#ifdef RTE_ARCH_X86_64
+ /* rte_smp_mb() for x86 is lighter */
+ rte_smp_mb();
+#else
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that the rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed. i.e. any load of the
+ * data structure cannot move after this store.
+ */
+
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ __RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Acquire a lock for accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called before
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, a lock counter is
+ * incremented. Similarly, rte_rcu_qsbr_unlock will decrement the counter.
+ * The rte_rcu_qsbr_check API will verify that this counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_lock(__rte_unused struct rte_rcu_qsbr *v,
+ __rte_unused unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ /* Increment the lock counter */
+ __atomic_fetch_add(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_ACQUIRE);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Release a lock after accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called after
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, rte_rcu_qsbr_unlock will
+ * decrement a lock counter. rte_rcu_qsbr_check API will verify that this
+ * counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_unlock(__rte_unused struct rte_rcu_qsbr *v,
+ __rte_unused unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ /* Decrement the lock counter */
+ __atomic_fetch_sub(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_RELEASE);
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, WARNING,
+ "Lock counter %u. Nested locks?\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELEASE);
+
+ __RTE_RCU_DP_LOG(DEBUG, "%s: update: token = %"PRIu64", Thread ID = %d",
+ __func__, t, thread_id);
+}
+
+/* Check the quiescent state counter for registered threads only, assuming
+ * that not all threads have registered.
+ */
+static __rte_always_inline int
+__rte_rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t c;
+ uint64_t *reg_thread_id;
+
+ for (i = 0, reg_thread_id = __RTE_QSBR_THRID_ARRAY_ELM(v, 0);
+ i < v->num_elems;
+ i++, reg_thread_id++) {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
+ id = i << __RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ __RTE_RCU_DP_LOG(DEBUG,
+ "%s: check: token = %"PRIu64", wait = %d, Bit Map = 0x%"PRIx64", Thread ID = %d",
+ __func__, t, wait, bmap, id + j);
+ c = __atomic_load_n(
+ &v->qsbr_cnt[id + j].cnt,
+ __ATOMIC_ACQUIRE);
+ __RTE_RCU_DP_LOG(DEBUG,
+ "%s: status: token = %"PRIu64", wait = %d, Thread QS cnt = %"PRIu64", Thread ID = %d",
+ __func__, t, wait, c, id+j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(c !=
+ __RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(reg_thread_id,
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+ }
+
+ return 1;
+}
+
+/* Check the quiescent state counter for all threads, assuming that
+ * all the threads have registered.
+ */
+static __rte_always_inline int
+__rte_rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i;
+ struct rte_rcu_qsbr_cnt *cnt;
+ uint64_t c;
+
+ for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
+ __RTE_RCU_DP_LOG(DEBUG,
+ "%s: check: token = %"PRIu64", wait = %d, Thread ID = %d",
+ __func__, t, wait, i);
+ while (1) {
+ c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
+ __RTE_RCU_DP_LOG(DEBUG,
+ "%s: status: token = %"PRIu64", wait = %d, Thread QS cnt = %"PRIu64", Thread ID = %d",
+ __func__, t, wait, c, i);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (likely(c == __RTE_QSBR_CNT_THR_OFFLINE || c >= t))
+ break;
+
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ }
+ }
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same QS variable, it must update the quiescent state status, before
+ * calling this API.
+ *
+ * 2) In addition, while calling from multiple threads, only
+ * one of those threads can be reporting the quiescent state status
+ * on a given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state referenced by token 't'.
+ * @return
+ * - 0 if all reader threads have NOT passed through specified number
+ * of quiescent states.
+ * - 1 if all reader threads have passed through specified number
+ * of quiescent states.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ RTE_ASSERT(v != NULL);
+
+ if (likely(v->num_threads == v->max_threads))
+ return __rte_rcu_qsbr_check_all(v, t, wait);
+ else
+ return __rte_rcu_qsbr_check_selective(v, t, wait);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, only one of
+ * those threads can be reporting the quiescent state status on a
+ * given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report quiescent state
+ * on this QS variable (i.e. the calling thread is also part of the
+ * read-side critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - NULL parameters are passed
+ */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..795c400fd
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,13 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_log_type;
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_thread_register;
+ rte_rcu_qsbr_thread_unregister;
+ rte_rcu_qsbr_synchronize;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index a379dd682..885ef0b61 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'stack', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index f020bb10c..7c9b4b538 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v8 1/4] rcu: add RCU library supporting QSBR mechanism
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 1/4] rcu: " Honnappa Nagarahalli
@ 2019-04-26 4:39 ` Honnappa Nagarahalli
2019-04-26 8:13 ` Jerin Jacob Kollanukkaran
2019-04-28 3:25 ` Ruifeng Wang (Arm Technology China)
2 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-26 4:39 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add an RCU library supporting the quiescent-state-based memory reclamation
method. This library helps identify the quiescent state of the reader
threads so that the writers can free the memory associated with the
lock-less data structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
---
MAINTAINERS | 5 +
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 ++
lib/librte_rcu/meson.build | 7 +
lib/librte_rcu/rte_rcu_qsbr.c | 277 +++++++++++++
lib/librte_rcu/rte_rcu_qsbr.h | 641 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 13 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
10 files changed, 976 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 4493aa636..5d25b21f0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1281,6 +1281,11 @@ F: examples/bpf/
F: app/test/test_bpf.c
F: doc/guides/prog_guide/bpf_lib.rst
+RCU - EXPERIMENTAL
+M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+F: lib/librte_rcu/
+F: doc/guides/prog_guide/rcu_lib.rst
+
Test Applications
-----------------
diff --git a/config/common_base b/config/common_base
index 4236c2a67..6b96e0e80 100644
--- a/config/common_base
+++ b/config/common_base
@@ -838,6 +838,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_rcu
+#
+CONFIG_RTE_LIBRTE_RCU=y
+CONFIG_RTE_LIBRTE_RCU_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index 26021d0c0..791e0d991 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
+DEPDIRS-librte_rcu := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
new file mode 100644
index 000000000..6aa677bd1
--- /dev/null
+++ b/lib/librte_rcu/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_rcu.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_rcu_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
new file mode 100644
index 000000000..0c2d5a2e0
--- /dev/null
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+allow_experimental_apis = true
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..b4ed01045
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,277 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Get the memory size of QSBR variable */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ size_t sz;
+
+ if (max_threads == 0) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid max_threads %u\n",
+ __func__, max_threads);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ /* Add the size of the registered thread ID bitmap array */
+ sz += __RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
+
+ return sz;
+}
+
+/* Initialize a quiescent state variable */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
+{
+ size_t sz;
+
+ if (v == NULL) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = rte_rcu_qsbr_get_memsize(max_threads);
+ if (sz == 1)
+ return 1;
+
+ /* Set all the threads to offline */
+ memset(v, 0, sz);
+ v->max_threads = max_threads;
+ v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
+ __RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
+ __RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->token = __RTE_QSBR_CNT_INIT;
+
+ return 0;
+}
+
+/* Register a reader thread to report its quiescent state
+ * on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ id = thread_id & __RTE_QSBR_THRID_MASK;
+ i = thread_id >> __RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already registered */
+ old_bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (old_bmap & 1UL << id)
+ return 0;
+
+ do {
+ new_bmap = old_bmap | (1UL << id);
+ success = __atomic_compare_exchange(
+ __RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_add(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (old_bmap & (1UL << id))
+ /* Someone else registered this thread.
+ * Counter should not be incremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ id = thread_id & __RTE_QSBR_THRID_MASK;
+ i = thread_id >> __RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already unregistered */
+ old_bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (!(old_bmap & (1UL << id)))
+ return 0;
+
+ do {
+ new_bmap = old_bmap & ~(1UL << id);
+ /* Make sure any loads of the shared data structure are
+ * completed before removal of the thread from the list of
+ * reporting threads.
+ */
+ success = __atomic_compare_exchange(
+ __RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_sub(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (!(old_bmap & (1UL << id)))
+ /* Someone else unregistered this thread.
+ * Counter should not be decremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Wait till the reader threads have entered quiescent state. */
+void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread has a read-side critical section,
+ * update its quiescent state status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_quiescent(v, thread_id);
+
+ /* Wait for other readers to enter quiescent state */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t, id;
+
+ if (v == NULL || f == NULL) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %zu\n",
+ rte_rcu_qsbr_get_memsize(v->max_threads));
+ fprintf(f, " Given # max threads = %u\n", v->max_threads);
+ fprintf(f, " Current # threads = %u\n", v->num_threads);
+
+ fprintf(f, " Registered thread IDs = ");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ id = i << __RTE_QSBR_THRID_INDEX_SHIFT;
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "%d ", id + t);
+
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %"PRIu64"\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ id = i << __RTE_QSBR_THRID_INDEX_SHIFT;
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %"PRIu64", lock count = %u\n",
+ id + t,
+ __atomic_load_n(
+ &v->qsbr_cnt[id + t].cnt,
+ __ATOMIC_RELAXED),
+ __atomic_load_n(
+ &v->qsbr_cnt[id + t].lock_cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ return 0;
+}
+
+int rte_rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rte_rcu_log_type = rte_log_register("lib.rcu");
+ if (rte_rcu_log_type >= 0)
+ rte_log_set_level(rte_rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..9727f4922
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,641 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+#include <rte_atomic.h>
+
+extern int rte_rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define __RTE_RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define __RTE_RCU_DP_LOG(level, fmt, args...)
+#endif
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do {\
+ if (v->qsbr_cnt[thread_id].lock_cnt) \
+ rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args); \
+} while (0)
+#else
+#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored as a bitmap in an array of 64b
+ * elements. A given thread ID needs to be converted to an index into
+ * the array and a bit position within that array element.
+ */
+#define __RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define __RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
+ RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
+ __RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
+#define __RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
+ ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
+#define __RTE_QSBR_THRID_INDEX_SHIFT 6
+#define __RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline
+ * 64b counter is used to avoid adding more code to address
+ * counter overflow. Changing this to 32b would require additional
+ * changes to various APIs.
+ */
+ uint32_t lock_cnt;
+ /**< Lock counter. Used when CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled */
+} __rte_cache_aligned;
+
+#define __RTE_QSBR_CNT_THR_OFFLINE 0
+#define __RTE_QSBR_CNT_INIT 1
+
+/* RTE Quiescent State variable structure.
+ * This structure has two elements that vary in size based on the
+ * 'max_threads' parameter.
+ * 1) Quiescent state counter array
+ * 2) Register thread ID array
+ */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple concurrent quiescent state queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t num_threads;
+ /**< Number of threads currently using this QS variable */
+ uint32_t max_threads;
+ /**< Maximum number of threads using this QS variable */
+
+ struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
+ /**< Quiescent state counter array of 'max_threads' elements */
+
+ /**< Registered thread IDs are stored in a bitmap array,
+ * after the quiescent state counter array.
+ */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * On success - size of memory in bytes required for this QS variable.
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0 or 'v' is NULL.
+ *
+ */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable. thread_id is a value between 0 and (max_threads - 1).
+ * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing quiescent state queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - 'v' is NULL or thread_id is out of range.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_quiescent. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API before
+ * calling any functions that block, to ensure that the rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_online API after the
+ * blocking function call returns, to ensure that the rte_rcu_qsbr_check
+ * API waits for the reader thread to update its quiescent state.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that
+ * the following will not move down after the load of any shared
+ * data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+#ifdef RTE_ARCH_X86_64
+ /* rte_smp_mb() for x86 is lighter */
+ rte_smp_mb();
+#else
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call the rte_rcu_qsbr_thread_offline API before
+ * calling any functions that block, to ensure that the rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed, i.e. any load of the
+ * data structure cannot move after this store.
+ */
+
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ __RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Acquire a lock for accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called before
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, a lock counter is
+ * incremented. Similarly, rte_rcu_qsbr_unlock will decrement the counter.
+ * The rte_rcu_qsbr_check API will verify that this counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_lock(__rte_unused struct rte_rcu_qsbr *v,
+ __rte_unused unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ /* Increment the lock counter */
+ __atomic_fetch_add(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_ACQUIRE);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Release a lock after accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called after
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, rte_rcu_qsbr_unlock will
+ * decrement a lock counter. rte_rcu_qsbr_check API will verify that this
+ * counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_unlock(__rte_unused struct rte_rcu_qsbr *v,
+ __rte_unused unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ /* Decrement the lock counter */
+ __atomic_fetch_sub(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_RELEASE);
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, WARNING,
+ "Lock counter %u. Nested locks?\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELEASE);
+
+ __RTE_RCU_DP_LOG(DEBUG, "%s: update: token = %"PRIu64", Thread ID = %d",
+ __func__, t, thread_id);
+}
+
+/* Check the quiescent state counter for registered threads only, assuming
+ * that not all threads have registered.
+ */
+static __rte_always_inline int
+__rte_rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t c;
+ uint64_t *reg_thread_id;
+
+ for (i = 0, reg_thread_id = __RTE_QSBR_THRID_ARRAY_ELM(v, 0);
+ i < v->num_elems;
+ i++, reg_thread_id++) {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
+ id = i << __RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ __RTE_RCU_DP_LOG(DEBUG,
+ "%s: check: token = %"PRIu64", wait = %d, Bit Map = 0x%"PRIx64", Thread ID = %d",
+ __func__, t, wait, bmap, id + j);
+ c = __atomic_load_n(
+ &v->qsbr_cnt[id + j].cnt,
+ __ATOMIC_ACQUIRE);
+ __RTE_RCU_DP_LOG(DEBUG,
+ "%s: status: token = %"PRIu64", wait = %d, Thread QS cnt = %"PRIu64", Thread ID = %d",
+ __func__, t, wait, c, id+j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(c !=
+ __RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(reg_thread_id,
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+ }
+
+ return 1;
+}
+
+/* Check the quiescent state counter for all threads, assuming that
+ * all the threads have registered.
+ */
+static __rte_always_inline int
+__rte_rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i;
+ struct rte_rcu_qsbr_cnt *cnt;
+ uint64_t c;
+
+ for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
+ __RTE_RCU_DP_LOG(DEBUG,
+ "%s: check: token = %"PRIu64", wait = %d, Thread ID = %d",
+ __func__, t, wait, i);
+ while (1) {
+ c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
+ __RTE_RCU_DP_LOG(DEBUG,
+ "%s: status: token = %"PRIu64", wait = %d, Thread QS cnt = %"PRIu64", Thread ID = %d",
+ __func__, t, wait, c, i);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (likely(c == __RTE_QSBR_CNT_THR_OFFLINE || c >= t))
+ break;
+
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ }
+ }
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same QS variable, it must update the quiescent state status before
+ * calling this API.
+ *
+ * 2) In addition, when calling from multiple threads, only
+ * one of those threads can be reporting the quiescent state status
+ * on a given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state referenced by token 't'.
+ * @return
+ * - 0 if all reader threads have NOT passed through specified number
+ * of quiescent states.
+ * - 1 if all reader threads have passed through specified number
+ * of quiescent states.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ RTE_ASSERT(v != NULL);
+
+ if (likely(v->num_threads == v->max_threads))
+ return __rte_rcu_qsbr_check_all(v, t, wait);
+ else
+ return __rte_rcu_qsbr_check_selective(v, t, wait);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, only one of
+ * those threads can be reporting the quiescent state status on a
+ * given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report quiescent state
+ * on this QS variable (i.e. the calling thread is also part of the
+ * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - NULL parameters are passed
+ */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..795c400fd
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,13 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_log_type;
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_thread_register;
+ rte_rcu_qsbr_thread_unregister;
+ rte_rcu_qsbr_synchronize;
+ rte_rcu_qsbr_dump;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index a379dd682..885ef0b61 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'stack', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost', 'rcu',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index f020bb10c..7c9b4b538 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
* [dpdk-dev] [PATCH v8 2/4] test/rcu_qsbr: add API and functional tests
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 0/4] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-04-26 4:39 ` Honnappa Nagarahalli
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 1/4] rcu: " Honnappa Nagarahalli
@ 2019-04-26 4:39 ` Honnappa Nagarahalli
2019-04-26 4:39 ` Honnappa Nagarahalli
2019-04-29 20:35 ` Thomas Monjalon
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 3/4] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
` (2 subsequent siblings)
5 siblings, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-26 4:39 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 704 +++++++++++++++++++++++
5 files changed, 1738 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index 54f706792..68d6b4fbc 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -218,6 +218,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 72c56e528..fba66045f 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -700,6 +700,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 80cdea5d1..dd48b35a2 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -111,6 +111,8 @@ test_sources = files('commands.c',
'test_timer_racecond.c',
'test_timer_secondary.c',
'test_ticketlock.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -137,7 +139,8 @@ test_deps = ['acl',
'reorder',
'ring',
'stack',
- 'timer'
+ 'timer',
+ 'rcu'
]
# All test cases in fast_parallel_test_names list are parallel
@@ -176,6 +179,7 @@ fast_parallel_test_names = [
'ring_autotest',
'ring_pmd_autotest',
'rwlock_autotest',
+ 'rcu_qsbr_autotest',
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
@@ -243,6 +247,7 @@ perf_test_names = [
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
'stack_nb_perf_autotest',
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..ed6934a47
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,1014 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+/* Make sure that this has the same value as __RTE_QSBR_CNT_INIT */
+#define TEST_RCU_QSBR_CNT_INIT 1
+
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ int i;
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ t[i] = (struct rte_rcu_qsbr *)rte_zmalloc(NULL, sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_get_memsize: Returns the size of memory, in bytes, occupied
+ * by a QS variable supporting the given maximum number of threads.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
+ printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(0);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1), "Get Memsize for 0 threads");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with cache line size of 64B - 8384
+ * for machines with cache line size of 128B - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_init: Initialize a QSBR variable.
+ */
+static int
+test_rcu_qsbr_init(void)
+{
+ int r;
+
+ printf("\nTest rte_rcu_qsbr_init()\n");
+
+ r = rte_rcu_qsbr_init(NULL, TEST_RCU_MAX_LCORE);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "NULL variable");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ int ret;
+
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register valid thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Valid thread id");
+
+ /* Re-registering should not return error */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already registered thread id");
+
+ /* Register valid thread id - max allowed thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], TEST_RCU_MAX_LCORE - 1);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Max thread id");
+
+ ret = rte_rcu_qsbr_thread_register(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Invalid thread id");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ ret = rte_rcu_qsbr_thread_unregister(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Invalid thread id");
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "disabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /* Test with enabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "enabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_quiescent(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_quiescent(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the worker threads have entered the
+ * quiescent state 'n' number of times. 'n' is provided in the
+ * rte_rcu_qsbr_start API.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait till all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+ /* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread to
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread from
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ /* Negative tests */
+ rte_rcu_qsbr_dump(NULL, t[0]);
+ rte_rcu_qsbr_dump(stdout, NULL);
+ rte_rcu_qsbr_dump(NULL, NULL);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, lcore_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, lcore_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Multi writer, Multiple QS variable, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ uint8_t test_cores;
+
+ writer_done = 0;
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+ printf("Test: %d writers, %d QSBR variable, simultaneous QSBR queries\n"
+ , test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_init() < 0)
+ goto test_fail;
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..16a43f8db
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,704 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <inttypes.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Maximum number of lcores supported by these tests. */
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static volatile uint8_t writer_done;
+static volatile uint8_t all_registered;
+static volatile uint32_t thr_id;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+/* Simple way to allocate thread ids in 0 to TEST_RCU_MAX_LCORE space */
+static inline uint32_t
+alloc_thread_id(void)
+{
+ uint32_t tmp_thr_id;
+
+ tmp_thr_id = __atomic_fetch_add(&thr_id, 1, __ATOMIC_RELAXED);
+ if (tmp_thr_id >= TEST_RCU_MAX_LCORE)
+ printf("Invalid thread id %u\n", tmp_thr_id);
+
+ return tmp_thr_id;
+}
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t thread_id = alloc_thread_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ /* Register for report QS */
+ rte_rcu_qsbr_thread_register(t[0], thread_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], thread_id);
+ /* Unregister before exiting so the writer does not wait for this thread */
+ rte_rcu_qsbr_thread_unregister(t[0], thread_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple Readers, Single QS var, Non-Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: %d Readers/1 Writer ('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores - 1;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %"PRIi64"\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %"PRIi64"\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %"PRIi64"\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %"PRIi64"\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Single writer, Multiple readers, Single QS variable
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %"PRIi64"\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %"PRIi64"\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writer, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i, sz;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* The number of readers does not matter for the QS variable in this
+ * test case, as no reader will be registered.
+ */
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+ /* Wait until all writers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %"PRIi64"\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %"PRIi64"\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t thread_id = alloc_thread_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, thread_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, thread_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, thread_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, thread_id);
+ rte_rcu_qsbr_thread_offline(temp, thread_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, thread_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Blocking QSBR Check\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(online/update/offline): %"PRIi64"\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %"PRIi64"\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Non-Blocking QSBR check\n", num_cores);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(online/update/offline): %"PRIi64"\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %"PRIi64"\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ printf("Number of cores provided = %d\n", num_cores);
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+ if (num_cores > TEST_RCU_MAX_LCORE) {
+ printf("Test failed! %d cores supported\n", TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with all reader threads registered\n");
+ printf("--------------------------------------------\n");
+ all_registered = 1;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ /* Make sure the actual number of cores provided is less than
+ * TEST_RCU_MAX_LCORE. This will allow for some threads not
+ * to be registered on the QS variable.
+ */
+ if (num_cores >= TEST_RCU_MAX_LCORE) {
+ printf("Test failed! number of cores provided should be less than %d\n",
+ TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with some of reader threads registered\n");
+ printf("------------------------------------------------\n");
+ all_registered = 0;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ return 0;
+
+test_fail:
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
* [dpdk-dev] [PATCH v8 2/4] test/rcu_qsbr: add API and functional tests
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 2/4] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-04-26 4:39 ` Honnappa Nagarahalli
2019-04-29 20:35 ` Thomas Monjalon
1 sibling, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-26 4:39 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
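For context, the counter mechanism these tests exercise (start/check tokens and per-thread quiescent-state reports) can be sketched as a small single-threaded model. This is illustrative only: the class and method names below are not the DPDK API, and the real rte_rcu_qsbr implementation uses per-thread atomic counters and memory ordering that this model omits.

```python
# Minimal model of the QSBR grace-period accounting checked by these tests.
# The global token mirrors the QSBR variable's counter; a check for a given
# token succeeds once every registered reader has reported a quiescent state
# at or after that token.

class QsbrModel:
    def __init__(self):
        self.token = 1          # mirrors TEST_RCU_QSBR_CNT_INIT
        self.cnt = {}           # per-thread quiescent-state counters

    def register(self, tid):
        # A newly registered thread starts at the current token so it does
        # not block grace periods started before it existed.
        self.cnt[tid] = self.token

    def unregister(self, tid):
        self.cnt.pop(tid, None)

    def start(self):
        # Writer side: advance the global counter and return the target
        # token (hence the tests expect TEST_RCU_QSBR_CNT_INIT + 1).
        self.token += 1
        return self.token

    def quiescent(self, tid):
        # Reader side: report that this thread holds no references now.
        self.cnt[tid] = self.token

    def check(self, token):
        # Non-blocking check: True once all registered readers have passed
        # through a quiescent state at or after 'token'.
        return all(c >= token for c in self.cnt.values())

q = QsbrModel()
q.register(0)
q.register(1)
tok = q.start()
assert tok == 2             # TEST_RCU_QSBR_CNT_INIT + 1
assert not q.check(tok)     # neither reader has reported yet
q.quiescent(0)
assert not q.check(tok)     # reader 1 still outstanding
q.quiescent(1)
assert q.check(tok)         # grace period complete; safe to free
```

This also shows why unregistering a reader unblocks a pending check, which the functional tests rely on when readers exit before the writer finishes.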
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 7 +-
app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 704 +++++++++++++++++++++++
5 files changed, 1738 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index 54f706792..68d6b4fbc 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -218,6 +218,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 72c56e528..fba66045f 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -700,6 +700,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 80cdea5d1..dd48b35a2 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -111,6 +111,8 @@ test_sources = files('commands.c',
'test_timer_racecond.c',
'test_timer_secondary.c',
'test_ticketlock.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_version.c',
'virtual_pmd.c'
)
@@ -137,7 +139,8 @@ test_deps = ['acl',
'reorder',
'ring',
'stack',
- 'timer'
+ 'timer',
+ 'rcu'
]
# All test cases in fast_parallel_test_names list are parallel
@@ -176,6 +179,7 @@ fast_parallel_test_names = [
'ring_autotest',
'ring_pmd_autotest',
'rwlock_autotest',
+ 'rcu_qsbr_autotest',
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
@@ -243,6 +247,7 @@ perf_test_names = [
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
'stack_nb_perf_autotest',
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..ed6934a47
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,1014 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+/* Make sure that this has the same value as __RTE_QSBR_CNT_INIT */
+#define TEST_RCU_QSBR_CNT_INIT 1
+
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ int i;
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ t[i] = (struct rte_rcu_qsbr *)rte_zmalloc(NULL, sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread, to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
+ printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(0);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1), "Get Memsize for 0 threads");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with cache line size of 64B - 8384
+ * for machines with cache line size of 128B - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_init: Initialize a QSBR variable.
+ */
+static int
+test_rcu_qsbr_init(void)
+{
+ int r;
+
+ printf("\nTest rte_rcu_qsbr_init()\n");
+
+ r = rte_rcu_qsbr_init(NULL, TEST_RCU_MAX_LCORE);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "NULL variable");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread, to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ int ret;
+
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register valid thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Valid thread id");
+
+ /* Re-registering should not return error */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already registered thread id");
+
+ /* Register valid thread id - max allowed thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], TEST_RCU_MAX_LCORE - 1);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Max thread id");
+
+ ret = rte_rcu_qsbr_thread_register(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Invalid thread id");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread, from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ ret = rte_rcu_qsbr_thread_unregister(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Invalid thread id");
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "disabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /* Test with enabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "enabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE - 1
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_quiescent(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_quiescent(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the worker threads have entered the
+ * quiescent state 'n' number of times. 'n' is provided in the rte_rcu_qsbr_start API.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait till all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+ /* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread, to
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread, from
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ /* Negative tests */
+ rte_rcu_qsbr_dump(NULL, t[0]);
+ rte_rcu_qsbr_dump(stdout, NULL);
+ rte_rcu_qsbr_dump(NULL, NULL);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, lcore_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, lcore_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ snprintf(hash_name[hash_id], sizeof(hash_name[hash_id]), "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Multi writer, Multiple QS variable, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ uint8_t test_cores;
+
+ writer_done = 0;
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+ printf("Test: %d writers, %d QSBR variables, simultaneous QSBR queries\n",
+ test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_init() < 0)
+ goto test_fail;
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..16a43f8db
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,704 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <inttypes.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static volatile uint8_t writer_done;
+static volatile uint8_t all_registered;
+static volatile uint32_t thr_id;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+/* Simple way to allocate thread ids in 0 to TEST_RCU_MAX_LCORE space */
+static inline uint32_t
+alloc_thread_id(void)
+{
+ uint32_t tmp_thr_id;
+
+ tmp_thr_id = __atomic_fetch_add(&thr_id, 1, __ATOMIC_RELAXED);
+ if (tmp_thr_id >= TEST_RCU_MAX_LCORE)
+ printf("Invalid thread id %u\n", tmp_thr_id);
+
+ return tmp_thr_id;
+}
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t thread_id = alloc_thread_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ /* Register for report QS */
+ rte_rcu_qsbr_thread_register(t[0], thread_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], thread_id);
+ /* Unregister before exiting to avoid writer from waiting */
+ rte_rcu_qsbr_thread_unregister(t[0], thread_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple Readers, Single QS var, Non-Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: %d Readers/1 Writer ('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores - 1;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %"PRIi64"\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %"PRIi64"\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %"PRIi64"\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %"PRIi64"\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Single writer, Multiple readers, Single QS variable
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %"PRIi64"\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %"PRIi64"\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writer, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i, sz;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* Number of readers does not matter for QS variable in this test
+ * case as no reader will be registered.
+ */
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+ /* Wait until all writers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %"PRIi64"\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %"PRIi64"\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t thread_id = alloc_thread_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, thread_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, thread_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, thread_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, thread_id);
+ rte_rcu_qsbr_thread_offline(temp, thread_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, thread_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ snprintf(hash_name[hash_id], sizeof(hash_name[hash_id]), "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Blocking QSBR Check\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(online/update/offline): %"PRIi64"\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %"PRIi64"\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Non-Blocking QSBR check\n", num_cores);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update (online/update/offline): %"PRIi64"\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check (start, check): %"PRIi64"\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ printf("Number of cores provided = %d\n", num_cores);
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+ if (num_cores > TEST_RCU_MAX_LCORE) {
+ printf("Test failed! Supports at most %d cores\n",
+ TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with all reader threads registered\n");
+ printf("--------------------------------------------\n");
+ all_registered = 1;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ /* Make sure the actual number of cores used is less than
+ * TEST_RCU_MAX_LCORE, so that some of the reader threads can be
+ * left unregistered on the QS variable.
+ */
+ if (num_cores >= TEST_RCU_MAX_LCORE) {
+ printf("Test failed! number of cores provided should be less than %d\n",
+ TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with some of reader threads registered\n");
+ printf("------------------------------------------------\n");
+ all_registered = 0;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ return 0;
+
+test_fail:
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
* [dpdk-dev] [PATCH v8 3/4] doc/rcu: add lib_rcu documentation
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 0/4] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
` (2 preceding siblings ...)
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 2/4] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-04-26 4:39 ` Honnappa Nagarahalli
2019-04-26 4:39 ` Honnappa Nagarahalli
2019-04-26 4:40 ` [dpdk-dev] [PATCH v8 4/4] doc: added RCU to the release notes Honnappa Nagarahalli
2019-04-26 12:04 ` [dpdk-dev] [PATCH v8 0/4] lib/rcu: add RCU library supporting QSBR mechanism Ananyev, Konstantin
5 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-26 4:39 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++++++
5 files changed, 698 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index de1e215dd..8f0e84de6 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 7722fc3e9..b9896cb63 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..e7ca1dacb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,509 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em;font-weight:bold}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.75001em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st13 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st14 {fill:#000000;font-family:Century Gothic;font-size:1.99999em}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.99999em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-239);marker-start:url(#mrkr5-237);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-248);marker-start:url(#mrkr5-246);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-239);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-237" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-239" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-246" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-248" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(327.227,-946.908)">
+ <title>Sheet.3</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(460.665,-944.869)">
+ <title>Sheet.4</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(519.302,-950.79)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(612.438,-944.869)">
+ <title>Sheet.6</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(664.388,-945.889)">
+ <title>Sheet.7</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(723.025,-951.494)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(814.123,-945.889)">
+ <title>Sheet.9</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-952.759)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="146.259" cy="1169.64" width="292.52" height="36.7136"/>
+ <path d="M292.52 1151.29 L0 1151.29 L0 1188 L292.52 1188 L292.52 1151.29" class="st3"/>
+ <text x="58.76" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(379.176,-863.295)">
+ <title>Sheet.11</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(512.614,-861.255)">
+ <title>Sheet.12</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(561.284,-867.106)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(664.388,-861.255)">
+ <title>Sheet.14</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(716.337,-862.275)">
+ <title>Sheet.15</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(775.009,-867.81)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(866.073,-862.275)">
+ <title>Sheet.17</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(143.348,-873.294)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(474.188,-777.642)">
+ <title>Sheet.19</title>
+ <path d="M0 1143.01 C0 1138.04 4.07 1133.96 9.04 1133.96 L124.46 1133.96 C129.43 1133.96 133.44 1138.04 133.44 1143.01
+ L133.44 1179.01 C133.44 1183.99 129.43 1188 124.46 1188 L9.04 1188 C4.07 1188 0 1183.99 0 1179.01 L0 1143.01
+ Z" class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(608.645,-775.602)">
+ <title>Sheet.20</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(666.862,-781.311)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(760.418,-775.602)">
+ <title>Sheet.22</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(812.367,-776.622)">
+ <title>Sheet.23</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(870.584,-782.015)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(962.103,-776.622)">
+ <title>Sheet.25</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(142.645,-787.5)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(882.826,-574.263)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="21.32" y="1173.77" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(419.545,-660.119)">
+ <title>Sheet.29</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(419.545,-684.783)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L82.7 1187.36 L151.2 1172.07" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(214.454,-663.095)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(571.287,-681.326)">
+ <title>Sheet.33</title>
+ <path d="M0 738.67 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(515.013,-1130.65)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="60.7243" cy="1166.58" width="121.45" height="42.8314"/>
+ <path d="M121.45 1145.17 L0 1145.17 L0 1188 L121.45 1188 L121.45 1145.17" class="st3"/>
+ <text x="26.02" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(434.372,-1096.8)">
+ <title>Sheet.35</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(434.372,-1100.37)">
+ <title>Sheet.36</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(193.5,-1103.76)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(714.3,-675.425)">
+ <title>Sheet.38</title>
+ <path d="M0 732.77 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(795.979,-637.904)">
+ <title>Sheet.39</title>
+ <path d="M0 1112.54 L0 1188 L0 1112.54" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(716.782,-675.425)">
+ <title>Sheet.40</title>
+ <path d="M79.2 1188 L52.71 1187.94 L0 1147.21" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(803.572,-639.285)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="172.421" cy="1152.51" width="344.85" height="70.9752"/>
+ <path d="M344.84 1117.02 L0 1117.02 L0 1188 L344.84 1188 L344.84 1117.02" class="st3"/>
+      <text x="0" y="1133.61" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries 1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(680.801,-1130.65)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="42.0169" cy="1166.58" width="84.04" height="42.8314"/>
+ <path d="M84.03 1145.17 L0 1145.17 L0 1188 L84.03 1188 L84.03 1145.17" class="st3"/>
+ <text x="18.89" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(811.005,-1110.05)">
+ <title>Sheet.48</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(658.61,-1083.99)">
+ <title>Sheet.49</title>
+ <path d="M153.05 1149.63 L113.7 1149.57 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(798.359,-1110.46)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="107.799" cy="1167.81" width="215.6" height="40.3845"/>
+ <path d="M215.6 1147.62 L0 1147.62 L0 1188 L215.6 1188 L215.6 1147.62" class="st3"/>
+ <text x="43.79" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(599.196,-662.779)">
+ <title>Sheet.51</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(464.931,-1052.95)">
+ <title>Sheet.52</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(464.931,-1056.52)">
+ <title>Sheet.53</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(225,-1058.76)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(711.244,-662.779)">
+ <title>Sheet.56</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(664.897,-1045.31)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L146.76 1112.94" class="st13"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(619.059,-848.701)">
+ <title>Sheet.58</title>
+ <path d="M432.2 1184.24 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(1038.62,-837.364)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="130" cy="1167.81" width="260.01" height="40.3845"/>
+ <path d="M260 1147.62 L0 1147.62 L0 1188 L260 1188 L260 1147.62" class="st3"/>
+ <text x="52.25" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(621.606,-848.828)">
+ <title>Sheet.60</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(824.31,-849.848)">
+ <title>Sheet.61</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(345.944,-933.143)">
+ <title>Sheet.62</title>
+ <path d="M705.32 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(1038.62,-915.684)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="137.691" cy="1167.81" width="275.39" height="40.3845"/>
+ <path d="M275.38 1147.62 L0 1147.62 L0 1188 L275.38 1188 L275.38 1147.62" class="st3"/>
+ <text x="55.18" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(346.581,-932.442)">
+ <title>Sheet.64</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(621.606,-933.461)">
+ <title>Sheet.65</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(856.905,-934.481)">
+ <title>Sheet.66</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(472.82,-756.389)">
+ <title>Sheet.67</title>
+ <path d="M578.44 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(473.456,-755.688)">
+ <title>Sheet.68</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(1016.87,-757.728)">
+ <title>Sheet.69</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(1060.04,-738.651)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1167.81" width="193.55" height="40.3845"/>
+ <path d="M193.55 1147.62 L0 1147.62 L0 1188 L193.55 1188 L193.55 1147.62" class="st3"/>
+ <text x="31.03" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(190.02,-464.886)">
+ <title>Sheet.71</title>
+ <path d="M0 1151.91 C0 1148.19 3.88 1145.17 8.66 1145.17 L43.29 1145.17 C48.13 1145.17 51.95 1148.19 51.95 1151.91 L51.95
+ 1181.26 C51.95 1185.03 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1185.03 0 1181.26 L0 1151.91 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(259.003,-466.895)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
            x="0" dy="1.2em" class="st9">i.e. non-critical section or quiescent state.</tspan></text>    </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(190.02,-389.169)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="25.9746" cy="1166.58" width="51.95" height="42.8314"/>
+ <path d="M0 1152.31 C0 1148.39 1.43 1145.17 3.16 1145.17 L48.79 1145.17 C50.55 1145.17 51.95 1148.39 51.95 1152.31 L51.95
+ 1180.86 C51.95 1184.83 50.55 1188 48.79 1188 L3.16 1188 C1.43 1188 0 1184.83 0 1180.86 L0 1152.31 Z"
+ class="st2"/>
+ <text x="12.9" y="1173.78" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(259.003,-388.777)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(289.017,-301.151)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="332.491" cy="1160.47" width="664.99" height="55.0626"/>
+ <path d="M664.98 1132.94 L0 1132.94 L0 1188 L664.98 1188 L664.98 1132.94" class="st3"/>
+ <text x="0" y="1153.27" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is removed <tspan
+ x="0" dy="1.2em" class="st9">using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(177.543,-315.596)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(288,-239.327)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1175.01" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted entry.</text> </g>
+ <g id="shape78-217" v:mID="78" v:groupContext="shape" transform="translate(177.543,-240.744)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="34.3786" cy="1166.58" width="68.76" height="42.8314"/>
+ <path d="M68.76 1145.17 L0 1145.17 L0 1188 L68.76 1188 L68.76 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-221" v:mID="79" v:groupContext="shape" transform="translate(289.228,-163.612)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1160.61" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during which <tspan
+ x="0" dy="1.2em" class="st9">memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-226" v:mID="80" v:groupContext="shape" transform="translate(187.999,-162)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="39.5985" cy="1166.58" width="79.2" height="42.8314"/>
+ <path d="M79.2 1145.17 L0 1145.17 L0 1188 L79.2 1188 L79.2 1145.17" class="st3"/>
+ <text x="0" y="1158.96" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-231" v:mID="83" v:groupContext="shape" transform="translate(572.146,-1080.07)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L132.49 1188" class="st16"/>
+ </g>
+ <g id="shape84-240" v:mID="84" v:groupContext="shape" transform="translate(599.196,-1042.14)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L104.28 1188" class="st18"/>
+ </g>
+ <g id="shape85-249" v:mID="85" v:groupContext="shape" transform="translate(980.637,-595.338)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L92.16 1188" class="st20"/>
+ </g>
+ <g id="shape86-254" v:mID="86" v:groupContext="shape" transform="translate(444.835,-603.428)">
+ <title>Sheet.86</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape87-257" v:mID="87" v:groupContext="shape" transform="translate(444.835,-637.489)">
+ <title>Sheet.87</title>
+ <path d="M0 1188 L84.43 1186.61 L154.36 1153.31" class="st7"/>
+ </g>
+ <g id="shape88-260" v:mID="88" v:groupContext="shape" transform="translate(241.369,-607.028)">
+ <title>Sheet.88</title>
+ <desc>Remove reference to entry2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry2</tspan></text> </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 95f5e7964..17df2c563 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -56,6 +56,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..55d44e15d
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,185 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-less data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for ex: real-time applications).
+
+In the following paragraphs, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or anything that is representative of
+memory, for ex: an index of a free element array.
+
+Since these data structures are lock-less, the writers and readers
+are accessing the data structures concurrently. Hence, while removing
+an element from a data structure, the writers cannot return the memory
+to the allocator without knowing that the readers are not
+referencing that element/memory anymore. Thus, it is required to
+separate the operation of removing an element into 2 steps:
+
+Delete: in this step, the writer removes the reference to the element from
+the data structure but does not return the associated memory to the
+allocator. This will ensure that new readers will not get a reference to
+the removed element. Removing the reference is an atomic operation.
+
+Free (Reclaim): in this step, the writer returns the memory to the
+memory allocator, only after knowing that all the readers have stopped
+referencing the deleted element.
+
+This library helps the writer determine when it is safe to free the
+memory.
+
+This library makes use of thread Quiescent State (QS).
+
+What is Quiescent State
+-----------------------
+Quiescent State can be defined as 'any point in the thread execution where the
+thread does not hold a reference to shared memory'. It is up to the application
+to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. When it is
+accessing D1, if the writer has to remove an element from D1, the
+writer cannot free the memory associated with that element immediately.
+The writer can return the memory to the allocator only after reader
+thread 1 stops referencing D1. In other words, reader thread 1 has to
+enter a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, the writer has to
+wait till reader thread 2 enters a quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+a quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened. So, reader thread 3 will not have a reference to the
+deleted entry.
+
+It can be noted that the critical section for D2 is a quiescent state
+for D1, i.e. for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state for Dx.
+
+Since memory is not freed immediately, there might be a need for
+provisioning of additional memory, depending on the application requirements.
+
+Factors affecting the RCU mechanism
+-----------------------------------
+
+It is important to make sure that this library keeps the overhead of
+identifying the end of the grace period, and the subsequent freeing of
+memory, to a minimum. The following paragraphs explain how the grace period
+and critical section affect this overhead.
+
+The writer has to poll the readers to identify the end of the grace period.
+Polling introduces memory accesses and wastes CPU cycles. The memory
+is not available for reuse during the grace period. Longer grace periods
+exacerbate these conditions.
+
+The duration of the grace period is proportional to the length of the
+critical section and the number of reader threads. Keeping the critical
+sections smaller will keep the grace period smaller. However, smaller
+critical sections require additional CPU cycles (due to additional
+reporting) in the readers.
+
+Hence, what is needed is the combination of a small grace period and a
+large critical section. This library addresses this by allowing the writer
+to do other work without having to block till the readers report their
+quiescent state.
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the start and end of a while(1) loop (where no
+references to shared data structures are kept) act as perfect quiescent
+states. This will combine all the shared data structure accesses into a
+single, large critical section, which helps keep the overhead on the
+reader side to a minimum.
+
+DPDK supports the pipeline model of packet processing and the use of
+service cores.
+In these use cases, a given data structure may not be used by all the
+workers in the application. The writer does not have to wait for all
+the workers to report their quiescent state. To provide the required
+flexibility, this library has a concept of QS variable. The application
+can create one QS variable per data structure to help it track the
+end of grace period for each data structure. This helps keep the grace
+period to a minimum.
+
+How to use this library
+-----------------------
+
+The application must allocate memory and initialize a QS variable.
+
+The application can call ``rte_rcu_qsbr_get_memsize`` to calculate the size
+of memory to allocate. This API takes, as a parameter, the maximum number
+of reader threads that will use this variable. Currently, a maximum of 1024
+threads is supported.
+
+Further, the application can initialize a QS variable using the API
+``rte_rcu_qsbr_init``.
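+The allocation and initialization steps can be sketched as follows (a
+minimal sketch; error handling is simplified and ``RTE_MAX_LCORE`` is used
+only as an example for the maximum number of reader threads):
+
+.. code-block:: c
+
+   #include <rte_rcu_qsbr.h>
+   #include <rte_malloc.h>
+
+   struct rte_rcu_qsbr *v;
+   size_t sz;
+
+   /* Calculate the memory size for the given number of reader threads. */
+   sz = rte_rcu_qsbr_get_memsize(RTE_MAX_LCORE);
+
+   v = (struct rte_rcu_qsbr *)rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
+   if (v == NULL)
+       rte_panic("Failed to allocate memory for QS variable\n");
+
+   /* Initialize the QS variable. */
+   rte_rcu_qsbr_init(v, RTE_MAX_LCORE);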
+
+Each reader thread is assumed to have a unique thread ID. Currently, the
+management of the thread ID (for ex: allocation/free) is left to the
+application. The thread ID should be in the range of 0 to
+(maximum number of threads - 1) provided while creating the QS variable.
+The application could also use lcore_id as the thread ID where applicable.
+
+``rte_rcu_qsbr_thread_register`` API will register a reader thread
+to report its quiescent state. This can be called from a reader thread.
+A control plane thread can also call this on behalf of a reader thread.
+The reader thread must call ``rte_rcu_qsbr_thread_online`` API to start
+reporting its quiescent state.
+
+Some of the use cases might require the reader threads to make
+blocking API calls (for ex: while using eventdev APIs). The writer thread
+should not wait for such reader threads to enter quiescent state.
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` API, before calling
+blocking APIs. It can call ``rte_rcu_qsbr_thread_online`` API once the blocking
+API call returns.
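+A reader thread's life cycle can be sketched as follows (a minimal sketch;
+the management of ``thread_id``, the ``done`` flag and the shared data
+structure accesses are application specific):
+
+.. code-block:: c
+
+   /* Register and start reporting the quiescent state. */
+   rte_rcu_qsbr_thread_register(v, thread_id);
+   rte_rcu_qsbr_thread_online(v, thread_id);
+
+   while (!done) {
+       /* Access the shared data structures (critical section). */
+
+       /* No references to shared entries are held here -
+        * report the quiescent state.
+        */
+       rte_rcu_qsbr_quiescent(v, thread_id);
+
+       /* Before a blocking API call, go offline. */
+       rte_rcu_qsbr_thread_offline(v, thread_id);
+       /* ... blocking call, for ex: an eventdev dequeue ... */
+       rte_rcu_qsbr_thread_online(v, thread_id);
+   }
+
+   /* Stop reporting the quiescent state. */
+   rte_rcu_qsbr_thread_offline(v, thread_id);
+   rte_rcu_qsbr_thread_unregister(v, thread_id);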
+
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the API ``rte_rcu_qsbr_start``. It is possible for multiple
+writer threads to query the quiescent state status simultaneously. Hence,
+``rte_rcu_qsbr_start`` returns a token to each caller.
+
+The writer thread must call the ``rte_rcu_qsbr_check`` API with the token to
+get the current quiescent state status. An option to block till all the
+reader threads enter the quiescent state is provided. If this API indicates
+that all the reader threads have entered the quiescent state, the
+application can free the deleted entry.
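+The writer-side delete-then-free flow described above can be sketched as
+follows (a minimal sketch; ``remove_entry`` and ``free_entry`` are
+hypothetical, application-specific helpers):
+
+.. code-block:: c
+
+   uint64_t token;
+
+   /* Delete: atomically remove the reference to the entry. */
+   remove_entry(d1, entry);
+
+   /* Trigger the readers to report their quiescent state. */
+   token = rte_rcu_qsbr_start(v);
+
+   /* ... do other useful work ... */
+
+   /* Block till all the registered readers have entered a quiescent
+    * state (or gone offline) after the token was acquired.
+    */
+   rte_rcu_qsbr_check(v, token, true);
+
+   /* Free: it is now safe to return the memory to the allocator. */
+   free_entry(entry);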
+
+The APIs ``rte_rcu_qsbr_start`` and ``rte_rcu_qsbr_check`` are lock free.
+Hence, they can be called concurrently from multiple writers even while
+running as worker threads.
+
+The separation of triggering the reporting from querying the status provides
+the writer threads flexibility to do useful work instead of blocking for the
+reader threads to enter the quiescent state or go offline. This reduces the
+memory accesses due to continuous polling for the status.
+
+``rte_rcu_qsbr_synchronize`` API combines the functionality of
+``rte_rcu_qsbr_start`` and blocking ``rte_rcu_qsbr_check`` into a single API.
+This API triggers the reader threads to report their quiescent state and
+polls till all the readers enter the quiescent state or go offline. This
+API does not allow the writer to do useful work while waiting and
+introduces additional memory accesses due to continuous polling.
+
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` and
+``rte_rcu_qsbr_thread_unregister`` APIs to remove itself from reporting its
+quiescent state. The ``rte_rcu_qsbr_check`` API will not wait for this reader
+thread to report the quiescent state status anymore.
+
+The reader threads should call ``rte_rcu_qsbr_quiescent`` API to indicate
+that they entered a quiescent state. This API checks if a writer has
+triggered a quiescent state query and updates the state accordingly.
+
+The ``rte_rcu_qsbr_lock`` and ``rte_rcu_qsbr_unlock`` APIs are empty
+functions. However, when ``CONFIG_RTE_LIBRTE_RCU_DEBUG`` is enabled, these
+APIs aid in debugging issues. One can mark the access to shared data
+structures on the reader side using these APIs. The ``rte_rcu_qsbr_quiescent``
+API will check if all the locks are unlocked.
--
2.17.1
* [dpdk-dev] [PATCH v8 3/4] doc/rcu: add lib_rcu documentation
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 3/4] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
@ 2019-04-26 4:39 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-26 4:39 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++++++
5 files changed, 698 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index de1e215dd..8f0e84de6 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 7722fc3e9..b9896cb63 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..e7ca1dacb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,509 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em;font-weight:bold}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.75001em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st13 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st14 {fill:#000000;font-family:Century Gothic;font-size:1.99999em}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.99999em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-239);marker-start:url(#mrkr5-237);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-248);marker-start:url(#mrkr5-246);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-239);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-237" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-239" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-246" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-248" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(327.227,-946.908)">
+ <title>Sheet.3</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(460.665,-944.869)">
+ <title>Sheet.4</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(519.302,-950.79)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(612.438,-944.869)">
+ <title>Sheet.6</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(664.388,-945.889)">
+ <title>Sheet.7</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(723.025,-951.494)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(814.123,-945.889)">
+ <title>Sheet.9</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-952.759)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="146.259" cy="1169.64" width="292.52" height="36.7136"/>
+ <path d="M292.52 1151.29 L0 1151.29 L0 1188 L292.52 1188 L292.52 1151.29" class="st3"/>
+ <text x="58.76" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(379.176,-863.295)">
+ <title>Sheet.11</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(512.614,-861.255)">
+ <title>Sheet.12</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(561.284,-867.106)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(664.388,-861.255)">
+ <title>Sheet.14</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(716.337,-862.275)">
+ <title>Sheet.15</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(775.009,-867.81)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(866.073,-862.275)">
+ <title>Sheet.17</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(143.348,-873.294)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(474.188,-777.642)">
+ <title>Sheet.19</title>
+ <path d="M0 1143.01 C0 1138.04 4.07 1133.96 9.04 1133.96 L124.46 1133.96 C129.43 1133.96 133.44 1138.04 133.44 1143.01
+ L133.44 1179.01 C133.44 1183.99 129.43 1188 124.46 1188 L9.04 1188 C4.07 1188 0 1183.99 0 1179.01 L0 1143.01
+ Z" class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(608.645,-775.602)">
+ <title>Sheet.20</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(666.862,-781.311)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(760.418,-775.602)">
+ <title>Sheet.22</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(812.367,-776.622)">
+ <title>Sheet.23</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(870.584,-782.015)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(962.103,-776.622)">
+ <title>Sheet.25</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(142.645,-787.5)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(882.826,-574.263)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="21.32" y="1173.77" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(419.545,-660.119)">
+ <title>Sheet.29</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(419.545,-684.783)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L82.7 1187.36 L151.2 1172.07" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(214.454,-663.095)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(571.287,-681.326)">
+ <title>Sheet.33</title>
+ <path d="M0 738.67 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(515.013,-1130.65)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="60.7243" cy="1166.58" width="121.45" height="42.8314"/>
+ <path d="M121.45 1145.17 L0 1145.17 L0 1188 L121.45 1188 L121.45 1145.17" class="st3"/>
+ <text x="26.02" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(434.372,-1096.8)">
+ <title>Sheet.35</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(434.372,-1100.37)">
+ <title>Sheet.36</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(193.5,-1103.76)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(714.3,-675.425)">
+ <title>Sheet.38</title>
+ <path d="M0 732.77 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(795.979,-637.904)">
+ <title>Sheet.39</title>
+ <path d="M0 1112.54 L0 1188 L0 1112.54" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(716.782,-675.425)">
+ <title>Sheet.40</title>
+ <path d="M79.2 1188 L52.71 1187.94 L0 1147.21" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(803.572,-639.285)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="172.421" cy="1152.51" width="344.85" height="70.9752"/>
+ <path d="M344.84 1117.02 L0 1117.02 L0 1188 L344.84 1188 L344.84 1117.02" class="st3"/>
+ <text x="0" y="1133.61" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(680.801,-1130.65)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="42.0169" cy="1166.58" width="84.04" height="42.8314"/>
+ <path d="M84.03 1145.17 L0 1145.17 L0 1188 L84.03 1188 L84.03 1145.17" class="st3"/>
+ <text x="18.89" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(811.005,-1110.05)">
+ <title>Sheet.48</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(658.61,-1083.99)">
+ <title>Sheet.49</title>
+ <path d="M153.05 1149.63 L113.7 1149.57 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(798.359,-1110.46)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="107.799" cy="1167.81" width="215.6" height="40.3845"/>
+ <path d="M215.6 1147.62 L0 1147.62 L0 1188 L215.6 1188 L215.6 1147.62" class="st3"/>
+ <text x="43.79" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(599.196,-662.779)">
+ <title>Sheet.51</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(464.931,-1052.95)">
+ <title>Sheet.52</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(464.931,-1056.52)">
+ <title>Sheet.53</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(225,-1058.76)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(711.244,-662.779)">
+ <title>Sheet.56</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(664.897,-1045.31)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L146.76 1112.94" class="st13"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(619.059,-848.701)">
+ <title>Sheet.58</title>
+ <path d="M432.2 1184.24 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(1038.62,-837.364)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="130" cy="1167.81" width="260.01" height="40.3845"/>
+ <path d="M260 1147.62 L0 1147.62 L0 1188 L260 1188 L260 1147.62" class="st3"/>
+ <text x="52.25" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(621.606,-848.828)">
+ <title>Sheet.60</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(824.31,-849.848)">
+ <title>Sheet.61</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(345.944,-933.143)">
+ <title>Sheet.62</title>
+ <path d="M705.32 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(1038.62,-915.684)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="137.691" cy="1167.81" width="275.39" height="40.3845"/>
+ <path d="M275.38 1147.62 L0 1147.62 L0 1188 L275.38 1188 L275.38 1147.62" class="st3"/>
+ <text x="55.18" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(346.581,-932.442)">
+ <title>Sheet.64</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(621.606,-933.461)">
+ <title>Sheet.65</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(856.905,-934.481)">
+ <title>Sheet.66</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(472.82,-756.389)">
+ <title>Sheet.67</title>
+ <path d="M578.44 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(473.456,-755.688)">
+ <title>Sheet.68</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(1016.87,-757.728)">
+ <title>Sheet.69</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(1060.04,-738.651)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1167.81" width="193.55" height="40.3845"/>
+ <path d="M193.55 1147.62 L0 1147.62 L0 1188 L193.55 1188 L193.55 1147.62" class="st3"/>
+ <text x="31.03" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(190.02,-464.886)">
+ <title>Sheet.71</title>
+ <path d="M0 1151.91 C0 1148.19 3.88 1145.17 8.66 1145.17 L43.29 1145.17 C48.13 1145.17 51.95 1148.19 51.95 1151.91 L51.95
+ 1181.26 C51.95 1185.03 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1185.03 0 1181.26 L0 1151.91 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(259.003,-466.895)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. non critical section or quiescent state.</tspan></text> </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(190.02,-389.169)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="25.9746" cy="1166.58" width="51.95" height="42.8314"/>
+ <path d="M0 1152.31 C0 1148.39 1.43 1145.17 3.16 1145.17 L48.79 1145.17 C50.55 1145.17 51.95 1148.39 51.95 1152.31 L51.95
+ 1180.86 C51.95 1184.83 50.55 1188 48.79 1188 L3.16 1188 C1.43 1188 0 1184.83 0 1180.86 L0 1152.31 Z"
+ class="st2"/>
+ <text x="12.9" y="1173.78" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(259.003,-388.777)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(289.017,-301.151)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="332.491" cy="1160.47" width="664.99" height="55.0626"/>
+ <path d="M664.98 1132.94 L0 1132.94 L0 1188 L664.98 1188 L664.98 1132.94" class="st3"/>
+ <text x="0" y="1153.27" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is removed <tspan
+ x="0" dy="1.2em" class="st9">using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(177.543,-315.596)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(288,-239.327)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1175.01" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted entry.</text> </g>
+ <g id="shape78-217" v:mID="78" v:groupContext="shape" transform="translate(177.543,-240.744)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="34.3786" cy="1166.58" width="68.76" height="42.8314"/>
+ <path d="M68.76 1145.17 L0 1145.17 L0 1188 L68.76 1188 L68.76 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-221" v:mID="79" v:groupContext="shape" transform="translate(289.228,-163.612)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1160.61" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during which <tspan
+ x="0" dy="1.2em" class="st9">memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-226" v:mID="80" v:groupContext="shape" transform="translate(187.999,-162)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="39.5985" cy="1166.58" width="79.2" height="42.8314"/>
+ <path d="M79.2 1145.17 L0 1145.17 L0 1188 L79.2 1188 L79.2 1145.17" class="st3"/>
+ <text x="0" y="1158.96" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-231" v:mID="83" v:groupContext="shape" transform="translate(572.146,-1080.07)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L132.49 1188" class="st16"/>
+ </g>
+ <g id="shape84-240" v:mID="84" v:groupContext="shape" transform="translate(599.196,-1042.14)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L104.28 1188" class="st18"/>
+ </g>
+ <g id="shape85-249" v:mID="85" v:groupContext="shape" transform="translate(980.637,-595.338)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L92.16 1188" class="st20"/>
+ </g>
+ <g id="shape86-254" v:mID="86" v:groupContext="shape" transform="translate(444.835,-603.428)">
+ <title>Sheet.86</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape87-257" v:mID="87" v:groupContext="shape" transform="translate(444.835,-637.489)">
+ <title>Sheet.87</title>
+ <path d="M0 1188 L84.43 1186.61 L154.36 1153.31" class="st7"/>
+ </g>
+ <g id="shape88-260" v:mID="88" v:groupContext="shape" transform="translate(241.369,-607.028)">
+ <title>Sheet.88</title>
+ <desc>Remove reference to entry2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry2</tspan></text> </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 95f5e7964..17df2c563 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -56,6 +56,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..55d44e15d
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,185 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-less data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for example, real-time applications).
+
+In the following paragraphs, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or anything that is representative of
+memory, for example an index of a free element array.
+
+Since these data structures are lock-less, writers and readers access
+them concurrently. Hence, while removing an element from a data
+structure, a writer cannot return the memory to the allocator without
+knowing that no reader is still referencing that element/memory. The
+operation of removing an element must therefore be separated into two
+steps:
+
+Delete: in this step, the writer removes the reference to the element from
+the data structure but does not return the associated memory to the
+allocator. This will ensure that new readers will not get a reference to
+the removed element. Removing the reference is an atomic operation.
+
+Free (Reclaim): in this step, the writer returns the memory to the
+memory allocator, only after knowing that all the readers have stopped
+referencing the deleted element.
+
+This library helps the writer determine when it is safe to free the
+memory.
+
+This library makes use of the thread Quiescent State (QS) mechanism.
+
+What is Quiescent State
+-----------------------
+Quiescent State can be defined as 'any point in the thread execution where the
+thread does not hold a reference to shared memory'. It is up to the application
+to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. While it is
+accessing D1, if the writer has to remove an element from D1, the
+writer cannot free the memory associated with that element immediately.
+The writer can return the memory to the allocator only after the reader
+stops referencing D1. In other words, reader thread 1 has to enter
+a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, the writer has to
+wait until thread 2 enters a quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+a quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened, so reader thread 3 cannot have a reference to the
+deleted entry.
+
+Note that the critical sections for D2 are quiescent states
+for D1. i.e. for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state.
+
+Since the memory is not freed immediately, there might be a need to
+provision additional memory, depending on the application's requirements.
+
+Factors affecting RCU mechanism
+---------------------------------
+
+It is important to make sure that this library keeps the overhead of
+identifying the end of the grace period, and of the subsequent freeing of
+memory, to a minimum. The following paragraphs explain how the grace period
+and critical sections affect this overhead.
+
+The writer has to poll the readers to identify the end of the grace period.
+Polling introduces memory accesses and wastes CPU cycles. The memory
+is not available for reuse during the grace period. Longer grace periods
+exacerbate these conditions.
+
+The duration of the grace period is proportional to the length of the
+critical sections and the number of reader threads. Keeping the critical
+sections smaller keeps the grace period smaller. However, smaller
+critical sections require additional CPU cycles (due to additional
+reporting) in the readers.
+
+Hence, we need both a small grace period and large critical
+sections. This library addresses this by allowing the writer to do
+other work without having to block until the readers report their quiescent
+state.
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the start and end of the while(1) loop (where no
+references to shared data structures are kept) act as perfect quiescent
+states. This combines all the shared data structure accesses into a
+single, large critical section, which helps keep the overhead on the
+reader side to a minimum.
+
+DPDK supports the pipeline model of packet processing, as well as service
+cores. In these use cases, a given data structure may not be used by all
+the workers in the application, and the writer does not have to wait for
+all the workers to report their quiescent state. To provide the required
+flexibility, this library has the concept of a QS variable. The application
+can create one QS variable per data structure to help it track the
+end of the grace period for each data structure. This helps keep the grace
+period to a minimum.
+
+How to use this library
+-----------------------
+
+The application must allocate memory and initialize a QS variable.
+
+The application can call ``rte_rcu_qsbr_get_memsize`` to calculate the size
+of memory to allocate. This API takes the maximum number of reader threads
+that will use this QS variable as a parameter. Currently, a maximum of 1024
+threads is supported.
+
+Further, the application can initialize a QS variable using the API
+``rte_rcu_qsbr_init``.
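Putting the two steps above together, the allocation and initialization might look as follows. This is an illustrative sketch only: the exact prototypes and the use of ``rte_zmalloc`` are assumptions based on the API names in this text, and error handling is elided.

```c
/* Sketch: allocate and initialize a QS variable for up to 128 reader
 * threads. Prototypes are illustrative assumptions; check them error-free
 * against the headers before use. */
uint32_t max_threads = 128;
size_t sz = rte_rcu_qsbr_get_memsize(max_threads);
struct rte_rcu_qsbr *v = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
if (v == NULL)
        return -1;                      /* allocation failed */
rte_rcu_qsbr_init(v, max_threads);
```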
+
+Each reader thread is assumed to have a unique thread ID. Currently, the
+management of thread IDs (for example, allocation/free) is left to the
+application. The thread ID should be in the range 0 to (maximum number of
+threads - 1), as provided while creating the QS variable.
+The application could also use lcore_id as the thread ID where applicable.
+
+The ``rte_rcu_qsbr_thread_register`` API registers a reader thread
+to report its quiescent state. It can be called from the reader thread
+itself, or a control plane thread can call it on behalf of a reader thread.
+The reader thread must call the ``rte_rcu_qsbr_thread_online`` API to start
+reporting its quiescent state.
+
+Some use cases might require the reader threads to make
+blocking API calls (for example, while using eventdev APIs). The writer
+thread should not wait for such reader threads to enter the quiescent
+state. The reader thread must call the ``rte_rcu_qsbr_thread_offline`` API
+before calling blocking APIs. It can call the ``rte_rcu_qsbr_thread_online``
+API once the blocking API call returns.
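The reader-side APIs described in this section, together with the quiescent state reporting described below, can be combined into a thread skeleton along these lines. This is a sketch based on the API names in this text; the loop body and thread setup are illustrative assumptions.

```c
/* Illustrative reader-thread skeleton (not a verbatim DPDK example). */
static int reader_thread(void *arg)
{
        struct rte_rcu_qsbr *v = arg;
        unsigned int thread_id = rte_lcore_id();   /* lcore_id as thread ID */

        rte_rcu_qsbr_thread_register(v, thread_id);
        rte_rcu_qsbr_thread_online(v, thread_id);

        while (1) {
                /* ... access shared data structures (critical section) ... */

                /* End of loop: no references held -> report quiescent state */
                rte_rcu_qsbr_quiescent(v, thread_id);

                /* Around a blocking call, go offline instead:
                 * rte_rcu_qsbr_thread_offline(v, thread_id);
                 * some_blocking_call();            (hypothetical)
                 * rte_rcu_qsbr_thread_online(v, thread_id);
                 */
        }
        return 0;
}
```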
+
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the ``rte_rcu_qsbr_start`` API. It is possible for multiple
+writer threads to query the quiescent state status simultaneously; hence,
+``rte_rcu_qsbr_start`` returns a token to each caller.
+
+The writer thread must call the ``rte_rcu_qsbr_check`` API with the token
+to get the current quiescent state status. An option to block until all the
+reader threads enter the quiescent state is provided. If this API indicates
+that all the reader threads have entered the quiescent state, the
+application can free the deleted entry.
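The writer-side Delete/Free flow described above can be sketched as follows. The helpers ``delete_entry_from_d1`` and ``free_entry`` are hypothetical placeholders for the application's own data structure operations, and the prototypes of the RCU calls are illustrative assumptions.

```c
/* Illustrative writer-side sketch (not a verbatim DPDK example). */

/* Step 1: Delete -- atomically remove the element's reference from D1.
 * delete_entry_from_d1() is a hypothetical application helper. */
struct entry *e = delete_entry_from_d1();

/* Trigger the readers to report their quiescent state; keep the token. */
uint64_t token = rte_rcu_qsbr_start(v);

/* ... the writer can do other useful work here instead of blocking ... */

/* Step 2: Free -- block until every reader has passed through a
 * quiescent state (or gone offline) since the token was issued. */
rte_rcu_qsbr_check(v, token, true);
free_entry(e);                  /* hypothetical application helper */
```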
+
+The ``rte_rcu_qsbr_start`` and ``rte_rcu_qsbr_check`` APIs are lock-free.
+Hence, they can be called concurrently from multiple writers, even while
+running as worker threads.
+
+The separation of triggering the reporting from querying the status gives
+the writer threads the flexibility to do useful work instead of blocking
+until the reader threads enter the quiescent state or go offline. This
+reduces the memory accesses caused by continuous polling for the status.
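The token mechanism described above can be modeled in a few lines of plain C. The following is a single-threaded toy model, written to illustrate the counter scheme only: the names and layout are invented for this sketch and bear no relation to the library's actual implementation, which must additionally handle atomicity, memory ordering, and offline threads.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TOY_MAX_THREADS 8

/* Toy model of the QSBR token scheme (illustrative names only). */
struct toy_qsbr {
        uint64_t token;                  /* latest grace-period token */
        uint64_t cnt[TOY_MAX_THREADS];   /* token last seen by each reader */
        bool registered[TOY_MAX_THREADS];
};

static void toy_register(struct toy_qsbr *v, unsigned int id)
{
        v->registered[id] = true;
        v->cnt[id] = v->token;
}

/* Writer: start a new grace period and return its token. */
static uint64_t toy_start(struct toy_qsbr *v)
{
        return ++v->token;
}

/* Reader: report a quiescent state by acknowledging the latest token. */
static void toy_quiescent(struct toy_qsbr *v, unsigned int id)
{
        v->cnt[id] = v->token;
}

/* Writer: has every registered reader passed through a quiescent state
 * since 'token' was issued? */
static bool toy_check(const struct toy_qsbr *v, uint64_t token)
{
        for (unsigned int i = 0; i < TOY_MAX_THREADS; i++)
                if (v->registered[i] && v->cnt[i] < token)
                        return false;
        return true;
}
```

The grace period for a given token ends exactly when the last registered reader reports a quiescent state at or after that token.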
+
+The ``rte_rcu_qsbr_synchronize`` API combines the functionality of
+``rte_rcu_qsbr_start`` and a blocking ``rte_rcu_qsbr_check`` into a single
+API. This API triggers the reader threads to report their quiescent state
+and polls until all the readers enter the quiescent state or go offline.
+This API does not allow the writer to do useful work while waiting, and it
+introduces additional memory accesses due to the continuous polling.
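In terms of the APIs above, the equivalence can be sketched as follows; the second argument to ``rte_rcu_qsbr_synchronize`` is assumed here to identify the calling thread, and the exact prototypes are illustrative.

```c
/* Illustrative sketch: these two forms are roughly equivalent. */
rte_rcu_qsbr_synchronize(v, thread_id);

/* ... is roughly: */
uint64_t token = rte_rcu_qsbr_start(v);
rte_rcu_qsbr_check(v, token, true);     /* block until grace period ends */
```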
+
+The reader thread must call the ``rte_rcu_qsbr_thread_offline`` and
+``rte_rcu_qsbr_thread_unregister`` APIs to stop reporting its
+quiescent state. After that, the ``rte_rcu_qsbr_check`` API will not wait
+for this reader thread to report its quiescent state anymore.
+
+The reader threads should call the ``rte_rcu_qsbr_quiescent`` API to
+indicate that they have entered a quiescent state. This API checks if a
+writer has triggered a quiescent state query and updates the state
+accordingly.
+
+The ``rte_rcu_qsbr_lock`` and ``rte_rcu_qsbr_unlock`` APIs are empty
+functions. However, when ``CONFIG_RTE_LIBRTE_RCU_DEBUG`` is enabled, they
+aid in debugging issues. One can mark the access to shared data structures
+on the reader side using these APIs. The ``rte_rcu_qsbr_quiescent`` API
+will then check that all such locks have been unlocked.
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v8 4/4] doc: added RCU to the release notes
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 0/4] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
` (3 preceding siblings ...)
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 3/4] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
@ 2019-04-26 4:40 ` Honnappa Nagarahalli
2019-04-26 4:40 ` Honnappa Nagarahalli
2019-04-26 12:04 ` [dpdk-dev] [PATCH v8 0/4] lib/rcu: add RCU library supporting QSBR mechanism Ananyev, Konstantin
5 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-26 4:40 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Added RCU library addition to the release notes
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
doc/guides/rel_notes/release_19_05.rst | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index d5ed564ab..687c01bc1 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -68,6 +68,13 @@ New Features
Added a new lock-free stack handler, which uses the newly added stack
library.
+* **Added RCU library.**
+
+ Added RCU library supporting quiescent state based memory reclamation method.
+ This library helps identify the quiescent state of the reader threads so
+ that the writers can free the memory associated with the lock free data
+ structures.
+
* **Updated KNI module and PMD.**
Updated the KNI kernel module to set the max_mtu according to the given
@@ -330,6 +337,7 @@ The libraries prepended with a plus sign were incremented in this version.
librte_port.so.3
librte_power.so.1
librte_rawdev.so.1
+ + librte_rcu.so.1
librte_reorder.so.1
librte_ring.so.2
librte_sched.so.2
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/4] rcu: add RCU library supporting QSBR mechanism
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 1/4] rcu: " Honnappa Nagarahalli
2019-04-26 4:39 ` Honnappa Nagarahalli
@ 2019-04-26 8:13 ` Jerin Jacob Kollanukkaran
2019-04-26 8:13 ` Jerin Jacob Kollanukkaran
2019-04-28 3:25 ` Ruifeng Wang (Arm Technology China)
2 siblings, 1 reply; 260+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-04-26 8:13 UTC (permalink / raw)
To: Honnappa Nagarahalli, konstantin.ananyev, stephen, paulmck,
marko.kovacevic, dev
Cc: gavin.hu, dharmik.thakkar, malvika.gupta
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Honnappa Nagarahalli
> Sent: Friday, April 26, 2019 10:10 AM
> To: konstantin.ananyev@intel.com; stephen@networkplumber.org;
> paulmck@linux.ibm.com; marko.kovacevic@intel.com; dev@dpdk.org
> Cc: honnappa.nagarahalli@arm.com; gavin.hu@arm.com;
> dharmik.thakkar@arm.com; malvika.gupta@arm.com
> Subject: [dpdk-dev] [PATCH v8 1/4] rcu: add RCU library supporting QSBR
> mechanism
>
> Add RCU library supporting quiescent state based memory reclamation method.
> This library helps identify the quiescent state of the reader threads so that the
> writers can free the memory associated with the lock less data structures.
>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Steve Capper <steve.capper@arm.com>
> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
Tested using UT on a armv8.2 24 cores machine(octeontx2)
Tested-by: Jerin Jacob <jerinj@marvell.com>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v8 0/4] lib/rcu: add RCU library supporting QSBR mechanism
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 0/4] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
` (4 preceding siblings ...)
2019-04-26 4:40 ` [dpdk-dev] [PATCH v8 4/4] doc: added RCU to the release notes Honnappa Nagarahalli
@ 2019-04-26 12:04 ` Ananyev, Konstantin
2019-04-26 12:04 ` Ananyev, Konstantin
5 siblings, 1 reply; 260+ messages in thread
From: Ananyev, Konstantin @ 2019-04-26 12:04 UTC (permalink / raw)
To: Honnappa Nagarahalli, stephen, paulmck, Kovacevic, Marko, dev
Cc: gavin.hu, dharmik.thakkar, malvika.gupta
> -----Original Message-----
> From: Honnappa Nagarahalli [mailto:honnappa.nagarahalli@arm.com]
> Sent: Friday, April 26, 2019 5:40 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; stephen@networkplumber.org; paulmck@linux.ibm.com; Kovacevic, Marko
> <marko.kovacevic@intel.com>; dev@dpdk.org
> Cc: honnappa.nagarahalli@arm.com; gavin.hu@arm.com; dharmik.thakkar@arm.com; malvika.gupta@arm.com
> Subject: [PATCH v8 0/4] lib/rcu: add RCU library supporting QSBR mechanism
>
> Lock-less data structures provide scalability and determinism.
> They enable use cases where locking may not be allowed
> (for ex: real-time applications).
>
> In the following paras, the term 'memory' refers to memory allocated
> by typical APIs like malloc or anything that is representative of
> memory, for ex: an index of a free element array.
>
> Since these data structures are lock less, the writers and readers
> are accessing the data structures concurrently. Hence, while removing
> an element from a data structure, the writers cannot return the memory
> to the allocator, without knowing that the readers are not
> referencing that element/memory anymore. Hence, it is required to
> separate the operation of removing an element into 2 steps:
>
> Delete: in this step, the writer removes the reference to the element from
> the data structure but does not return the associated memory to the
> allocator. This will ensure that new readers will not get a reference to
> the removed element. Removing the reference is an atomic operation.
>
> Free(Reclaim): in this step, the writer returns the memory to the
> memory allocator, only after knowing that all the readers have stopped
> referencing the deleted element.
>
> This library helps the writer determine when it is safe to free the
> memory.
>
> This library makes use of thread Quiescent State (QS). QS can be
> defined as 'any point in the thread execution where the thread does
> not hold a reference to shared memory'. It is up to the application to
> determine its quiescent state. Let us consider the following diagram:
>
> Time -------------------------------------------------->
>
> | |
> RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
> | |
> RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
> | |
> RT3 $++++****D1****+++***|D2***|++++++**D3*****++++$
> | |
> |<--->|
> Del | Free
> |
> Cannot free memory
> during this period
> (Grace Period)
>
> RTx - Reader thread
> < and > - Start and end of while(1) loop
> ***Dx*** - Reader thread is accessing the shared data structure Dx.
> i.e. critical section.
> +++ - Reader thread is not accessing any shared data structure.
> i.e. non critical section or quiescent state.
> Del - Point in time when the reference to the entry is removed using
> atomic operation.
> Free - Point in time when the writer can free the entry.
> Grace Period - Time duration between Del and Free, during which memory cannot
> be freed.
>
> As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
> accessing D2, if the writer has to remove an element from D2, the
> writer cannot free the memory associated with that element immediately.
> The writer can return the memory to the allocator only after the reader
> stops referencing D2. In other words, reader thread RT1 has to enter
> a quiescent state.
>
> Similarly, since thread RT3 is also accessing D2, the writer has to wait
> till RT3 enters a quiescent state as well.
>
> However, the writer does not need to wait for RT2 to enter quiescent state.
> Thread RT2 was not accessing D2 when the delete operation happened.
> So, RT2 will not get a reference to the deleted entry.
>
> It can be noted that the critical sections for D2 and D3 are quiescent states
> for D1. i.e. for a given data structure Dx, any point in the thread execution
> that does not reference Dx is a quiescent state.
>
> Since memory is not freed immediately, there might be a need for
> provisioning of additional memory, depending on the application requirements.
>
> It is important to make sure that this library keeps the overhead of
> identifying the end of grace period and subsequent freeing of memory,
> to a minimum. The following paras explain how grace period and critical
> section affect this overhead.
>
> The writer has to poll the readers to identify the end of the grace period.
> Polling introduces memory accesses and wastes CPU cycles. The memory
> is not available for reuse during the grace period. Longer grace periods
> exacerbate these conditions.
>
> The duration of the grace period is proportional to the length of the
> critical sections and the number of reader threads. Keeping the critical
> sections smaller keeps the grace period smaller, but requires additional
> CPU cycles in the readers (due to more frequent reporting).
>
> Hence, we want the combination of a short grace period and large critical
> sections. This library addresses this by allowing the writer to do
> other work without having to block till the readers report their quiescent
> state.
>
> For DPDK applications, the start and end of while(1) loop (where no
> references to shared data structures are kept) act as perfect quiescent
> states. This will combine all the shared data structure accesses into a
> single, large critical section which helps keep the overhead on the
> reader side to a minimum.
>
> DPDK supports pipeline model of packet processing and service cores.
> In these use cases, a given data structure may not be used by all the
> workers in the application. The writer does not have to wait for all
> the workers to report their quiescent state. To provide the required
> flexibility, this library has a concept of QS variable. The application
> can create one QS variable per data structure to help it track the
> end of grace period for each data structure. This helps keep the grace
> period to a minimum.
>
> The application has to allocate memory and initialize a QS variable.
>
> The application can call rte_rcu_qsbr_get_memsize to calculate the size
> of memory to allocate. This API takes the maximum number of reader threads
> that will use this variable as a parameter. Currently, a maximum of 1024
> threads is supported.
>
> Further, the application can initialize a QS variable using the API
> rte_rcu_qsbr_init.
>
> Each reader thread is assumed to have a unique thread ID. Currently, the
> management of the thread ID (for ex: allocation/free) is left to the
> application. The thread ID should be in the range of 0 to
> maximum number of threads provided while creating the QS variable.
> The application could also use lcore_id as the thread ID where applicable.
>
> rte_rcu_qsbr_thread_register API will register a reader thread
> to report its quiescent state. This can be called from a reader thread.
> A control plane thread can also call this on behalf of a reader thread.
> The reader thread must call rte_rcu_qsbr_thread_online API to start reporting
> its quiescent state.
>
> Some of the use cases might require the reader threads to make
> blocking API calls (for ex: while using eventdev APIs). The writer thread
> should not wait for such reader threads to enter quiescent state.
> The reader thread must call rte_rcu_qsbr_thread_offline API, before calling
> blocking APIs. It can call rte_rcu_qsbr_thread_online API once the blocking
> API call returns.
>
> The writer thread can trigger the reader threads to report their quiescent
> state by calling the API rte_rcu_qsbr_start. It is possible for multiple
> writer threads to query the quiescent state status simultaneously. Hence,
> rte_rcu_qsbr_start returns a token to each caller.
>
> The writer thread has to call rte_rcu_qsbr_check API with the token to get the
> current quiescent state status. Option to block till all the reader threads
> enter the quiescent state is provided. If this API indicates that all the
> reader threads have entered the quiescent state, the application can free the
> deleted entry.
>
> The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock free. Hence, they
> can be called concurrently from multiple writers even while running
> as worker threads.
>
> The separation of triggering the reporting from querying the status provides
> the writer threads flexibility to do useful work instead of blocking for the
> reader threads to enter the quiescent state or go offline. This reduces the
> memory accesses due to continuous polling for the status.
>
> rte_rcu_qsbr_synchronize API combines the functionality of rte_rcu_qsbr_start
> and blocking rte_rcu_qsbr_check into a single API. This API triggers the reader
> threads to report their quiescent state and polls till all the readers enter
> the quiescent state or go offline. This API does not allow the writer to
> do useful work while waiting and also introduces additional memory accesses
> due to continuous polling.
>
> The reader thread must call rte_rcu_qsbr_thread_offline and
> rte_rcu_qsbr_thread_unregister APIs to remove itself from reporting its
> quiescent state. The rte_rcu_qsbr_check API will not wait for this reader
> thread to report the quiescent state status anymore.
>
> The reader threads should call the rte_rcu_qsbr_update API to indicate that
> they have entered a quiescent state. This API checks if a writer has triggered
> a quiescent state query and updates the state accordingly.
>
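The API walkthrough above condenses into one call sequence. The following pseudocode sketch is assembled only from the descriptions in this cover letter (error handling, thread-ID management and the offline/online dance around blocking calls are omitted):

```
/* One-time setup (control plane) */
sz = rte_rcu_qsbr_get_memsize(max_threads)
v  = allocate(sz)
rte_rcu_qsbr_init(v, max_threads)

/* Each reader thread, before entering its poll loop */
rte_rcu_qsbr_thread_register(v, thread_id)
rte_rcu_qsbr_thread_online(v, thread_id)

/* Reader poll loop */
while (1):
    access shared data structures         /* critical section */
    rte_rcu_qsbr_update(v, thread_id)     /* report quiescent state */

/* Writer, after atomically removing an element (Delete) */
token = rte_rcu_qsbr_start(v)
... do other useful work ...
rte_rcu_qsbr_check(v, token, wait)        /* wait for end of grace period */
free(element)                             /* Free (Reclaim) */
```

The separation of rte_rcu_qsbr_start and rte_rcu_qsbr_check is what lets the writer do useful work in between; rte_rcu_qsbr_synchronize collapses the two into a single blocking call.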
> Patch v8:
> 1) Library changes
> a) Symbols prefixed with '__RTE' or 'rte_' as required (Thomas)
> b) Used PRI?64 macros to support 32b compilation (Thomas)
> c) Fixed shared library compilation (Thomas)
> 2) Test cases
> a) Fixed segmentation fault when more than 20 cores are used for testing (Jerin)
> b) Used PRI?64 macros to support 32b compilation (Thomas)
> c) Testing done on x86, ThunderX2, Octeon TX, BlueField for 32b(x86 only)/64b,
> debug/non-debug, shared/static linking, meson/makefile with various
> number of cores
>
> Patch v7:
> 1) Library changes
> a) Added macro RCU_IS_LOCK_CNT_ZERO
> b) Added lock counter validation to rte_rcu_qsbr_thread_online/
> rte_rcu_qsbr_thread_offline/rte_rcu_qsbr_thread_register/
> rte_rcu_qsbr_thread_unregister APIs (Paul)
>
> Patch v6:
> 1) Library changes
> a) Fixed and tested meson build on Arm and x86 (Konstantin)
> b) Moved rte_rcu_qsbr_synchronize API to rte_rcu_qsbr.c
>
> Patch v5:
> 1) Library changes
> a) Removed extra alignment in rte_rcu_qsbr_get_memsize API (Paul)
> b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock APIs (Paul)
> c) Clarified the need for 64b counters (Paul)
> 2) Test cases
> a) Added additional performance test cases to benchmark
> __rcu_qsbr_check_all
> b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock calls in various test cases
> 3) Documentation
> a) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock usage description
>
> Patch v4:
> 1) Library changes
> a) Fixed the compilation issue on x86 (Konstantin)
> b) Rebased with latest master
>
> Patch v3:
> 1) Library changes
> a) Moved the registered thread ID array to the end of the
> structure (Konstantin)
> b) Removed the compile time constant RTE_RCU_MAX_THREADS
> c) Added code to keep track of registered number of threads
>
> Patch v2:
> 1) Library changes
> a) Corrected the RTE_ASSERT checks (Konstantin)
> b) Replaced RTE_ASSERT with 'if' checks for non-datapath APIs (Konstantin)
> c) Made rte_rcu_qsbr_thread_register/unregister non-datapath critical APIs
> d) Renamed rte_rcu_qsbr_update to rte_rcu_qsbr_quiescent (Ola)
> e) Used rte_smp_mb() in rte_rcu_qsbr_thread_online API for x86 (Konstantin)
> f) Removed the macro to access the thread QS counters (Konstantin)
> 2) Test cases
> a) Added additional test cases for removing RTE_ASSERT
> 3) Documentation
> a) Changed the figure to make it bigger (Marko)
> b) Spelling and format corrections (Marko)
>
> Patch v1:
> 1) Library changes
> a) Changed the maximum number of reader threads to 1024
> b) Renamed rte_rcu_qsbr_register/unregister_thread to
> rte_rcu_qsbr_thread_register/unregister
> c) Added rte_rcu_qsbr_thread_online/offline API. These are optimized
> version of rte_rcu_qsbr_thread_register/unregister API. These
> also provide the flexibility for performance when the requested
> maximum number of threads is higher than the current number of
> threads.
> d) Corrected memory orderings in rte_rcu_qsbr_update
> e) Changed the signature of rte_rcu_qsbr_start API to return the token
> f) Changed the signature of rte_rcu_qsbr_start API to not take the
> expected number of QS states to wait.
> g) Added debug logs
> h) Added API and programmer guide documentation.
>
> RFC v3:
> 1) Library changes
> a) Rebased to latest master
> b) Added new API rte_rcu_qsbr_get_memsize
> c) Add support for memory allocation for QSBR variable (Konstantin)
> d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
> 2) Testcase changes
> a) Separated stress tests into a performance test case file
> b) Added performance statistics
>
> RFC v2:
> 1) Cover letter changes
> a) Explain the parameters that affect the overhead of using RCU
> and their effect
> b) Explain how this library addresses these effects to keep
> the overhead to minimum
> 2) Library changes
> a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
> b) Simplify the code/remove APIs to keep this library inline with
> other synchronisation mechanisms like locks (Konstantin)
> c) Change the design to support more than 64 threads (Konstantin)
> d) Fixed version map to remove static inline functions
> 3) Testcase changes
> a) Add boundary and additional functional test cases
> b) Add stress test cases (Paul E. McKenney)
>
> Dharmik Thakkar (1):
> test/rcu_qsbr: add API and functional tests
>
> Honnappa Nagarahalli (3):
> rcu: add RCU library supporting QSBR mechanism
> doc/rcu: add lib_rcu documentation
> doc: added RCU to the release notes
>
> MAINTAINERS | 5 +
> app/test/Makefile | 2 +
> app/test/autotest_data.py | 12 +
> app/test/meson.build | 7 +-
> app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++
> app/test/test_rcu_qsbr_perf.c | 704 ++++++++++++
> config/common_base | 6 +
> doc/api/doxy-api-index.md | 3 +-
> doc/api/doxy-api.conf.in | 1 +
> .../prog_guide/img/rcu_general_info.svg | 509 +++++++++
> doc/guides/prog_guide/index.rst | 1 +
> doc/guides/prog_guide/rcu_lib.rst | 185 +++
> doc/guides/rel_notes/release_19_05.rst | 8 +
> lib/Makefile | 2 +
> lib/librte_rcu/Makefile | 23 +
> lib/librte_rcu/meson.build | 7 +
> lib/librte_rcu/rte_rcu_qsbr.c | 277 +++++
> lib/librte_rcu/rte_rcu_qsbr.h | 641 +++++++++++
> lib/librte_rcu/rte_rcu_version.map | 13 +
> lib/meson.build | 2 +-
> mk/rte.app.mk | 1 +
> 21 files changed, 3420 insertions(+), 3 deletions(-)
> create mode 100644 app/test/test_rcu_qsbr.c
> create mode 100644 app/test/test_rcu_qsbr_perf.c
> create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
> create mode 100644 doc/guides/prog_guide/rcu_lib.rst
> create mode 100644 lib/librte_rcu/Makefile
> create mode 100644 lib/librte_rcu/meson.build
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
> create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
> create mode 100644 lib/librte_rcu/rte_rcu_version.map
>
> --
Ran UT on my box (SKX) for both x86_64 and i686 over 96 cores.
All passed.
Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/4] rcu: add RCU library supporting QSBR mechanism
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 1/4] rcu: " Honnappa Nagarahalli
2019-04-26 4:39 ` Honnappa Nagarahalli
2019-04-26 8:13 ` Jerin Jacob Kollanukkaran
@ 2019-04-28 3:25 ` Ruifeng Wang (Arm Technology China)
2019-04-28 3:25 ` Ruifeng Wang (Arm Technology China)
2019-04-29 20:33 ` Thomas Monjalon
2 siblings, 2 replies; 260+ messages in thread
From: Ruifeng Wang (Arm Technology China) @ 2019-04-28 3:25 UTC (permalink / raw)
To: Honnappa Nagarahalli, konstantin.ananyev, stephen, paulmck,
marko.kovacevic, dev
Cc: Honnappa Nagarahalli, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Honnappa Nagarahalli
> Sent: Friday, April 26, 2019 12:40
> To: konstantin.ananyev@intel.com; stephen@networkplumber.org;
> paulmck@linux.ibm.com; marko.kovacevic@intel.com; dev@dpdk.org
> Cc: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Gavin Hu (Arm
> Technology China) <Gavin.Hu@arm.com>; Dharmik Thakkar
> <Dharmik.Thakkar@arm.com>; Malvika Gupta <Malvika.Gupta@arm.com>
> Subject: [dpdk-dev] [PATCH v8 1/4] rcu: add RCU library supporting QSBR
> mechanism
>
> Add RCU library supporting quiescent state based memory reclamation
> method.
> This library helps identify the quiescent state of the reader threads so that
> the writers can free the memory associated with the lock less data structures.
>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Steve Capper <steve.capper@arm.com>
> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
> ---
Compiled DPDK as both static library and shared library.
Ran UT on ARMv8 LS2088a DPAA2 platform, 3 to 8 cores were used, tests passed.
Tested-by: Ruifeng Wang <ruifeng.wang@arm.com>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/4] rcu: add RCU library supporting QSBR mechanism
2019-04-28 3:25 ` Ruifeng Wang (Arm Technology China)
2019-04-28 3:25 ` Ruifeng Wang (Arm Technology China)
@ 2019-04-29 20:33 ` Thomas Monjalon
2019-04-29 20:33 ` Thomas Monjalon
2019-04-30 10:51 ` Hemant Agrawal
1 sibling, 2 replies; 260+ messages in thread
From: Thomas Monjalon @ 2019-04-29 20:33 UTC (permalink / raw)
To: hemant.agrawal
Cc: dev, Ruifeng Wang (Arm Technology China),
Honnappa Nagarahalli, konstantin.ananyev, stephen, paulmck,
marko.kovacevic, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
> Compiled DPDK as both static library and shared library.
> Ran UT on ARMv8 LS2088a DPAA2 platform, 3 to 8 cores were used, tests passed.
>
> Tested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Hemant, did you have the opportunity to test it yourself?
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v8 2/4] test/rcu_qsbr: add API and functional tests
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 2/4] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-04-26 4:39 ` Honnappa Nagarahalli
@ 2019-04-29 20:35 ` Thomas Monjalon
2019-04-29 20:35 ` Thomas Monjalon
2019-04-30 4:20 ` Honnappa Nagarahalli
1 sibling, 2 replies; 260+ messages in thread
From: Thomas Monjalon @ 2019-04-29 20:35 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: dev, konstantin.ananyev, stephen, paulmck, marko.kovacevic,
gavin.hu, dharmik.thakkar, malvika.gupta
Please, could you check details like alignment or alphabetical sorting?
Thanks
26/04/2019 06:39, Honnappa Nagarahalli:
> --- a/app/test/meson.build
> +++ b/app/test/meson.build
> @@ -111,6 +111,8 @@ test_sources = files('commands.c',
> 'test_timer_racecond.c',
> 'test_timer_secondary.c',
> 'test_ticketlock.c',
> + 'test_rcu_qsbr.c',
> + 'test_rcu_qsbr_perf.c',
> 'test_version.c',
> 'virtual_pmd.c'
> )
> @@ -137,7 +139,8 @@ test_deps = ['acl',
> 'reorder',
> 'ring',
> 'stack',
> - 'timer'
> + 'timer',
> + 'rcu'
> ]
>
> # All test cases in fast_parallel_test_names list are parallel
> @@ -176,6 +179,7 @@ fast_parallel_test_names = [
> 'ring_autotest',
> 'ring_pmd_autotest',
> 'rwlock_autotest',
> + 'rcu_qsbr_autotest',
> 'sched_autotest',
> 'spinlock_autotest',
> 'stack_autotest',
> @@ -243,6 +247,7 @@ perf_test_names = [
> 'red_perf',
> 'distributor_perf_autotest',
> 'ring_pmd_perf_autotest',
> + 'rcu_qsbr_perf_autotest',
> 'pmd_perf_autotest',
> 'stack_perf_autotest',
> 'stack_nb_perf_autotest',
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v8 2/4] test/rcu_qsbr: add API and functional tests
2019-04-29 20:35 ` Thomas Monjalon
2019-04-29 20:35 ` Thomas Monjalon
@ 2019-04-30 4:20 ` Honnappa Nagarahalli
2019-04-30 4:20 ` Honnappa Nagarahalli
2019-04-30 7:58 ` Thomas Monjalon
1 sibling, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-30 4:20 UTC (permalink / raw)
To: thomas
Cc: dev, konstantin.ananyev, stephen, paulmck, marko.kovacevic,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
>
> Please, could you check details like alignment or alphabetical sorting?
I don't see any alignment issues (there is mixed use of tabs and spaces in this file, will use the same).
> Thanks
>
> 26/04/2019 06:39, Honnappa Nagarahalli:
> > --- a/app/test/meson.build
> > +++ b/app/test/meson.build
> > @@ -111,6 +111,8 @@ test_sources = files('commands.c',
> > 'test_timer_racecond.c',
> > 'test_timer_secondary.c',
> > 'test_ticketlock.c',
> > + 'test_rcu_qsbr.c',
> > + 'test_rcu_qsbr_perf.c',
> > 'test_version.c',
> > 'virtual_pmd.c'
> > )
> > @@ -137,7 +139,8 @@ test_deps = ['acl',
> > 'reorder',
> > 'ring',
> > 'stack',
> > - 'timer'
> > + 'timer',
> > + 'rcu'
> > ]
> >
> > # All test cases in fast_parallel_test_names list are parallel @@
> > -176,6 +179,7 @@ fast_parallel_test_names = [
> > 'ring_autotest',
> > 'ring_pmd_autotest',
> > 'rwlock_autotest',
> > + 'rcu_qsbr_autotest',
> > 'sched_autotest',
> > 'spinlock_autotest',
> > 'stack_autotest',
> > @@ -243,6 +247,7 @@ perf_test_names = [
This list is not sorted, is it on purpose? If yes, are we supposed to be adding at the end of the list?
> > 'red_perf',
> > 'distributor_perf_autotest',
> > 'ring_pmd_perf_autotest',
> > + 'rcu_qsbr_perf_autotest',
> > 'pmd_perf_autotest',
> > 'stack_perf_autotest',
> > 'stack_nb_perf_autotest',
>
>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v8 2/4] test/rcu_qsbr: add API and functional tests
2019-04-30 4:20 ` Honnappa Nagarahalli
2019-04-30 4:20 ` Honnappa Nagarahalli
@ 2019-04-30 7:58 ` Thomas Monjalon
2019-04-30 7:58 ` Thomas Monjalon
1 sibling, 1 reply; 260+ messages in thread
From: Thomas Monjalon @ 2019-04-30 7:58 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: dev, konstantin.ananyev, stephen, paulmck, marko.kovacevic,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
30/04/2019 06:20, Honnappa Nagarahalli:
> >
> > Please, could you check details like alignment or alphabetical sorting?
> I don't see any alignment issues (there is mixed use of tabs and spaces in this file, will use the same).
>
> > Thanks
> >
> > 26/04/2019 06:39, Honnappa Nagarahalli:
> > > --- a/app/test/meson.build
> > > +++ b/app/test/meson.build
> > > @@ -111,6 +111,8 @@ test_sources = files('commands.c',
> > > 'test_timer_racecond.c',
> > > 'test_timer_secondary.c',
> > > 'test_ticketlock.c',
> > > + 'test_rcu_qsbr.c',
> > > + 'test_rcu_qsbr_perf.c',
> > > 'test_version.c',
> > > 'virtual_pmd.c'
> > > )
> > > @@ -137,7 +139,8 @@ test_deps = ['acl',
> > > 'reorder',
> > > 'ring',
> > > 'stack',
> > > - 'timer'
> > > + 'timer',
> > > + 'rcu'
> > > ]
> > >
> > > # All test cases in fast_parallel_test_names list are parallel @@
> > > -176,6 +179,7 @@ fast_parallel_test_names = [
> > > 'ring_autotest',
> > > 'ring_pmd_autotest',
> > > 'rwlock_autotest',
> > > + 'rcu_qsbr_autotest',
> > > 'sched_autotest',
> > > 'spinlock_autotest',
> > > 'stack_autotest',
> > > @@ -243,6 +247,7 @@ perf_test_names = [
> This list is not sorted, is it on purpose? If yes, are we supposed to be adding at the end of the list?
It looks mostly sorted.
I think you should insert rcu before red.
> > > 'red_perf',
> > > 'distributor_perf_autotest',
> > > 'ring_pmd_perf_autotest',
> > > + 'rcu_qsbr_perf_autotest',
> > > 'pmd_perf_autotest',
> > > 'stack_perf_autotest',
> > > 'stack_nb_perf_autotest',
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/4] rcu: add RCU library supporting QSBR mechanism
2019-04-29 20:33 ` Thomas Monjalon
2019-04-29 20:33 ` Thomas Monjalon
@ 2019-04-30 10:51 ` Hemant Agrawal
2019-04-30 10:51 ` Hemant Agrawal
1 sibling, 1 reply; 260+ messages in thread
From: Hemant Agrawal @ 2019-04-30 10:51 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, Ruifeng Wang (Arm Technology China),
Honnappa Nagarahalli, konstantin.ananyev, stephen, paulmck,
marko.kovacevic, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
> -----Original Message-----
> Subject: Re: [dpdk-dev] [PATCH v8 1/4] rcu: add RCU library supporting QSBR
> mechanism
> Importance: High
>
> > Compiled DPDK as both static library and shared library.
> > Ran UT on ARMv8 LS2088a DPAA2 platform, 3 to 8 cores were used, tests
> passed.
> >
> > Tested-by: Ruifeng Wang <ruifeng.wang@arm.com>
>
> Hemant, did you have the opportunity to test it yourself?
>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v9 0/4] lib/rcu: add RCU library supporting QSBR mechanism
2018-11-22 3:30 [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library Honnappa Nagarahalli
` (13 preceding siblings ...)
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 0/4] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
@ 2019-05-01 3:54 ` Honnappa Nagarahalli
2019-05-01 3:54 ` Honnappa Nagarahalli
` (6 more replies)
14 siblings, 7 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-05-01 3:54 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Lock-less data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
(for ex: real-time applications).
In the following paragraphs, the term 'memory' refers to memory allocated
by typical APIs like malloc, or anything representative of memory,
for example, an index of a free element array.
Since these data structures are lock-less, writers and readers access
them concurrently. Hence, while removing an element from a data structure,
the writer cannot return the memory to the allocator without knowing
that no reader is still referencing that element/memory. The removal
of an element therefore has to be separated into 2 steps:
Delete: in this step, the writer removes the reference to the element from
the data structure but does not return the associated memory to the
allocator. This will ensure that new readers will not get a reference to
the removed element. Removing the reference is an atomic operation.
Free(Reclaim): in this step, the writer returns the memory to the
memory allocator, only after knowing that all the readers have stopped
referencing the deleted element.
This library helps the writer determine when it is safe to free the
memory.
This library makes use of thread Quiescent State (QS). QS can be
defined as 'any point in the thread execution where the thread does
not hold a reference to shared memory'. It is up to the application to
determine its quiescent state. Let us consider the following diagram:
Time -------------------------------------------------->
| |
RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
| |
RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
| |
RT3 $++++****D1****+++***|D2***|++++++**D2*****++++$
| |
|<--->|
Del | Free
|
Cannot free memory
during this period
(Grace Period)
RTx - Reader thread
< and > - Start and end of while(1) loop
***Dx*** - Reader thread is accessing the shared data structure Dx.
i.e. critical section.
+++ - Reader thread is not accessing any shared data structure.
i.e. non critical section or quiescent state.
Del - Point in time when the reference to the entry is removed using
atomic operation.
Free - Point in time when the writer can free the entry.
Grace Period - Time duration between Del and Free, during which memory cannot
be freed.
As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
accessing D2, if the writer has to remove an element from D2, the
writer cannot free the memory associated with that element immediately.
The writer can return the memory to the allocator only after the reader
stops referencing D2. In other words, reader thread RT1 has to enter
a quiescent state.
Similarly, since thread RT3 is also accessing D2, writer has to wait till
RT3 enters quiescent state as well.
However, the writer does not need to wait for RT2 to enter quiescent state.
Thread RT2 was not accessing D2 when the delete operation happened.
So, RT2 will not get a reference to the deleted entry.
It can be noted that the critical sections for D2 and D3 are quiescent states
for D1, i.e. for a given data structure Dx, any point in the thread execution
that does not reference Dx is a quiescent state.
Since memory is not freed immediately, there might be a need for
provisioning of additional memory, depending on the application requirements.
It is important to make sure that this library keeps the overhead of
identifying the end of grace period and subsequent freeing of memory,
to a minimum. The following paras explain how grace period and critical
section affect this overhead.
The writer has to poll the readers to identify the end of grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during grace period. Longer grace periods
exacerbate these conditions.
The duration of the grace period is proportional to the length of the
critical sections and the number of reader threads. Keeping the critical
sections smaller will keep the grace period smaller. However, keeping the
critical sections smaller requires additional CPU cycles (due to additional
reporting) in the readers.
Hence, we need the characteristics of small grace period and large critical
section. This library addresses this by allowing the writer to do
other work without having to block till the readers report their quiescent
state.
For DPDK applications, the start and end of while(1) loop (where no
references to shared data structures are kept) act as perfect quiescent
states. This will combine all the shared data structure accesses into a
single, large critical section which helps keep the overhead on the
reader side to a minimum.
DPDK supports pipeline model of packet processing and service cores.
In these use cases, a given data structure may not be used by all the
workers in the application. The writer does not have to wait for all
the workers to report their quiescent state. To provide the required
flexibility, this library has a concept of QS variable. The application
can create one QS variable per data structure to help it track the
end of grace period for each data structure. This helps keep the grace
period to a minimum.
The application has to allocate memory and initialize a QS variable.
Application can call rte_rcu_qsbr_get_memsize to calculate the size
of memory to allocate. This API takes maximum number of reader threads,
using this variable, as a parameter. Currently, a maximum of 1024 threads
are supported.
Further, the application can initialize a QS variable using the API
rte_rcu_qsbr_init.
Each reader thread is assumed to have a unique thread ID. Currently, the
management of the thread ID (for ex: allocation/free) is left to the
application. The thread ID should be in the range of 0 to
maximum number of threads provided while creating the QS variable.
The application could also use lcore_id as the thread ID where applicable.
rte_rcu_qsbr_thread_register API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
The reader thread must call rte_rcu_qsbr_thread_online API to start reporting
its quiescent state.
Some of the use cases might require the reader threads to make
blocking API calls (for ex: while using eventdev APIs). The writer thread
should not wait for such reader threads to enter quiescent state.
The reader thread must call rte_rcu_qsbr_thread_offline API, before calling
blocking APIs. It can call rte_rcu_qsbr_thread_online API once the blocking
API call returns.
The writer thread can trigger the reader threads to report their quiescent
state by calling the API rte_rcu_qsbr_start. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
rte_rcu_qsbr_start returns a token to each caller.
The writer thread has to call rte_rcu_qsbr_check API with the token to get the
current quiescent state status. Option to block till all the reader threads
enter the quiescent state is provided. If this API indicates that all the
reader threads have entered the quiescent state, the application can free the
deleted entry.
The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock free. Hence, they
can be called concurrently from multiple writers even while running
as worker threads.
The separation of triggering the reporting from querying the status provides
the writer threads flexibility to do useful work instead of blocking for the
reader threads to enter the quiescent state or go offline. This reduces the
memory accesses due to continuous polling for the status.
rte_rcu_qsbr_synchronize API combines the functionality of rte_rcu_qsbr_start
and blocking rte_rcu_qsbr_check into a single API. This API triggers the reader
threads to report their quiescent state and polls till all the readers enter
the quiescent state or go offline. This API does not allow the writer to
do useful work while waiting and also introduces additional memory accesses
due to continuous polling.
The reader thread must call rte_rcu_qsbr_thread_offline and
rte_rcu_qsbr_thread_unregister APIs to remove itself from reporting its
quiescent state. The rte_rcu_qsbr_check API will not wait for this reader
thread to report the quiescent state status anymore.
The reader threads should call the rte_rcu_qsbr_quiescent API to indicate that
they have entered a quiescent state. This API checks if a writer has triggered a
quiescent state query and updates the state accordingly.
Patch v9:
1) Files/test cases/function names listed alphabetically where required (Thomas)
Patch v8:
1) Library changes
a) Symbols prefixed with '__RTE' or 'rte_' as required (Thomas)
b) Used PRI?64 macros to support 32b compilation (Thomas)
c) Fixed shared library compilation (Thomas)
2) Test cases
a) Fixed segmentation fault when more than 20 cores are used for testing (Jerin)
b) Used PRI?64 macros to support 32b compilation (Thomas)
c) Testing done on x86, ThunderX2, Octeon TX, BlueField for 32b(x86 only)/64b,
debug/non-debug, shared/static linking, meson/makefile with various
number of cores
Patch v7:
1) Library changes
a) Added macro RCU_IS_LOCK_CNT_ZERO
b) Added lock counter validation to rte_rcu_qsbr_thread_online/
rte_rcu_qsbr_thread_offline/rte_rcu_qsbr_thread_register/
rte_rcu_qsbr_thread_unregister APIs (Paul)
Patch v6:
1) Library changes
a) Fixed and tested meson build on Arm and x86 (Konstantin)
b) Moved rte_rcu_qsbr_synchronize API to rte_rcu_qsbr.c
Patch v5:
1) Library changes
a) Removed extra alignment in rte_rcu_qsbr_get_memsize API (Paul)
b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock APIs (Paul)
c) Clarified the need for 64b counters (Paul)
2) Test cases
a) Added additional performance test cases to benchmark
__rcu_qsbr_check_all
b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock calls in various test cases
3) Documentation
a) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock usage description
Patch v4:
1) Library changes
a) Fixed the compilation issue on x86 (Konstantin)
b) Rebased with latest master
Patch v3:
1) Library changes
a) Moved the registered thread ID array to the end of the
structure (Konstantin)
b) Removed the compile time constant RTE_RCU_MAX_THREADS
c) Added code to keep track of registered number of threads
Patch v2:
1) Library changes
a) Corrected the RTE_ASSERT checks (Konstantin)
b) Replaced RTE_ASSERT with 'if' checks for non-datapath APIs (Konstantin)
c) Made rte_rcu_qsbr_thread_register/unregister non-datapath critical APIs
d) Renamed rte_rcu_qsbr_update to rte_rcu_qsbr_quiescent (Ola)
e) Used rte_smp_mb() in rte_rcu_qsbr_thread_online API for x86 (Konstantin)
f) Removed the macro to access the thread QS counters (Konstantin)
2) Test cases
a) Added additional test cases for removing RTE_ASSERT
3) Documentation
a) Changed the figure to make it bigger (Marko)
b) Spelling and format corrections (Marko)
Patch v1:
1) Library changes
a) Changed the maximum number of reader threads to 1024
b) Renamed rte_rcu_qsbr_register/unregister_thread to
rte_rcu_qsbr_thread_register/unregister
c) Added rte_rcu_qsbr_thread_online/offline API. These are optimized
version of rte_rcu_qsbr_thread_register/unregister API. These
also provide the flexibility for performance when the requested
maximum number of threads is higher than the current number of
threads.
d) Corrected memory orderings in rte_rcu_qsbr_update
e) Changed the signature of rte_rcu_qsbr_start API to return the token
f) Changed the signature of rte_rcu_qsbr_start API to not take the
expected number of QS states to wait.
g) Added debug logs
h) Added API and programmer guide documentation.
RFC v3:
1) Library changes
a) Rebased to latest master
b) Added new API rte_rcu_qsbr_get_memsize
c) Add support for memory allocation for QSBR variable (Konstantin)
d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
2) Testcase changes
a) Separated stress tests into a performance test case file
b) Added performance statistics
RFC v2:
1) Cover letter changes
a) Explain the parameters that affect the overhead of using RCU
and their effect
b) Explain how this library addresses these effects to keep
the overhead to minimum
2) Library changes
a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
b) Simplify the code/remove APIs to keep this library inline with
other synchronisation mechanisms like locks (Konstantin)
c) Change the design to support more than 64 threads (Konstantin)
d) Fixed version map to remove static inline functions
3) Testcase changes
a) Add boundary and additional functional test cases
b) Add stress test cases (Paul E. McKenney)
Dharmik Thakkar (1):
test/rcu_qsbr: add API and functional tests
Honnappa Nagarahalli (3):
rcu: add RCU library supporting QSBR mechanism
doc/rcu: add lib_rcu documentation
doc: added RCU to the release notes
MAINTAINERS | 5 +
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 5 +
app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 704 ++++++++++++
config/common_base | 6 +
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 +++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++
doc/guides/rel_notes/release_19_05.rst | 8 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 7 +
lib/librte_rcu/rte_rcu_qsbr.c | 277 +++++
lib/librte_rcu/rte_rcu_qsbr.h | 641 +++++++++++
lib/librte_rcu/rte_rcu_version.map | 13 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
21 files changed, 3419 insertions(+), 2 deletions(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v9 0/4] lib/rcu: add RCU library supporting QSBR mechanism
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 " Honnappa Nagarahalli
@ 2019-05-01 3:54 ` Honnappa Nagarahalli
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 1/4] rcu: " Honnappa Nagarahalli
` (5 subsequent siblings)
6 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-05-01 3:54 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Lock-less data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
(for ex: real-time applications).
In the following paragraphs, the term 'memory' refers to memory
allocated by typical APIs like malloc, or anything that is
representative of memory, for ex: an index into a free element array.
Since these data structures are lock-less, the writers and readers
access them concurrently. Hence, while removing an element from a
data structure, the writer cannot return the memory to the allocator
without knowing that no reader is still referencing that
element/memory. This requires the operation of removing an element to
be separated into 2 steps:
Delete: in this step, the writer removes the reference to the element from
the data structure but does not return the associated memory to the
allocator. This will ensure that new readers will not get a reference to
the removed element. Removing the reference is an atomic operation.
Free(Reclaim): in this step, the writer returns the memory to the
memory allocator, only after knowing that all the readers have stopped
referencing the deleted element.
This library helps the writer determine when it is safe to free the
memory.
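The two steps can be sketched with standard C11 atomics. This toy
single-slot 'data structure' and the elem_* names are illustrative
assumptions, not part of this library:

```c
#include <stdlib.h>
#include <stdatomic.h>

/* Toy stand-in for a lock-less structure: a single shared slot. */
static _Atomic(int *) slot;

/* Step 1 - Delete: atomically remove the reference so that new
 * readers can no longer find the element. The memory is NOT freed.
 */
static int *elem_delete(void)
{
	return atomic_exchange(&slot, NULL);
}

/* Step 2 - Free: return the memory to the allocator. Safe only
 * after the writer knows no reader still holds the old pointer,
 * i.e. after the grace period this library helps detect.
 */
static void elem_free(int *e)
{
	free(e);
}
```

After elem_delete returns, existing readers may still hold the old
pointer; elem_free must be deferred until they have all passed through
a quiescent state.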
This library makes use of thread Quiescent State (QS). QS can be
defined as 'any point in the thread execution where the thread does
not hold a reference to shared memory'. It is up to the application to
determine its quiescent state. Let us consider the following diagram:
Time -------------------------------------------------->
| |
RT1 $++++****D1****+++***D2*|**+++|+++**D3*****++++$
| |
RT2 $++++****D1****++|+**D2|***++++++**D3*****++++$
| |
RT3 $++++****D1****+++***|D2***|++++++**D3*****++++$
| |
|<--->|
Del | Free
|
Cannot free memory
during this period
(Grace Period)
RTx - Reader thread
< and > - Start and end of while(1) loop
***Dx*** - Reader thread is accessing the shared data structure Dx.
i.e. critical section.
+++ - Reader thread is not accessing any shared data structure.
i.e. non critical section or quiescent state.
Del - Point in time when the reference to the entry is removed using
atomic operation.
Free - Point in time when the writer can free the entry.
Grace Period - Time duration between Del and Free, during which memory cannot
be freed.
As shown, thread RT1 accesses data structures D1, D2 and D3. When it is
accessing D2, if the writer has to remove an element from D2, the
writer cannot free the memory associated with that element immediately.
The writer can return the memory to the allocator only after the reader
stops referencing D2. In other words, reader thread RT1 has to enter
a quiescent state.
Similarly, since thread RT3 is also accessing D2, the writer has to
wait till RT3 enters a quiescent state as well.
However, the writer does not need to wait for RT2 to enter quiescent state.
Thread RT2 was not accessing D2 when the delete operation happened.
So, RT2 will not get a reference to the deleted entry.
Note that the critical sections for D2 and D3 are quiescent states
for D1, i.e. for a given data structure Dx, any point in the thread
execution that does not reference Dx is a quiescent state.
Since memory is not freed immediately, there might be a need for
provisioning of additional memory, depending on the application requirements.
It is important to make sure that this library keeps the overhead of
identifying the end of grace period and subsequent freeing of memory,
to a minimum. The following paragraphs explain how the grace period
and critical section affect this overhead.
The writer has to poll the readers to identify the end of the grace
period. Polling introduces memory accesses and wastes CPU cycles. The
memory is not available for reuse during the grace period. Longer
grace periods exacerbate these conditions.
The duration of the grace period is proportional to the length of the
critical sections and the number of reader threads. Keeping the
critical sections smaller will keep the grace period smaller. However,
smaller critical sections require additional CPU cycles (due to
additional reporting) in the readers.
Hence, we need a solution with the combined characteristics of a small
grace period and a large critical section. This library addresses this
by allowing the writer to do other work without having to block till
the readers report their quiescent state.
For DPDK applications, the start and end of the while(1) loop (where no
references to shared data structures are kept) act as perfect quiescent
states. This will combine all the shared data structure accesses into a
single, large critical section which helps keep the overhead on the
reader side to a minimum.
DPDK supports the pipeline model of packet processing, as well as service cores.
In these use cases, a given data structure may not be used by all the
workers in the application. The writer does not have to wait for all
the workers to report their quiescent state. To provide the required
flexibility, this library has a concept of QS variable. The application
can create one QS variable per data structure to help it track the
end of grace period for each data structure. This helps keep the grace
period to a minimum.
The application has to allocate memory and initialize a QS variable.
The application can call rte_rcu_qsbr_get_memsize to calculate the
size of memory to allocate. This API takes the maximum number of
reader threads that will use this variable as a parameter. Currently,
a maximum of 1024 threads is supported.
Further, the application can initialize a QS variable using the API
rte_rcu_qsbr_init.
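The single-allocation layout described above can be sketched with a
hypothetical size calculation. The toy_* struct definitions below are
assumptions that mirror the description (header, then 'max_threads'
counters, then one registration bit per thread packed into 64b words);
the real layout is defined in rte_rcu_qsbr.h:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative header, not the library's real definition */
struct toy_qsbr {
	uint64_t token;		/* quiescent state query counter */
	uint32_t num_elems;	/* elements in the thread-ID bitmap */
	uint32_t num_threads;	/* currently registered threads */
	uint32_t max_threads;	/* capacity of the variable */
};

/* Illustrative per-thread counter */
struct toy_qsbr_cnt {
	uint64_t cnt;		/* per-thread quiescent state counter */
	uint32_t lock_cnt;	/* debug lock counter */
};

/* Size of the single allocation holding header, counters and bitmap */
size_t toy_qsbr_memsize(uint32_t max_threads)
{
	size_t sz = sizeof(struct toy_qsbr);

	/* one counter per possible reader thread */
	sz += (size_t)max_threads * sizeof(struct toy_qsbr_cnt);

	/* one registration bit per thread, packed into 64b words */
	sz += ((max_threads + 63) / 64) * sizeof(uint64_t);

	return sz;
}
```

The application would allocate this many bytes (cache-line aligned in
the real library) and pass the block to the init call.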
Each reader thread is assumed to have a unique thread ID. Currently, the
management of the thread ID (for ex: allocation/free) is left to the
application. The thread ID should be in the range of 0 to (maximum
number of threads provided while creating the QS variable - 1).
The application could also use lcore_id as the thread ID where applicable.
The rte_rcu_qsbr_thread_register API registers a reader thread to
report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
The reader thread must call the rte_rcu_qsbr_thread_online API to start
reporting its quiescent state.
Some of the use cases might require the reader threads to make
blocking API calls (for ex: while using eventdev APIs). The writer thread
should not wait for such reader threads to enter quiescent state.
The reader thread must call rte_rcu_qsbr_thread_offline API, before calling
blocking APIs. It can call rte_rcu_qsbr_thread_online API once the blocking
API call returns.
The writer thread can trigger the reader threads to report their quiescent
state by calling the API rte_rcu_qsbr_start. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
rte_rcu_qsbr_start returns a token to each caller.
The writer thread has to call the rte_rcu_qsbr_check API with the token
to get the current quiescent state status. An option to block till all
the reader threads enter the quiescent state is provided. If this API
indicates that all the reader threads have entered the quiescent state,
the application can free the deleted entry.
The APIs rte_rcu_qsbr_start and rte_rcu_qsbr_check are lock free. Hence, they
can be called concurrently from multiple writers even while running
as worker threads.
The separation of triggering the reporting from querying the status
gives the writer threads the flexibility to do useful work instead of
blocking till the reader threads enter the quiescent state or go
offline. This reduces the memory accesses due to continuous polling
for the status.
rte_rcu_qsbr_synchronize API combines the functionality of rte_rcu_qsbr_start
and blocking rte_rcu_qsbr_check into a single API. This API triggers the reader
threads to report their quiescent state and polls till all the readers enter
the quiescent state or go offline. This API does not allow the writer to
do useful work while waiting and also introduces additional memory accesses
due to continuous polling.
The reader thread must call rte_rcu_qsbr_thread_offline and
rte_rcu_qsbr_thread_unregister APIs to remove itself from reporting its
quiescent state. The rte_rcu_qsbr_check API will not wait for this reader
thread to report the quiescent state status anymore.
The reader threads should call the rte_rcu_qsbr_quiescent API to
indicate that they have entered a quiescent state. This API checks if
a writer has triggered a quiescent state query and updates the state
accordingly.
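The start/quiescent/check interplay can be modelled with a toy sketch
in standard C11 atomics. The toy_* names and simplified semantics
(counter 0 means offline; a reader reports by copying the latest
token; the check skips offline readers) are assumptions modelled on
the description above, not the library's implementation:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define TOY_MAX_THREADS 4

/* Per-reader counter: 0 = offline, otherwise the last token seen */
static _Atomic uint64_t toy_cnt[TOY_MAX_THREADS];
static _Atomic uint64_t toy_token = 1;

/* Writer: trigger a new quiescent state query, returning a token */
static uint64_t toy_start(void)
{
	return atomic_fetch_add(&toy_token, 1) + 1;
}

/* Reader: report a quiescent state by copying the latest token */
static void toy_quiescent(unsigned int id)
{
	atomic_store(&toy_cnt[id], atomic_load(&toy_token));
}

/* Reader: stop reporting, e.g. before a blocking API call */
static void toy_offline(unsigned int id)
{
	atomic_store(&toy_cnt[id], 0);
}

/* Reader: resume reporting after the blocking call returns */
static void toy_online(unsigned int id)
{
	atomic_store(&toy_cnt[id], atomic_load(&toy_token));
}

/* Writer: true when every online reader has seen token 't' */
static bool toy_check(uint64_t t)
{
	for (unsigned int i = 0; i < TOY_MAX_THREADS; i++) {
		uint64_t c = atomic_load(&toy_cnt[i]);

		if (c != 0 && c < t)
			return false;	/* reader still in old grace period */
	}
	return true;
}
```

Because toy_start and toy_check are separate, the writer can do other
work (or poll several tokens) between them, which is the flexibility
the cover letter describes.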
Patch v9:
1) Files/test cases/function names listed alphabetically where required (Thomas)
Patch v8:
1) Library changes
a) Symbols prefixed with '__RTE' or 'rte_' as required (Thomas)
b) Used PRI?64 macros to support 32b compilation (Thomas)
c) Fixed shared library compilation (Thomas)
2) Test cases
a) Fixed segmentation fault when more than 20 cores are used for testing (Jerin)
b) Used PRI?64 macros to support 32b compilation (Thomas)
c) Testing done on x86, ThunderX2, Octeon TX, BlueField for 32b(x86 only)/64b,
debug/non-debug, shared/static linking, meson/makefile with various
number of cores
Patch v7:
1) Library changes
a) Added macro RCU_IS_LOCK_CNT_ZERO
b) Added lock counter validation to rte_rcu_qsbr_thread_online/
rte_rcu_qsbr_thread_offline/rte_rcu_qsbr_thread_register/
rte_rcu_qsbr_thread_unregister APIs (Paul)
Patch v6:
1) Library changes
a) Fixed and tested meson build on Arm and x86 (Konstantin)
b) Moved rte_rcu_qsbr_synchronize API to rte_rcu_qsbr.c
Patch v5:
1) Library changes
a) Removed extra alignment in rte_rcu_qsbr_get_memsize API (Paul)
b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock APIs (Paul)
c) Clarified the need for 64b counters (Paul)
2) Test cases
a) Added additional performance test cases to benchmark
__rcu_qsbr_check_all
b) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock calls in various test cases
3) Documentation
a) Added rte_rcu_qsbr_lock/rte_rcu_qsbr_unlock usage description
Patch v4:
1) Library changes
a) Fixed the compilation issue on x86 (Konstantin)
b) Rebased with latest master
Patch v3:
1) Library changes
a) Moved the registered thread ID array to the end of the
structure (Konstantin)
b) Removed the compile time constant RTE_RCU_MAX_THREADS
c) Added code to keep track of registered number of threads
Patch v2:
1) Library changes
a) Corrected the RTE_ASSERT checks (Konstantin)
b) Replaced RTE_ASSERT with 'if' checks for non-datapath APIs (Konstantin)
c) Made rte_rcu_qsbr_thread_register/unregister non-datapath critical APIs
d) Renamed rte_rcu_qsbr_update to rte_rcu_qsbr_quiescent (Ola)
e) Used rte_smp_mb() in rte_rcu_qsbr_thread_online API for x86 (Konstantin)
f) Removed the macro to access the thread QS counters (Konstantin)
2) Test cases
a) Added additional test cases for removing RTE_ASSERT
3) Documentation
a) Changed the figure to make it bigger (Marko)
b) Spelling and format corrections (Marko)
Patch v1:
1) Library changes
a) Changed the maximum number of reader threads to 1024
b) Renamed rte_rcu_qsbr_register/unregister_thread to
rte_rcu_qsbr_thread_register/unregister
c) Added rte_rcu_qsbr_thread_online/offline API. These are optimized
versions of the rte_rcu_qsbr_thread_register/unregister API. These
also provide the flexibility for performance when the requested
maximum number of threads is higher than the current number of
threads.
d) Corrected memory orderings in rte_rcu_qsbr_update
e) Changed the signature of rte_rcu_qsbr_start API to return the token
f) Changed the signature of rte_rcu_qsbr_start API to not take the
expected number of QS states to wait.
g) Added debug logs
h) Added API and programmer guide documentation.
RFC v3:
1) Library changes
a) Rebased to latest master
b) Added new API rte_rcu_qsbr_get_memsize
c) Add support for memory allocation for QSBR variable (Konstantin)
d) Fixed a bug in rte_rcu_qsbr_check (Konstantin)
2) Testcase changes
a) Separated stress tests into a performance test case file
b) Added performance statistics
RFC v2:
1) Cover letter changes
a) Explain the parameters that affect the overhead of using RCU
and their effect
b) Explain how this library addresses these effects to keep
the overhead to a minimum
2) Library changes
a) Rename the library to avoid confusion (Matias, Bruce, Konstantin)
b) Simplify the code/remove APIs to keep this library in line with
other synchronisation mechanisms like locks (Konstantin)
c) Change the design to support more than 64 threads (Konstantin)
d) Fixed version map to remove static inline functions
3) Testcase changes
a) Add boundary and additional functional test cases
b) Add stress test cases (Paul E. McKenney)
Dharmik Thakkar (1):
test/rcu_qsbr: add API and functional tests
Honnappa Nagarahalli (3):
rcu: add RCU library supporting QSBR mechanism
doc/rcu: add lib_rcu documentation
doc: added RCU to the release notes
MAINTAINERS | 5 +
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 5 +
app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 704 ++++++++++++
config/common_base | 6 +
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 +++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++
doc/guides/rel_notes/release_19_05.rst | 8 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 +
lib/librte_rcu/meson.build | 7 +
lib/librte_rcu/rte_rcu_qsbr.c | 277 +++++
lib/librte_rcu/rte_rcu_qsbr.h | 641 +++++++++++
lib/librte_rcu/rte_rcu_version.map | 13 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
21 files changed, 3419 insertions(+), 2 deletions(-)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v9 1/4] rcu: add RCU library supporting QSBR mechanism
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 " Honnappa Nagarahalli
2019-05-01 3:54 ` Honnappa Nagarahalli
@ 2019-05-01 3:54 ` Honnappa Nagarahalli
2019-05-01 3:54 ` Honnappa Nagarahalli
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 2/4] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
` (4 subsequent siblings)
6 siblings, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-05-01 3:54 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add RCU library supporting quiescent state based memory reclamation method.
This library helps identify the quiescent state of the reader threads so
that the writers can free the memory associated with the lock less data
structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
---
MAINTAINERS | 5 +
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 ++
lib/librte_rcu/meson.build | 7 +
lib/librte_rcu/rte_rcu_qsbr.c | 277 +++++++++++++
lib/librte_rcu/rte_rcu_qsbr.h | 641 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 13 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
10 files changed, 976 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 4493aa636..5d25b21f0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1281,6 +1281,11 @@ F: examples/bpf/
F: app/test/test_bpf.c
F: doc/guides/prog_guide/bpf_lib.rst
+RCU - EXPERIMENTAL
+M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+F: lib/librte_rcu/
+F: doc/guides/prog_guide/rcu_lib.rst
+
Test Applications
-----------------
diff --git a/config/common_base b/config/common_base
index 4236c2a67..6b96e0e80 100644
--- a/config/common_base
+++ b/config/common_base
@@ -838,6 +838,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_rcu
+#
+CONFIG_RTE_LIBRTE_RCU=y
+CONFIG_RTE_LIBRTE_RCU_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index 26021d0c0..791e0d991 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
+DEPDIRS-librte_rcu := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
new file mode 100644
index 000000000..6aa677bd1
--- /dev/null
+++ b/lib/librte_rcu/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_rcu.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_rcu_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
new file mode 100644
index 000000000..0c2d5a2e0
--- /dev/null
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+allow_experimental_apis = true
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..b4ed01045
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,277 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Get the memory size of QSBR variable */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ size_t sz;
+
+ if (max_threads == 0) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid max_threads %u\n",
+ __func__, max_threads);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ /* Add the size of the registered thread ID bitmap array */
+ sz += __RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
+
+ return sz;
+}
+
+/* Initialize a quiescent state variable */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
+{
+ size_t sz;
+
+ if (v == NULL) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = rte_rcu_qsbr_get_memsize(max_threads);
+ if (sz == 1)
+ return 1;
+
+ /* Set all the threads to offline */
+ memset(v, 0, sz);
+ v->max_threads = max_threads;
+ v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
+ __RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
+ __RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->token = __RTE_QSBR_CNT_INIT;
+
+ return 0;
+}
+
+/* Register a reader thread to report its quiescent state
+ * on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ id = thread_id & __RTE_QSBR_THRID_MASK;
+ i = thread_id >> __RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already registered */
+ old_bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (old_bmap & (1UL << id))
+ return 0;
+
+ do {
+ new_bmap = old_bmap | (1UL << id);
+ success = __atomic_compare_exchange(
+ __RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_add(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (old_bmap & (1UL << id))
+ /* Someone else registered this thread.
+ * Counter should not be incremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Remove a reader thread, from the list of threads reporting their
+ * quiescent state on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ id = thread_id & __RTE_QSBR_THRID_MASK;
+ i = thread_id >> __RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already unregistered */
+ old_bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (!(old_bmap & (1UL << id)))
+ return 0;
+
+ do {
+ new_bmap = old_bmap & ~(1UL << id);
+ /* Make sure any loads of the shared data structure are
+ * completed before removal of the thread from the list of
+ * reporting threads.
+ */
+ success = __atomic_compare_exchange(
+ __RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_sub(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (!(old_bmap & (1UL << id)))
+ /* Someone else unregistered this thread.
+ * Counter should not be decremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Wait till the reader threads have entered quiescent state. */
+void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread has a read-side critical section,
+ * update its quiescent state status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_quiescent(v, thread_id);
+
+ /* Wait for other readers to enter quiescent state */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t, id;
+
+ if (v == NULL || f == NULL) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %zu\n",
+ rte_rcu_qsbr_get_memsize(v->max_threads));
+ fprintf(f, " Given # max threads = %u\n", v->max_threads);
+ fprintf(f, " Current # threads = %u\n", v->num_threads);
+
+ fprintf(f, " Registered thread IDs = ");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ id = i << __RTE_QSBR_THRID_INDEX_SHIFT;
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "%d ", id + t);
+
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %"PRIu64"\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ id = i << __RTE_QSBR_THRID_INDEX_SHIFT;
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %"PRIu64", lock count = %u\n",
+ id + t,
+ __atomic_load_n(
+ &v->qsbr_cnt[id + t].cnt,
+ __ATOMIC_RELAXED),
+ __atomic_load_n(
+ &v->qsbr_cnt[id + t].lock_cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ return 0;
+}
+
+int rte_rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rte_rcu_log_type = rte_log_register("lib.rcu");
+ if (rte_rcu_log_type >= 0)
+ rte_log_set_level(rte_rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..9727f4922
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,641 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+#include <rte_atomic.h>
+
+extern int rte_rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define __RTE_RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define __RTE_RCU_DP_LOG(level, fmt, args...)
+#endif
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do {\
+ if (v->qsbr_cnt[thread_id].lock_cnt) \
+ rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args); \
+} while (0)
+#else
+#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored as a bitmap in an array of 64b
+ * elements. A given thread id needs to be converted to an index into
+ * the array and a bit position within that array element.
+ */
+#define __RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define __RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
+ RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
+ __RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
+#define __RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
+ ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
+#define __RTE_QSBR_THRID_INDEX_SHIFT 6
+#define __RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline.
+ * A 64b counter is used to avoid adding more code to address
+ * counter overflow. Changing this to 32b would require additional
+ * changes to various APIs.
+ */
+ uint32_t lock_cnt;
+ /**< Lock counter. Used when CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled */
+} __rte_cache_aligned;
+
+#define __RTE_QSBR_CNT_THR_OFFLINE 0
+#define __RTE_QSBR_CNT_INIT 1
+
+/* RTE Quiescent State variable structure.
+ * This structure has two elements that vary in size based on the
+ * 'max_threads' parameter.
+ * 1) Quiescent state counter array
+ * 2) Register thread ID array
+ */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple concurrent quiescent state queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t num_threads;
+ /**< Number of threads currently using this QS variable */
+ uint32_t max_threads;
+ /**< Maximum number of threads using this QS variable */
+
+ struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
+ /**< Quiescent state counter array of 'max_threads' elements */
+
+ /**< Registered thread IDs are stored in a bitmap array,
+ * after the quiescent state counter array.
+ */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * On success - size of memory in bytes required for this QS variable.
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0 or 'v' is NULL.
+ *
+ */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable. thread_id is a value between 0 and (max_threads - 1).
+ * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing quiescent state queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_quiescent. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_online API, after the blocking
+ * function call returns, to ensure that rte_rcu_qsbr_check API
+ * waits for the reader thread to update its quiescent state.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that
+ * the following will not move down after the load of any shared
+ * data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+#ifdef RTE_ARCH_X86_64
+ /* rte_smp_mb() for x86 is lighter */
+ rte_smp_mb();
+#else
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed, i.e. any load of the
+ * data structure cannot move after this store.
+ */
+ */
+
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ __RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Acquire a lock for accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called before
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, a lock counter is incremented.
+ * Similarly, rte_rcu_qsbr_unlock will decrement the counter. The
+ * rte_rcu_qsbr_check API will verify that this counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_lock(__rte_unused struct rte_rcu_qsbr *v,
+ __rte_unused unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ /* Increment the lock counter */
+ __atomic_fetch_add(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_ACQUIRE);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Release a lock after accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called after
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, rte_rcu_qsbr_unlock will
+ * decrement a lock counter. rte_rcu_qsbr_check API will verify that this
+ * counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_unlock(__rte_unused struct rte_rcu_qsbr *v,
+ __rte_unused unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ /* Decrement the lock counter */
+ __atomic_fetch_sub(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_RELEASE);
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, WARNING,
+ "Lock counter %u. Nested locks?\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+#endif
+}
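The lock/unlock pair above is purely a debugging aid: a per-thread nesting counter that must be back at zero whenever the thread reports a quiescent state. A standalone sketch of that mechanism (names are illustrative, not the DPDK API):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Per-thread critical-section nesting counter; in DPDK this is
 * compiled in only when CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled.
 */
static _Atomic uint32_t lock_cnt;

static void dbg_lock(void)   { atomic_fetch_add(&lock_cnt, 1); }
static void dbg_unlock(void) { atomic_fetch_sub(&lock_cnt, 1); }

/* A quiescent-state report is only legal outside all critical
 * sections, i.e. with the counter back at zero.
 */
static bool quiescent_legal(void)
{
	return atomic_load(&lock_cnt) == 0;
}
```

Nested lock/unlock calls are balanced by the counter; reporting a quiescent state while the counter is non-zero is exactly the condition the debug build flags.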
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELEASE);
+
+ __RTE_RCU_DP_LOG(DEBUG, "%s: update: token = %"PRIu64", Thread ID = %d",
+ __func__, t, thread_id);
+}
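The token/counter handshake implemented by rte_rcu_qsbr_start, rte_rcu_qsbr_quiescent and rte_rcu_qsbr_check can be modelled in a few lines of standalone C11. This is an illustrative model, not the DPDK API: the writer bumps a global token, each online reader copies the token into its own counter when quiescent, and the writer considers a reader caught up once its counter reaches the token (a counter of 0 means offline, mirroring __RTE_QSBR_CNT_THR_OFFLINE).

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative single-file model of the QSBR handshake; two readers. */
static _Atomic uint64_t token = 1;     /* mirrors __RTE_QSBR_CNT_INIT */
static _Atomic uint64_t reader_cnt[2]; /* 0 == thread offline */

/* Writer side: announce a new grace period, return its token. */
static uint64_t start(void)
{
	return atomic_fetch_add_explicit(&token, 1, memory_order_release) + 1;
}

/* Reader side: report a quiescent state. */
static void quiescent(unsigned int id)
{
	uint64_t t = atomic_load_explicit(&token, memory_order_acquire);

	atomic_store_explicit(&reader_cnt[id], t, memory_order_release);
}

/* Writer side: have all online readers caught up with token 't'? */
static bool check(uint64_t t)
{
	for (unsigned int i = 0; i < 2; i++) {
		uint64_t c = atomic_load_explicit(&reader_cnt[i],
						  memory_order_acquire);
		if (c != 0 && c < t) /* online but not yet quiescent */
			return false;
	}
	return true;
}
```

check(t) stays false until every online reader has reported a quiescent state after start(); a reader that goes offline (counter 0) is skipped, which is why offline threads cannot stall the writer.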
+
+/* Check the quiescent state counter for registered threads only, assuming
+ * that not all threads have registered.
+ */
+static __rte_always_inline int
+__rte_rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t c;
+ uint64_t *reg_thread_id;
+
+ for (i = 0, reg_thread_id = __RTE_QSBR_THRID_ARRAY_ELM(v, 0);
+ i < v->num_elems;
+ i++, reg_thread_id++) {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
+ id = i << __RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ __RTE_RCU_DP_LOG(DEBUG,
+ "%s: check: token = %"PRIu64", wait = %d, Bit Map = 0x%"PRIx64", Thread ID = %d",
+ __func__, t, wait, bmap, id + j);
+ c = __atomic_load_n(
+ &v->qsbr_cnt[id + j].cnt,
+ __ATOMIC_ACQUIRE);
+ __RTE_RCU_DP_LOG(DEBUG,
+ "%s: status: token = %"PRIu64", wait = %d, Thread QS cnt = %"PRIu64", Thread ID = %d",
+ __func__, t, wait, c, id+j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(c !=
+ __RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(reg_thread_id,
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+ }
+
+ return 1;
+}
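The per-word scan above relies on a standard bit trick: repeatedly take the index of the lowest set bit and clear it. A standalone sketch of that inner loop, collecting registered thread IDs from one bitmap word (using __builtin_ctzll for portability with uint64_t; the patch uses __builtin_ctzl, which is equivalent on LP64 platforms):

```c
#include <assert.h>
#include <stdint.h>

/* Extract the set-bit positions of one 64-bit bitmap word, in
 * ascending order, the same way the check/dump loops walk the
 * registered thread IDs. Returns the number of IDs written to 'out'.
 */
static unsigned int bitmap_ids(uint64_t bmap, unsigned int base,
			       unsigned int *out)
{
	unsigned int n = 0;

	while (bmap) {
		unsigned int j = __builtin_ctzll(bmap); /* lowest set bit */

		out[n++] = base + j;
		bmap &= ~(1ULL << j); /* clear it and continue */
	}
	return n;
}
```

'base' plays the role of `i << __RTE_QSBR_THRID_INDEX_SHIFT`: the ID of bit 0 of the current word.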
+
+/* Check the quiescent state counter for all threads, assuming that
+ * all the threads have registered.
+ */
+static __rte_always_inline int
+__rte_rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i;
+ struct rte_rcu_qsbr_cnt *cnt;
+ uint64_t c;
+
+ for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
+ __RTE_RCU_DP_LOG(DEBUG,
+ "%s: check: token = %"PRIu64", wait = %d, Thread ID = %d",
+ __func__, t, wait, i);
+ while (1) {
+ c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
+ __RTE_RCU_DP_LOG(DEBUG,
+ "%s: status: token = %"PRIu64", wait = %d, Thread QS cnt = %"PRIu64", Thread ID = %d",
+ __func__, t, wait, c, i);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (likely(c == __RTE_QSBR_CNT_THR_OFFLINE || c >= t))
+ break;
+
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ }
+ }
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same QS variable, it must update the quiescent state status, before
+ * calling this API.
+ *
+ * 2) In addition, while calling from multiple threads, only
+ * one of those threads can be reporting the quiescent state status
+ * on a given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state referenced by token 't'.
+ * @return
+ * - 0 if all reader threads have NOT entered the quiescent state
+ * referenced by token 't'.
+ * - 1 if all reader threads have entered the quiescent state
+ * referenced by token 't'.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ RTE_ASSERT(v != NULL);
+
+ if (likely(v->num_threads == v->max_threads))
+ return __rte_rcu_qsbr_check_all(v, t, wait);
+ else
+ return __rte_rcu_qsbr_check_selective(v, t, wait);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, only one of
+ * those threads can be reporting the quiescent state status on a
+ * given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report quiescent state
+ * on this QS variable (i.e. the calling thread is also part of the
+ * readside critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - NULL parameters are passed
+ */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..f8b9ef2ab
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,13 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_log_type;
+ rte_rcu_qsbr_dump;
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_synchronize;
+ rte_rcu_qsbr_thread_register;
+ rte_rcu_qsbr_thread_unregister;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index a379dd682..e067ce5ea 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'stack', 'vhost',
+ 'rcu', 'reorder', 'sched', 'security', 'stack', 'vhost',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index f020bb10c..7c9b4b538 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
* [dpdk-dev] [PATCH v9 1/4] rcu: add RCU library supporting QSBR mechanism
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 1/4] rcu: " Honnappa Nagarahalli
@ 2019-05-01 3:54 ` Honnappa Nagarahalli
0 siblings, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-05-01 3:54 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add RCU library supporting quiescent state based memory reclamation method.
This library helps identify the quiescent state of the reader threads so
that the writers can free the memory associated with the lock less data
structures.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
---
MAINTAINERS | 5 +
config/common_base | 6 +
lib/Makefile | 2 +
lib/librte_rcu/Makefile | 23 ++
lib/librte_rcu/meson.build | 7 +
lib/librte_rcu/rte_rcu_qsbr.c | 277 +++++++++++++
lib/librte_rcu/rte_rcu_qsbr.h | 641 +++++++++++++++++++++++++++++
lib/librte_rcu/rte_rcu_version.map | 13 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
10 files changed, 976 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_rcu/Makefile
create mode 100644 lib/librte_rcu/meson.build
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.c
create mode 100644 lib/librte_rcu/rte_rcu_qsbr.h
create mode 100644 lib/librte_rcu/rte_rcu_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 4493aa636..5d25b21f0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1281,6 +1281,11 @@ F: examples/bpf/
F: app/test/test_bpf.c
F: doc/guides/prog_guide/bpf_lib.rst
+RCU - EXPERIMENTAL
+M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+F: lib/librte_rcu/
+F: doc/guides/prog_guide/rcu_lib.rst
+
Test Applications
-----------------
diff --git a/config/common_base b/config/common_base
index 4236c2a67..6b96e0e80 100644
--- a/config/common_base
+++ b/config/common_base
@@ -838,6 +838,12 @@ CONFIG_RTE_LIBRTE_LATENCY_STATS=y
#
CONFIG_RTE_LIBRTE_TELEMETRY=n
+#
+# Compile librte_rcu
+#
+CONFIG_RTE_LIBRTE_RCU=y
+CONFIG_RTE_LIBRTE_RCU_DEBUG=n
+
#
# Compile librte_lpm
#
diff --git a/lib/Makefile b/lib/Makefile
index 26021d0c0..791e0d991 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
+DEPDIRS-librte_rcu := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_rcu/Makefile b/lib/librte_rcu/Makefile
new file mode 100644
index 000000000..6aa677bd1
--- /dev/null
+++ b/lib/librte_rcu/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_rcu.a
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_rcu_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) := rte_rcu_qsbr.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RCU)-include := rte_rcu_qsbr.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_rcu/meson.build b/lib/librte_rcu/meson.build
new file mode 100644
index 000000000..0c2d5a2e0
--- /dev/null
+++ b/lib/librte_rcu/meson.build
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Arm Limited
+
+allow_experimental_apis = true
+
+sources = files('rte_rcu_qsbr.c')
+headers = files('rte_rcu_qsbr.h')
diff --git a/lib/librte_rcu/rte_rcu_qsbr.c b/lib/librte_rcu/rte_rcu_qsbr.c
new file mode 100644
index 000000000..b4ed01045
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.c
@@ -0,0 +1,277 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_errno.h>
+
+#include "rte_rcu_qsbr.h"
+
+/* Get the memory size of QSBR variable */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads)
+{
+ size_t sz;
+
+ if (max_threads == 0) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid max_threads %u\n",
+ __func__, max_threads);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = sizeof(struct rte_rcu_qsbr);
+
+ /* Add the size of quiescent state counter array */
+ sz += sizeof(struct rte_rcu_qsbr_cnt) * max_threads;
+
+ /* Add the size of the registered thread ID bitmap array */
+ sz += __RTE_QSBR_THRID_ARRAY_SIZE(max_threads);
+
+ return sz;
+}
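The size computed above is the fixed header, plus one per-thread counter struct, plus the cache-line-aligned bitmap. A standalone sketch of the same arithmetic with the DPDK alignment macros expanded by hand (64-byte cache lines assumed; the header and counter sizes are parameters here because the real values come from sizeof on the DPDK structs):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Round 'v' up to a multiple of 'mul'; mirrors RTE_ALIGN_MUL_CEIL
 * (and RTE_ALIGN for the power-of-two cache-line case).
 */
static size_t align_mul_ceil(size_t v, size_t mul)
{
	return ((v + mul - 1) / mul) * mul;
}

/* Size of the registered-thread-ID bitmap: one bit per thread,
 * rounded up to whole 64-bit words, then to a whole 64B cache line
 * (mirrors __RTE_QSBR_THRID_ARRAY_SIZE).
 */
static size_t thrid_array_size(uint32_t max_threads)
{
	return align_mul_ceil(align_mul_ceil(max_threads, 64) / 8, 64);
}

/* Total QS variable size, parameterized by the header and per-thread
 * counter struct sizes (implementation defined in the real library).
 */
static size_t qsbr_memsize(uint32_t max_threads, size_t hdr_sz,
			   size_t cnt_sz)
{
	return hdr_sz + cnt_sz * max_threads + thrid_array_size(max_threads);
}
```

Note the bitmap never shrinks below one cache line, so even a single-thread QS variable pays for 512 thread-ID bits of bitmap space.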
+
+/* Initialize a quiescent state variable */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
+{
+ size_t sz;
+
+ if (v == NULL) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ sz = rte_rcu_qsbr_get_memsize(max_threads);
+ if (sz == 1)
+ return 1;
+
+ /* Set all the threads to offline */
+ memset(v, 0, sz);
+ v->max_threads = max_threads;
+ v->num_elems = RTE_ALIGN_MUL_CEIL(max_threads,
+ __RTE_QSBR_THRID_ARRAY_ELM_SIZE) /
+ __RTE_QSBR_THRID_ARRAY_ELM_SIZE;
+ v->token = __RTE_QSBR_CNT_INIT;
+
+ return 0;
+}
+
+/* Register a reader thread to report its quiescent state
+ * on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ id = thread_id & __RTE_QSBR_THRID_MASK;
+ i = thread_id >> __RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already registered */
+ old_bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (old_bmap & (1UL << id))
+ return 0;
+
+ do {
+ new_bmap = old_bmap | (1UL << id);
+ success = __atomic_compare_exchange(
+ __RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_add(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (old_bmap & (1UL << id))
+ /* Someone else registered this thread.
+ * Counter should not be incremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Remove a reader thread, from the list of threads reporting their
+ * quiescent state on a QS variable.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ unsigned int i, id, success;
+ uint64_t old_bmap, new_bmap;
+
+ if (v == NULL || thread_id >= v->max_threads) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ id = thread_id & __RTE_QSBR_THRID_MASK;
+ i = thread_id >> __RTE_QSBR_THRID_INDEX_SHIFT;
+
+ /* Make sure that the counter for registered threads does not
+ * go out of sync. Hence, additional checks are required.
+ */
+ /* Check if the thread is already unregistered */
+ old_bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_RELAXED);
+ if (!(old_bmap & (1UL << id)))
+ return 0;
+
+ do {
+ new_bmap = old_bmap & ~(1UL << id);
+ /* Make sure any loads of the shared data structure are
+ * completed before removal of the thread from the list of
+ * reporting threads.
+ */
+ success = __atomic_compare_exchange(
+ __RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ &old_bmap, &new_bmap, 0,
+ __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+
+ if (success)
+ __atomic_fetch_sub(&v->num_threads,
+ 1, __ATOMIC_RELAXED);
+ else if (!(old_bmap & (1UL << id)))
+ /* Someone else unregistered this thread.
+ * Counter should not be decremented.
+ */
+ return 0;
+ } while (success == 0);
+
+ return 0;
+}
+
+/* Wait till the reader threads have entered quiescent state. */
+void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ t = rte_rcu_qsbr_start(v);
+
+ /* If the current thread has readside critical section,
+ * update its quiescent state status.
+ */
+ if (thread_id != RTE_QSBR_THRID_INVALID)
+ rte_rcu_qsbr_quiescent(v, thread_id);
+
+ /* Wait for other readers to enter quiescent state */
+ rte_rcu_qsbr_check(v, t, true);
+}
+
+/* Dump the details of a single quiescent state variable to a file. */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
+{
+ uint64_t bmap;
+ uint32_t i, t, id;
+
+ if (v == NULL || f == NULL) {
+ rte_log(RTE_LOG_ERR, rte_rcu_log_type,
+ "%s(): Invalid input parameter\n", __func__);
+ rte_errno = EINVAL;
+
+ return 1;
+ }
+
+ fprintf(f, "\nQuiescent State Variable @%p\n", v);
+
+ fprintf(f, " QS variable memory size = %zu\n",
+ rte_rcu_qsbr_get_memsize(v->max_threads));
+ fprintf(f, " Given # max threads = %u\n", v->max_threads);
+ fprintf(f, " Current # threads = %u\n", v->num_threads);
+
+ fprintf(f, " Registered thread IDs = ");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ id = i << __RTE_QSBR_THRID_INDEX_SHIFT;
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "%d ", id + t);
+
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ fprintf(f, "\n");
+
+ fprintf(f, " Token = %"PRIu64"\n",
+ __atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
+
+ fprintf(f, "Quiescent State Counts for readers:\n");
+ for (i = 0; i < v->num_elems; i++) {
+ bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
+ __ATOMIC_ACQUIRE);
+ id = i << __RTE_QSBR_THRID_INDEX_SHIFT;
+ while (bmap) {
+ t = __builtin_ctzl(bmap);
+ fprintf(f, "thread ID = %d, count = %"PRIu64", lock count = %u\n",
+ id + t,
+ __atomic_load_n(
+ &v->qsbr_cnt[id + t].cnt,
+ __ATOMIC_RELAXED),
+ __atomic_load_n(
+ &v->qsbr_cnt[id + t].lock_cnt,
+ __ATOMIC_RELAXED));
+ bmap &= ~(1UL << t);
+ }
+ }
+
+ return 0;
+}
+
+int rte_rcu_log_type;
+
+RTE_INIT(rte_rcu_register)
+{
+ rte_rcu_log_type = rte_log_register("lib.rcu");
+ if (rte_rcu_log_type >= 0)
+ rte_log_set_level(rte_rcu_log_type, RTE_LOG_ERR);
+}
diff --git a/lib/librte_rcu/rte_rcu_qsbr.h b/lib/librte_rcu/rte_rcu_qsbr.h
new file mode 100644
index 000000000..9727f4922
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_qsbr.h
@@ -0,0 +1,641 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#ifndef _RTE_RCU_QSBR_H_
+#define _RTE_RCU_QSBR_H_
+
+/**
+ * @file
+ * RTE Quiescent State Based Reclamation (QSBR)
+ *
+ * Quiescent State (QS) is any point in the thread execution
+ * where the thread does not hold a reference to a data structure
+ * in shared memory. While using lock-less data structures, the writer
+ * can safely free memory once all the reader threads have entered
+ * quiescent state.
+ *
+ * This library provides the ability for the readers to report quiescent
+ * state and for the writers to identify when all the readers have
+ * entered quiescent state.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+#include <rte_atomic.h>
+
+extern int rte_rcu_log_type;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define __RTE_RCU_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args)
+#else
+#define __RTE_RCU_DP_LOG(level, fmt, args...)
+#endif
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do {\
+ if (v->qsbr_cnt[thread_id].lock_cnt) \
+ rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \
+ "%s(): " fmt "\n", __func__, ## args); \
+} while (0)
+#else
+#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...)
+#endif
+
+/* Registered thread IDs are stored as a bitmap of 64b element array.
+ * Given thread id needs to be converted to index into the array and
+ * the id within the array element.
+ */
+#define __RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
+#define __RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
+ RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
+ __RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
+#define __RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
+ ((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
+#define __RTE_QSBR_THRID_INDEX_SHIFT 6
+#define __RTE_QSBR_THRID_MASK 0x3f
+#define RTE_QSBR_THRID_INVALID 0xffffffff
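These macros split a thread ID into a word index (high bits, shifted by 6 because each 64-bit word holds 64 IDs) and a bit position within the word (low 6 bits). A standalone sketch of that decomposition, mirroring __RTE_QSBR_THRID_INDEX_SHIFT and __RTE_QSBR_THRID_MASK:

```c
#include <assert.h>
#include <stdint.h>

#define THRID_INDEX_SHIFT 6  /* mirrors __RTE_QSBR_THRID_INDEX_SHIFT */
#define THRID_MASK 0x3f      /* mirrors __RTE_QSBR_THRID_MASK */

/* Decompose a thread ID into (bitmap word index, bit within word). */
static void thrid_split(unsigned int thread_id,
			unsigned int *idx, unsigned int *bit)
{
	*idx = thread_id >> THRID_INDEX_SHIFT;
	*bit = thread_id & THRID_MASK;
}
```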
+
+/* Worker thread counter */
+struct rte_rcu_qsbr_cnt {
+ uint64_t cnt;
+ /**< Quiescent state counter. Value 0 indicates the thread is offline
+ * 64b counter is used to avoid adding more code to address
+ * counter overflow. Changing this to 32b would require additional
+ * changes to various APIs.
+ */
+ uint32_t lock_cnt;
+ /**< Lock counter. Used when CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled */
+} __rte_cache_aligned;
+
+#define __RTE_QSBR_CNT_THR_OFFLINE 0
+#define __RTE_QSBR_CNT_INIT 1
+
+/* RTE Quiescent State variable structure.
+ * This structure has two elements that vary in size based on the
+ * 'max_threads' parameter.
+ * 1) Quiescent state counter array
+ * 2) Registered thread ID array
+ */
+struct rte_rcu_qsbr {
+ uint64_t token __rte_cache_aligned;
+ /**< Counter to allow for multiple concurrent quiescent state queries */
+
+ uint32_t num_elems __rte_cache_aligned;
+ /**< Number of elements in the thread ID array */
+ uint32_t num_threads;
+ /**< Number of threads currently using this QS variable */
+ uint32_t max_threads;
+ /**< Maximum number of threads using this QS variable */
+
+ struct rte_rcu_qsbr_cnt qsbr_cnt[0] __rte_cache_aligned;
+ /**< Quiescent state counter array of 'max_threads' elements */
+
+ /**< Registered thread IDs are stored in a bitmap array,
+ * after the quiescent state counter array.
+ */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the size of the memory occupied by a Quiescent State variable.
+ *
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * @return
+ * On success - size of memory in bytes required for this QS variable.
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0
+ */
+size_t __rte_experimental
+rte_rcu_qsbr_get_memsize(uint32_t max_threads);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Initialize a Quiescent State (QS) variable.
+ *
+ * @param v
+ * QS variable
+ * @param max_threads
+ * Maximum number of threads reporting quiescent state on this variable.
+ * This should be the same value as passed to rte_rcu_qsbr_get_memsize.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - max_threads is 0 or 'v' is NULL.
+ *
+ */
+int __rte_experimental
+rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
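+As a rough illustration of how the size returned by rte_rcu_qsbr_get_memsize follows from the layout described above (fixed header, per-thread counter array, registered-thread bitmap), here is a self-contained model. The cache-line and header sizes are assumptions for a 64B cache line machine, not the library's actual arithmetic, but they reproduce the 8384-byte figure the unit tests in this series expect for 128 threads:
+
+```c
+#include <stdint.h>
+#include <stdio.h>
+
+#define CACHE_LINE 64   /* assumption: 64B cache line (RTE_CACHE_LINE_SIZE) */
+
+/* Stand-ins for the real structures: the __rte_cache_aligned markers in
+ * the header imply one cache line per counter entry and a two-line fixed
+ * header (token on one line, the num_* fields on another).
+ */
+#define QSBR_HDR_SIZE (2 * CACHE_LINE)  /* fixed part of struct rte_rcu_qsbr */
+#define QSBR_CNT_SIZE CACHE_LINE        /* one struct rte_rcu_qsbr_cnt */
+
+static size_t qsbr_memsize(uint32_t max_threads)
+{
+	size_t sz = QSBR_HDR_SIZE;
+
+	/* Quiescent state counter array: one cache-aligned entry per thread */
+	sz += (size_t)max_threads * QSBR_CNT_SIZE;
+
+	/* Registered-thread bitmap: one bit per thread, stored in 64-bit
+	 * words and rounded up to a cache line.
+	 */
+	size_t bitmap = ((max_threads + 63) / 64) * sizeof(uint64_t);
+	sz += (bitmap + CACHE_LINE - 1) & ~(size_t)(CACHE_LINE - 1);
+
+	return sz;
+}
+
+int main(void)
+{
+	/* 128 threads: 128 + 128 * 64 + 64 = 8384 bytes with this model */
+	printf("%zu\n", qsbr_memsize(128));
+	return 0;
+}
+```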
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register a reader thread to report its quiescent state
+ * on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ * Any reader thread that wants to report its quiescent state must
+ * call this API. This can be called during initialization or as part
+ * of the packet processing loop.
+ *
+ * Note that rte_rcu_qsbr_thread_online must be called before the
+ * thread updates its quiescent state using rte_rcu_qsbr_quiescent.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable. thread_id is a value between 0 and (max_threads - 1).
+ * 'max_threads' is the parameter passed in 'rte_rcu_qsbr_init' API.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a reader thread, from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be called from the reader threads during shutdown.
+ * Ongoing quiescent state queries will stop waiting for the status from this
+ * unregistered reader thread.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will stop reporting its quiescent
+ * state on the QS variable.
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ */
+int __rte_experimental
+rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a registered reader thread, to the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * Any registered reader thread that wants to report its quiescent state must
+ * call this API before calling rte_rcu_qsbr_quiescent. This can be called
+ * during initialization or as part of the packet processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_online API, after the blocking
+ * function call returns, to ensure that rte_rcu_qsbr_check API
+ * waits for the reader thread to update its quiescent state.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread with this thread ID will report its quiescent state on
+ * the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* Copy the current value of token.
+ * The fence at the end of the function will ensure that
+ * the following will not move down after the load of any shared
+ * data structure.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_RELAXED);
+
+ /* __atomic_store_n(cnt, __ATOMIC_RELAXED) is used to ensure
+ * 'cnt' (64b) is accessed atomically.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELAXED);
+
+ /* The subsequent load of the data structure should not
+ * move above the store. Hence a store-load barrier
+ * is required.
+ * If the load of the data structure moves above the store,
+ * writer might not see that the reader is online, even though
+ * the reader is referencing the shared data structure.
+ */
+#ifdef RTE_ARCH_X86_64
+ /* rte_smp_mb() for x86 is lighter */
+ rte_smp_mb();
+#else
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a registered reader thread from the list of threads reporting their
+ * quiescent state on a QS variable.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This can be called during initialization or as part of the packet
+ * processing loop.
+ *
+ * The reader thread must call rte_rcu_qsbr_thread_offline API, before
+ * calling any functions that block, to ensure that rte_rcu_qsbr_check
+ * API does not wait indefinitely for the reader thread to update its QS.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * rte_rcu_qsbr_check API will not wait for the reader thread with
+ * this thread ID to report its quiescent state on the QS variable.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* The reader can go offline only after the load of the
+ * data structure is completed, i.e. no load of the
+ * data structure can be reordered after this store.
+ */
+
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ __RTE_QSBR_CNT_THR_OFFLINE, __ATOMIC_RELEASE);
+}
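+The online/offline pair can be modeled with C11 atomics. This is a simplified, single-reader sketch of the ordering described in the comments above (token copy plus store-load fence on online, release store of the offline sentinel), not the library code itself; the `CNT_*` constants mirror the internal `__RTE_QSBR_CNT_*` values:
+
+```c
+#include <inttypes.h>
+#include <stdatomic.h>
+#include <stdint.h>
+#include <stdio.h>
+
+#define CNT_THR_OFFLINE 0  /* mirrors __RTE_QSBR_CNT_THR_OFFLINE */
+#define CNT_INIT        1  /* mirrors __RTE_QSBR_CNT_INIT */
+
+static _Atomic uint64_t token = CNT_INIT;             /* writer's token */
+static _Atomic uint64_t thread_cnt = CNT_THR_OFFLINE; /* one reader's counter */
+
+/* Going online: copy the current token into the per-thread counter, then
+ * fence so later loads of shared data cannot be reordered before the
+ * store (the store-load barrier discussed above).
+ */
+static void reader_online(void)
+{
+	uint64_t t = atomic_load_explicit(&token, memory_order_relaxed);
+	atomic_store_explicit(&thread_cnt, t, memory_order_relaxed);
+	atomic_thread_fence(memory_order_seq_cst);
+}
+
+/* Going offline: publish the sentinel with release semantics so the
+ * writer stops waiting on this reader.
+ */
+static void reader_offline(void)
+{
+	atomic_store_explicit(&thread_cnt, CNT_THR_OFFLINE, memory_order_release);
+}
+
+int main(void)
+{
+	reader_online();
+	printf("online cnt=%" PRIu64 "\n", atomic_load(&thread_cnt));
+	reader_offline();
+	printf("offline cnt=%" PRIu64 "\n", atomic_load(&thread_cnt));
+	return 0;
+}
+```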
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Acquire a lock for accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called before
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, a lock counter is incremented.
+ * Similarly, rte_rcu_qsbr_unlock will decrement the counter. The
+ * rte_rcu_qsbr_check API will verify that this counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_lock(__rte_unused struct rte_rcu_qsbr *v,
+ __rte_unused unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ /* Increment the lock counter */
+ __atomic_fetch_add(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_ACQUIRE);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Release a lock after accessing a shared data structure.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe.
+ *
+ * This API is provided to aid debugging. This should be called after
+ * accessing a shared data structure.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is enabled, rte_rcu_qsbr_unlock will
+ * decrement a lock counter. rte_rcu_qsbr_check API will verify that this
+ * counter is 0.
+ *
+ * When CONFIG_RTE_LIBRTE_RCU_DEBUG is disabled, this API will do nothing.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Reader thread id
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_unlock(__rte_unused struct rte_rcu_qsbr *v,
+ __rte_unused unsigned int thread_id)
+{
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+#if defined(RTE_LIBRTE_RCU_DEBUG)
+ /* Decrement the lock counter */
+ __atomic_fetch_sub(&v->qsbr_cnt[thread_id].lock_cnt,
+ 1, __ATOMIC_RELEASE);
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, WARNING,
+ "Lock counter %u. Nested locks?\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+#endif
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Ask the reader threads to report the quiescent state
+ * status.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from worker threads.
+ *
+ * @param v
+ * QS variable
+ * @return
+ * - This is the token for this call of the API. This should be
+ * passed to rte_rcu_qsbr_check API.
+ */
+static __rte_always_inline uint64_t __rte_experimental
+rte_rcu_qsbr_start(struct rte_rcu_qsbr *v)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL);
+
+ /* Release the changes to the shared data structure.
+ * This store release will ensure that changes to any data
+ * structure are visible to the workers before the token
+ * update is visible.
+ */
+ t = __atomic_add_fetch(&v->token, 1, __ATOMIC_RELEASE);
+
+ return t;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Update quiescent state for a reader thread.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * All the reader threads registered to report their quiescent state
+ * on the QS variable must call this API.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Update the quiescent state for the reader with this thread ID.
+ */
+static __rte_always_inline void __rte_experimental
+rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id)
+{
+ uint64_t t;
+
+ RTE_ASSERT(v != NULL && thread_id < v->max_threads);
+
+ __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n",
+ v->qsbr_cnt[thread_id].lock_cnt);
+
+ /* Acquire the changes to the shared data structure released
+ * by rte_rcu_qsbr_start.
+ * Later loads of the shared data structure should not move
+ * above this load. Hence, use load-acquire.
+ */
+ t = __atomic_load_n(&v->token, __ATOMIC_ACQUIRE);
+
+ /* Inform the writer that updates are visible to this reader.
+ * Prior loads of the shared data structure should not move
+ * beyond this store. Hence use store-release.
+ */
+ __atomic_store_n(&v->qsbr_cnt[thread_id].cnt,
+ t, __ATOMIC_RELEASE);
+
+ __RTE_RCU_DP_LOG(DEBUG, "%s: update: token = %"PRIu64", Thread ID = %d",
+ __func__, t, thread_id);
+}
+
+/* Check the quiescent state counter for registered threads only, assuming
+ * that not all threads have registered.
+ */
+static __rte_always_inline int
+__rte_rcu_qsbr_check_selective(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i, j, id;
+ uint64_t bmap;
+ uint64_t c;
+ uint64_t *reg_thread_id;
+
+ for (i = 0, reg_thread_id = __RTE_QSBR_THRID_ARRAY_ELM(v, 0);
+ i < v->num_elems;
+ i++, reg_thread_id++) {
+ /* Load the current registered thread bit map before
+ * loading the reader thread quiescent state counters.
+ */
+ bmap = __atomic_load_n(reg_thread_id, __ATOMIC_ACQUIRE);
+ id = i << __RTE_QSBR_THRID_INDEX_SHIFT;
+
+ while (bmap) {
+ j = __builtin_ctzl(bmap);
+ __RTE_RCU_DP_LOG(DEBUG,
+ "%s: check: token = %"PRIu64", wait = %d, Bit Map = 0x%"PRIx64", Thread ID = %d",
+ __func__, t, wait, bmap, id + j);
+ c = __atomic_load_n(
+ &v->qsbr_cnt[id + j].cnt,
+ __ATOMIC_ACQUIRE);
+ __RTE_RCU_DP_LOG(DEBUG,
+ "%s: status: token = %"PRIu64", wait = %d, Thread QS cnt = %"PRIu64", Thread ID = %d",
+ __func__, t, wait, c, id+j);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (unlikely(c !=
+ __RTE_QSBR_CNT_THR_OFFLINE && c < t)) {
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ /* This thread might have unregistered.
+ * Re-read the bitmap.
+ */
+ bmap = __atomic_load_n(reg_thread_id,
+ __ATOMIC_ACQUIRE);
+
+ continue;
+ }
+
+ bmap &= ~(1UL << j);
+ }
+ }
+
+ return 1;
+}
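+The inner loop above peels registered thread IDs out of each 64-bit bitmap word using count-trailing-zeros. A standalone sketch of that bit-walk (the GCC/Clang `__builtin_ctzll` builtin is assumed):
+
+```c
+#include <stdint.h>
+#include <stdio.h>
+
+int main(void)
+{
+	/* Pretend threads 1, 6 and 63 are registered in this 64-thread word */
+	uint64_t bmap = (1ULL << 1) | (1ULL << 6) | (1ULL << 63);
+
+	/* Same walk as __rte_rcu_qsbr_check_selective: find the lowest set
+	 * bit with count-trailing-zeros, handle that thread, clear the bit.
+	 */
+	while (bmap) {
+		unsigned int j = __builtin_ctzll(bmap);
+		printf("thread %u\n", j);
+		bmap &= ~(1ULL << j);
+	}
+	return 0;
+}
+```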
+
+/* Check the quiescent state counter for all threads, assuming that
+ * all the threads have registered.
+ */
+static __rte_always_inline int
+__rte_rcu_qsbr_check_all(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ uint32_t i;
+ struct rte_rcu_qsbr_cnt *cnt;
+ uint64_t c;
+
+ for (i = 0, cnt = v->qsbr_cnt; i < v->max_threads; i++, cnt++) {
+ __RTE_RCU_DP_LOG(DEBUG,
+ "%s: check: token = %"PRIu64", wait = %d, Thread ID = %d",
+ __func__, t, wait, i);
+ while (1) {
+ c = __atomic_load_n(&cnt->cnt, __ATOMIC_ACQUIRE);
+ __RTE_RCU_DP_LOG(DEBUG,
+ "%s: status: token = %"PRIu64", wait = %d, Thread QS cnt = %"PRIu64", Thread ID = %d",
+ __func__, t, wait, c, i);
+ /* Counter is not checked for wrap-around condition
+ * as it is a 64b counter.
+ */
+ if (likely(c == __RTE_QSBR_CNT_THR_OFFLINE || c >= t))
+ break;
+
+ /* This thread is not in quiescent state */
+ if (!wait)
+ return 0;
+
+ rte_pause();
+ }
+ }
+
+ return 1;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Checks if all the reader threads have entered the quiescent state
+ * referenced by token.
+ *
+ * This is implemented as a lock-free function. It is multi-thread
+ * safe and can be called from the worker threads as well.
+ *
+ * If this API is called with 'wait' set to true, the following
+ * factors must be considered:
+ *
+ * 1) If the calling thread is also reporting the status on the
+ * same QS variable, it must update the quiescent state status, before
+ * calling this API.
+ *
+ * 2) In addition, while calling from multiple threads, only
+ * one of those threads can be reporting the quiescent state status
+ * on a given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param t
+ * Token returned by rte_rcu_qsbr_start API
+ * @param wait
+ * If true, block till all the reader threads have completed entering
+ * the quiescent state referenced by token 't'.
+ * @return
+ * - 0 if all reader threads have NOT passed through specified number
+ * of quiescent states.
+ * - 1 if all reader threads have passed through specified number
+ * of quiescent states.
+ */
+static __rte_always_inline int __rte_experimental
+rte_rcu_qsbr_check(struct rte_rcu_qsbr *v, uint64_t t, bool wait)
+{
+ RTE_ASSERT(v != NULL);
+
+ if (likely(v->num_threads == v->max_threads))
+ return __rte_rcu_qsbr_check_all(v, t, wait);
+ else
+ return __rte_rcu_qsbr_check_selective(v, t, wait);
+}
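+Putting start and check together, the writer-side grace-period logic reduces to: bump the token, then wait until every reader is either offline or has a counter that has caught up with the token. A single-threaded simulation of that comparison (plain C; this models the counter arithmetic only, not the lock-free implementation):
+
+```c
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+
+#define NUM_READERS 3
+#define OFFLINE     0   /* mirrors __RTE_QSBR_CNT_THR_OFFLINE */
+
+static uint64_t token = 1;          /* starts at the init value */
+static uint64_t cnt[NUM_READERS];   /* per-reader quiescent state counters */
+
+/* rte_rcu_qsbr_start equivalent: bump the global token */
+static uint64_t start(void)
+{
+	return ++token;
+}
+
+/* rte_rcu_qsbr_check equivalent (wait == false): every reader must be
+ * offline or have a counter that has caught up with the token.
+ */
+static bool check(uint64_t t)
+{
+	for (int i = 0; i < NUM_READERS; i++)
+		if (cnt[i] != OFFLINE && cnt[i] < t)
+			return false;
+	return true;
+}
+
+int main(void)
+{
+	for (int i = 0; i < NUM_READERS; i++)
+		cnt[i] = 1;                 /* all readers online at init */
+
+	uint64_t t = start();           /* writer deleted an element; t == 2 */
+	printf("before: %d\n", check(t));  /* no reader has reported yet */
+
+	cnt[0] = t;                     /* two readers report quiescence... */
+	cnt[1] = t;
+	cnt[2] = OFFLINE;               /* ...and the third goes offline */
+	printf("after: %d\n", check(t));   /* grace period over: safe to free */
+	return 0;
+}
+```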
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Wait till the reader threads have entered quiescent state.
+ *
+ * This is implemented as a lock-free function. It is multi-thread safe.
+ * This API can be thought of as a wrapper around rte_rcu_qsbr_start and
+ * rte_rcu_qsbr_check APIs.
+ *
+ * If this API is called from multiple threads, only one of
+ * those threads can be reporting the quiescent state status on a
+ * given QS variable.
+ *
+ * @param v
+ * QS variable
+ * @param thread_id
+ * Thread ID of the caller if it is registered to report quiescent state
+ * on this QS variable (i.e. the calling thread is also part of the
+ * read-side critical section). If not, pass RTE_QSBR_THRID_INVALID.
+ */
+void __rte_experimental
+rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump the details of a single QS variable to a file.
+ *
+ * It is NOT multi-thread safe.
+ *
+ * @param f
+ * A pointer to a file for output
+ * @param v
+ * QS variable
+ * @return
+ * On success - 0
+ * On error - 1 with error code set in rte_errno.
+ * Possible rte_errno codes are:
+ * - EINVAL - NULL parameters are passed
+ */
+int __rte_experimental
+rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RCU_QSBR_H_ */
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
new file mode 100644
index 000000000..f8b9ef2ab
--- /dev/null
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -0,0 +1,13 @@
+EXPERIMENTAL {
+ global:
+
+ rte_rcu_log_type;
+ rte_rcu_qsbr_dump;
+ rte_rcu_qsbr_get_memsize;
+ rte_rcu_qsbr_init;
+ rte_rcu_qsbr_synchronize;
+ rte_rcu_qsbr_thread_register;
+ rte_rcu_qsbr_thread_unregister;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index a379dd682..e067ce5ea 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'stack', 'vhost',
+ 'rcu', 'reorder', 'sched', 'security', 'stack', 'vhost',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index f020bb10c..7c9b4b538 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -97,6 +97,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
+_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.17.1
* [dpdk-dev] [PATCH v9 2/4] test/rcu_qsbr: add API and functional tests
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 " Honnappa Nagarahalli
2019-05-01 3:54 ` Honnappa Nagarahalli
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 1/4] rcu: " Honnappa Nagarahalli
@ 2019-05-01 3:54 ` Honnappa Nagarahalli
2019-05-01 3:54 ` Honnappa Nagarahalli
2019-05-03 14:31 ` David Marchand
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 3/4] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
` (3 subsequent siblings)
6 siblings, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-05-01 3:54 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 5 +
app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 704 +++++++++++++++++++++++
5 files changed, 1737 insertions(+)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index 54f706792..68d6b4fbc 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -218,6 +218,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 72c56e528..fba66045f 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -700,6 +700,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 80cdea5d1..4e8077cd2 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -85,6 +85,8 @@ test_sources = files('commands.c',
'test_power_acpi_cpufreq.c',
'test_power_kvm_vm.c',
'test_prefetch.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_reciprocal_division.c',
'test_reciprocal_division_perf.c',
'test_red.c',
@@ -134,6 +136,7 @@ test_deps = ['acl',
'metrics',
'pipeline',
'port',
+ 'rcu',
'reorder',
'ring',
'stack',
@@ -172,6 +175,7 @@ fast_parallel_test_names = [
'multiprocess_autotest',
'per_lcore_autotest',
'prefetch_autotest',
+ 'rcu_qsbr_autotest',
'red_autotest',
'ring_autotest',
'ring_pmd_autotest',
@@ -240,6 +244,7 @@ perf_test_names = [
'member_perf_autotest',
'efd_perf_autotest',
'lpm6_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..ed6934a47
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,1014 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+/* Make sure that this has the same value as __RTE_QSBR_CNT_INIT */
+#define TEST_RCU_QSBR_CNT_INIT 1
+
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ int i;
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ t[i] = (struct rte_rcu_qsbr *)rte_zmalloc(NULL, sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_get_memsize: Get the size of memory required for a QS
+ * variable supporting the given maximum number of threads.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
+ printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(0);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1), "Get Memsize for 0 threads");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with cache line size of 64B - 8384
+ * for machines with cache line size of 128 - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_init: Initialize a QSBR variable.
+ */
+static int
+test_rcu_qsbr_init(void)
+{
+ int r;
+
+ printf("\nTest rte_rcu_qsbr_init()\n");
+
+ r = rte_rcu_qsbr_init(NULL, TEST_RCU_MAX_LCORE);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "NULL variable");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread, to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ int ret;
+
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register valid thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Valid thread id");
+
+ /* Re-registering should not return error */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already registered thread id");
+
+ /* Register valid thread id - max allowed thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], TEST_RCU_MAX_LCORE - 1);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Max thread id");
+
+ ret = rte_rcu_qsbr_thread_register(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Invalid thread id");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread, from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ ret = rte_rcu_qsbr_thread_unregister(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Invalid thread id");
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "disabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /* Test with enabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "enabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_quiescent(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_quiescent(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the reader threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the reader threads have entered the
+ * quiescent state referenced by the token provided by the
+ * rte_rcu_qsbr_start API.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait till all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+ /* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread to
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread from
+ * the list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ /* Negative tests */
+ rte_rcu_qsbr_dump(NULL, t[0]);
+ rte_rcu_qsbr_dump(stdout, NULL);
+ rte_rcu_qsbr_dump(NULL, NULL);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, lcore_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, lcore_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Multi writer, Multiple QS variable, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ uint8_t test_cores;
+
+ writer_done = 0;
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+ printf("Test: %d writers, %d QSBR variables, simultaneous QSBR queries\n"
+ , test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_init() < 0)
+ goto test_fail;
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..16a43f8db
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,704 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <inttypes.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static volatile uint8_t writer_done;
+static volatile uint8_t all_registered;
+static volatile uint32_t thr_id;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+/* Simple way to allocate thread ids in 0 to TEST_RCU_MAX_LCORE space */
+static inline uint32_t
+alloc_thread_id(void)
+{
+ uint32_t tmp_thr_id;
+
+ tmp_thr_id = __atomic_fetch_add(&thr_id, 1, __ATOMIC_RELAXED);
+ if (tmp_thr_id >= TEST_RCU_MAX_LCORE)
+ printf("Invalid thread id %u\n", tmp_thr_id);
+
+ return tmp_thr_id;
+}
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t thread_id = alloc_thread_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ /* Register for report QS */
+ rte_rcu_qsbr_thread_register(t[0], thread_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], thread_id);
+ /* Unregister before exiting to avoid writer from waiting */
+ rte_rcu_qsbr_thread_unregister(t[0], thread_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple Readers, Single QS var, Non-Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: %d Readers/1 Writer('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores - 1;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %"PRIi64"\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %"PRIi64"\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %"PRIi64"\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %"PRIi64"\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Single writer, Multiple readers, Single QS variable
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %"PRIi64"\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %"PRIi64"\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writer, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i, sz;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* Number of readers does not matter for QS variable in this test
+ * case as no reader will be registered.
+ */
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+ /* Wait until all writer threads have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %"PRIi64"\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %"PRIi64"\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t thread_id = alloc_thread_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, thread_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, thread_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, thread_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, thread_id);
+ rte_rcu_qsbr_thread_offline(temp, thread_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, thread_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ sprintf(hash_name[hash_id], "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Blocking QSBR Check\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(online/update/offline): %"PRIi64"\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %"PRIi64"\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Non-Blocking QSBR check\n", num_cores);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("Following numbers include calls to rte_hash functions\n");
+ printf("Cycles per 1 update(online/update/offline): %"PRIi64"\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %"PRIi64"\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ printf("Number of cores provided = %d\n", num_cores);
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+ if (num_cores > TEST_RCU_MAX_LCORE) {
+ printf("Test failed! %d cores supported\n", TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with all reader threads registered\n");
+ printf("--------------------------------------------\n");
+ all_registered = 1;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ /* Make sure the actual number of cores provided is less than
+ * TEST_RCU_MAX_LCORE. This will allow for some threads not
+ * to be registered on the QS variable.
+ */
+ if (num_cores >= TEST_RCU_MAX_LCORE) {
+ printf("Test failed! number of cores provided should be less than %d\n",
+ TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with some of reader threads registered\n");
+ printf("------------------------------------------------\n");
+ all_registered = 0;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ return 0;
+
+test_fail:
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
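[Editorial note] The writer flow tested above (delete the element, rte_rcu_qsbr_start(), busy-wait on rte_rcu_qsbr_check(), then free) rests on a simple counter scheme. The sketch below is a simplified, single-threaded model of that bookkeeping; every identifier in it (struct qsbr, qsbr_*, MAX_THREADS, CNT_INIT) is a hypothetical stand-in, not the actual rte_rcu_qsbr implementation, and it deliberately omits the online/offline state and memory ordering the real library handles.

```c
#include <stdint.h>

/* Simplified model of QSBR bookkeeping: one global token counter plus
 * one counter per registered reader thread. Illustrative only. */
#define MAX_THREADS 4
#define CNT_INIT 1 /* mirrors TEST_RCU_QSBR_CNT_INIT in the tests */

struct qsbr {
	uint64_t token;            /* global counter; last value from start() */
	uint64_t cnt[MAX_THREADS]; /* per-reader quiescent-state counters */
	int registered[MAX_THREADS];
};

static void qsbr_init(struct qsbr *v)
{
	v->token = CNT_INIT;
	for (int i = 0; i < MAX_THREADS; i++) {
		v->cnt[i] = CNT_INIT;
		v->registered[i] = 0;
	}
}

/* Writer side: after deleting an element, take a new token. */
static uint64_t qsbr_start(struct qsbr *v)
{
	return ++v->token;
}

/* Reader side: report a quiescent state by catching up to the token. */
static void qsbr_quiescent(struct qsbr *v, int tid)
{
	v->cnt[tid] = v->token;
}

/* Writer side: returns 1 once every registered reader has reported a
 * quiescent state at or after 'token'; the element may then be freed. */
static int qsbr_check(const struct qsbr *v, uint64_t token)
{
	for (int i = 0; i < MAX_THREADS; i++)
		if (v->registered[i] && v->cnt[i] < token)
			return 0;
	return 1;
}
```

In this model, the `token != TEST_RCU_QSBR_CNT_INIT + 1` assertions after the first start call correspond to the first increment of the global counter, and a blocking check is just this non-blocking check in a retry loop, as in the `do { ... } while (ret == 0)` loop at the top of this chunk.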
* [dpdk-dev] [PATCH v9 2/4] test/rcu_qsbr: add API and functional tests
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 2/4] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-05-01 3:54 ` Honnappa Nagarahalli
2019-05-03 14:31 ` David Marchand
1 sibling, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-05-01 3:54 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
From: Dharmik Thakkar <dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
app/test/Makefile | 2 +
app/test/autotest_data.py | 12 +
app/test/meson.build | 5 +
app/test/test_rcu_qsbr.c | 1014 +++++++++++++++++++++++++++++++++
app/test/test_rcu_qsbr_perf.c | 704 +++++++++++++++++++++++
5 files changed, 1737 insertions(+)
create mode 100644 app/test/test_rcu_qsbr.c
create mode 100644 app/test/test_rcu_qsbr_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index 54f706792..68d6b4fbc 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -218,6 +218,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
+SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c
+
SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y)
LDLIBS += -lrte_ipsec
diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 72c56e528..fba66045f 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -700,6 +700,18 @@
"Func": default_autotest,
"Report": None,
},
+ {
+ "Name": "RCU QSBR autotest",
+ "Command": "rcu_qsbr_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
+ {
+ "Name": "RCU QSBR performance autotest",
+ "Command": "rcu_qsbr_perf_autotest",
+ "Func": default_autotest,
+ "Report": None,
+ },
#
# Please always make sure that ring_perf is the last test!
#
diff --git a/app/test/meson.build b/app/test/meson.build
index 80cdea5d1..4e8077cd2 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -85,6 +85,8 @@ test_sources = files('commands.c',
'test_power_acpi_cpufreq.c',
'test_power_kvm_vm.c',
'test_prefetch.c',
+ 'test_rcu_qsbr.c',
+ 'test_rcu_qsbr_perf.c',
'test_reciprocal_division.c',
'test_reciprocal_division_perf.c',
'test_red.c',
@@ -134,6 +136,7 @@ test_deps = ['acl',
'metrics',
'pipeline',
'port',
+ 'rcu',
'reorder',
'ring',
'stack',
@@ -172,6 +175,7 @@ fast_parallel_test_names = [
'multiprocess_autotest',
'per_lcore_autotest',
'prefetch_autotest',
+ 'rcu_qsbr_autotest',
'red_autotest',
'ring_autotest',
'ring_pmd_autotest',
@@ -240,6 +244,7 @@ perf_test_names = [
'member_perf_autotest',
'efd_perf_autotest',
'lpm6_perf_autotest',
+ 'rcu_qsbr_perf_autotest',
'red_perf',
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
new file mode 100644
index 000000000..ed6934a47
--- /dev/null
+++ b/app/test/test_rcu_qsbr.c
@@ -0,0 +1,1014 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+/* Check condition and return an error if true. */
+#define TEST_RCU_QSBR_RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR file %s, line %d: " str "\n", __FILE__, \
+ __LINE__, ##__VA_ARGS__); \
+ return -1; \
+ } \
+} while (0)
+
+/* Make sure that this has the same value as __RTE_QSBR_CNT_INIT */
+#define TEST_RCU_QSBR_CNT_INIT 1
+
+#define TEST_RCU_MAX_LCORE 128
+uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint8_t writer_done;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+struct rte_hash *h[TEST_RCU_MAX_LCORE];
+char hash_name[TEST_RCU_MAX_LCORE][8];
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+alloc_rcu(void)
+{
+ int i;
+ uint32_t sz;
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ t[i] = (struct rte_rcu_qsbr *)rte_zmalloc(NULL, sz,
+ RTE_CACHE_LINE_SIZE);
+
+ return 0;
+}
+
+static int
+free_rcu(void)
+{
+ int i;
+
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_free(t[i]);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_get_memsize: Returns the size of memory, in bytes, required
+ * for a QS variable tracking the given maximum number of threads.
+ */
+static int
+test_rcu_qsbr_get_memsize(void)
+{
+ uint32_t sz;
+
+ printf("\nTest rte_rcu_qsbr_get_memsize()\n");
+
+ sz = rte_rcu_qsbr_get_memsize(0);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 1), "Get Memsize for 0 threads");
+
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ /* For 128 threads,
+ * for machines with a cache line size of 64B - 8384
+ * for machines with a cache line size of 128B - 16768
+ */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((sz != 8384 && sz != 16768),
+ "Get Memsize");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_init: Initialize a QSBR variable.
+ */
+static int
+test_rcu_qsbr_init(void)
+{
+ int r;
+
+ printf("\nTest rte_rcu_qsbr_init()\n");
+
+ r = rte_rcu_qsbr_init(NULL, TEST_RCU_MAX_LCORE);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((r != 1), "NULL variable");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_register: Add a reader thread to the list of threads
+ * reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_register(void)
+{
+ int ret;
+
+ printf("\nTest rte_rcu_qsbr_thread_register()\n");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_register(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register valid thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Valid thread id");
+
+ /* Re-registering should not return error */
+ ret = rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already registered thread id");
+
+ /* Register valid thread id - max allowed thread id */
+ ret = rte_rcu_qsbr_thread_register(t[0], TEST_RCU_MAX_LCORE - 1);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1), "Max thread id");
+
+ ret = rte_rcu_qsbr_thread_register(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Invalid thread id");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_unregister: Remove a reader thread from the list of
+ * threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_unregister(void)
+{
+ int i, j, ret;
+ uint64_t token;
+ uint8_t num_threads[3] = {1, TEST_RCU_MAX_LCORE, 1};
+
+ printf("\nTest rte_rcu_qsbr_thread_unregister()\n");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "NULL variable check");
+
+ ret = rte_rcu_qsbr_thread_unregister(NULL, 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "NULL variable, invalid thread id");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ ret = rte_rcu_qsbr_thread_unregister(t[0], 100000);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Invalid thread id");
+
+ /* Find first disabled core */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ if (enabled_core_ids[i] == 0)
+ break;
+ }
+ /* Test with disabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "disabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], i);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /* Test with enabled lcore */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "enabled thread id");
+ /* Unregister already unregistered core */
+ ret = rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 1),
+ "Already unregistered core");
+
+ /*
+ * Test with different thread_ids:
+ * 1 - thread_id = 0
+ * 2 - All possible thread_ids, from 0 to TEST_RCU_MAX_LCORE
+ * 3 - thread_id = TEST_RCU_MAX_LCORE - 1
+ */
+ for (j = 0; j < 3; j++) {
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_register(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 1)), "QSBR Start");
+ /* Update quiescent state counter */
+ for (i = 0; i < num_threads[j]; i++) {
+ /* Skip one update */
+ if (i == (TEST_RCU_MAX_LCORE - 10))
+ continue;
+ rte_rcu_qsbr_quiescent(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+ }
+
+ if (j == 1) {
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+ /* Update the previously skipped thread */
+ rte_rcu_qsbr_quiescent(t[0], TEST_RCU_MAX_LCORE - 10);
+ }
+
+ /* Validate the updates */
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Non-blocking QSBR check");
+
+ for (i = 0; i < num_threads[j]; i++)
+ rte_rcu_qsbr_thread_unregister(t[0],
+ (j == 2) ? (TEST_RCU_MAX_LCORE - 1) : i);
+
+ /* Check with no thread registered */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0),
+ "Blocking QSBR check");
+ }
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_start: Ask the worker threads to report the quiescent state
+ * status.
+ */
+static int
+test_rcu_qsbr_start(void)
+{
+ uint64_t token;
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_start()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 1)), "QSBR Start");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_check_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[read_type];
+
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[0]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[1]);
+ rte_rcu_qsbr_thread_unregister(temp, enabled_core_ids[2]);
+ rte_rcu_qsbr_quiescent(temp, enabled_core_ids[3]);
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_check: Checks if all the worker threads have entered the
+ * quiescent state 'n' times. 'n' is provided by the rte_rcu_qsbr_start API.
+ */
+static int
+test_rcu_qsbr_check(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_check()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+
+ ret = rte_rcu_qsbr_check(t[0], 0, false);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Token = 0");
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 2)), "QSBR Start");
+
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ /* Threads are offline, hence this should pass */
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Non-blocking QSBR check");
+
+ for (i = 0; i < 3; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], enabled_core_ids[i]);
+
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "Blocking QSBR check");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ for (i = 0; i < 4; i++)
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[i]);
+
+ token = rte_rcu_qsbr_start(t[0]);
+ TEST_RCU_QSBR_RETURN_IF_ERROR(
+ (token != (TEST_RCU_QSBR_CNT_INIT + 1)), "QSBR Start");
+
+ rte_eal_remote_launch(test_rcu_qsbr_check_reader, NULL,
+ enabled_core_ids[0]);
+
+ rte_eal_mp_wait_lcore();
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret != 1), "Blocking QSBR check");
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_synchronize_reader(void *arg)
+{
+ uint32_t lcore_id = rte_lcore_id();
+ (void)arg;
+
+ /* Register and become online */
+ rte_rcu_qsbr_thread_register(t[0], lcore_id);
+ rte_rcu_qsbr_thread_online(t[0], lcore_id);
+
+ while (!writer_done)
+ rte_rcu_qsbr_quiescent(t[0], lcore_id);
+
+ rte_rcu_qsbr_thread_offline(t[0], lcore_id);
+ rte_rcu_qsbr_thread_unregister(t[0], lcore_id);
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_synchronize: Wait until all the reader threads have entered
+ * the quiescent state.
+ */
+static int
+test_rcu_qsbr_synchronize(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_synchronize()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Test if the API returns when there are no threads reporting
+ * QS on the variable.
+ */
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when there are threads registered
+ * but not online.
+ */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns when the caller is also
+ * reporting the QS status.
+ */
+ rte_rcu_qsbr_thread_online(t[0], 0);
+ rte_rcu_qsbr_synchronize(t[0], 0);
+ rte_rcu_qsbr_thread_offline(t[0], 0);
+
+ /* Check the other boundary */
+ rte_rcu_qsbr_thread_online(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_synchronize(t[0], TEST_RCU_MAX_LCORE - 1);
+ rte_rcu_qsbr_thread_offline(t[0], TEST_RCU_MAX_LCORE - 1);
+
+ /* Test if the API returns after unregistering all the threads */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_unregister(t[0], i);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ /* Test if the API returns with the live threads */
+ writer_done = 0;
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_synchronize_reader,
+ NULL, enabled_core_ids[i]);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+ rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
+
+ writer_done = 1;
+ rte_eal_mp_wait_lcore();
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_online: Add a registered reader thread to the list
+ * of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_online(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("Test rte_rcu_qsbr_thread_online()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Register 2 threads to validate that only the
+ * online thread is waited upon.
+ */
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[1]);
+
+ /* Use qsbr_start to verify that the thread_online API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+
+ /* Check if the thread is online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ /* Make all the threads online */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread update");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_thread_offline: Remove a registered reader thread from the
+ * list of threads reporting their quiescent state on a QS variable.
+ */
+static int
+test_rcu_qsbr_thread_offline(void)
+{
+ int i, ret;
+ uint64_t token;
+
+ printf("\nTest rte_rcu_qsbr_thread_offline()\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], enabled_core_ids[0]);
+
+ /* Use qsbr_start to verify that the thread_offline API
+ * succeeded.
+ */
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Check if the thread is offline */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread offline");
+
+ /* Bring an offline thread online and check if it can
+ * report QS.
+ */
+ rte_rcu_qsbr_thread_online(t[0], enabled_core_ids[0]);
+ /* Check if the online thread can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ rte_rcu_qsbr_quiescent(t[0], enabled_core_ids[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline to online");
+
+ /*
+ * Check a sequence of online/status/offline/status/online/status
+ */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ token = rte_rcu_qsbr_start(t[0]);
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++) {
+ rte_rcu_qsbr_thread_register(t[0], i);
+ rte_rcu_qsbr_thread_online(t[0], i);
+ }
+
+ /* Check if all the threads are online */
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "thread online");
+
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "report QS");
+
+ /* Make all the threads offline */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_offline(t[0], i);
+ /* Make sure these threads are not being waited on */
+ token = rte_rcu_qsbr_start(t[0]);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "offline QS");
+
+ /* Make the threads online */
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_thread_online(t[0], i);
+ /* Check if all the online threads can report QS */
+ token = rte_rcu_qsbr_start(t[0]);
+ for (i = 0; i < TEST_RCU_MAX_LCORE; i++)
+ rte_rcu_qsbr_quiescent(t[0], i);
+ ret = rte_rcu_qsbr_check(t[0], token, true);
+ TEST_RCU_QSBR_RETURN_IF_ERROR((ret == 0), "online again");
+
+ return 0;
+}
+
+/*
+ * rte_rcu_qsbr_dump: Dump status of a single QS variable to a file
+ */
+static int
+test_rcu_qsbr_dump(void)
+{
+ int i;
+
+ printf("\nTest rte_rcu_qsbr_dump()\n");
+
+ /* Negative tests */
+ rte_rcu_qsbr_dump(NULL, t[0]);
+ rte_rcu_qsbr_dump(stdout, NULL);
+ rte_rcu_qsbr_dump(NULL, NULL);
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+ rte_rcu_qsbr_init(t[1], TEST_RCU_MAX_LCORE);
+
+ /* QS variable with 0 core mask */
+ rte_rcu_qsbr_dump(stdout, t[0]);
+
+ rte_rcu_qsbr_thread_register(t[0], enabled_core_ids[0]);
+
+ for (i = 1; i < 3; i++)
+ rte_rcu_qsbr_thread_register(t[1], enabled_core_ids[i]);
+
+ rte_rcu_qsbr_dump(stdout, t[0]);
+ rte_rcu_qsbr_dump(stdout, t[1]);
+ printf("\n");
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint32_t lcore_id = rte_lcore_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ do {
+ rte_rcu_qsbr_thread_register(temp, lcore_id);
+ rte_rcu_qsbr_thread_online(temp, lcore_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, lcore_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, lcore_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, lcore_id);
+ rte_rcu_qsbr_thread_offline(temp, lcore_id);
+ rte_rcu_qsbr_thread_unregister(temp, lcore_id);
+ } while (!writer_done);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer(void *arg)
+{
+ uint64_t token;
+ int32_t pos;
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+
+ temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
+ hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+
+ /* Delete element from the shared data structure */
+ pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+ if (pos < 0) {
+ printf("Delete key failed #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(temp);
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(temp, token, true);
+ if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] != 0) {
+ printf("Reader did not complete #%d = %d\t", writer_type,
+ *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+
+ if (rte_hash_free_key_with_position(hash, pos) < 0) {
+ printf("Failed to free the key #%d\n",
+ keys[writer_type % TOTAL_ENTRY]);
+ return -1;
+ }
+ rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY]);
+ hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
+ [writer_type % TOTAL_ENTRY] = NULL;
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ snprintf(hash_name[hash_id], sizeof(hash_name[hash_id]),
+ "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create Failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add Failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Functional test:
+ * Single writer, Single QS variable, simultaneous QSBR Queries
+ */
+static int
+test_rcu_qsbr_sw_sv_3qs(void)
+{
+ uint64_t token[3];
+ int i;
+ int32_t pos[3];
+
+ writer_done = 0;
+
+ printf("Test: 1 writer, 1 QSBR variable, simultaneous QSBR queries\n");
+
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < 4; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+ enabled_core_ids[i]);
+
+ /* Delete element from the shared data structure */
+ pos[0] = rte_hash_del_key(h[0], keys + 0);
+ if (pos[0] < 0) {
+ printf("Delete key failed #%d\n", keys[0]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[0] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[1] = rte_hash_del_key(h[0], keys + 3);
+ if (pos[1] < 0) {
+ printf("Delete key failed #%d\n", keys[3]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[1] = rte_rcu_qsbr_start(t[0]);
+
+ /* Delete element from the shared data structure */
+ pos[2] = rte_hash_del_key(h[0], keys + 6);
+ if (pos[2] < 0) {
+ printf("Delete key failed #%d\n", keys[6]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token[2] = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[0], true);
+ if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
+ printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
+ printf("Failed to free the key #%d\n", keys[0]);
+ goto error;
+ }
+ rte_free(hash_data[0][0]);
+ hash_data[0][0] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[1], true);
+ if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
+ printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
+ printf("Failed to free the key #%d\n", keys[3]);
+ goto error;
+ }
+ rte_free(hash_data[0][3]);
+ hash_data[0][3] = NULL;
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token[2], true);
+ if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
+ printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
+ printf("Failed to free the key #%d\n", keys[6]);
+ goto error;
+ }
+ rte_free(hash_data[0][6]);
+ hash_data[0][6] = NULL;
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < 4; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ return -1;
+}
+
+/*
+ * Multi writer, Multiple QS variable, simultaneous QSBR queries
+ */
+static int
+test_rcu_qsbr_mw_mv_mqs(void)
+{
+ int i, j;
+ uint8_t test_cores;
+
+ writer_done = 0;
+ test_cores = num_cores / 4;
+ test_cores = test_cores * 4;
+
+ printf("Test: %d writers, %d QSBR variables, simultaneous QSBR queries\n",
+ test_cores / 2, test_cores / 4);
+
+ for (i = 0; i < num_cores / 4; i++) {
+ rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
+ h[i] = init_hash(i);
+ if (h[i] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < test_cores / 2; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader,
+ (void *)(uintptr_t)(i / 2),
+ enabled_core_ids[i]);
+
+ /* Writer threads are launched */
+ for (; i < test_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer,
+ (void *)(uintptr_t)(i - (test_cores / 2)),
+ enabled_core_ids[i]);
+ /* Wait for writers to complete */
+ for (i = test_cores / 2; i < test_cores; i++)
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+
+ writer_done = 1;
+ /* Wait for readers to complete */
+ rte_eal_mp_wait_lcore();
+
+ /* Check return value from threads */
+ for (i = 0; i < test_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+
+ rte_free(keys);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ for (i = 0; i < num_cores / 4; i++)
+ rte_hash_free(h[i]);
+ rte_free(keys);
+ for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[j][i]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ if (num_cores < 4) {
+ printf("Test failed! Need 4 or more cores\n");
+ goto test_fail;
+ }
+
+ /* Error-checking test cases */
+ if (test_rcu_qsbr_get_memsize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_init() < 0)
+ goto test_fail;
+
+ alloc_rcu();
+
+ if (test_rcu_qsbr_thread_register() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_unregister() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_start() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_check() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_synchronize() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_dump() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_online() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_thread_offline() < 0)
+ goto test_fail;
+
+ printf("\nFunctional tests\n");
+
+ if (test_rcu_qsbr_sw_sv_3qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_mw_mv_mqs() < 0)
+ goto test_fail;
+
+ free_rcu();
+
+ printf("\n");
+ return 0;
+
+test_fail:
+ free_rcu();
+
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_autotest, test_rcu_qsbr_main);
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
new file mode 100644
index 000000000..16a43f8db
--- /dev/null
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -0,0 +1,704 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2018 Arm Limited
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <inttypes.h>
+#include <rte_pause.h>
+#include <rte_rcu_qsbr.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_malloc.h>
+#include <rte_cycles.h>
+#include <unistd.h>
+
+#include "test.h"
+
+ /* Maximum number of lcores supported by the tests */
+#define TEST_RCU_MAX_LCORE 128
+static uint16_t enabled_core_ids[TEST_RCU_MAX_LCORE];
+static uint8_t num_cores;
+
+static uint32_t *keys;
+#define TOTAL_ENTRY (1024 * 8)
+#define COUNTER_VALUE 4096
+static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static volatile uint8_t writer_done;
+static volatile uint8_t all_registered;
+static volatile uint32_t thr_id;
+
+static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
+static struct rte_hash *h[TEST_RCU_MAX_LCORE];
+static char hash_name[TEST_RCU_MAX_LCORE][8];
+static rte_atomic64_t updates, checks;
+static rte_atomic64_t update_cycles, check_cycles;
+
+/* Scale down results to 1000 operations to support lower
+ * granularity clocks.
+ */
+#define RCU_SCALE_DOWN 1000
+
+ /* Allocate thread IDs in the range [0, TEST_RCU_MAX_LCORE) */
+static inline uint32_t
+alloc_thread_id(void)
+{
+ uint32_t tmp_thr_id;
+
+ tmp_thr_id = __atomic_fetch_add(&thr_id, 1, __ATOMIC_RELAXED);
+ if (tmp_thr_id >= TEST_RCU_MAX_LCORE)
+ printf("Invalid thread id %u\n", tmp_thr_id);
+
+ return tmp_thr_id;
+}
+
+static inline int
+get_enabled_cores_mask(void)
+{
+ uint16_t core_id;
+ uint32_t max_cores = rte_lcore_count();
+
+ if (max_cores > TEST_RCU_MAX_LCORE) {
+ printf("Number of cores exceeds %d\n", TEST_RCU_MAX_LCORE);
+ return -1;
+ }
+
+ core_id = 0;
+ num_cores = 0;
+ RTE_LCORE_FOREACH_SLAVE(core_id) {
+ enabled_core_ids[num_cores] = core_id;
+ num_cores++;
+ }
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_reader_perf(void *arg)
+{
+ bool writer_present = (bool)arg;
+ uint32_t thread_id = alloc_thread_id();
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ /* Register to report quiescent state */
+ rte_rcu_qsbr_thread_register(t[0], thread_id);
+ /* Make the thread online */
+ rte_rcu_qsbr_thread_online(t[0], thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ if (writer_present) {
+ while (!writer_done) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ } else {
+ while (loop_cnt < 100000000) {
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(t[0], thread_id);
+ loop_cnt++;
+ }
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ /* Make the thread offline */
+ rte_rcu_qsbr_thread_offline(t[0], thread_id);
+ /* Unregister before exiting so the writer does not wait for this thread */
+ rte_rcu_qsbr_thread_unregister(t[0], thread_id);
+
+ return 0;
+}
+
+static int
+test_rcu_qsbr_writer_perf(void *arg)
+{
+ bool wait = (bool)arg;
+ uint64_t token = 0;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ /* Start the quiescent state query process */
+ if (wait)
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, wait);
+ loop_cnt++;
+ } while (loop_cnt < 20000000);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, loop_cnt);
+ return 0;
+}
+
+/*
+ * Perf test: Reader/writer
+ * Single writer, Multiple Readers, Single QS var, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_perf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ printf("\nPerf Test: %d Readers/1 Writer ('wait' in qsbr_check == true)\n",
+ num_cores - 1);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores - 1;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores - 1; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, (void *)1,
+ enabled_core_ids[i]);
+
+ /* Writer thread is launched */
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)1, enabled_core_ids[i]);
+
+ /* Wait for the writer thread */
+ rte_eal_wait_lcore(enabled_core_ids[i]);
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %"PRIi64"\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %"PRIi64"\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+ printf("Total RCU checks = %"PRIi64"\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %"PRIi64"\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test: Readers
+ * Multiple readers, Single QS variable, no writer thread
+ */
+static int
+test_rcu_qsbr_rperf(void)
+{
+ int i, sz;
+ int tmp_num_cores;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf Test: %d Readers\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_reader_perf, NULL,
+ enabled_core_ids[i]);
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU updates = %"PRIi64"\n", rte_atomic64_read(&updates));
+ printf("Cycles per %d updates: %"PRIi64"\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&update_cycles) /
+ (rte_atomic64_read(&updates) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * Perf test:
+ * Multiple writers, Single QS variable, Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_wperf(void)
+{
+ int i, sz;
+
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: %d Writers ('wait' in qsbr_check == false)\n",
+ num_cores);
+
+ /* Number of readers does not matter for QS variable in this test
+ * case as no reader will be registered.
+ */
+ sz = rte_rcu_qsbr_get_memsize(TEST_RCU_MAX_LCORE);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], TEST_RCU_MAX_LCORE);
+
+ /* Writer threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_writer_perf,
+ (void *)0, enabled_core_ids[i]);
+
+ /* Wait until all writers have exited */
+ rte_eal_mp_wait_lcore();
+
+ printf("Total RCU checks = %"PRIi64"\n", rte_atomic64_read(&checks));
+ printf("Cycles per %d checks: %"PRIi64"\n", RCU_SCALE_DOWN,
+ rte_atomic64_read(&check_cycles) /
+ (rte_atomic64_read(&checks) / RCU_SCALE_DOWN));
+
+ rte_free(t[0]);
+
+ return 0;
+}
+
+/*
+ * RCU test cases using rte_hash data structure.
+ */
+static int
+test_rcu_qsbr_hash_reader(void *arg)
+{
+ struct rte_rcu_qsbr *temp;
+ struct rte_hash *hash = NULL;
+ int i;
+ uint64_t loop_cnt = 0;
+ uint64_t begin, cycles;
+ uint32_t thread_id = alloc_thread_id();
+ uint8_t read_type = (uint8_t)((uintptr_t)arg);
+ uint32_t *pdata;
+
+ temp = t[read_type];
+ hash = h[read_type];
+
+ rte_rcu_qsbr_thread_register(temp, thread_id);
+
+ begin = rte_rdtsc_precise();
+
+ do {
+ rte_rcu_qsbr_thread_online(temp, thread_id);
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ rte_rcu_qsbr_lock(temp, thread_id);
+ if (rte_hash_lookup_data(hash, keys+i,
+ (void **)&pdata) != -ENOENT) {
+ *pdata = 0;
+ while (*pdata < COUNTER_VALUE)
+ ++*pdata;
+ }
+ rte_rcu_qsbr_unlock(temp, thread_id);
+ }
+ /* Update quiescent state counter */
+ rte_rcu_qsbr_quiescent(temp, thread_id);
+ rte_rcu_qsbr_thread_offline(temp, thread_id);
+ loop_cnt++;
+ } while (!writer_done);
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&update_cycles, cycles);
+ rte_atomic64_add(&updates, loop_cnt);
+
+ rte_rcu_qsbr_thread_unregister(temp, thread_id);
+
+ return 0;
+}
+
+static struct rte_hash *
+init_hash(int hash_id)
+{
+ int i;
+ struct rte_hash *h = NULL;
+
+ snprintf(hash_name[hash_id], sizeof(hash_name[hash_id]), "hash%d", hash_id);
+ struct rte_hash_parameters hash_params = {
+ .entries = TOTAL_ENTRY,
+ .key_len = sizeof(uint32_t),
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ .hash_func = rte_hash_crc,
+ .extra_flag =
+ RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
+ .name = hash_name[hash_id],
+ };
+
+ h = rte_hash_create(&hash_params);
+ if (h == NULL) {
+ printf("Hash create failed\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+ if (hash_data[hash_id][i] == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+ }
+ keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_ENTRY, 0);
+ if (keys == NULL) {
+ printf("No memory\n");
+ return NULL;
+ }
+
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ keys[i] = i;
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ if (rte_hash_add_key_data(h, keys + i,
+ (void *)((uintptr_t)hash_data[hash_id][i]))
+ < 0) {
+ printf("Hash key add failed #%d\n", i);
+ return NULL;
+ }
+ }
+ return h;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query, Blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs(void)
+{
+ uint64_t token, begin, cycles;
+ int i, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ rte_atomic64_clear(&updates);
+ rte_atomic64_clear(&update_cycles);
+ rte_atomic64_clear(&checks);
+ rte_atomic64_clear(&check_cycles);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ printf("\nPerf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Blocking QSBR Check\n", num_cores);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ rte_rcu_qsbr_check(t[0], token, true);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("The following numbers include the cost of the rte_hash calls\n");
+ printf("Cycles per 1 update(online/update/offline): %"PRIi64"\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %"PRIi64"\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+/*
+ * Perf test:
+ * Single writer, Single QS variable, Single QSBR query,
+ * Non-blocking rcu_qsbr_check
+ */
+static int
+test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
+{
+ uint64_t token, begin, cycles;
+ int i, ret, tmp_num_cores, sz;
+ int32_t pos;
+
+ writer_done = 0;
+
+ printf("Perf test: 1 writer, %d readers, 1 QSBR variable, 1 QSBR Query, Non-Blocking QSBR check\n", num_cores);
+
+ __atomic_store_n(&thr_id, 0, __ATOMIC_SEQ_CST);
+
+ if (all_registered == 1)
+ tmp_num_cores = num_cores;
+ else
+ tmp_num_cores = TEST_RCU_MAX_LCORE;
+
+ sz = rte_rcu_qsbr_get_memsize(tmp_num_cores);
+ t[0] = (struct rte_rcu_qsbr *)rte_zmalloc("rcu0", sz,
+ RTE_CACHE_LINE_SIZE);
+ /* QS variable is initialized */
+ rte_rcu_qsbr_init(t[0], tmp_num_cores);
+
+ /* Shared data structure created */
+ h[0] = init_hash(0);
+ if (h[0] == NULL) {
+ printf("Hash init failed\n");
+ goto error;
+ }
+
+ /* Reader threads are launched */
+ for (i = 0; i < num_cores; i++)
+ rte_eal_remote_launch(test_rcu_qsbr_hash_reader, NULL,
+ enabled_core_ids[i]);
+
+ begin = rte_rdtsc_precise();
+
+ for (i = 0; i < TOTAL_ENTRY; i++) {
+ /* Delete elements from the shared data structure */
+ pos = rte_hash_del_key(h[0], keys + i);
+ if (pos < 0) {
+ printf("Delete key failed #%d\n", keys[i]);
+ goto error;
+ }
+ /* Start the quiescent state query process */
+ token = rte_rcu_qsbr_start(t[0]);
+
+ /* Check the quiescent state status */
+ do {
+ ret = rte_rcu_qsbr_check(t[0], token, false);
+ } while (ret == 0);
+ if (*hash_data[0][i] != COUNTER_VALUE &&
+ *hash_data[0][i] != 0) {
+ printf("Reader did not complete #%d = %d\n", i,
+ *hash_data[0][i]);
+ goto error;
+ }
+
+ if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+ printf("Failed to free the key #%d\n", keys[i]);
+ goto error;
+ }
+ rte_free(hash_data[0][i]);
+ hash_data[0][i] = NULL;
+ }
+
+ cycles = rte_rdtsc_precise() - begin;
+ rte_atomic64_add(&check_cycles, cycles);
+ rte_atomic64_add(&checks, i);
+
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+ /* Check return value from threads */
+ for (i = 0; i < num_cores; i++)
+ if (lcore_config[enabled_core_ids[i]].ret < 0)
+ goto error;
+ rte_hash_free(h[0]);
+ rte_free(keys);
+
+ printf("The following numbers include the cost of the rte_hash calls\n");
+ printf("Cycles per 1 update(online/update/offline): %"PRIi64"\n",
+ rte_atomic64_read(&update_cycles) /
+ rte_atomic64_read(&updates));
+
+ printf("Cycles per 1 check(start, check): %"PRIi64"\n\n",
+ rte_atomic64_read(&check_cycles) /
+ rte_atomic64_read(&checks));
+
+ rte_free(t[0]);
+
+ return 0;
+
+error:
+ writer_done = 1;
+ /* Wait until all readers have exited */
+ rte_eal_mp_wait_lcore();
+
+ rte_hash_free(h[0]);
+ rte_free(keys);
+ for (i = 0; i < TOTAL_ENTRY; i++)
+ rte_free(hash_data[0][i]);
+
+ rte_free(t[0]);
+
+ return -1;
+}
+
+static int
+test_rcu_qsbr_main(void)
+{
+ rte_atomic64_init(&updates);
+ rte_atomic64_init(&update_cycles);
+ rte_atomic64_init(&checks);
+ rte_atomic64_init(&check_cycles);
+
+ if (get_enabled_cores_mask() != 0)
+ return -1;
+
+ printf("Number of cores provided = %d\n", num_cores);
+ if (num_cores < 2) {
+ printf("Test failed! Need 2 or more cores\n");
+ goto test_fail;
+ }
+ if (num_cores > TEST_RCU_MAX_LCORE) {
+ printf("Test failed! %d cores supported\n", TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with all reader threads registered\n");
+ printf("--------------------------------------------\n");
+ all_registered = 1;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ /* Make sure the actual number of cores provided is less than
+ * TEST_RCU_MAX_LCORE. This leaves some thread IDs unregistered
+ * on the QS variable.
+ */
+ if (num_cores >= TEST_RCU_MAX_LCORE) {
+ printf("Test failed! number of cores provided should be less than %d\n",
+ TEST_RCU_MAX_LCORE);
+ goto test_fail;
+ }
+
+ printf("Perf test with some of reader threads registered\n");
+ printf("------------------------------------------------\n");
+ all_registered = 0;
+
+ if (test_rcu_qsbr_perf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_rperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_wperf() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs() < 0)
+ goto test_fail;
+
+ if (test_rcu_qsbr_sw_sv_1qs_non_blocking() < 0)
+ goto test_fail;
+
+ printf("\n");
+
+ return 0;
+
+test_fail:
+ return -1;
+}
+
+REGISTER_TEST_COMMAND(rcu_qsbr_perf_autotest, test_rcu_qsbr_main);
--
2.17.1
* [dpdk-dev] [PATCH v9 3/4] doc/rcu: add lib_rcu documentation
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 " Honnappa Nagarahalli
` (2 preceding siblings ...)
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 2/4] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
@ 2019-05-01 3:54 ` Honnappa Nagarahalli
2019-05-01 3:54 ` Honnappa Nagarahalli
2019-05-01 11:37 ` Mcnamara, John
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 4/4] doc: added RCU to the release notes Honnappa Nagarahalli
` (2 subsequent siblings)
6 siblings, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-05-01 3:54 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++++++
5 files changed, 698 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index de1e215dd..8f0e84de6 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 7722fc3e9..b9896cb63 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..e7ca1dacb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,509 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em;font-weight:bold}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.75001em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st13 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st14 {fill:#000000;font-family:Century Gothic;font-size:1.99999em}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.99999em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-239);marker-start:url(#mrkr5-237);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-248);marker-start:url(#mrkr5-246);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-239);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-237" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-239" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-246" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-248" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(327.227,-946.908)">
+ <title>Sheet.3</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(460.665,-944.869)">
+ <title>Sheet.4</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(519.302,-950.79)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(612.438,-944.869)">
+ <title>Sheet.6</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(664.388,-945.889)">
+ <title>Sheet.7</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(723.025,-951.494)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(814.123,-945.889)">
+ <title>Sheet.9</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-952.759)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="146.259" cy="1169.64" width="292.52" height="36.7136"/>
+ <path d="M292.52 1151.29 L0 1151.29 L0 1188 L292.52 1188 L292.52 1151.29" class="st3"/>
+ <text x="58.76" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(379.176,-863.295)">
+ <title>Sheet.11</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(512.614,-861.255)">
+ <title>Sheet.12</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(561.284,-867.106)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(664.388,-861.255)">
+ <title>Sheet.14</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(716.337,-862.275)">
+ <title>Sheet.15</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(775.009,-867.81)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(866.073,-862.275)">
+ <title>Sheet.17</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(143.348,-873.294)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(474.188,-777.642)">
+ <title>Sheet.19</title>
+ <path d="M0 1143.01 C0 1138.04 4.07 1133.96 9.04 1133.96 L124.46 1133.96 C129.43 1133.96 133.44 1138.04 133.44 1143.01
+ L133.44 1179.01 C133.44 1183.99 129.43 1188 124.46 1188 L9.04 1188 C4.07 1188 0 1183.99 0 1179.01 L0 1143.01
+ Z" class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(608.645,-775.602)">
+ <title>Sheet.20</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(666.862,-781.311)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(760.418,-775.602)">
+ <title>Sheet.22</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(812.367,-776.622)">
+ <title>Sheet.23</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(870.584,-782.015)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(962.103,-776.622)">
+ <title>Sheet.25</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(142.645,-787.5)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(882.826,-574.263)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="21.32" y="1173.77" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(419.545,-660.119)">
+ <title>Sheet.29</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(419.545,-684.783)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L82.7 1187.36 L151.2 1172.07" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(214.454,-663.095)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(571.287,-681.326)">
+ <title>Sheet.33</title>
+ <path d="M0 738.67 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(515.013,-1130.65)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="60.7243" cy="1166.58" width="121.45" height="42.8314"/>
+ <path d="M121.45 1145.17 L0 1145.17 L0 1188 L121.45 1188 L121.45 1145.17" class="st3"/>
+ <text x="26.02" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(434.372,-1096.8)">
+ <title>Sheet.35</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(434.372,-1100.37)">
+ <title>Sheet.36</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(193.5,-1103.76)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(714.3,-675.425)">
+ <title>Sheet.38</title>
+ <path d="M0 732.77 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(795.979,-637.904)">
+ <title>Sheet.39</title>
+ <path d="M0 1112.54 L0 1188 L0 1112.54" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(716.782,-675.425)">
+ <title>Sheet.40</title>
+ <path d="M79.2 1188 L52.71 1187.94 L0 1147.21" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(803.572,-639.285)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="172.421" cy="1152.51" width="344.85" height="70.9752"/>
+ <path d="M344.84 1117.02 L0 1117.02 L0 1188 L344.84 1188 L344.84 1117.02" class="st3"/>
+ <text x="0" y="1133.61" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(680.801,-1130.65)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="42.0169" cy="1166.58" width="84.04" height="42.8314"/>
+ <path d="M84.03 1145.17 L0 1145.17 L0 1188 L84.03 1188 L84.03 1145.17" class="st3"/>
+ <text x="18.89" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(811.005,-1110.05)">
+ <title>Sheet.48</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(658.61,-1083.99)">
+ <title>Sheet.49</title>
+ <path d="M153.05 1149.63 L113.7 1149.57 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(798.359,-1110.46)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="107.799" cy="1167.81" width="215.6" height="40.3845"/>
+ <path d="M215.6 1147.62 L0 1147.62 L0 1188 L215.6 1188 L215.6 1147.62" class="st3"/>
+ <text x="43.79" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(599.196,-662.779)">
+ <title>Sheet.51</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(464.931,-1052.95)">
+ <title>Sheet.52</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(464.931,-1056.52)">
+ <title>Sheet.53</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(225,-1058.76)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(711.244,-662.779)">
+ <title>Sheet.56</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(664.897,-1045.31)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L146.76 1112.94" class="st13"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(619.059,-848.701)">
+ <title>Sheet.58</title>
+ <path d="M432.2 1184.24 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(1038.62,-837.364)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="130" cy="1167.81" width="260.01" height="40.3845"/>
+ <path d="M260 1147.62 L0 1147.62 L0 1188 L260 1188 L260 1147.62" class="st3"/>
+ <text x="52.25" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(621.606,-848.828)">
+ <title>Sheet.60</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(824.31,-849.848)">
+ <title>Sheet.61</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(345.944,-933.143)">
+ <title>Sheet.62</title>
+ <path d="M705.32 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(1038.62,-915.684)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="137.691" cy="1167.81" width="275.39" height="40.3845"/>
+ <path d="M275.38 1147.62 L0 1147.62 L0 1188 L275.38 1188 L275.38 1147.62" class="st3"/>
+ <text x="55.18" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(346.581,-932.442)">
+ <title>Sheet.64</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(621.606,-933.461)">
+ <title>Sheet.65</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(856.905,-934.481)">
+ <title>Sheet.66</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(472.82,-756.389)">
+ <title>Sheet.67</title>
+ <path d="M578.44 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(473.456,-755.688)">
+ <title>Sheet.68</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(1016.87,-757.728)">
+ <title>Sheet.69</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(1060.04,-738.651)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1167.81" width="193.55" height="40.3845"/>
+ <path d="M193.55 1147.62 L0 1147.62 L0 1188 L193.55 1188 L193.55 1147.62" class="st3"/>
+ <text x="31.03" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(190.02,-464.886)">
+ <title>Sheet.71</title>
+ <path d="M0 1151.91 C0 1148.19 3.88 1145.17 8.66 1145.17 L43.29 1145.17 C48.13 1145.17 51.95 1148.19 51.95 1151.91 L51.95
+ 1181.26 C51.95 1185.03 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1185.03 0 1181.26 L0 1151.91 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(259.003,-466.895)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. non critical section or quiescent state.</tspan></text> </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(190.02,-389.169)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="25.9746" cy="1166.58" width="51.95" height="42.8314"/>
+ <path d="M0 1152.31 C0 1148.39 1.43 1145.17 3.16 1145.17 L48.79 1145.17 C50.55 1145.17 51.95 1148.39 51.95 1152.31 L51.95
+ 1180.86 C51.95 1184.83 50.55 1188 48.79 1188 L3.16 1188 C1.43 1188 0 1184.83 0 1180.86 L0 1152.31 Z"
+ class="st2"/>
+ <text x="12.9" y="1173.78" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(259.003,-388.777)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(289.017,-301.151)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="332.491" cy="1160.47" width="664.99" height="55.0626"/>
+ <path d="M664.98 1132.94 L0 1132.94 L0 1188 L664.98 1188 L664.98 1132.94" class="st3"/>
+ <text x="0" y="1153.27" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is removed <tspan
+ x="0" dy="1.2em" class="st9">using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(177.543,-315.596)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(288,-239.327)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1175.01" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted entry.</text> </g>
+ <g id="shape78-217" v:mID="78" v:groupContext="shape" transform="translate(177.543,-240.744)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="34.3786" cy="1166.58" width="68.76" height="42.8314"/>
+ <path d="M68.76 1145.17 L0 1145.17 L0 1188 L68.76 1188 L68.76 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-221" v:mID="79" v:groupContext="shape" transform="translate(289.228,-163.612)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1160.61" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during which <tspan
+ x="0" dy="1.2em" class="st9">memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-226" v:mID="80" v:groupContext="shape" transform="translate(187.999,-162)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="39.5985" cy="1166.58" width="79.2" height="42.8314"/>
+ <path d="M79.2 1145.17 L0 1145.17 L0 1188 L79.2 1188 L79.2 1145.17" class="st3"/>
+ <text x="0" y="1158.96" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-231" v:mID="83" v:groupContext="shape" transform="translate(572.146,-1080.07)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L132.49 1188" class="st16"/>
+ </g>
+ <g id="shape84-240" v:mID="84" v:groupContext="shape" transform="translate(599.196,-1042.14)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L104.28 1188" class="st18"/>
+ </g>
+ <g id="shape85-249" v:mID="85" v:groupContext="shape" transform="translate(980.637,-595.338)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L92.16 1188" class="st20"/>
+ </g>
+ <g id="shape86-254" v:mID="86" v:groupContext="shape" transform="translate(444.835,-603.428)">
+ <title>Sheet.86</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape87-257" v:mID="87" v:groupContext="shape" transform="translate(444.835,-637.489)">
+ <title>Sheet.87</title>
+ <path d="M0 1188 L84.43 1186.61 L154.36 1153.31" class="st7"/>
+ </g>
+ <g id="shape88-260" v:mID="88" v:groupContext="shape" transform="translate(241.369,-607.028)">
+ <title>Sheet.88</title>
+ <desc>Remove reference to entry2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry2</tspan></text> </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 95f5e7964..17df2c563 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -56,6 +56,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..55d44e15d
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,185 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-free data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for example, real-time applications).
+
+In the following paragraphs, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or anything that is representative of
+memory, for example, an index into a free element array.
+
+Since these data structures are lock-free, the writers and readers
+access them concurrently. Hence, while removing an element from a
+data structure, the writers cannot return the memory to the allocator
+without knowing that the readers are no longer referencing that
+element/memory. This makes it necessary to separate the operation of
+removing an element into two steps:
+
+Delete: in this step, the writer removes the reference to the element from
+the data structure but does not return the associated memory to the
+allocator. This ensures that new readers will not get a reference to
+the removed element. Removing the reference is an atomic operation.
+
+Free (Reclaim): in this step, the writer returns the memory to the
+memory allocator, but only after knowing that all the readers have
+stopped referencing the deleted element.
+
+This library helps the writer determine when it is safe to free the
+memory.
+
+This library makes use of the thread quiescent state (QS) mechanism.
+
+What is Quiescent State
+-----------------------
+Quiescent State can be defined as 'any point in the thread execution where the
+thread does not hold a reference to shared memory'. It is up to the application
+to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. If the
+writer has to remove an element from D1 while thread 1 is accessing D1,
+the writer cannot free the memory associated with that element
+immediately. The writer can return the memory to the allocator only
+after the reader stops referencing D1. In other words, reader thread
+RT1 has to enter a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, the writer has to
+wait till thread 2 enters a quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+a quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened, so it cannot have a reference to the
+deleted entry.
+
+Note that the critical sections for D2 are quiescent states
+for D1; i.e., for a given data structure Dx, any point in the thread
+execution that does not reference Dx is a quiescent state for Dx.
+
+Since memory is not freed immediately, there might be a need to
+provision additional memory, depending on the application requirements.
+
+Factors affecting RCU mechanism
+---------------------------------
+
+It is important to make sure that this library keeps the overhead of
+identifying the end of the grace period and the subsequent freeing of
+memory to a minimum. The following paragraphs explain how the grace
+period and critical sections affect this overhead.
+
+The writer has to poll the readers to identify the end of the grace
+period. Polling introduces memory accesses and wastes CPU cycles. In
+addition, the memory is not available for reuse during the grace period.
+Longer grace periods exacerbate these conditions.
+
+The duration of the grace period is proportional to the length of the
+critical sections and the number of reader threads. Keeping the critical
+sections smaller will keep the grace period smaller. However, smaller
+critical sections require additional CPU cycles (due to additional
+reporting) in the readers.
+
+Hence, what is needed is the combination of a small grace period and
+large critical sections. This library addresses this by allowing the
+writer to do other work without having to block until the readers report
+their quiescent state.
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the start and end of the while(1) loop (where no
+references to shared data structures are kept) act as perfect quiescent
+states. This combines all the shared data structure accesses into a
+single, large critical section, which helps keep the overhead on the
+reader side to a minimum.
+
+DPDK supports the pipeline model of packet processing, as well as
+service cores. In these use cases, a given data structure may not be
+used by all the workers in the application. The writer does not have to
+wait for all the workers to report their quiescent state. To provide the
+required flexibility, this library has the concept of a QS variable. The
+application can create one QS variable per data structure to help it
+track the end of the grace period for each data structure. This helps
+keep the grace period to a minimum.
+
+How to use this library
+-----------------------
+
+The application must allocate memory and initialize a QS variable.
+
+The application can call ``rte_rcu_qsbr_get_memsize`` to calculate the
+size of memory to allocate. This API takes the maximum number of reader
+threads that will use this QS variable as a parameter. Currently, a
+maximum of 1024 threads is supported.
+
+Further, the application can initialize a QS variable using the API
+``rte_rcu_qsbr_init``.
+
+Each reader thread is assumed to have a unique thread ID. Currently, the
+management of thread IDs (for example, allocation/free) is left to the
+application. The thread ID should be in the range of 0 to the
+maximum number of threads provided while creating the QS variable.
+The application could also use the lcore_id as the thread ID where
+applicable.
+
+``rte_rcu_qsbr_thread_register`` API will register a reader thread
+to report its quiescent state. This can be called from a reader thread.
+A control plane thread can also call this on behalf of a reader thread.
+The reader thread must call ``rte_rcu_qsbr_thread_online`` API to start
+reporting its quiescent state.
+
+Some use cases might require the reader threads to make
+blocking API calls (for example, while using eventdev APIs). The writer thread
+should not wait for such reader threads to enter quiescent state.
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` API, before calling
+blocking APIs. It can call ``rte_rcu_qsbr_thread_online`` API once the blocking
+API call returns.
+
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the API ``rte_rcu_qsbr_start``. It is possible for multiple
+writer threads to query the quiescent state status simultaneously. Hence,
+``rte_rcu_qsbr_start`` returns a token to each caller.
+
+The writer thread must call the ``rte_rcu_qsbr_check`` API with the token
+to get the current quiescent state status. An option to block until all
+the reader threads enter the quiescent state is provided. If this API
+indicates that all the reader threads have entered the quiescent state,
+the application can free the deleted entry.
+
+The APIs ``rte_rcu_qsbr_start`` and ``rte_rcu_qsbr_check`` are lock-free.
+Hence, they can be called concurrently from multiple writers even while
+running as worker threads.
+
+The separation of triggering the reporting from querying the status gives
+the writer threads the flexibility to do useful work instead of blocking
+until the reader threads enter the quiescent state or go offline. This
+reduces the memory accesses due to continuous polling for the status.
+
+The ``rte_rcu_qsbr_synchronize`` API combines the functionality of
+``rte_rcu_qsbr_start`` and blocking ``rte_rcu_qsbr_check`` into a single API.
+This API triggers the reader threads to report their quiescent state and
+polls until all the readers enter the quiescent state or go offline. This
+API does not allow the writer to do useful work while waiting and
+introduces additional memory accesses due to continuous polling.
+
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` and
+``rte_rcu_qsbr_thread_unregister`` APIs to remove itself from reporting its
+quiescent state. The ``rte_rcu_qsbr_check`` API will not wait for this reader
+thread to report the quiescent state status anymore.
+
+The reader threads should call the ``rte_rcu_qsbr_quiescent`` API to
+indicate that they have entered a quiescent state. This API checks if a
+writer has triggered a quiescent state query and updates the state
+accordingly.
+
+The ``rte_rcu_qsbr_lock`` and ``rte_rcu_qsbr_unlock`` APIs are empty
+functions. However, when ``CONFIG_RTE_LIBRTE_RCU_DEBUG`` is enabled,
+these APIs aid in debugging issues: one can mark the accesses to shared
+data structures on the reader side using these APIs, and
+``rte_rcu_qsbr_quiescent`` will then check that all such locks have been
+unlocked.
--
2.17.1
* [dpdk-dev] [PATCH v9 3/4] doc/rcu: add lib_rcu documentation
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 3/4] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
@ 2019-05-01 3:54 ` Honnappa Nagarahalli
2019-05-01 11:37 ` Mcnamara, John
1 sibling, 0 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-05-01 3:54 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Add lib_rcu QSBR API and programmer guide documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
---
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/img/rcu_general_info.svg | 509 ++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/rcu_lib.rst | 185 +++++++
5 files changed, 698 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/img/rcu_general_info.svg
create mode 100644 doc/guides/prog_guide/rcu_lib.rst
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index de1e215dd..8f0e84de6 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -54,7 +54,8 @@ The public API headers are grouped by topics:
[memzone] (@ref rte_memzone.h),
[mempool] (@ref rte_mempool.h),
[malloc] (@ref rte_malloc.h),
- [memcpy] (@ref rte_memcpy.h)
+ [memcpy] (@ref rte_memcpy.h),
+ [rcu] (@ref rte_rcu_qsbr.h)
- **timers**:
[cycles] (@ref rte_cycles.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 7722fc3e9..b9896cb63 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -51,6 +51,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_port \
@TOPDIR@/lib/librte_power \
@TOPDIR@/lib/librte_rawdev \
+ @TOPDIR@/lib/librte_rcu \
@TOPDIR@/lib/librte_reorder \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
diff --git a/doc/guides/prog_guide/img/rcu_general_info.svg b/doc/guides/prog_guide/img/rcu_general_info.svg
new file mode 100644
index 000000000..e7ca1dacb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rcu_general_info.svg
@@ -0,0 +1,509 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<!-- Generated by Microsoft Visio, SVG Export rcu_general_info.svg Page-1 -->
+
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2019 Arm Limited -->
+
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/" width="21.5in" height="16.5in" viewBox="0 0 1548 1188"
+ xml:space="preserve" color-interpolation-filters="sRGB" class="st21">
+ <v:documentProperties v:langID="1033" v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud v:nameU="msvSubprocessMaster" v:prompt="" v:val="VT4(Rectangle)"/>
+ <v:ud v:nameU="msvNoAutoConnect" v:val="VT0(1):26"/>
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style type="text/css">
+ <![CDATA[
+ .st1 {fill:#92d050;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st2 {fill:#ff0000;stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st3 {stroke:none;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75}
+ .st4 {fill:#ffffff;font-family:Calibri;font-size:1.81435em;font-weight:bold}
+ .st5 {fill:#333e48;font-family:Century Gothic;font-size:1.81435em}
+ .st6 {fill:#000000;font-family:Calibri;font-size:1.99578em;font-weight:bold}
+ .st7 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.45071}
+ .st8 {fill:#000000;font-family:Century Gothic;font-size:1.75001em}
+ .st9 {font-size:1em}
+ .st10 {stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st11 {fill:#333e48;font-family:Calibri;font-size:2.11672em;font-weight:bold}
+ .st12 {stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:2.90143}
+ .st13 {stroke:#b31166;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.725356}
+ .st14 {fill:#000000;font-family:Century Gothic;font-size:1.99999em}
+ .st15 {fill:#feffff;font-family:Calibri;font-size:1.99999em;font-weight:bold}
+ .st16 {marker-end:url(#mrkr5-239);marker-start:url(#mrkr5-237);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st17 {fill:#000000;fill-opacity:1;stroke:#000000;stroke-opacity:1;stroke-width:0.54347826086957}
+ .st18 {marker-end:url(#mrkr5-248);marker-start:url(#mrkr5-246);stroke:#651beb;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st19 {fill:#651beb;fill-opacity:1;stroke:#651beb;stroke-opacity:1;stroke-width:0.67567567567568}
+ .st20 {marker-end:url(#mrkr5-239);stroke:#000000;stroke-linecap:round;stroke-linejoin:round;stroke-width:3}
+ .st21 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+ ]]>
+ </style>
+
+ <defs id="Markers">
+ <g id="lend5">
+ <path d="M 2 1 L 0 0 L 1.98117 -0.993387 C 1.67173 -0.364515 1.67301 0.372641 1.98465 1.00043 " style="stroke:none"/>
+ </g>
+ <marker id="mrkr5-237" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.1" refX="3.1" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.84) "/>
+ </marker>
+ <marker id="mrkr5-239" class="st17" v:arrowType="5" v:arrowSize="2" v:setback="3.22" refX="-3.22" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.84,-1.84) "/>
+ </marker>
+ <marker id="mrkr5-246" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.47" refX="2.47" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(1.48) "/>
+ </marker>
+ <marker id="mrkr5-248" class="st19" v:arrowType="5" v:arrowSize="0" v:setback="2.59" refX="-2.59" orient="auto"
+ markerUnits="strokeWidth" overflow="visible">
+ <use xlink:href="#lend5" transform="scale(-1.48,-1.48) "/>
+ </marker>
+ </defs>
+ <g v:mID="0" v:index="1" v:groupContext="foregroundPage">
+ <v:userDefs>
+ <v:ud v:nameU="msvThemeOrder" v:val="VT0(0):26"/>
+ </v:userDefs>
+ <title>Page-1</title>
+ <v:pageProperties v:drawingScale="1" v:pageScale="1" v:drawingUnits="0" v:shadowOffsetX="9" v:shadowOffsetY="-9"/>
+ <v:layer v:name="Connector" v:index="0"/>
+ <g id="shape3-1" v:mID="3" v:groupContext="shape" transform="translate(327.227,-946.908)">
+ <title>Sheet.3</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape4-3" v:mID="4" v:groupContext="shape" transform="translate(460.665,-944.869)">
+ <title>Sheet.4</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape5-5" v:mID="5" v:groupContext="shape" transform="translate(519.302,-950.79)">
+ <title>Sheet.5</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape6-9" v:mID="6" v:groupContext="shape" transform="translate(612.438,-944.869)">
+ <title>Sheet.6</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape7-11" v:mID="7" v:groupContext="shape" transform="translate(664.388,-945.889)">
+ <title>Sheet.7</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape8-13" v:mID="8" v:groupContext="shape" transform="translate(723.025,-951.494)">
+ <title>Sheet.8</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape9-17" v:mID="9" v:groupContext="shape" transform="translate(814.123,-945.889)">
+ <title>Sheet.9</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape10-19" v:mID="10" v:groupContext="shape" transform="translate(27,-952.759)">
+ <title>Sheet.10</title>
+ <desc>Reader Thread 1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="146.259" cy="1169.64" width="292.52" height="36.7136"/>
+ <path d="M292.52 1151.29 L0 1151.29 L0 1188 L292.52 1188 L292.52 1151.29" class="st3"/>
+ <text x="58.76" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Reader Thread 1</text> </g>
+ <g id="shape11-23" v:mID="11" v:groupContext="shape" transform="translate(379.176,-863.295)">
+ <title>Sheet.11</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L124.27 1132.94 C129.36 1132.94 133.44 1137.08 133.44 1142.11
+ L133.44 1178.82 C133.44 1183.92 129.36 1188 124.27 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st1"/>
+ </g>
+ <g id="shape12-25" v:mID="12" v:groupContext="shape" transform="translate(512.614,-861.255)">
+ <title>Sheet.12</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape13-27" v:mID="13" v:groupContext="shape" transform="translate(561.284,-867.106)">
+ <title>Sheet.13</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape14-31" v:mID="14" v:groupContext="shape" transform="translate(664.388,-861.255)">
+ <title>Sheet.14</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape15-33" v:mID="15" v:groupContext="shape" transform="translate(716.337,-862.275)">
+ <title>Sheet.15</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape16-35" v:mID="16" v:groupContext="shape" transform="translate(775.009,-867.81)">
+ <title>Sheet.16</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape17-39" v:mID="17" v:groupContext="shape" transform="translate(866.073,-862.275)">
+ <title>Sheet.17</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape18-41" v:mID="18" v:groupContext="shape" transform="translate(143.348,-873.294)">
+ <title>Sheet.18</title>
+ <desc>T 2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 2</text> </g>
+ <g id="shape19-45" v:mID="19" v:groupContext="shape" transform="translate(474.188,-777.642)">
+ <title>Sheet.19</title>
+ <path d="M0 1143.01 C0 1138.04 4.07 1133.96 9.04 1133.96 L124.46 1133.96 C129.43 1133.96 133.44 1138.04 133.44 1143.01
+ L133.44 1179.01 C133.44 1183.99 129.43 1188 124.46 1188 L9.04 1188 C4.07 1188 0 1183.99 0 1179.01 L0 1143.01
+ Z" class="st1"/>
+ </g>
+ <g id="shape20-47" v:mID="20" v:groupContext="shape" transform="translate(608.645,-775.602)">
+ <title>Sheet.20</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape21-49" v:mID="21" v:groupContext="shape" transform="translate(666.862,-781.311)">
+ <title>Sheet.21</title>
+ <desc>D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D1</text> </g>
+ <g id="shape22-53" v:mID="22" v:groupContext="shape" transform="translate(760.418,-775.602)">
+ <title>Sheet.22</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape23-55" v:mID="23" v:groupContext="shape" transform="translate(812.367,-776.622)">
+ <title>Sheet.23</title>
+ <path d="M0 1142.11 C0 1137.08 4.14 1132.94 9.17 1132.94 L141.59 1132.94 C146.68 1132.94 150.75 1137.08 150.75 1142.11
+ L150.75 1178.82 C150.75 1183.92 146.68 1188 141.59 1188 L9.17 1188 C4.14 1188 0 1183.92 0 1178.82 L0 1142.11
+ Z" class="st2"/>
+ </g>
+ <g id="shape24-57" v:mID="24" v:groupContext="shape" transform="translate(870.584,-782.015)">
+ <title>Sheet.24</title>
+ <desc>D2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="23.7162" cy="1169.64" width="47.44" height="36.7141"/>
+ <path d="M47.43 1151.29 L0 1151.29 L0 1188 L47.43 1188 L47.43 1151.29" class="st3"/>
+ <text x="11.34" y="1176.17" class="st4" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>D2</text> </g>
+ <g id="shape25-61" v:mID="25" v:groupContext="shape" transform="translate(962.103,-776.622)">
+ <title>Sheet.25</title>
+ <path d="M0 1141.6 C0 1136.82 3.88 1132.94 8.66 1132.94 L43.29 1132.94 C48.13 1132.94 51.95 1136.82 51.95 1141.6 L51.95
+ 1179.33 C51.95 1184.18 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1184.18 0 1179.33 L0 1141.6 Z"
+ class="st1"/>
+ </g>
+ <g id="shape26-63" v:mID="26" v:groupContext="shape" transform="translate(142.645,-787.5)">
+ <title>Sheet.26</title>
+ <desc>T 3</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="26.9796" cy="1169.64" width="53.96" height="36.7136"/>
+ <path d="M53.96 1151.29 L0 1151.29 L0 1188 L53.96 1188 L53.96 1151.29" class="st3"/>
+ <text x="13.3" y="1176.17" class="st5" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>T 3</text> </g>
+ <g id="shape28-67" v:mID="28" v:groupContext="shape" transform="translate(882.826,-574.263)">
+ <title>Sheet.28</title>
+ <desc>Time</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="21.32" y="1173.77" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Time</text> </g>
+ <g id="shape29-71" v:mID="29" v:groupContext="shape" transform="translate(419.545,-660.119)">
+ <title>Sheet.29</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape30-74" v:mID="30" v:groupContext="shape" transform="translate(419.545,-684.783)">
+ <title>Sheet.30</title>
+ <path d="M0 1188 L82.7 1187.36 L151.2 1172.07" class="st7"/>
+ </g>
+ <g id="shape31-77" v:mID="31" v:groupContext="shape" transform="translate(214.454,-663.095)">
+ <title>Sheet.31</title>
+ <desc>Remove reference to entry1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry1</tspan></text> </g>
+ <g id="shape33-82" v:mID="33" v:groupContext="shape" transform="translate(571.287,-681.326)">
+ <title>Sheet.33</title>
+ <path d="M0 738.67 L0 1188" class="st10"/>
+ </g>
+ <g id="shape34-85" v:mID="34" v:groupContext="shape" transform="translate(515.013,-1130.65)">
+ <title>Sheet.34</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="60.7243" cy="1166.58" width="121.45" height="42.8314"/>
+ <path d="M121.45 1145.17 L0 1145.17 L0 1188 L121.45 1188 L121.45 1145.17" class="st3"/>
+ <text x="26.02" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape35-89" v:mID="35" v:groupContext="shape" transform="translate(434.372,-1096.8)">
+ <title>Sheet.35</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape36-92" v:mID="36" v:groupContext="shape" transform="translate(434.372,-1100.37)">
+ <title>Sheet.36</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape37-95" v:mID="37" v:groupContext="shape" transform="translate(193.5,-1103.76)">
+ <title>Sheet.37</title>
+ <desc>Delete entry1 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry1 from D1</text> </g>
+ <g id="shape38-99" v:mID="38" v:groupContext="shape" transform="translate(714.3,-675.425)">
+ <title>Sheet.38</title>
+ <path d="M0 732.77 L0 1188" class="st10"/>
+ </g>
+ <g id="shape39-102" v:mID="39" v:groupContext="shape" transform="translate(795.979,-637.904)">
+ <title>Sheet.39</title>
+ <path d="M0 1112.54 L0 1188 L0 1112.54" class="st7"/>
+ </g>
+ <g id="shape40-105" v:mID="40" v:groupContext="shape" transform="translate(716.782,-675.425)">
+ <title>Sheet.40</title>
+ <path d="M79.2 1188 L52.71 1187.94 L0 1147.21" class="st7"/>
+ </g>
+ <g id="shape41-108" v:mID="41" v:groupContext="shape" transform="translate(803.572,-639.285)">
+ <title>Sheet.41</title>
+ <desc>Free memory for entries1 and 2 after every reader has gone th...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="172.421" cy="1152.51" width="344.85" height="70.9752"/>
+ <path d="M344.84 1117.02 L0 1117.02 L0 1188 L344.84 1188 L344.84 1117.02" class="st3"/>
+ <text x="0" y="1133.61" class="st8" v:langID="1033"><v:paragraph/><v:tabList/>Free memory for entries1 and 2 <tspan
+ x="0" dy="1.2em" class="st9">after every reader has gone </tspan><tspan x="0" dy="1.2em" class="st9">through at least 1 quiescent state </tspan> </text> </g>
+ <g id="shape46-114" v:mID="46" v:groupContext="shape" transform="translate(680.801,-1130.65)">
+ <title>Sheet.46</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="42.0169" cy="1166.58" width="84.04" height="42.8314"/>
+ <path d="M84.03 1145.17 L0 1145.17 L0 1188 L84.03 1188 L84.03 1145.17" class="st3"/>
+ <text x="18.89" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape48-118" v:mID="48" v:groupContext="shape" transform="translate(811.005,-1110.05)">
+ <title>Sheet.48</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape49-121" v:mID="49" v:groupContext="shape" transform="translate(658.61,-1083.99)">
+ <title>Sheet.49</title>
+ <path d="M153.05 1149.63 L113.7 1149.57 L0 1188" class="st7"/>
+ </g>
+ <g id="shape50-124" v:mID="50" v:groupContext="shape" transform="translate(798.359,-1110.46)">
+ <title>Sheet.50</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="107.799" cy="1167.81" width="215.6" height="40.3845"/>
+ <path d="M215.6 1147.62 L0 1147.62 L0 1188 L215.6 1188 L215.6 1147.62" class="st3"/>
+ <text x="43.79" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Grace Period</text> </g>
+ <g id="shape51-128" v:mID="51" v:groupContext="shape" transform="translate(599.196,-662.779)">
+ <title>Sheet.51</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape52-131" v:mID="52" v:groupContext="shape" transform="translate(464.931,-1052.95)">
+ <title>Sheet.52</title>
+ <path d="M0 1154.35 L0 1188 L0 1154.35" class="st7"/>
+ </g>
+ <g id="shape53-134" v:mID="53" v:groupContext="shape" transform="translate(464.931,-1056.52)">
+ <title>Sheet.53</title>
+ <path d="M0 1171.88 L84.54 1171.24 L136.43 1188" class="st7"/>
+ </g>
+ <g id="shape54-137" v:mID="54" v:groupContext="shape" transform="translate(225,-1058.76)">
+ <title>Sheet.54</title>
+ <desc>Delete entry2 from D1</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="114.75" cy="1175.76" width="229.5" height="24.4771"/>
+ <path d="M229.5 1163.52 L0 1163.52 L0 1188 L229.5 1188 L229.5 1163.52" class="st3"/>
+ <text x="3.88" y="1182.06" class="st8" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete entry2 from D1</text> </g>
+ <g id="shape56-141" v:mID="56" v:groupContext="shape" transform="translate(711.244,-662.779)">
+ <title>Sheet.56</title>
+ <path d="M0 732.77 L0 1188" class="st12"/>
+ </g>
+ <g id="shape57-144" v:mID="57" v:groupContext="shape" transform="translate(664.897,-1045.31)">
+ <title>Sheet.57</title>
+ <path d="M-0 1188 L146.76 1112.94" class="st13"/>
+ </g>
+ <g id="shape58-147" v:mID="58" v:groupContext="shape" transform="translate(619.059,-848.701)">
+ <title>Sheet.58</title>
+ <path d="M432.2 1184.24 L-0 1188" class="st7"/>
+ </g>
+ <g id="shape59-150" v:mID="59" v:groupContext="shape" transform="translate(1038.62,-837.364)">
+ <title>Sheet.59</title>
+ <desc>Critical sections</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="130" cy="1167.81" width="260.01" height="40.3845"/>
+ <path d="M260 1147.62 L0 1147.62 L0 1188 L260 1188 L260 1147.62" class="st3"/>
+ <text x="52.25" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Critical sections</text> </g>
+ <g id="shape60-154" v:mID="60" v:groupContext="shape" transform="translate(621.606,-848.828)">
+ <title>Sheet.60</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape61-157" v:mID="61" v:groupContext="shape" transform="translate(824.31,-849.848)">
+ <title>Sheet.61</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape62-160" v:mID="62" v:groupContext="shape" transform="translate(345.944,-933.143)">
+ <title>Sheet.62</title>
+ <path d="M705.32 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape63-163" v:mID="63" v:groupContext="shape" transform="translate(1038.62,-915.684)">
+ <title>Sheet.63</title>
+ <desc>Quiescent states</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="137.691" cy="1167.81" width="275.39" height="40.3845"/>
+ <path d="M275.38 1147.62 L0 1147.62 L0 1188 L275.38 1188 L275.38 1147.62" class="st3"/>
+ <text x="55.18" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Quiescent states</text> </g>
+ <g id="shape64-167" v:mID="64" v:groupContext="shape" transform="translate(346.581,-932.442)">
+ <title>Sheet.64</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape65-170" v:mID="65" v:groupContext="shape" transform="translate(621.606,-933.461)">
+ <title>Sheet.65</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape66-173" v:mID="66" v:groupContext="shape" transform="translate(856.905,-934.481)">
+ <title>Sheet.66</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape67-176" v:mID="67" v:groupContext="shape" transform="translate(472.82,-756.389)">
+ <title>Sheet.67</title>
+ <path d="M578.44 1188 L0 1187.43" class="st7"/>
+ </g>
+ <g id="shape68-179" v:mID="68" v:groupContext="shape" transform="translate(473.456,-755.688)">
+ <title>Sheet.68</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape69-182" v:mID="69" v:groupContext="shape" transform="translate(1016.87,-757.728)">
+ <title>Sheet.69</title>
+ <path d="M0 1173.53 L0 1188" class="st7"/>
+ </g>
+ <g id="shape70-185" v:mID="70" v:groupContext="shape" transform="translate(1060.04,-738.651)">
+ <title>Sheet.70</title>
+ <desc>while(1) loop</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1167.81" width="193.55" height="40.3845"/>
+ <path d="M193.55 1147.62 L0 1147.62 L0 1188 L193.55 1188 L193.55 1147.62" class="st3"/>
+ <text x="31.03" y="1174.99" class="st6" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>while(1) loop</text> </g>
+ <g id="shape71-189" v:mID="71" v:groupContext="shape" transform="translate(190.02,-464.886)">
+ <title>Sheet.71</title>
+ <path d="M0 1151.91 C0 1148.19 3.88 1145.17 8.66 1145.17 L43.29 1145.17 C48.13 1145.17 51.95 1148.19 51.95 1151.91 L51.95
+ 1181.26 C51.95 1185.03 48.13 1188 43.29 1188 L8.66 1188 C3.88 1188 0 1185.03 0 1181.26 L0 1151.91 Z"
+ class="st1"/>
+ </g>
+ <g id="shape72-191" v:mID="72" v:groupContext="shape" transform="translate(259.003,-466.895)">
+ <title>Sheet.72</title>
+ <desc>Reader thread is not accessing any shared data structure. i.e...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is not accessing any shared data structure.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. non critical section or quiescent state.</tspan></text> </g>
+ <g id="shape73-196" v:mID="73" v:groupContext="shape" transform="translate(190.02,-389.169)">
+ <title>Sheet.73</title>
+ <desc>Dx</desc>
+ <v:textBlock v:margins="rect(4,4,4,4)"/>
+ <v:textRect cx="25.9746" cy="1166.58" width="51.95" height="42.8314"/>
+ <path d="M0 1152.31 C0 1148.39 1.43 1145.17 3.16 1145.17 L48.79 1145.17 C50.55 1145.17 51.95 1148.39 51.95 1152.31 L51.95
+ 1180.86 C51.95 1184.83 50.55 1188 48.79 1188 L3.16 1188 C1.43 1188 0 1184.83 0 1180.86 L0 1152.31 Z"
+ class="st2"/>
+ <text x="12.9" y="1173.78" class="st15" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Dx</text> </g>
+ <g id="shape74-199" v:mID="74" v:groupContext="shape" transform="translate(259.003,-388.777)">
+ <title>Sheet.74</title>
+ <desc>Reader thread is accessing the shared data structure Dx. i.e....</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="344.967" cy="1169.45" width="689.94" height="37.1049"/>
+ <path d="M689.93 1150.9 L0 1150.9 L0 1188 L689.93 1188 L689.93 1150.9" class="st3"/>
+ <text x="0" y="1162.25" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Reader thread is accessing the shared data structure Dx.<v:newlineChar/><tspan
+ x="0" dy="1.2em" class="st9">i.e. critical section.</tspan></text> </g>
+ <g id="shape75-204" v:mID="75" v:groupContext="shape" transform="translate(289.017,-301.151)">
+ <title>Sheet.75</title>
+ <desc>Point in time when the reference to the entry is removed usin...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="332.491" cy="1160.47" width="664.99" height="55.0626"/>
+ <path d="M664.98 1132.94 L0 1132.94 L0 1188 L664.98 1188 L664.98 1132.94" class="st3"/>
+ <text x="0" y="1153.27" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the reference to the entry is removed <tspan
+ x="0" dy="1.2em" class="st9">using an atomic operation.</tspan></text> </g>
+ <g id="shape76-209" v:mID="76" v:groupContext="shape" transform="translate(177.543,-315.596)">
+ <title>Sheet.76</title>
+ <desc>Delete</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="45.9546" cy="1166.58" width="91.91" height="42.8314"/>
+ <path d="M91.91 1145.17 L0 1145.17 L0 1188 L91.91 1188 L91.91 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Delete</text> </g>
+ <g id="shape77-213" v:mID="77" v:groupContext="shape" transform="translate(288,-239.327)">
+ <title>Sheet.77</title>
+ <desc>Point in time when the writer can free the deleted entry.</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1175.01" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Point in time when the writer can free the deleted entry.</text> </g>
+ <g id="shape78-217" v:mID="78" v:groupContext="shape" transform="translate(177.543,-240.744)">
+ <title>Sheet.78</title>
+ <desc>Free</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="34.3786" cy="1166.58" width="68.76" height="42.8314"/>
+ <path d="M68.76 1145.17 L0 1145.17 L0 1188 L68.76 1188 L68.76 1145.17" class="st3"/>
+ <text x="11.25" y="1174.2" class="st11" v:langID="1033"><v:paragraph v:horizAlign="1"/><v:tabList/>Free</text> </g>
+ <g id="shape79-221" v:mID="79" v:groupContext="shape" transform="translate(289.228,-163.612)">
+ <title>Sheet.79</title>
+ <desc>Time duration between Delete and Free, during which memory ca...</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="328.5" cy="1167.81" width="657" height="40.3845"/>
+ <path d="M657 1147.62 L0 1147.62 L0 1188 L657 1188 L657 1147.62" class="st3"/>
+ <text x="0" y="1160.61" class="st14" v:langID="1033"><v:paragraph/><v:tabList/>Time duration between Delete and Free, during which <tspan
+ x="0" dy="1.2em" class="st9">memory cannot be freed.</tspan></text> </g>
+ <g id="shape80-226" v:mID="80" v:groupContext="shape" transform="translate(187.999,-162)">
+ <title>Sheet.80</title>
+ <desc>Grace Period</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="39.5985" cy="1166.58" width="79.2" height="42.8314"/>
+ <path d="M79.2 1145.17 L0 1145.17 L0 1188 L79.2 1188 L79.2 1145.17" class="st3"/>
+ <text x="0" y="1158.96" class="st11" v:langID="1033"><v:paragraph/><v:tabList/>Grace <tspan x="0" dy="1.2em"
+ class="st9">Period</tspan></text> </g>
+ <g id="shape83-231" v:mID="83" v:groupContext="shape" transform="translate(572.146,-1080.07)">
+ <title>Sheet.83</title>
+ <path d="M9.3 1188 L9.66 1188 L132.49 1188" class="st16"/>
+ </g>
+ <g id="shape84-240" v:mID="84" v:groupContext="shape" transform="translate(599.196,-1042.14)">
+ <title>Sheet.84</title>
+ <path d="M7.41 1188 L7.77 1188 L104.28 1188" class="st18"/>
+ </g>
+ <g id="shape85-249" v:mID="85" v:groupContext="shape" transform="translate(980.637,-595.338)">
+ <title>Sheet.85</title>
+ <path d="M0 1188 L92.16 1188" class="st20"/>
+ </g>
+ <g id="shape86-254" v:mID="86" v:groupContext="shape" transform="translate(444.835,-603.428)">
+ <title>Sheet.86</title>
+ <path d="M0 1145.17 L0 1188 L0 1145.17" class="st7"/>
+ </g>
+ <g id="shape87-257" v:mID="87" v:groupContext="shape" transform="translate(444.835,-637.489)">
+ <title>Sheet.87</title>
+ <path d="M0 1188 L84.43 1186.61 L154.36 1153.31" class="st7"/>
+ </g>
+ <g id="shape88-260" v:mID="88" v:groupContext="shape" transform="translate(241.369,-607.028)">
+ <title>Sheet.88</title>
+ <desc>Remove reference to entry2</desc>
+ <v:textBlock v:margins="rect(0,0,0,0)"/>
+ <v:textRect cx="96.7728" cy="1169.45" width="193.55" height="37.1049"/>
+ <path d="M193.55 1150.9 L0 1150.9 L0 1188 L193.55 1188 L193.55 1150.9" class="st3"/>
+ <text x="2.39" y="1163.15" class="st8" v:langID="1033"><v:paragraph v:horizAlign="2"/><v:tabList/>Remove reference <tspan
+ x="104.08" dy="1.2em" class="st9">to entry2</tspan></text> </g>
+ </g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 95f5e7964..17df2c563 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -56,6 +56,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ rcu_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
new file mode 100644
index 000000000..55d44e15d
--- /dev/null
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -0,0 +1,185 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Arm Limited.
+
+.. _RCU_Library:
+
+RCU Library
+============
+
+Lock-less data structures provide scalability and determinism.
+They enable use cases where locking may not be allowed
+(for example, real-time applications).
+
+In the following sections, the term 'memory' refers to memory allocated
+by typical APIs like malloc, or to anything that is representative of
+memory, for example an index into a free-element array.
+
+Since these data structures are lock-less, writers and readers access
+them concurrently. Hence, while removing an element from a data
+structure, the writer cannot return the associated memory to the
+allocator without knowing that no reader is still referencing that
+element/memory. The operation of removing an element must therefore be
+separated into two steps:
+
+Delete: in this step, the writer removes the reference to the element
+from the data structure but does not return the associated memory to
+the allocator. This ensures that new readers will not get a reference
+to the removed element. Removing the reference is an atomic operation.
+
+Free (Reclaim): in this step, the writer returns the memory to the
+allocator, but only after it knows that all the readers have stopped
+referencing the deleted element.
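The two steps above can be sketched in C as follows. This is a minimal illustration only: the names used here (``node``, ``head``, ``writer_delete``, ``writer_free``) are hypothetical and not part of any DPDK API.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical element type; 'head' is the shared reference that
 * readers dereference concurrently. */
struct node { int value; };
static _Atomic(struct node *) head;

/* Step 1 -- Delete: atomically remove the reference so that new
 * readers can no longer reach the element. The memory itself is
 * NOT returned to the allocator yet. */
static struct node *writer_delete(void)
{
    return atomic_exchange(&head, NULL);
}

/* Step 2 -- Free: safe only once every reader has passed through a
 * quiescent state after the delete. The wait itself is what this
 * library provides and is elided here. */
static void writer_free(struct node *removed)
{
    /* ... wait for all readers' quiescent states ... */
    free(removed);
}
```

After ``writer_delete()`` returns, pre-existing readers may still hold the old pointer, which is why ``writer_free()`` must be deferred.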
+
+This library helps the writer determine when it is safe to free the
+memory.
+
+This library makes use of the thread Quiescent State (QS) mechanism.
+
+What is Quiescent State
+-----------------------
+Quiescent State can be defined as 'any point in the thread execution where the
+thread does not hold a reference to shared memory'. It is up to the application
+to determine its quiescent state.
+
+Let us consider the following diagram:
+
+.. figure:: img/rcu_general_info.*
+
+
+As shown, reader thread 1 accesses data structures D1 and D2. When it is
+accessing D1, if the writer has to remove an element from D1, the
+writer cannot free the memory associated with that element immediately.
+The writer can return the memory to the allocator only after the reader
+stops referencing D1. In other words, reader thread 1 (RT1) has to enter
+a quiescent state.
+
+Similarly, since reader thread 2 is also accessing D1, the writer has
+to wait till reader thread 2 enters a quiescent state as well.
+
+However, the writer does not need to wait for reader thread 3 to enter
+a quiescent state. Reader thread 3 was not accessing D1 when the delete
+operation happened. So, reader thread 3 will not have a reference to the
+deleted entry.
+
+It can be noted that the critical sections for D2 are quiescent states
+for D1, i.e. for a given data structure Dx, any point in the thread
+execution that does not reference Dx is a quiescent state for Dx.
+
+Since memory is not freed immediately, there might be a need for
+provisioning of additional memory, depending on the application requirements.
+
+Factors affecting RCU mechanism
+-------------------------------
+
+It is important to make sure that this library keeps the overhead of
+identifying the end of the grace period, and the subsequent freeing of
+memory, to a minimum. The following paragraphs explain how the grace
+period and critical section affect this overhead.
+
+The writer has to poll the readers to identify the end of the grace
+period. Polling introduces memory accesses and wastes CPU cycles. The
+memory is not available for reuse during the grace period. Longer grace
+periods exacerbate these conditions.
+
+The duration of the grace period is proportional to the length of the
+critical sections and the number of reader threads. Keeping the critical
+sections smaller keeps the grace period smaller. However, smaller
+critical sections require additional CPU cycles (due to additional
+reporting) in the readers.
+
+Hence, we need the combined characteristics of a small grace period and
+large critical sections. This library addresses this by allowing the
+writer to do other work without having to block till the readers report
+their quiescent state.
+
+RCU in DPDK
+-----------
+
+For DPDK applications, the start and end of the while(1) loop (where no
+references to shared data structures are kept) act as perfect quiescent
+states. This combines all the shared data structure accesses into a
+single, large critical section, which helps keep the overhead on the
+reader side to a minimum.
+
+DPDK supports the pipeline model of packet processing and service cores.
+In these use cases, a given data structure may not be used by all the
+workers in the application. The writer does not have to wait for all
+the workers to report their quiescent state. To provide the required
+flexibility, this library has a concept of QS variable. The application
+can create one QS variable per data structure to help it track the
+end of grace period for each data structure. This helps keep the grace
+period to a minimum.
+
+How to use this library
+-----------------------
+
+The application must allocate memory and initialize a QS variable.
+
+The application can call ``rte_rcu_qsbr_get_memsize`` to calculate the
+size of memory to allocate. This API takes, as a parameter, the maximum
+number of reader threads that will use this QS variable. Currently, a
+maximum of 1024 threads is supported.
+
+Further, the application can initialize a QS variable using the API
+``rte_rcu_qsbr_init``.
+
+Each reader thread is assumed to have a unique thread ID. Currently, the
+management of the thread ID (for ex: allocation/free) is left to the
+application. The thread ID should be in the range 0 to (maximum number
+of threads - 1) provided while creating the QS variable. The application
+could also use the lcore ID as the thread ID where applicable.
+
+``rte_rcu_qsbr_thread_register`` API will register a reader thread
+to report its quiescent state. This can be called from a reader thread.
+A control plane thread can also call this on behalf of a reader thread.
+The reader thread must call ``rte_rcu_qsbr_thread_online`` API to start
+reporting its quiescent state.
+
+Some of the use cases might require the reader threads to make
+blocking API calls (for ex: while using eventdev APIs). The writer thread
+should not wait for such reader threads to enter quiescent state.
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` API, before calling
+blocking APIs. It can call ``rte_rcu_qsbr_thread_online`` API once the blocking
+API call returns.
+
+The writer thread can trigger the reader threads to report their quiescent
+state by calling the API ``rte_rcu_qsbr_start``. It is possible for multiple
+writer threads to query the quiescent state status simultaneously. Hence,
+``rte_rcu_qsbr_start`` returns a token to each caller.
+
+The writer thread must call the ``rte_rcu_qsbr_check`` API with the token
+to get the current quiescent state status. An option to block till all
+the reader threads enter the quiescent state is provided. If this API
+indicates that all the reader threads have entered the quiescent state,
+the application can free the deleted entry.
+
+The APIs ``rte_rcu_qsbr_start`` and ``rte_rcu_qsbr_check`` are lock free.
+Hence, they can be called concurrently from multiple writers even while
+running as worker threads.
+
+The separation of triggering the reporting from querying the status provides
+the writer threads flexibility to do useful work instead of blocking for the
+reader threads to enter the quiescent state or go offline. This reduces the
+memory accesses due to continuous polling for the status.
+
+``rte_rcu_qsbr_synchronize`` API combines the functionality of
+``rte_rcu_qsbr_start`` and blocking ``rte_rcu_qsbr_check`` into a single API.
+This API triggers the reader threads to report their quiescent state and
+polls till all the readers enter the quiescent state or go offline. This
+API does not allow the writer to do useful work while waiting and
+introduces additional memory accesses due to continuous polling.
+
+The reader thread must call ``rte_rcu_qsbr_thread_offline`` and
+``rte_rcu_qsbr_thread_unregister`` APIs to remove itself from reporting its
+quiescent state. The ``rte_rcu_qsbr_check`` API will not wait for this reader
+thread to report the quiescent state status anymore.
+
+The reader threads should call the ``rte_rcu_qsbr_quiescent`` API to
+indicate that they have entered a quiescent state. This API checks if a
+writer has triggered a quiescent state query and updates the state
+accordingly.
+
+The ``rte_rcu_qsbr_lock`` and ``rte_rcu_qsbr_unlock`` APIs are empty
+functions by default. However, when ``CONFIG_RTE_LIBRTE_RCU_DEBUG`` is
+enabled, these APIs aid in debugging issues. One can mark the access to
+shared data structures on the reader side using these APIs. The
+``rte_rcu_qsbr_quiescent`` API will then check that all such locks have
+been unlocked.
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* [dpdk-dev] [PATCH v9 4/4] doc: added RCU to the release notes
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 " Honnappa Nagarahalli
` (3 preceding siblings ...)
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 3/4] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
@ 2019-05-01 3:54 ` Honnappa Nagarahalli
2019-05-01 3:54 ` Honnappa Nagarahalli
2019-05-01 11:31 ` Mcnamara, John
2019-05-01 12:15 ` [dpdk-dev] [PATCH v9 0/4] lib/rcu: add RCU library supporting QSBR mechanism Neil Horman
2019-05-01 23:36 ` Thomas Monjalon
6 siblings, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-05-01 3:54 UTC (permalink / raw)
To: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev
Cc: honnappa.nagarahalli, gavin.hu, dharmik.thakkar, malvika.gupta
Added RCU library addition to the release notes
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
doc/guides/rel_notes/release_19_05.rst | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index d5ed564ab..687c01bc1 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -68,6 +68,13 @@ New Features
Added a new lock-free stack handler, which uses the newly added stack
library.
+* **Added RCU library.**
+
+ Added RCU library supporting quiescent state based memory reclamation method.
+ This library helps identify the quiescent state of the reader threads so
+ that the writers can free the memory associated with the lock free data
+ structures.
+
* **Updated KNI module and PMD.**
Updated the KNI kernel module to set the max_mtu according to the given
@@ -330,6 +337,7 @@ The libraries prepended with a plus sign were incremented in this version.
librte_port.so.3
librte_power.so.1
librte_rawdev.so.1
+ + librte_rcu.so.1
librte_reorder.so.1
librte_ring.so.2
librte_sched.so.2
--
2.17.1
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v9 4/4] doc: added RCU to the release notes
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 4/4] doc: added RCU to the release notes Honnappa Nagarahalli
2019-05-01 3:54 ` Honnappa Nagarahalli
@ 2019-05-01 11:31 ` Mcnamara, John
2019-05-01 11:31 ` Mcnamara, John
1 sibling, 1 reply; 260+ messages in thread
From: Mcnamara, John @ 2019-05-01 11:31 UTC (permalink / raw)
To: Honnappa Nagarahalli, Ananyev, Konstantin, stephen, paulmck,
Kovacevic, Marko, dev
Cc: gavin.hu, dharmik.thakkar, malvika.gupta
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Honnappa Nagarahalli
> Sent: Wednesday, May 1, 2019 4:54 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> stephen@networkplumber.org; paulmck@linux.ibm.com; Kovacevic, Marko
> <marko.kovacevic@intel.com>; dev@dpdk.org
> Cc: honnappa.nagarahalli@arm.com; gavin.hu@arm.com;
> dharmik.thakkar@arm.com; malvika.gupta@arm.com
> Subject: [dpdk-dev] [PATCH v9 4/4] doc: added RCU to the release notes
>
> Added RCU library addition to the release notes
>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v9 3/4] doc/rcu: add lib_rcu documentation
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 3/4] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
2019-05-01 3:54 ` Honnappa Nagarahalli
@ 2019-05-01 11:37 ` Mcnamara, John
2019-05-01 11:37 ` Mcnamara, John
2019-05-01 21:20 ` Honnappa Nagarahalli
1 sibling, 2 replies; 260+ messages in thread
From: Mcnamara, John @ 2019-05-01 11:37 UTC (permalink / raw)
To: Honnappa Nagarahalli, Ananyev, Konstantin, stephen, paulmck,
Kovacevic, Marko, dev
Cc: gavin.hu, dharmik.thakkar, malvika.gupta
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Honnappa Nagarahalli
> Sent: Wednesday, May 1, 2019 4:54 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> stephen@networkplumber.org; paulmck@linux.ibm.com; Kovacevic, Marko
> <marko.kovacevic@intel.com>; dev@dpdk.org
> Cc: honnappa.nagarahalli@arm.com; gavin.hu@arm.com;
> dharmik.thakkar@arm.com; malvika.gupta@arm.com
> Subject: [dpdk-dev] [PATCH v9 3/4] doc/rcu: add lib_rcu documentation
>
> Add lib_rcu QSBR API and programmer guide documentation.
>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
Some minor comments below. Nothing blocking.
>
> +Delete: in this step, the writer removes the reference to the element
> +from the data structure but does not return the associated memory to
> +the allocator. This will ensure that new readers will not get a
> +reference to the removed element. Removing the reference is an atomic
> operation.
> +
> +Free(Reclaim): in this step, the writer returns the memory to the
> +memory allocator, only after knowing that all the readers have stopped
> +referencing the deleted element.
These would be better as a bullet or number list.
> +What is Quiescent State
> +-----------------------
> +Quiescent State can be defined as 'any point in the thread execution
> +where the thread does not hold a reference to shared memory'. It is up
> +to the application to determine its quiescent state.
> +
> +Let us consider the following diagram:
> +
> +.. figure:: img/rcu_general_info.*
The image would be better like this (as recommended in the docs: http://doc.dpdk.org/guides/contributing/documentation.html#images)
.. _figure_quiescent_state:
.. figure:: img/rcu_general_info.*
Phases in the Quiescent State model.
However, it isn't worth a re-spin. I'll send you on a file with the suggested changes.
Acked-by: John McNamara <john.mcnamara@intel.com>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v9 0/4] lib/rcu: add RCU library supporting QSBR mechanism
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 " Honnappa Nagarahalli
` (4 preceding siblings ...)
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 4/4] doc: added RCU to the release notes Honnappa Nagarahalli
@ 2019-05-01 12:15 ` Neil Horman
2019-05-01 12:15 ` Neil Horman
2019-05-01 14:56 ` Honnappa Nagarahalli
2019-05-01 23:36 ` Thomas Monjalon
6 siblings, 2 replies; 260+ messages in thread
From: Neil Horman @ 2019-05-01 12:15 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev,
gavin.hu, dharmik.thakkar, malvika.gupta
On Tue, Apr 30, 2019 at 10:54:15PM -0500, Honnappa Nagarahalli wrote:
> Lock-less data structures provide scalability and determinism.
> They enable use cases where locking may not be allowed
> (for ex: real-time applications).
>
I know this is version 9 of the patch, so I'm sorry for the late comment, but I
have to ask: Why re-invent this wheel? There are already several Userspace RCU
libraries that are mature and carried by Linux and BSD distributions. Why would
we throw another one into DPDK instead of just using whats already available,
mature and stable?
Neil
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v9 0/4] lib/rcu: add RCU library supporting QSBR mechanism
2019-05-01 12:15 ` [dpdk-dev] [PATCH v9 0/4] lib/rcu: add RCU library supporting QSBR mechanism Neil Horman
2019-05-01 12:15 ` Neil Horman
@ 2019-05-01 14:56 ` Honnappa Nagarahalli
2019-05-01 14:56 ` Honnappa Nagarahalli
2019-05-01 18:05 ` Neil Horman
1 sibling, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-05-01 14:56 UTC (permalink / raw)
To: Neil Horman
Cc: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
>
> On Tue, Apr 30, 2019 at 10:54:15PM -0500, Honnappa Nagarahalli wrote:
> > Lock-less data structures provide scalability and determinism.
> > They enable use cases where locking may not be allowed (for ex:
> > real-time applications).
> >
> I know this is version 9 of the patch, so I'm sorry for the late comment, but I
> have to ask: Why re-invent this wheel? There are already several Userspace
Thanks Neil, for asking the question. This has been debated before. Please refer to [2] for more details.
liburcu [1] was explored as it seemed to be familiar to others in the community . I am not aware of any other library.
There are unique requirements in DPDK and there is still scope for improvement from what is available. I have explained this in the cover letter without making a direct comparison to liburcu. May be it is worth tweaking the documentation to call this out explicitly.
[1] https://liburcu.org/
[2] http://mails.dpdk.org/archives/dev/2018-November/119875.html
> RCU libraries that are mature and carried by Linux and BSD distributions.
> Why would we throw another one into DPDK instead of just using whats
> already available, mature and stable?
>
> Neil
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v9 0/4] lib/rcu: add RCU library supporting QSBR mechanism
2019-05-01 14:56 ` Honnappa Nagarahalli
2019-05-01 14:56 ` Honnappa Nagarahalli
@ 2019-05-01 18:05 ` Neil Horman
2019-05-01 18:05 ` Neil Horman
2019-05-01 21:18 ` Honnappa Nagarahalli
1 sibling, 2 replies; 260+ messages in thread
From: Neil Horman @ 2019-05-01 18:05 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
On Wed, May 01, 2019 at 02:56:48PM +0000, Honnappa Nagarahalli wrote:
> >
> > On Tue, Apr 30, 2019 at 10:54:15PM -0500, Honnappa Nagarahalli wrote:
> > > Lock-less data structures provide scalability and determinism.
> > > They enable use cases where locking may not be allowed (for ex:
> > > real-time applications).
> > >
> > I know this is version 9 of the patch, so I'm sorry for the late comment, but I
> > have to ask: Why re-invent this wheel? There are already several Userspace
> Thanks Neil, for asking the question. This has been debated before. Please refer to [2] for more details.
>
> liburcu [1] was explored as it seemed to be familiar to others in the community . I am not aware of any other library.
>
> There are unique requirements in DPDK and there is still scope for improvement from what is available. I have explained this in the cover letter without making a direct comparison to liburcu. May be it is worth tweaking the documentation to call this out explicitly.
>
I think what you're referring to here is the need for multiple QSBR variables,
yes? I'm not sure thats, strictly speaking, a requirement. It seems like its a
performance improvement, but I'm not sure thats the case (see performance
numbers below).
Regarding performance, we can't keep using raw performance as a trump card for
all other aspects of the DPDK. This entire patch is meant to improve
performance, it seems like it would be worthwhile to gain the code consolidation
and reuse benefits for the minor performance hit.
Further to performance, I may be misreading this, but I ran the integrated
performance test you provided in this patch, as well as the benchmark tests for
liburcu (trimmed for easier reading here)
liburcu:
[nhorman@hmswarspite benchmark]$ ./test_urcu 7 1 1 -v -a 0 -a 1 -a 2 -a 3 -a 4 -a 5 -a 6 -a 7 -a 0
Adding CPU 0 affinity
Adding CPU 1 affinity
Adding CPU 2 affinity
Adding CPU 3 affinity
Adding CPU 4 affinity
Adding CPU 5 affinity
Adding CPU 6 affinity
Adding CPU 7 affinity
Adding CPU 0 affinity
running test for 1 seconds, 7 readers, 1 writers.
Writer delay : 0 loops.
Reader duration : 0 loops.
thread main , tid 22712
thread_begin reader, tid 22726
thread_begin reader, tid 22729
thread_begin reader, tid 22728
thread_begin reader, tid 22727
thread_begin reader, tid 22731
thread_begin reader, tid 22730
thread_begin reader, tid 22732
thread_begin writer, tid 22733
thread_end reader, tid 22729
thread_end reader, tid 22731
thread_end reader, tid 22730
thread_end reader, tid 22728
thread_end reader, tid 22727
thread_end writer, tid 22733
thread_end reader, tid 22726
thread_end reader, tid 22732
total number of reads : 1185640774, writes 264444
SUMMARY /home/nhorman/git/userspace-rcu/tests/benchmark/.libs/lt-test_urcu testdur 1 nr_readers 7 rdur 0 wdur 0 nr_writers 1 wdelay 0 nr_reads 1185640774 nr_writes 264444 nr_ops 1185905218
DPDK:
Perf test: 1 writer, 7 readers, 1 QSBR variable, 1 QSBR Query, Non-Blocking QSBR check
Following numbers include calls to rte_hash functions
Cycles per 1 update(online/update/offline): 813407
Cycles per 1 check(start, check): 859679
Both of these tests qsbr rcu in each library using 7 readers and 1 writer. Its
a little bit of an apples to oranges comparison, as the tests run using slightly
different parameters, and produce different output statistics, but I think they
can be somewhat normalized. Primarily the stat that stuck out to me was the
DPDK Cycles per 1 update statistic, which I believe is effectively the number of
cycles spent in the test / the number of writer updates. On DPDK that number in
this test run works out to 813407. In the liburcu test, it reports the total
number of ops (cycles), and the number of writes completed within those cycles.
If we do the same division there we get 1185905218 / 264444 = 4484. I may be
misreading something here, but that seems like a pretty significant write side
performance improvement over this implementation.
Neil
> [1] https://liburcu.org/
> [2] http://mails.dpdk.org/archives/dev/2018-November/119875.html
>
> > RCU libraries that are mature and carried by Linux and BSD distributions.
> > Why would we throw another one into DPDK instead of just using whats
> > already available, mature and stable?
> >
> > Neil
>
>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v9 0/4] lib/rcu: add RCU library supporting QSBR mechanism
2019-05-01 18:05 ` Neil Horman
@ 2019-05-01 18:05 ` Neil Horman
2019-05-01 21:18 ` Honnappa Nagarahalli
1 sibling, 0 replies; 260+ messages in thread
From: Neil Horman @ 2019-05-01 18:05 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
On Wed, May 01, 2019 at 02:56:48PM +0000, Honnappa Nagarahalli wrote:
> >
> > On Tue, Apr 30, 2019 at 10:54:15PM -0500, Honnappa Nagarahalli wrote:
> > > Lock-less data structures provide scalability and determinism.
> > > They enable use cases where locking may not be allowed (for ex:
> > > real-time applications).
> > >
> > I know this is version 9 of the patch, so I'm sorry for the late comment, but I
> > have to ask: Why re-invent this wheel? There are already several Userspace
> Thanks Neil, for asking the question. This has been debated before. Please refer to [2] for more details.
>
> liburcu [1] was explored as it seemed to be familiar to others in the community . I am not aware of any other library.
>
> There are unique requirements in DPDK and there is still scope for improvement from what is available. I have explained this in the cover letter without making a direct comparison to liburcu. May be it is worth tweaking the documentation to call this out explicitly.
>
I think what you're referring to here is the need for multiple QSBR variables,
yes? I'm not sure thats, strictly speaking, a requirement. It seems like its a
performance improvement, but I'm not sure thats the case (see performance
numbers below).
Regarding performance, we can't keep using raw performance as a trump card for
all other aspects of the DPDK. This entire patch is meant to improve
performance, it seems like it would be worthwhile to gain the code consolidation
and reuse benefits for the minor performance hit.
Further to performance, I may be misreading this, but I ran the integrated
performance test you provided in this patch, as well as the benchmark tests for
liburcw (trimmed for easier reading here)
liburcw:
[nhorman@hmswarspite benchmark]$ ./test_urcu 7 1 1 -v -a 0 -a 1 -a 2 -a 3 -a 4 -a 5 -a 6 -a 7 -a 0
Adding CPU 0 affinity
Adding CPU 1 affinity
Adding CPU 2 affinity
Adding CPU 3 affinity
Adding CPU 4 affinity
Adding CPU 5 affinity
Adding CPU 6 affinity
Adding CPU 7 affinity
Adding CPU 0 affinity
running test for 1 seconds, 7 readers, 1 writers.
Writer delay : 0 loops.
Reader duration : 0 loops.
thread main , tid 22712
thread_begin reader, tid 22726
thread_begin reader, tid 22729
thread_begin reader, tid 22728
thread_begin reader, tid 22727
thread_begin reader, tid 22731
thread_begin reader, tid 22730
thread_begin reader, tid 22732
thread_begin writer, tid 22733
thread_end reader, tid 22729
thread_end reader, tid 22731
thread_end reader, tid 22730
thread_end reader, tid 22728
thread_end reader, tid 22727
thread_end writer, tid 22733
thread_end reader, tid 22726
thread_end reader, tid 22732
total number of reads : 1185640774, writes 264444
SUMMARY /home/nhorman/git/userspace-rcu/tests/benchmark/.libs/lt-test_urcu testdur 1 nr_readers 7 rdur 0 wdur 0 nr_writers 1 wdelay 0 nr_reads 1185640774 nr_writes 264444 nr_ops 1185905218
DPDK:
Perf test: 1 writer, 7 readers, 1 QSBR variable, 1 QSBR Query, Non-Blocking QSBR check
Following numbers include calls to rte_hash functions
Cycles per 1 update(online/update/offline): 813407
Cycles per 1 check(start, check): 859679
Both of these tests exercise QSBR RCU in each library using 7 readers and 1 writer. It's
a bit of an apples-to-oranges comparison, as the tests run with slightly
different parameters and produce different output statistics, but I think they
can be somewhat normalized. Primarily, the stat that stuck out to me was the
DPDK 'Cycles per 1 update' statistic, which I believe is effectively the number of
cycles spent in the test divided by the number of writer updates. On DPDK that number in
this test run works out to 813407. The liburcw test reports the total
number of ops (cycles), and the number of writes completed within those cycles.
If we do the same division there we get 1185905218 / 264444 = 4484. I may be
misreading something here, but that seems like a pretty significant write-side
performance improvement over this implementation.
Neil
> [1] https://liburcu.org/
> [2] http://mails.dpdk.org/archives/dev/2018-November/119875.html
>
> > RCU libraries that are mature and carried by Linux and BSD distributions.
> > Why would we throw another one into DPDK instead of just using whats
> > already available, mature and stable?
> >
> > Neil
>
>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v9 0/4] lib/rcu: add RCU library supporting QSBR mechanism
2019-05-01 18:05 ` Neil Horman
2019-05-01 18:05 ` Neil Horman
@ 2019-05-01 21:18 ` Honnappa Nagarahalli
2019-05-01 21:18 ` Honnappa Nagarahalli
1 sibling, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-05-01 21:18 UTC (permalink / raw)
To: Neil Horman
Cc: konstantin.ananyev, stephen, paulmck, marko.kovacevic, dev,
Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
> Subject: Re: [dpdk-dev] [PATCH v9 0/4] lib/rcu: add RCU library supporting
> QSBR mechanism
>
> On Wed, May 01, 2019 at 02:56:48PM +0000, Honnappa Nagarahalli wrote:
> > >
> > > On Tue, Apr 30, 2019 at 10:54:15PM -0500, Honnappa Nagarahalli wrote:
> > > > Lock-less data structures provide scalability and determinism.
> > > > They enable use cases where locking may not be allowed (for ex:
> > > > real-time applications).
> > > >
> > > I know this is version 9 of the patch, so I'm sorry for the late
> > > comment, but I have to ask: Why re-invent this wheel? There are
> > > already several Userspace
> > Thanks Neil, for asking the question. This has been debated before. Please
> refer to [2] for more details.
> >
> > liburcu [1] was explored as it seemed to be familiar to others in the
> community . I am not aware of any other library.
> >
> > There are unique requirements in DPDK and there is still scope for
> improvement from what is available. I have explained this in the cover letter
> without making a direct comparison to liburcu. May be it is worth tweaking the
> documentation to call this out explicitly.
> >
> I think what you're referring to here is the need for multiple QSBR variables,
> yes? I'm not sure thats, strictly speaking, a requirement. It seems like its a
> performance improvement, but I'm not sure thats the case (see performance
> numbers below).
DPDK supports the service cores feature and pipeline mode, where a particular data structure is used by only a subset of readers. These use cases affect the writer and the readers (which are on the data plane) in the following ways:
1) The writer does not need to wait for all the readers to complete a quiescent state. The writer does not need to spend CPU cycles, and add to memory bandwidth, polling readers it does not care about. DPDK has use cases where the writer is on the data plane as well.
2) The readers that do not use the data structure do not have to spend cycles reporting their quiescent state. Note that these are data plane cycles.
Other than this, please read in the cover letter how the grace period and critical section affect the overhead introduced by the QSBR mechanism. It also explains how this library solves that issue.
This was discussed in the thread I referenced earlier.
>
> Regarding performance, we can't keep using raw performance as a trump card
IMO, performance is NOT a 'trump card'. The whole essence of DPDK is performance. If not for performance, would DPDK exist?
> for all other aspects of the DPDK. This entire patch is meant to improve
> performance, it seems like it would be worthwhile to gain the code
> consolidation and reuse benefits for the minor performance hit.
Apologies, I did not understand this. Can you please elaborate on the code consolidation part?
>
> Further to performance, I may be misreading this, but I ran the integrated
> performance test you provided in this patch, as well as the benchmark tests for
> liburcw (trimmed for easier reading here)
Just to be sure, I believe you are referring to *liburcu*
>
> liburcw:
> [nhorman@hmswarspite benchmark]$ ./test_urcu 7 1 1 -v -a 0 -a 1 -a 2 -a 3 -a
> 4 -a 5 -a 6 -a 7 -a 0 Adding CPU 0 affinity Adding CPU 1 affinity Adding CPU 2
> affinity Adding CPU 3 affinity Adding CPU 4 affinity Adding CPU 5 affinity
> Adding CPU 6 affinity Adding CPU 7 affinity Adding CPU 0 affinity running test
> for 1 seconds, 7 readers, 1 writers.
> Writer delay : 0 loops.
> Reader duration : 0 loops.
> thread main , tid 22712
> thread_begin reader, tid 22726
> thread_begin reader, tid 22729
> thread_begin reader, tid 22728
> thread_begin reader, tid 22727
> thread_begin reader, tid 22731
> thread_begin reader, tid 22730
> thread_begin reader, tid 22732
> thread_begin writer, tid 22733
> thread_end reader, tid 22729
> thread_end reader, tid 22731
> thread_end reader, tid 22730
> thread_end reader, tid 22728
> thread_end reader, tid 22727
> thread_end writer, tid 22733
> thread_end reader, tid 22726
> thread_end reader, tid 22732
> total number of reads : 1185640774, writes 264444
> SUMMARY /home/nhorman/git/userspace-rcu/tests/benchmark/.libs/lt-
> test_urcu testdur 1 nr_readers 7 rdur 0 wdur 0 nr_writers 1 wdelay
> 0 nr_reads 1185640774 nr_writes 264444 nr_ops 1185905218
>
> DPDK:
> Perf test: 1 writer, 7 readers, 1 QSBR variable, 1 QSBR Query, Non-Blocking
> QSBR check Following numbers include calls to rte_hash functions Cycles per 1
> update(online/update/offline): 813407 Cycles per 1 check(start, check):
> 859679
>
>
> Both of these tests qsbr rcu in each library using 7 readers and 1 writer. Its a
> little bit of an apples to oranges comparison, as the tests run using slightly
Thanks for running the test. Yes, it is an apples-to-oranges comparison:
1) The test you are running is not the correct test, assuming the code for this test is [3].
2) That test does not use QSBR.
I suggest you use [4] for your testing. It also needs further changes to match the test case in this patch: the function 'thr_reader' reports a quiescent state every 1024 iterations; please change it to report on every iteration.
After this, you need to compare the results with the first test case in this patch.
[3] https://github.com/urcu/userspace-rcu/blob/master/tests/benchmark/test_urcu.c
[4] https://github.com/urcu/userspace-rcu/blob/master/tests/benchmark/test_urcu_qsbr.c
> different parameters, and produce different output statistics, but I think they
> can be somewhat normalized. Primarily the stat that stuck out to me was the
> DPDK Cycles per 1 update statistic, which I believe is effectively the number of
> cycles spent in the test / the number of writer updates. On DPDK that number
> in this test run works out to 813407. In the liburcw test, it reports the total
> number of ops (cycles), and the number of writes completed within those
> cycles.
> If we do the same division there we get 185905218 / 264444 = 4484. I may be
> misreading something here, but that seems like a pretty significant write side
Yes, you are misreading: 'number of ops' is not cycles. It is the sum of 'nr_writes' and 'nr_reads'. The test runs for 1 second (it uses 'sleep'), so these are the numbers of operations completed in 1 second. You need to normalize to cycles using this data.
> performance improvement over this implementation.
>
> Neil
>
> > [1] https://liburcu.org/
> > [2] http://mails.dpdk.org/archives/dev/2018-November/119875.html
> >
> > > RCU libraries that are mature and carried by Linux and BSD distributions.
> > > Why would we throw another one into DPDK instead of just using whats
> > > already available, mature and stable?
> > >
> > > Neil
> >
> >
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v9 3/4] doc/rcu: add lib_rcu documentation
2019-05-01 11:37 ` Mcnamara, John
2019-05-01 11:37 ` Mcnamara, John
@ 2019-05-01 21:20 ` Honnappa Nagarahalli
2019-05-01 21:20 ` Honnappa Nagarahalli
2019-05-01 21:32 ` Thomas Monjalon
1 sibling, 2 replies; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-05-01 21:20 UTC (permalink / raw)
To: Mcnamara, John, Ananyev, Konstantin, stephen, paulmck, Kovacevic,
Marko, dev
Cc: Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Honnappa Nagarahalli, nd, nd
> > Subject: [dpdk-dev] [PATCH v9 3/4] doc/rcu: add lib_rcu documentation
> >
> > Add lib_rcu QSBR API and programmer guide documentation.
> >
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
>
> Some minor comments below. Nothing blocking.
Thanks John for the comments.
Thomas, please let me know if you want me to re-spin. Otherwise, these will be fixed as a follow up patch.
>
>
> >
> > +Delete: in this step, the writer removes the reference to the element
> > +from the data structure but does not return the associated memory to
> > +the allocator. This will ensure that new readers will not get a
> > +reference to the removed element. Removing the reference is an atomic
> > operation.
> > +
> > +Free(Reclaim): in this step, the writer returns the memory to the
> > +memory allocator, only after knowing that all the readers have
> > +stopped referencing the deleted element.
>
> These would be better as a bullet or number list.
>
>
>
> > +What is Quiescent State
> > +-----------------------
> > +Quiescent State can be defined as 'any point in the thread execution
> > +where the thread does not hold a reference to shared memory'. It is
> > +up to the application to determine its quiescent state.
> > +
> > +Let us consider the following diagram:
> > +
> > +.. figure:: img/rcu_general_info.*
>
> The image would be better like this (as recommended in the docs:
> http://doc.dpdk.org/guides/contributing/documentation.html#images)
>
>
> .. _figure_quiescent_state:
>
> .. figure:: img/rcu_general_info.*
>
> Phases in the Quiescent State model.
>
>
> However, it isn't worth a re-spin. I'll send you on a file with the suggested
> changes.
>
>
> Acked-by: John McNamara <john.mcnamara@intel.com>
>
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v9 3/4] doc/rcu: add lib_rcu documentation
2019-05-01 21:20 ` Honnappa Nagarahalli
2019-05-01 21:20 ` Honnappa Nagarahalli
@ 2019-05-01 21:32 ` Thomas Monjalon
2019-05-01 21:32 ` Thomas Monjalon
1 sibling, 1 reply; 260+ messages in thread
From: Thomas Monjalon @ 2019-05-01 21:32 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: dev, Mcnamara, John, Ananyev, Konstantin, stephen, paulmck,
Kovacevic, Marko, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, nd
01/05/2019 23:20, Honnappa Nagarahalli:
> > > Subject: [dpdk-dev] [PATCH v9 3/4] doc/rcu: add lib_rcu documentation
> > >
> > > Add lib_rcu QSBR API and programmer guide documentation.
> > >
> > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
> >
> > Some minor comments below. Nothing blocking.
> Thanks John for the comments.
>
> Thomas, please let me know if you want me to re-spin. Otherwise, these will be fixed as a follow up patch.
I would like to try pushing it now with the updates from John.
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v9 0/4] lib/rcu: add RCU library supporting QSBR mechanism
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 " Honnappa Nagarahalli
` (5 preceding siblings ...)
2019-05-01 12:15 ` [dpdk-dev] [PATCH v9 0/4] lib/rcu: add RCU library supporting QSBR mechanism Neil Horman
@ 2019-05-01 23:36 ` Thomas Monjalon
2019-05-01 23:36 ` Thomas Monjalon
6 siblings, 1 reply; 260+ messages in thread
From: Thomas Monjalon @ 2019-05-01 23:36 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: dev, konstantin.ananyev, stephen, paulmck, marko.kovacevic,
gavin.hu, dharmik.thakkar, malvika.gupta
01/05/2019 05:54, Honnappa Nagarahalli:
> Dharmik Thakkar (1):
> test/rcu_qsbr: add API and functional tests
>
> Honnappa Nagarahalli (3):
> rcu: add RCU library supporting QSBR mechanism
> doc/rcu: add lib_rcu documentation
> doc: added RCU to the release notes
Applied with discussed doc changes, thanks.
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v9 2/4] test/rcu_qsbr: add API and functional tests
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 2/4] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-05-01 3:54 ` Honnappa Nagarahalli
@ 2019-05-03 14:31 ` David Marchand
2019-05-03 14:31 ` David Marchand
2019-05-06 23:16 ` Honnappa Nagarahalli
1 sibling, 2 replies; 260+ messages in thread
From: David Marchand @ 2019-05-03 14:31 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: Ananyev, Konstantin, Stephen Hemminger, paulmck, Kovacevic,
Marko, dev, Gavin Hu, Dharmik Thakkar, Malvika Gupta,
Aaron Conole
On Wed, May 1, 2019 at 5:55 AM Honnappa Nagarahalli <
honnappa.nagarahalli@arm.com> wrote:
> From: Dharmik Thakkar <dharmik.thakkar@arm.com>
>
> Add API positive/negative test cases, functional tests and
> performance tests.
>
> Signed-off-by: Malvika Gupta <malvika.gupta@arm.com>
> Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>
Did not investigate, but this test always fails on my laptop.
Tried multiple times, I caught one instance when the test was still burning
cpu after 5 minutes and I killed it.
Usually, it fails like this:
[dmarchan@dmarchan dpdk]$ ./master/app/test -c 0x1f -n 4 --no-huge
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /run/user/114840/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: Started without hugepages support, physical addresses not available
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15d7 net_e1000_em
APP: HPET is not enabled, using TSC as default timer
RTE>>rcu_qsbr_perf_autotest
Number of cores provided = 4
Perf test with all reader threads registered
--------------------------------------------
Perf Test: 3 Readers/1 Writer('wait' in qsbr_check == true)
Total RCU updates = 945401635
Cycles per 1000 updates: 25157
Total RCU checks = 20000000
Cycles per 1000 checks: 396375
Perf Test: 4 Readers
Total RCU updates = 400000000
Cycles per 1000 updates: 6241
Perf test: 4 Writers ('wait' in qsbr_check == false)
Total RCU checks = 80000000
Cycles per 1000 checks: 21061
Perf test: 1 writer, 4 readers, 1 QSBR variable, 1 QSBR Query, Blocking
QSBR Check
Reader did not complete #28 = 4097
Test Failed
RTE>>
--
David Marchand
^ permalink raw reply [flat|nested] 260+ messages in thread
* Re: [dpdk-dev] [PATCH v9 2/4] test/rcu_qsbr: add API and functional tests
2019-05-03 14:31 ` David Marchand
2019-05-03 14:31 ` David Marchand
@ 2019-05-06 23:16 ` Honnappa Nagarahalli
2019-05-06 23:16 ` Honnappa Nagarahalli
1 sibling, 1 reply; 260+ messages in thread
From: Honnappa Nagarahalli @ 2019-05-06 23:16 UTC (permalink / raw)
To: David Marchand
Cc: Ananyev, Konstantin, Stephen Hemminger, paulmck, Kovacevic,
Marko, dev, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Malvika Gupta, Aaron Conole, nd
Summary of discussions with David:
I am not able to reproduce the 'Test Failed' issue. However, the log indicates an issue in the test code. David has validated the patch and I will send it out soon.
The test case takes a long time to complete when the code is compiled with '-O0 -g', but it does complete. The test cases are currently tuned to take less time when compiled with '-O3'.
Thanks,
Honnappa
From: David Marchand <david.marchand@redhat.com>
Sent: Friday, May 3, 2019 9:31 AM
To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Stephen Hemminger <stephen@networkplumber.org>; paulmck@linux.ibm.com; Kovacevic, Marko <marko.kovacevic@intel.com>; dev <dev@dpdk.org>; Gavin Hu (Arm Technology China) <Gavin.Hu@arm.com>; Dharmik Thakkar <Dharmik.Thakkar@arm.com>; Malvika Gupta <Malvika.Gupta@arm.com>; Aaron Conole <aconole@redhat.com>
Subject: Re: [dpdk-dev] [PATCH v9 2/4] test/rcu_qsbr: add API and functional tests
On Wed, May 1, 2019 at 5:55 AM Honnappa Nagarahalli <mailto:honnappa.nagarahalli@arm.com> wrote:
From: Dharmik Thakkar <mailto:dharmik.thakkar@arm.com>
Add API positive/negative test cases, functional tests and
performance tests.
Signed-off-by: Malvika Gupta <mailto:malvika.gupta@arm.com>
Signed-off-by: Dharmik Thakkar <mailto:dharmik.thakkar@arm.com>
Signed-off-by: Honnappa Nagarahalli <mailto:honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <mailto:gavin.hu@arm.com>
Acked-by: Konstantin Ananyev <mailto:konstantin.ananyev@intel.com>
Did not investigate, but this test always fails on my laptop.
Tried multiple times, I caught one instance when the test was still burning cpu after 5 minutes and I killed it.
Usually, it fails like this:
[dmarchan@dmarchan dpdk]$ ./master/app/test -c 0x1f -n 4 --no-huge
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /run/user/114840/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: Started without hugepages support, physical addresses not available
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15d7 net_e1000_em
APP: HPET is not enabled, using TSC as default timer
RTE>>rcu_qsbr_perf_autotest
Number of cores provided = 4
Perf test with all reader threads registered
--------------------------------------------
Perf Test: 3 Readers/1 Writer('wait' in qsbr_check == true)
Total RCU updates = 945401635
Cycles per 1000 updates: 25157
Total RCU checks = 20000000
Cycles per 1000 checks: 396375
Perf Test: 4 Readers
Total RCU updates = 400000000
Cycles per 1000 updates: 6241
Perf test: 4 Writers ('wait' in qsbr_check == false)
Total RCU checks = 80000000
Cycles per 1000 checks: 21061
Perf test: 1 writer, 4 readers, 1 QSBR variable, 1 QSBR Query, Blocking QSBR Check
Reader did not complete #28 = 4097
Test Failed
RTE>>
--
David Marchand
^ permalink raw reply [flat|nested] 260+ messages in thread
end of thread, other threads:[~2019-05-06 23:16 UTC | newest]
Thread overview: 260+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-22 3:30 [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library Honnappa Nagarahalli
2018-11-22 3:30 ` [dpdk-dev] [RFC 1/3] log: add TQS log type Honnappa Nagarahalli
2018-11-27 22:24 ` Stephen Hemminger
2018-11-28 5:58 ` Honnappa Nagarahalli
2018-11-22 3:30 ` [dpdk-dev] [RFC 2/3] tqs: add thread quiescent state library Honnappa Nagarahalli
2018-11-24 12:18 ` Ananyev, Konstantin
2018-11-27 21:32 ` Honnappa Nagarahalli
2018-11-28 15:25 ` Ananyev, Konstantin
2018-12-07 7:27 ` Honnappa Nagarahalli
2018-12-07 17:29 ` Stephen Hemminger
2018-12-11 6:40 ` Honnappa Nagarahalli
2018-12-13 12:26 ` Burakov, Anatoly
2018-12-18 4:30 ` Honnappa Nagarahalli
2018-12-18 6:31 ` Stephen Hemminger
2018-12-12 9:29 ` Ananyev, Konstantin
2018-12-13 7:39 ` Honnappa Nagarahalli
2018-12-17 13:14 ` Ananyev, Konstantin
2018-11-22 3:30 ` [dpdk-dev] [RFC 3/3] test/tqs: Add API and functional tests Honnappa Nagarahalli
[not found] ` <CGME20181122073110eucas1p17592400af6c0b807dc87e90d136575af@eucas1p1.samsung.com>
2018-11-22 7:31 ` [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library Ilya Maximets
2018-11-27 22:28 ` Stephen Hemminger
2018-11-27 22:49 ` Van Haaren, Harry
2018-11-28 5:31 ` Honnappa Nagarahalli
2018-11-28 23:23 ` Stephen Hemminger
2018-11-30 2:13 ` Honnappa Nagarahalli
2018-11-30 16:26 ` Luca Boccassi
2018-11-30 18:32 ` Stephen Hemminger
2018-11-30 20:20 ` Honnappa Nagarahalli
2018-11-30 20:56 ` Mattias Rönnblom
2018-11-30 23:44 ` Stephen Hemminger
2018-12-01 18:37 ` Honnappa Nagarahalli
2018-11-30 2:25 ` Honnappa Nagarahalli
2018-11-30 21:03 ` Mattias Rönnblom
2018-12-22 2:14 ` [dpdk-dev] [RFC v2 0/2] rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2018-12-22 2:14 ` [dpdk-dev] [RFC v2 1/2] " Honnappa Nagarahalli
2019-01-15 11:39 ` Ananyev, Konstantin
2019-01-15 20:43 ` Honnappa Nagarahalli
2019-01-16 15:56 ` Ananyev, Konstantin
2019-01-18 6:48 ` Honnappa Nagarahalli
2019-01-18 12:14 ` Ananyev, Konstantin
2019-01-24 17:15 ` Honnappa Nagarahalli
2019-01-24 18:05 ` Ananyev, Konstantin
2019-02-22 7:07 ` Honnappa Nagarahalli
2018-12-22 2:14 ` [dpdk-dev] [RFC v2 2/2] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2018-12-23 7:30 ` Stephen Hemminger
2018-12-23 16:25 ` Paul E. McKenney
2019-01-18 7:04 ` Honnappa Nagarahalli
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 0/5] rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 1/5] " Honnappa Nagarahalli
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 2/5] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 3/5] lib/rcu: add dynamic memory allocation capability Honnappa Nagarahalli
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 4/5] test/rcu_qsbr: modify test cases for dynamic memory allocation Honnappa Nagarahalli
2019-02-22 7:04 ` [dpdk-dev] [RFC v3 5/5] lib/rcu: fix the size of register thread ID array size Honnappa Nagarahalli
2019-03-19 4:52 ` [dpdk-dev] [PATCH 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-03-19 4:52 ` [dpdk-dev] [PATCH 1/3] rcu: " Honnappa Nagarahalli
2019-03-22 16:42 ` Ananyev, Konstantin
2019-03-26 4:35 ` Honnappa Nagarahalli
2019-03-28 11:15 ` Ananyev, Konstantin
2019-03-29 5:54 ` Honnappa Nagarahalli
2019-03-19 4:52 ` [dpdk-dev] [PATCH 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-03-19 4:52 ` [dpdk-dev] [PATCH 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
2019-03-25 11:34 ` Kovacevic, Marko
2019-03-26 4:43 ` Honnappa Nagarahalli
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 1/3] rcu: " Honnappa Nagarahalli
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-03-27 5:52 ` [dpdk-dev] [PATCH v2 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
2019-04-01 17:10 ` [dpdk-dev] [PATCH v3 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-04-01 17:11 ` [dpdk-dev] [PATCH v3 1/3] rcu: " Honnappa Nagarahalli
2019-04-02 10:22 ` Ananyev, Konstantin
2019-04-02 10:53 ` Ananyev, Konstantin
2019-04-01 17:11 ` [dpdk-dev] [PATCH v3 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-04-02 10:55 ` Ananyev, Konstantin
2019-04-01 17:11 ` [dpdk-dev] [PATCH v3 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 1/3] rcu: " Honnappa Nagarahalli
2019-04-10 18:14 ` Paul E. McKenney
2019-04-11 4:35 ` Honnappa Nagarahalli
2019-04-11 15:26 ` Paul E. McKenney
2019-04-12 20:21 ` Honnappa Nagarahalli
2019-04-15 16:51 ` Ananyev, Konstantin
2019-04-15 19:46 ` Honnappa Nagarahalli
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-04-10 15:26 ` Stephen Hemminger
2019-04-10 16:15 ` Honnappa Nagarahalli
2019-04-10 11:20 ` [dpdk-dev] [PATCH v4 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 1/3] rcu: " Honnappa Nagarahalli
2019-04-12 22:06 ` Stephen Hemminger
2019-04-12 22:24 ` Honnappa Nagarahalli
2019-04-12 23:06 ` Stephen Hemminger
2019-04-15 12:24 ` Ananyev, Konstantin
2019-04-15 15:38 ` Stephen Hemminger
2019-04-15 17:39 ` Ananyev, Konstantin
2019-04-15 18:56 ` Honnappa Nagarahalli
2019-04-15 21:26 ` Stephen Hemminger
2019-04-16 5:29 ` Honnappa Nagarahalli
2019-04-16 14:54 ` Stephen Hemminger
2019-04-16 16:56 ` Honnappa Nagarahalli
2019-04-16 21:22 ` Stephen Hemminger
2019-04-17 1:45 ` Honnappa Nagarahalli
2019-04-17 13:39 ` Ananyev, Konstantin
2019-04-17 14:02 ` Honnappa Nagarahalli
2019-04-17 14:18 ` Thomas Monjalon
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-04-12 20:20 ` [dpdk-dev] [PATCH v5 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
2019-04-15 17:29 ` [dpdk-dev] [PATCH v5 0/3] lib/rcu: add RCU library supporting QSBR mechanism Ananyev, Konstantin
2019-04-16 5:10 ` Honnappa Nagarahalli
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 " Honnappa Nagarahalli
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 1/3] rcu: " Honnappa Nagarahalli
2019-04-19 19:19 ` Paul E. McKenney
2019-04-23 1:08 ` Honnappa Nagarahalli
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-04-17 4:13 ` [dpdk-dev] [PATCH v6 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
2019-04-21 16:40 ` [dpdk-dev] [PATCH v6 0/3] lib/rcu: add RCU library supporting QSBR mechanism Thomas Monjalon
2019-04-25 14:18 ` Honnappa Nagarahalli
2019-04-25 14:27 ` Honnappa Nagarahalli
2019-04-25 14:38 ` David Marchand
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 " Honnappa Nagarahalli
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 1/3] rcu: " Honnappa Nagarahalli
2019-04-23 8:10 ` Paul E. McKenney
2019-04-23 21:23 ` Honnappa Nagarahalli
2019-04-24 20:02 ` Jerin Jacob Kollanukkaran
2019-04-25 5:15 ` Honnappa Nagarahalli
2019-04-24 10:03 ` Ruifeng Wang (Arm Technology China)
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 2/3] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-04-23 4:31 ` [dpdk-dev] [PATCH v7 3/3] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
2019-04-24 10:12 ` Ruifeng Wang (Arm Technology China)
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 0/4] lib/rcu: add RCU library supporting QSBR mechanism Honnappa Nagarahalli
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 1/4] rcu: " Honnappa Nagarahalli
2019-04-26 8:13 ` Jerin Jacob Kollanukkaran
2019-04-28 3:25 ` Ruifeng Wang (Arm Technology China)
2019-04-29 20:33 ` Thomas Monjalon
2019-04-30 10:51 ` Hemant Agrawal
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 2/4] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-04-29 20:35 ` Thomas Monjalon
2019-04-30 4:20 ` Honnappa Nagarahalli
2019-04-30 7:58 ` Thomas Monjalon
2019-04-26 4:39 ` [dpdk-dev] [PATCH v8 3/4] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
2019-04-26 4:40 ` [dpdk-dev] [PATCH v8 4/4] doc: added RCU to the release notes Honnappa Nagarahalli
2019-04-26 12:04 ` [dpdk-dev] [PATCH v8 0/4] lib/rcu: add RCU library supporting QSBR mechanism Ananyev, Konstantin
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 " Honnappa Nagarahalli
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 1/4] rcu: " Honnappa Nagarahalli
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 2/4] test/rcu_qsbr: add API and functional tests Honnappa Nagarahalli
2019-05-03 14:31 ` David Marchand
2019-05-06 23:16 ` Honnappa Nagarahalli
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 3/4] doc/rcu: add lib_rcu documentation Honnappa Nagarahalli
2019-05-01 11:37 ` Mcnamara, John
2019-05-01 21:20 ` Honnappa Nagarahalli
2019-05-01 21:32 ` Thomas Monjalon
2019-05-01 3:54 ` [dpdk-dev] [PATCH v9 4/4] doc: added RCU to the release notes Honnappa Nagarahalli
2019-05-01 11:31 ` Mcnamara, John
2019-05-01 12:15 ` [dpdk-dev] [PATCH v9 0/4] lib/rcu: add RCU library supporting QSBR mechanism Neil Horman
2019-05-01 14:56 ` Honnappa Nagarahalli
2019-05-01 18:05 ` Neil Horman
2019-05-01 21:18 ` Honnappa Nagarahalli
2019-05-01 23:36 ` Thomas Monjalon