DPDK patches and discussions
* [RFC 0/5] Lcore variables
@ 2024-02-08 18:16 Mattias Rönnblom
  2024-02-08 18:16 ` [RFC 1/5] eal: add static per-lcore memory allocation facility Mattias Rönnblom
                   ` (4 more replies)
  0 siblings, 5 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-08 18:16 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

This RFC presents a new API <rte_lcore_var.h> for static per-lcore id
data allocation.

Please refer to the <rte_lcore_var.h> API documentation for both a
rationale for this new API, and a comparison to the alternatives
available.

The adoption of this API would affect many different DPDK modules, but
the author updated only a few, mostly to serve as examples in this
RFC, and to iron out some, but surely not all, wrinkles in the API.

The question of how to best allocate static per-lcore memory has come
up several times on the dev mailing list, for example in the thread on
the "random: use per lcore state" RFC by Stephen Hemminger.

Lcore variables are surely not the answer to all your per-lcore-data
needs, since they only allow for more-or-less static allocation. In
the author's opinion, they do however provide a reasonably simple,
clean, and seemingly very performant solution to a real problem.

One thing that is unclear to the author is how this API relates to a
potential future per-lcore dynamic allocator (e.g., a per-lcore heap).

Contrary to what the version.map edit suggests, this RFC is not meant
as a proposal for DPDK 24.03.

Mattias Rönnblom (5):
  eal: add static per-lcore memory allocation facility
  eal: add lcore variable test suite
  random: keep PRNG state in lcore variable
  power: keep per-lcore state in lcore variable
  service: keep per-lcore state in lcore variable

 app/test/meson.build                  |   1 +
 app/test/test_lcore_var.c             | 384 ++++++++++++++++++++++++++
 config/rte_config.h                   |   1 +
 doc/api/doxy-api-index.md             |   1 +
 lib/eal/common/eal_common_lcore_var.c |  80 ++++++
 lib/eal/common/meson.build            |   1 +
 lib/eal/common/rte_random.c           |  30 +-
 lib/eal/common/rte_service.c          | 119 ++++----
 lib/eal/include/meson.build           |   1 +
 lib/eal/include/rte_lcore_var.h       | 352 +++++++++++++++++++++++
 lib/eal/version.map                   |   4 +
 lib/power/rte_power_pmd_mgmt.c        |  27 +-
 12 files changed, 925 insertions(+), 76 deletions(-)
 create mode 100644 app/test/test_lcore_var.c
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

-- 
2.34.1



* [RFC 1/5] eal: add static per-lcore memory allocation facility
  2024-02-08 18:16 [RFC 0/5] Lcore variables Mattias Rönnblom
@ 2024-02-08 18:16 ` Mattias Rönnblom
  2024-02-09  8:25   ` Morten Brørup
  2024-02-19  9:40   ` [RFC v2 0/5] Lcore variables Mattias Rönnblom
  2024-02-08 18:16 ` [RFC 2/5] eal: add lcore variable test suite Mattias Rönnblom
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-08 18:16 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Introduce DPDK per-lcore id variables, or lcore variables for short.

An lcore variable has one value for every current and future lcore
id-equipped thread.

The primary <rte_lcore_var.h> use case is for statically allocating
small chunks of often-used data, which is logically related, but where
there are performance benefits to reap from keeping updates local to
an lcore.

Lcore variables are similar to thread-local storage (TLS, e.g., C11
_Thread_local), but decouple the values' lifetime from that of the
threads.

Lcore variables are also similar in terms of functionality to the
FreeBSD kernel's DPCPU_*() family of macros and the associated
build-time machinery. DPCPU uses linker scripts, which effectively
prevents the reuse of its otherwise seemingly viable approach.

The currently-prevailing way to solve the same problem as lcore
variables is to keep a module's per-lcore data as an
RTE_MAX_LCORE-sized array of cache-aligned, RTE_CACHE_GUARDed
structs. The benefit of lcore variables over this approach is that
data related to the same lcore is now close (spatially, in memory),
rather than data used by the same module, which in turn avoids
excessive use of padding, polluting caches with unused data.
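
For illustration, here is a minimal sketch of the same hypothetical
module written in both styles (all names are invented for this
example, and only one variant would exist at a time):

    #include <stdint.h>

    #include <rte_common.h>
    #include <rte_lcore.h>
    #include <rte_lcore_var.h>

    /* Prevailing pattern: a padded, cache-aligned entry per lcore. */
    struct module_state {
            uint64_t counter;
            RTE_CACHE_GUARD;
    } __rte_cache_aligned;
    static struct module_state module_states[RTE_MAX_LCORE];

    static void
    module_count_array(void)
    {
            module_states[rte_lcore_id()].counter++;
    }

    /* Lcore variable equivalent: no padding or alignment needed. */
    struct module_lcore_state {
            uint64_t counter;
    };
    static RTE_LCORE_VAR_HANDLE(struct module_lcore_state, module_state_var);
    RTE_LCORE_VAR_INIT(module_state_var);

    static void
    module_count_lvar(void)
    {
            RTE_LCORE_VAR_PTR(module_state_var)->counter++;
    }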

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 config/rte_config.h                   |   1 +
 doc/api/doxy-api-index.md             |   1 +
 lib/eal/common/eal_common_lcore_var.c |  80 ++++++
 lib/eal/common/meson.build            |   1 +
 lib/eal/include/meson.build           |   1 +
 lib/eal/include/rte_lcore_var.h       | 352 ++++++++++++++++++++++++++
 lib/eal/version.map                   |   4 +
 7 files changed, 440 insertions(+)
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

diff --git a/config/rte_config.h b/config/rte_config.h
index da265d7dd2..884482e473 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -30,6 +30,7 @@
 /* EAL defines */
 #define RTE_CACHE_GUARD_LINES 1
 #define RTE_MAX_HEAPS 32
+#define RTE_MAX_LCORE_VAR 1048576
 #define RTE_MAX_MEMSEG_LISTS 128
 #define RTE_MAX_MEMSEG_PER_LIST 8192
 #define RTE_MAX_MEM_MB_PER_LIST 32768
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index a6a768bd7c..bb06bb7ca1 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -98,6 +98,7 @@ The public API headers are grouped by topics:
   [interrupts](@ref rte_interrupts.h),
   [launch](@ref rte_launch.h),
   [lcore](@ref rte_lcore.h),
+  [lcore-variable](@ref rte_lcore_var.h),
   [per-lcore](@ref rte_per_lcore.h),
   [service cores](@ref rte_service.h),
   [keepalive](@ref rte_keepalive.h),
diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
new file mode 100644
index 0000000000..5276fe7192
--- /dev/null
+++ b/lib/eal/common/eal_common_lcore_var.c
@@ -0,0 +1,80 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+/* XXX: should this file be called eal_common_ldata.c or rte_ldata.c? */
+
+#include <inttypes.h>
+
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_log.h>
+
+#include <rte_lcore_var.h>
+
+#include "eal_private.h"
+
+#define WARN_THRESHOLD 75
+#define MAX_AUTO_ALIGNMENT 16U
+
+/*
+ * Avoid using offset zero, since it would result in a NULL-value
+ * "handle" (offset) pointer, which in principle and per the API
+ * definition shouldn't be an issue, but may confuse some tools and
+ * users.
+ */
+#define INITIAL_OFFSET MAX_AUTO_ALIGNMENT
+
+char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR] __rte_cache_aligned;
+
+static uintptr_t allocated = INITIAL_OFFSET;
+
+static void
+verify_allocation(uintptr_t new_allocated)
+{
+	static bool has_warned;
+
+	RTE_VERIFY(new_allocated < RTE_MAX_LCORE_VAR);
+
+	if (new_allocated > (WARN_THRESHOLD * RTE_MAX_LCORE_VAR) / 100 &&
+	    !has_warned) {
+		EAL_LOG(WARNING, "Per-lcore data usage has exceeded %d%% "
+			"of the maximum capacity (%d bytes)", WARN_THRESHOLD,
+			RTE_MAX_LCORE_VAR);
+		has_warned = true;
+	}
+}
+
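+/*
+ * A simple bump allocator: offsets into the per-lcore buffers are
+ * handed out in increasing order and are never freed or reused.
+ */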
+static void *
+lcore_var_alloc(size_t size, size_t alignment)
+{
+	uintptr_t new_allocated = RTE_ALIGN_CEIL(allocated, alignment);
+
+	void *offset = (void *)new_allocated;
+
+	new_allocated += size;
+
+	verify_allocation(new_allocated);
+
+	allocated = new_allocated;
+
+	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
+		"%"PRIuPTR"-byte alignment", size, alignment);
+
+	return offset;
+}
+
+void *
+rte_lcore_var_alloc(size_t size)
+{
+	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
+
+	/* Allocations are naturally aligned (i.e., have the same
+	 * alignment as the object size), up to a maximum of 16 bytes,
+	 * which should satisfy the alignment requirements of any kind of
+	 * object.
+	 */
+	size_t alignment = RTE_MIN(size, MAX_AUTO_ALIGNMENT);
+
+	return lcore_var_alloc(size, alignment);
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 22a626ba6f..d41403680b 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -18,6 +18,7 @@ sources += files(
         'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
+        'eal_common_lcore_var.c',
         'eal_common_mcfg.c',
         'eal_common_memalloc.c',
         'eal_common_memory.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index e94b056d46..9449253e23 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -27,6 +27,7 @@ headers += files(
         'rte_keepalive.h',
         'rte_launch.h',
         'rte_lcore.h',
+        'rte_lcore_var.h',
         'rte_lock_annotations.h',
         'rte_malloc.h',
         'rte_mcslock.h',
diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
new file mode 100644
index 0000000000..c1854dc6a4
--- /dev/null
+++ b/lib/eal/include/rte_lcore_var.h
@@ -0,0 +1,352 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#ifndef _RTE_LCORE_VAR_H_
+#define _RTE_LCORE_VAR_H_
+
+/**
+ * @file
+ *
+ * RTE Per-lcore id variables
+ *
+ * This API provides a mechanism to create and access per-lcore id
+ * variables in a space- and cycle-efficient manner.
+ *
+ * A per-lcore id variable (or lcore variable for short) has one value
+ * for each EAL thread and registered non-EAL thread. In other words,
+ * there's one copy of its value for each and every current and future
+ * lcore id-equipped thread, with the total number of copies amounting
+ * to \c RTE_MAX_LCORE.
+ *
+ * In order to access the values of an lcore variable, a handle is
+ * used. The type of the handle is a pointer to the value's type
+ * (e.g., for a \c uint32_t lcore variable, the handle is a
+ * <code>uint32_t *</code>). A handle may be passed between modules and
+ * threads just like any pointer, but its value is not the address of
+ * any particular object, but rather just an opaque identifier, stored
+ * in a typed pointer (to inform the access macros of the values' type).
+ *
+ * @b Creation
+ *
+ * An lcore variable is created in two steps:
+ *  1. Define an lcore variable handle by using \ref RTE_LCORE_VAR_HANDLE.
+ *  2. Allocate lcore variable storage and initialize the handle with
+ *     a unique identifier by \ref RTE_LCORE_VAR_ALLOC or
+ *     \ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the time
+ *     of module initialization, but may be done at any time.
+ *
+ * An lcore variable is not tied to the owning thread's lifetime. It's
+ * available for use by any thread immediately after having been
+ * allocated, and continues to be available throughout the lifetime of
+ * the EAL.
+ *
+ * Lcore variables cannot and need not be freed.
+ *
+ * @b Access
+ *
+ * The value of any lcore variable for any lcore id may be accessed
+ * from any thread (including unregistered threads), but it should
+ * generally only be *frequently* read from or written to by the owner.
+ *
+ * Values of the same lcore variable but owned by two different lcore
+ * ids *may* be frequently read or written by their owners without the
+ * risk of false sharing.
+ *
+ * An appropriate synchronization mechanism (e.g., atomics) should be
+ * employed to assure there are no data races between the owning
+ * thread and any non-owner threads accessing the same lcore variable
+ * instance.
+ *
+ * The value of the lcore variable for a particular lcore id may be
+ * retrieved with \ref RTE_LCORE_VAR_LCORE_GET. To get a pointer to the
+ * same object, use \ref RTE_LCORE_VAR_LCORE_PTR.
+ *
+ * To modify the value of an lcore variable for a particular lcore id,
+ * either access the object through the pointer retrieved by \ref
+ * RTE_LCORE_VAR_LCORE_PTR or, for primitive types, use \ref
+ * RTE_LCORE_VAR_LCORE_SET.
+ *
+ * The access macros each have a short-hand which may be used by an EAL
+ * thread or registered non-EAL thread to access the lcore variable
+ * instance of its own lcore id. Those are \ref RTE_LCORE_VAR_GET,
+ * \ref RTE_LCORE_VAR_PTR, and \ref RTE_LCORE_VAR_SET.
+ *
+ * Although the handle (as defined by \ref RTE_LCORE_VAR_HANDLE) is a
+ * pointer with the same type as the value, it may not be directly
+ * dereferenced and must be treated as an opaque identifier. The
+ * *identifier* value is common across all lcore ids.
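+ *
+ * A minimal access sketch, using a hypothetical \c uint64_t lcore
+ * variable with the handle \c my_counter (allocation via \ref
+ * RTE_LCORE_VAR_ALLOC or \ref RTE_LCORE_VAR_INIT assumed):
+ *
+ * \code{.c}
+ * static RTE_LCORE_VAR_HANDLE(uint64_t, my_counter);
+ *
+ * // The owner updating its own instance:
+ * RTE_LCORE_VAR_SET(my_counter, RTE_LCORE_VAR_GET(my_counter) + 1);
+ *
+ * // Any thread reading the instance of lcore id 3:
+ * uint64_t value = RTE_LCORE_VAR_LCORE_GET(3, my_counter);
+ * \endcode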
+ *
+ * @b Storage
+ *
+ * An lcore variable's values may be of a primitive type like \c int,
+ * but would more typically be a \c struct. An application may choose
+ * to define an lcore variable, which it then goes on to never
+ * allocate.
+ *
+ * The lcore variable handle introduces a per-variable (not
+ * per-value/per-lcore id) overhead of \c sizeof(void *) bytes, so
+ * there are some memory footprint gains to be made by organizing all
+ * per-lcore id data for a particular module as one lcore variable
+ * (e.g., as a struct).
+ *
+ * The sum of all lcore variables, plus any padding required, must be
+ * less than the DPDK build-time constant \c RTE_MAX_LCORE_VAR. A
+ * violation of this maximum results in the process being terminated.
+ *
+ * It's reasonable to expect that \c RTE_MAX_LCORE_VAR is on the
+ * same order of magnitude in size as a thread stack.
+ *
+ * The lcore variable storage buffers are kept in the BSS section in
+ * the resulting binary, where data generally isn't mapped in until
+ * it's accessed. This means that unused portions of the lcore
+ * variable storage area will not occupy any physical memory (with a
+ * granularity of the memory page size [usually 4 kB]).
+ *
+ * Lcore variables should generally *not* be \ref __rte_cache_aligned
+ * and need *not* include a \ref RTE_CACHE_GUARD field, since these
+ * constructs are designed to avoid false sharing. In the
+ * case of an lcore variable instance, all nearby data structures
+ * should almost-always be written to by a single thread (the lcore
+ * variable owner). Adding padding will increase the effective memory
+ * working set size, potentially reducing performance.
+ *
+ * @b Example
+ *
+ * Below is an example of the use of an lcore variable:
+ *
+ * \code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ * };
+ *
+ * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
+ *
+ * long foo_get_a_plus_b(void)
+ * {
+ *         struct foo_lcore_state *state = RTE_LCORE_VAR_PTR(lcore_states);
+ *
+ *         return state->a + state->b;
+ * }
+ *
+ * RTE_INIT(rte_foo_init)
+ * {
+ *         RTE_LCORE_VAR_ALLOC(lcore_states);
+ *
+ *         struct foo_lcore_state *state;
+ *         RTE_LCORE_VAR_FOREACH(state, lcore_states) {
+ *                 (initialize 'state')
+ *         }
+ *
+ *         (other initialization)
+ * }
+ * \endcode
+ *
+ *
+ * @b Alternatives
+ *
+ * Lcore variables are designed to replace a pattern exemplified below:
+ * \code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ *         RTE_CACHE_GUARD;
+ * } __rte_cache_aligned;
+ *
+ * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
+ * \endcode
+ *
+ * This scheme is simple and effective, but has one drawback: the data
+ * is organized so that objects related to all lcores for a particular
+ * module are kept close in memory. At a bare minimum, this forces the
+ * use of cache-line alignment to avoid false sharing. With CPU
+ * hardware prefetching and memory loads resulting from speculative
+ * execution (functions which seemingly are getting more eager faster
+ * than they are getting more intelligent), one or more "guard" cache
+ * lines may be required to separate one lcore's data from another's.
+ *
+ * Lcore variables have the upside of working with, not against, the
+ * CPU's assumptions, and, for example, next-line prefetchers may well
+ * work the way their designers intended (i.e., to the benefit, not
+ * detriment, of system performance).
+ *
+ * Another alternative to \ref rte_lcore_var.h is the \ref
+ * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
+ * e.g., GCC __thread or C11 _Thread_local). The main differences
+ * between using the various forms of TLS (e.g., \ref
+ * RTE_DEFINE_PER_LCORE or _Thread_local) and using lcore
+ * variables are:
+ *
+ *   * The existence and non-existence of a thread-local variable
+ *     instance follow that of the particular thread. The data cannot be
+ *     accessed before the thread has been created, nor after it has
+ *     exited. One effect of this is that thread-local variables must be
+ *     initialized in a "lazy" manner (e.g., at the point of thread
+ *     creation). Lcore variables may be accessed immediately after
+ *     having been allocated (which is usually prior to any thread beyond
+ *     the main thread running).
+ *   * A thread-local variable is duplicated across all threads in the
+ *     process, including unregistered non-EAL threads (i.e.,
+ *     "regular" threads). For DPDK applications heavily relying on
+ *     multi-threading (in conjunction with DPDK's "one thread per core"
+ *     pattern), either by having many concurrent threads or
+ *     creating/destroying threads at a high rate, an excessive use of
+ *     thread-local variables may cause inefficiencies (e.g.,
+ *     increased thread creation overhead due to thread-local storage
+ *     initialization or an increased total RAM footprint). Lcore
+ *     variables *only* exist for threads with an lcore id, and thus
+ *     not for such "regular" threads.
+ *   * Whether data in thread-local storage may be shared between threads
+ *     (i.e., whether a pointer to a thread-local variable can be passed
+ *     to and successfully dereferenced by a non-owning thread) depends on
+ *     the details of the TLS implementation. With GCC __thread and
+ *     GCC _Thread_local, such data sharing is supported. In the C11
+ *     standard, the result of accessing another thread's
+ *     _Thread_local object is implementation-defined. Lcore variable
+ *     instances may be accessed reliably by any thread.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stddef.h>
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_lcore.h>
+
+/**
+ * Given the lcore variable type, produces the type of the lcore
+ * variable handle.
+ */
+#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
+	type *
+
+/**
+ * Define an lcore variable handle.
+ *
+ * This macro defines a variable which is used as a handle to access
+ * the various per-lcore id instances of a per-lcore id variable.
+ *
+ * The aim of this macro is to make it clear at the point of
+ * declaration that this is an lcore handle, rather than a regular
+ * pointer.
+ *
+ * Add @b static as a prefix in case the lcore variable is only to be
+ * accessed from a particular translation unit.
+ */
+#define RTE_LCORE_VAR_HANDLE(type, name)	\
+	RTE_LCORE_VAR_HANDLE_TYPE(type) name
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle.
+ */
+#define RTE_LCORE_VAR_ALLOC_SZ(name, size)	\
+	name = rte_lcore_var_alloc(size)
+
+/**
+ * Allocate space for an lcore variable of the size suggested by the
+ * handle pointer type, and initialize its handle.
+ */
+#define RTE_LCORE_VAR_ALLOC(name)			\
+	RTE_LCORE_VAR_ALLOC_SZ(name, sizeof(*(name)))
+
+/**
+ * Allocate an explicitly-sized lcore variable by means of a \ref
+ * RTE_INIT constructor.
+ */
+#define RTE_LCORE_VAR_INIT_SZ(name, size)				\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC_SZ(name, size);			\
+	}
+
+/**
+ * Allocate an lcore variable by means of a \ref RTE_INIT constructor.
+ */
+#define RTE_LCORE_VAR_INIT(name)					\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC(name);				\
+	}
+
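+/*
+ * The handle is an offset into each lcore id's buffer; adding it to
+ * the lcore's base address yields the address of that lcore's
+ * instance.
+ */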
+#define __RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)		\
+	((void *)(&rte_lcore_var[lcore_id][(uintptr_t)(name)]))
+
+/**
+ * Get pointer to lcore variable instance with the specified lcore id.
+ */
+#define RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)				\
+	((typeof(name))__RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
+
+/**
+ * Get the value of an lcore variable instance of the specified lcore id.
+ */
+#define RTE_LCORE_VAR_LCORE_GET(lcore_id, name)		\
+	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)))
+
+/**
+ * Set the value of an lcore variable instance of the specified lcore id.
+ */
+#define RTE_LCORE_VAR_LCORE_SET(lcore_id, name, value)		\
+	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)) = (value))
+
+/**
+ * Get pointer to lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_PTR(name) RTE_LCORE_VAR_LCORE_PTR(rte_lcore_id(), name)
+
+/**
+ * Get value of lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_GET(name) RTE_LCORE_VAR_LCORE_GET(rte_lcore_id(), name)
+
+/**
+ * Set value of lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_SET(name, value) \
+	RTE_LCORE_VAR_LCORE_SET(rte_lcore_id(), name, value)
+
+/**
+ * Iterate over each lcore id's value for an lcore variable.
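+ *
+ * A usage sketch, zeroing every instance of a hypothetical,
+ * already-allocated \c uint64_t lcore variable \c my_counter:
+ *
+ * \code{.c}
+ * uint64_t *v;
+ * RTE_LCORE_VAR_FOREACH(v, my_counter)
+ *         *v = 0;
+ * \endcode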
+ */
+#define RTE_LCORE_VAR_FOREACH(var, name)				\
+	for (unsigned int lcore_id =					\
+		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);	\
+	     lcore_id < RTE_MAX_LCORE;					\
+	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
+
+extern char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR];
+
+/**
+ * Allocate space in the per-lcore id buffers for an lcore variable.
+ *
+ * The pointer returned is only an opaque identifier of the variable. To
+ * get an actual pointer to a particular instance of the variable, use
+ * \ref RTE_LCORE_VAR_PTR or \ref RTE_LCORE_VAR_LCORE_PTR.
+ *
+ * The allocation is always successful, barring a fatal exhaustion of
+ * the per-lcore id buffer space.
+ *
+ * @return
+ *   The id of the variable, stored in a void pointer value.
+ */
+__rte_experimental
+void *
+rte_lcore_var_alloc(size_t size);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LCORE_VAR_H_ */
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 5e0cd47c82..e90b86115a 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -393,6 +393,10 @@ EXPERIMENTAL {
 	# added in 23.07
 	rte_memzone_max_get;
 	rte_memzone_max_set;
+
+	# added in 24.03
+	rte_lcore_var_alloc;
+	rte_lcore_var;
 };
 
 INTERNAL {
-- 
2.34.1



* [RFC 2/5] eal: add lcore variable test suite
  2024-02-08 18:16 [RFC 0/5] Lcore variables Mattias Rönnblom
  2024-02-08 18:16 ` [RFC 1/5] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-02-08 18:16 ` Mattias Rönnblom
  2024-02-08 18:16 ` [RFC 3/5] random: keep PRNG state in lcore variable Mattias Rönnblom
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-08 18:16 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 app/test/meson.build      |   1 +
 app/test/test_lcore_var.c | 384 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 385 insertions(+)
 create mode 100644 app/test/test_lcore_var.c

diff --git a/app/test/meson.build b/app/test/meson.build
index 6389ae83ee..93412cce51 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -101,6 +101,7 @@ source_file_deps = {
     'test_ipsec_sad.c': ['ipsec'],
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
+    'test_lcore_var.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var.c b/app/test/test_lcore_var.c
new file mode 100644
index 0000000000..0229f90bf2
--- /dev/null
+++ b/app/test/test_lcore_var.c
@@ -0,0 +1,384 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_launch.h>
+#include <rte_lcore_var.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+#define MIN_LCORES 2
+
+RTE_LCORE_VAR_HANDLE(int, test_int);
+
+struct int_checker_state {
+	int old_value;
+	int new_value;
+	bool success;
+};
+
+static bool
+rand_bool(void)
+{
+	return rte_rand() & 1;
+}
+
+static void
+rand_blk(void *blk, size_t size)
+{
+	size_t i;
+
+	for (i = 0; i < size; i++)
+		((unsigned char *)blk)[i] = (unsigned char)rte_rand();
+}
+
+static int
+check_int(void *arg)
+{
+	struct int_checker_state *state = arg;
+
+	int *ptr = RTE_LCORE_VAR_PTR(test_int);
+
+	bool naturally_aligned = RTE_PTR_ALIGN_CEIL(ptr, sizeof(int)) == ptr;
+
+	bool equal;
+
+	if (rand_bool())
+		equal = RTE_LCORE_VAR_GET(test_int) == state->old_value;
+	else
+		equal = *(RTE_LCORE_VAR_PTR(test_int)) == state->old_value;
+
+	state->success = equal && naturally_aligned;
+
+	if (rand_bool())
+		RTE_LCORE_VAR_SET(test_int, state->new_value);
+	else
+		*ptr = state->new_value;
+
+	return 0;
+}
+
+RTE_LCORE_VAR_INIT(test_int);
+
+static int
+test_int_lvar(void)
+{
+	unsigned int lcore_id;
+
+	struct int_checker_state states[RTE_MAX_LCORE] = {};
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+
+		state->old_value = (int)rte_rand();
+		state->new_value = (int)rte_rand();
+
+		RTE_LCORE_VAR_LCORE_SET(lcore_id, test_int, state->old_value);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_int, &states[lcore_id], lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+
+		TEST_ASSERT(state->success, "Unexpected value "
+			    "encountered on lcore %d", lcore_id);
+
+		TEST_ASSERT_EQUAL(state->new_value,
+				  RTE_LCORE_VAR_LCORE_GET(lcore_id, test_int),
+				  "Lcore %d failed to update int", lcore_id);
+	}
+
+	/* take the opportunity to test the foreach macro */
+	int *v;
+	lcore_id = 0;
+	RTE_LCORE_VAR_FOREACH(v, test_int) {
+		printf("expected %d actual %d\n",
+		       states[lcore_id].new_value, *v);
+		TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
+				  "Unexpected value on lcore %d during "
+				  "iteration", lcore_id);
+		lcore_id++;
+	}
+
+	return TEST_SUCCESS;
+}
+
+/* private, larger, struct */
+#define TEST_STRUCT_DATA_SIZE 1234
+
+struct test_struct {
+	uint8_t data[TEST_STRUCT_DATA_SIZE];
+};
+
+static RTE_LCORE_VAR_HANDLE(char, before_struct);
+static RTE_LCORE_VAR_HANDLE(struct test_struct, test_struct);
+static RTE_LCORE_VAR_HANDLE(char, after_struct);
+
+struct struct_checker_state {
+	struct test_struct old_value;
+	struct test_struct new_value;
+	bool success;
+};
+
+static int check_struct(void *arg)
+{
+	struct struct_checker_state *state = arg;
+
+	struct test_struct *lcore_struct = RTE_LCORE_VAR_PTR(test_struct);
+
+	/*
+	 * Lcore variable alignment is based on object size, not any
+	 * particular requirements on the struct's field.
+	 */
+	bool properly_aligned =
+		RTE_PTR_ALIGN_CEIL(lcore_struct, 16) == lcore_struct;
+
+	bool equal = memcmp(lcore_struct->data, state->old_value.data,
+			    TEST_STRUCT_DATA_SIZE) == 0;
+
+	state->success = equal && properly_aligned;
+
+	memcpy(lcore_struct->data, state->new_value.data,
+	       TEST_STRUCT_DATA_SIZE);
+
+	return 0;
+}
+
+static int
+test_struct_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_struct);
+	RTE_LCORE_VAR_ALLOC(test_struct);
+	RTE_LCORE_VAR_ALLOC(after_struct);
+
+	struct struct_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+
+		rand_blk(state->old_value.data, TEST_STRUCT_DATA_SIZE);
+		rand_blk(state->new_value.data, TEST_STRUCT_DATA_SIZE);
+
+		memcpy(RTE_LCORE_VAR_LCORE_PTR(lcore_id, test_struct)->data,
+		       state->old_value.data, TEST_STRUCT_DATA_SIZE);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_struct, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+		struct test_struct *lstruct =
+			RTE_LCORE_VAR_LCORE_PTR(lcore_id, test_struct);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = memcmp(lstruct->data, state->new_value.data,
+				    TEST_STRUCT_DATA_SIZE) == 0;
+
+		TEST_ASSERT(equal, "Lcore %d failed to update struct",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before = RTE_LCORE_VAR_LCORE_GET(lcore_id, before_struct);
+		char after = RTE_LCORE_VAR_LCORE_GET(lcore_id, after_struct);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "struct was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "struct was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define TEST_ARRAY_SIZE 99
+
+typedef uint16_t test_array_t[TEST_ARRAY_SIZE];
+
+static void test_array_init_rand(test_array_t a)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		a[i] = (uint16_t)rte_rand();
+}
+
+static bool test_array_equal(test_array_t a, test_array_t b)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++) {
+		if (a[i] != b[i])
+			return false;
+	}
+	return true;
+}
+
+static void test_array_copy(test_array_t dst, const test_array_t src)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		dst[i] = src[i];
+}
+
+static RTE_LCORE_VAR_HANDLE(char, before_array);
+static RTE_LCORE_VAR_HANDLE(test_array_t, test_array);
+static RTE_LCORE_VAR_HANDLE(char, after_array);
+
+struct array_checker_state {
+	test_array_t old_value;
+	test_array_t new_value;
+	bool success;
+};
+
+static int check_array(void *arg)
+{
+	struct array_checker_state *state = arg;
+
+	test_array_t *lcore_array = RTE_LCORE_VAR_PTR(test_array);
+
+	/*
+	 * Lcore variable alignment is based on object size, not any
+	 * particular requirements on the struct's field.
+	 */
+	bool properly_aligned =
+		RTE_PTR_ALIGN_CEIL(lcore_array, 16) == lcore_array;
+
+	bool equal = test_array_equal(*lcore_array, state->old_value);
+
+	state->success = equal && properly_aligned;
+
+	test_array_copy(*lcore_array, state->new_value);
+
+	return 0;
+}
+
+static int
+test_array_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_array);
+	RTE_LCORE_VAR_ALLOC(test_array);
+	RTE_LCORE_VAR_ALLOC(after_array);
+
+	struct array_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+
+		test_array_init_rand(state->new_value);
+		test_array_init_rand(state->old_value);
+
+		test_array_copy(RTE_LCORE_VAR_LCORE_GET(lcore_id, test_array),
+				state->old_value);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_array, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+		test_array_t *larray =
+			RTE_LCORE_VAR_LCORE_PTR(lcore_id, test_array);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = test_array_equal(*larray, state->new_value);
+
+		TEST_ASSERT(equal, "Lcore %d failed to update array",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before = RTE_LCORE_VAR_LCORE_GET(lcore_id, before_array);
+		char after = RTE_LCORE_VAR_LCORE_GET(lcore_id, after_array);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "array was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "array was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define MANY_LVARS (RTE_MAX_LCORE_VAR / 2)
+
+static int
+test_many_lvars(void)
+{
+	void **handlers = malloc(sizeof(void *) * MANY_LVARS);
+	int i;
+
+	TEST_ASSERT(handlers != NULL, "Unable to allocate memory");
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		void *handle = rte_lcore_var_alloc(1);
+
+		uint8_t *b = __RTE_LCORE_VAR_LCORE_PTR(rte_lcore_id(), handle);
+
+		*b = (uint8_t)i;
+
+		handlers[i] = handle;
+	}
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		RTE_LCORE_FOREACH_WORKER(lcore_id) {
+			uint8_t *b = __RTE_LCORE_VAR_LCORE_PTR(rte_lcore_id(),
+							       handlers[i]);
+			TEST_ASSERT_EQUAL((uint8_t)i, *b,
+					  "Unexpected lcore variable value.");
+		}
+	}
+
+	free(handlers);
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_int_lvar),
+		TEST_CASE(test_struct_lvar),
+		TEST_CASE(test_array_lvar),
+		TEST_CASE(test_many_lvars),
+		TEST_CASES_END()
+	},
+};
+
+static int test_lcore_var(void)
+{
+	if (rte_lcore_count() < MIN_LCORES) {
+		printf("Not enough cores for lcore_var_autotest; expecting at "
+		       "least %d.\n", MIN_LCORES);
+		return TEST_SKIPPED;
+	}
+
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
+REGISTER_FAST_TEST(lcore_var_autotest, true, false, test_lcore_var);
-- 
2.34.1



* [RFC 3/5] random: keep PRNG state in lcore variable
  2024-02-08 18:16 [RFC 0/5] Lcore variables Mattias Rönnblom
  2024-02-08 18:16 ` [RFC 1/5] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-02-08 18:16 ` [RFC 2/5] eal: add lcore variable test suite Mattias Rönnblom
@ 2024-02-08 18:16 ` Mattias Rönnblom
  2024-02-08 18:16 ` [RFC 4/5] power: keep per-lcore " Mattias Rönnblom
  2024-02-08 18:16 ` [RFC 5/5] service: " Mattias Rönnblom
  4 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-08 18:16 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Move the PRNG state from a RTE_MAX_LCORE-sized static array of
cache-aligned and RTE_CACHE_GUARDed struct instances to a more
cache-friendly lcore variable.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 lib/eal/common/rte_random.c | 30 ++++++++++++++++++------------
 1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index 7709b8f2c6..af9fffd81b 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -11,6 +11,7 @@
 #include <rte_branch_prediction.h>
 #include <rte_cycles.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_random.h>
 
 struct rte_rand_state {
@@ -19,14 +20,12 @@ struct rte_rand_state {
 	uint64_t z3;
 	uint64_t z4;
 	uint64_t z5;
-	RTE_CACHE_GUARD;
-} __rte_cache_aligned;
+};
 
-/* One instance each for every lcore id-equipped thread, and one
- * additional instance to be shared by all others threads (i.e., all
- * unregistered non-EAL threads).
- */
-static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];
+RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);
+
+/* instance to be shared by all unregistered non-EAL threads */
+static struct rte_rand_state unregistered_rand_state __rte_cache_aligned;
 
 static uint32_t
 __rte_rand_lcg32(uint32_t *seed)
@@ -85,8 +84,14 @@ rte_srand(uint64_t seed)
 	unsigned int lcore_id;
 
 	/* add lcore_id to seed to avoid having the same sequence */
-	for (lcore_id = 0; lcore_id < RTE_DIM(rand_states); lcore_id++)
-		__rte_srand_lfsr258(seed + lcore_id, &rand_states[lcore_id]);
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		struct rte_rand_state *lcore_state =
+			RTE_LCORE_VAR_LCORE_PTR(lcore_id, rand_state);
+
+		__rte_srand_lfsr258(seed + lcore_id, lcore_state);
+	}
+
+	__rte_srand_lfsr258(seed + lcore_id, &unregistered_rand_state);
 }
 
 static __rte_always_inline uint64_t
@@ -124,11 +129,10 @@ struct rte_rand_state *__rte_rand_get_state(void)
 
 	idx = rte_lcore_id();
 
-	/* last instance reserved for unregistered non-EAL threads */
 	if (unlikely(idx == LCORE_ID_ANY))
-		idx = RTE_MAX_LCORE;
+		return &unregistered_rand_state;
 
-	return &rand_states[idx];
+	return RTE_LCORE_VAR_PTR(rand_state);
 }
 
 uint64_t
@@ -228,6 +232,8 @@ RTE_INIT(rte_rand_init)
 {
 	uint64_t seed;
 
+	RTE_LCORE_VAR_ALLOC(rand_state);
+
 	seed = __rte_random_initial_seed();
 
 	rte_srand(seed);
-- 
2.34.1



* [RFC 4/5] power: keep per-lcore state in lcore variable
  2024-02-08 18:16 [RFC 0/5] Lcore variables Mattias Rönnblom
                   ` (2 preceding siblings ...)
  2024-02-08 18:16 ` [RFC 3/5] random: keep PRNG state in lcore variable Mattias Rönnblom
@ 2024-02-08 18:16 ` Mattias Rönnblom
  2024-02-08 18:16 ` [RFC 5/5] service: " Mattias Rönnblom
  4 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-08 18:16 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 lib/power/rte_power_pmd_mgmt.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index 591fc69f36..bb20e564de 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -5,6 +5,7 @@
 #include <stdlib.h>
 
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_cycles.h>
 #include <rte_cpuflags.h>
 #include <rte_malloc.h>
@@ -68,8 +69,8 @@ struct pmd_core_cfg {
 	/**< Number of queues ready to enter power optimized state */
 	uint64_t sleep_target;
 	/**< Prevent a queue from triggering sleep multiple times */
-} __rte_cache_aligned;
-static struct pmd_core_cfg lcore_cfgs[RTE_MAX_LCORE];
+};
+static RTE_LCORE_VAR_HANDLE(struct pmd_core_cfg, lcore_cfgs);
 
 static inline bool
 queue_equal(const union queue *l, const union queue *r)
@@ -252,12 +253,11 @@ clb_multiwait(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_PTR(lcore_cfgs);
 
 	/* early exit */
 	if (likely(!empty))
@@ -317,13 +317,12 @@ clb_pause(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 	uint32_t pause_duration = rte_power_pmd_mgmt_get_pause_duration();
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_PTR(lcore_cfgs);
 
 	if (likely(!empty))
 		/* early exit */
@@ -358,9 +357,8 @@ clb_scale_freq(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	const bool empty = nb_rx == 0;
-	struct pmd_core_cfg *lcore_conf = &lcore_cfgs[lcore];
+	struct pmd_core_cfg *lcore_conf = RTE_LCORE_VAR_PTR(lcore_cfgs);
 	struct queue_list_entry *queue_conf = arg;
 
 	if (likely(!empty)) {
@@ -518,7 +516,7 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 		goto end;
 	}
 
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_PTR(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -619,7 +617,7 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 	}
 
 	/* no need to check queue id as wrong queue id would not be enabled */
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_PTR(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -772,10 +770,13 @@ RTE_INIT(rte_power_ethdev_pmgmt_init) {
 	size_t i;
 	int j;
 
+	RTE_LCORE_VAR_ALLOC(lcore_cfgs);
+
 	/* initialize all tailqs */
-	for (i = 0; i < RTE_DIM(lcore_cfgs); i++) {
-		struct pmd_core_cfg *cfg = &lcore_cfgs[i];
-		TAILQ_INIT(&cfg->head);
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		struct pmd_core_cfg *lcore_cfg =
+			RTE_LCORE_VAR_LCORE_PTR(i, lcore_cfgs);
+		TAILQ_INIT(&lcore_cfg->head);
 	}
 
 	/* initialize config defaults */
-- 
2.34.1



* [RFC 5/5] service: keep per-lcore state in lcore variable
  2024-02-08 18:16 [RFC 0/5] Lcore variables Mattias Rönnblom
                   ` (3 preceding siblings ...)
  2024-02-08 18:16 ` [RFC 4/5] power: keep per-lcore " Mattias Rönnblom
@ 2024-02-08 18:16 ` Mattias Rönnblom
  4 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-08 18:16 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 lib/eal/common/rte_service.c | 119 ++++++++++++++++++++---------------
 1 file changed, 68 insertions(+), 51 deletions(-)

diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index d959c91459..c557e80409 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -11,6 +11,7 @@
 
 #include <eal_trace_internal.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
@@ -75,7 +76,7 @@ struct core_state {
 
 static uint32_t rte_service_count;
 static struct rte_service_spec_impl *rte_services;
-static struct core_state *lcore_states;
+static RTE_LCORE_VAR_HANDLE(struct core_state, lcore_states);
 static uint32_t rte_service_library_initialized;
 
 int32_t
@@ -101,11 +102,12 @@ rte_service_init(void)
 		goto fail_mem;
 	}
 
-	lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
-			sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
-	if (!lcore_states) {
-		EAL_LOG(ERR, "error allocating core states array");
-		goto fail_mem;
+	if (lcore_states == NULL)
+		RTE_LCORE_VAR_ALLOC(lcore_states);
+	else {
+		struct core_state *cs;
+		RTE_LCORE_VAR_FOREACH(cs, lcore_states)
+			memset(cs, 0, sizeof(struct core_state));
 	}
 
 	int i;
@@ -122,7 +124,6 @@ rte_service_init(void)
 	return 0;
 fail_mem:
 	rte_free(rte_services);
-	rte_free(lcore_states);
 	return -ENOMEM;
 }
 
@@ -136,7 +137,6 @@ rte_service_finalize(void)
 	rte_eal_mp_wait_lcore();
 
 	rte_free(rte_services);
-	rte_free(lcore_states);
 
 	rte_service_library_initialized = 0;
 }
@@ -286,7 +286,6 @@ rte_service_component_register(const struct rte_service_spec *spec,
 int32_t
 rte_service_component_unregister(uint32_t id)
 {
-	uint32_t i;
 	struct rte_service_spec_impl *s;
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
 
@@ -294,9 +293,10 @@ rte_service_component_unregister(uint32_t id)
 
 	s->internal_flags &= ~(SERVICE_F_REGISTERED);
 
+	struct core_state *cs;
 	/* clear the run-bit in all cores */
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		lcore_states[i].service_mask &= ~(UINT64_C(1) << id);
+	RTE_LCORE_VAR_FOREACH(cs, lcore_states)
+		cs->service_mask &= ~(UINT64_C(1) << id);
 
 	memset(&rte_services[id], 0, sizeof(struct rte_service_spec_impl));
 
@@ -454,7 +454,10 @@ rte_service_may_be_active(uint32_t id)
 		return -EINVAL;
 
 	for (i = 0; i < lcore_count; i++) {
-		if (lcore_states[ids[i]].service_active_on_lcore[id])
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(ids[i], lcore_states);
+
+		if (cs->service_active_on_lcore[id])
 			return 1;
 	}
 
@@ -464,7 +467,7 @@ rte_service_may_be_active(uint32_t id)
 int32_t
 rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)
 {
-	struct core_state *cs = &lcore_states[rte_lcore_id()];
+	struct core_state *cs =	RTE_LCORE_VAR_PTR(lcore_states);
 	struct rte_service_spec_impl *s;
 
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
@@ -486,8 +489,7 @@ service_runner_func(void *arg)
 {
 	RTE_SET_USED(arg);
 	uint8_t i;
-	const int lcore = rte_lcore_id();
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_PTR(lcore_states);
 
 	rte_atomic_store_explicit(&cs->thread_active, 1, rte_memory_order_seq_cst);
 
@@ -533,13 +535,16 @@ service_runner_func(void *arg)
 int32_t
 rte_service_lcore_may_be_active(uint32_t lcore)
 {
-	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
+	struct core_state *cs =
+		RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+
+	if (lcore >= RTE_MAX_LCORE || !cs->is_service_core)
 		return -EINVAL;
 
 	/* Load thread_active using ACQUIRE to avoid instructions dependent on
 	 * the result being re-ordered before this load completes.
 	 */
-	return rte_atomic_load_explicit(&lcore_states[lcore].thread_active,
+	return rte_atomic_load_explicit(&cs->thread_active,
 			       rte_memory_order_acquire);
 }
 
@@ -547,9 +552,11 @@ int32_t
 rte_service_lcore_count(void)
 {
 	int32_t count = 0;
-	uint32_t i;
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		count += lcore_states[i].is_service_core;
+
+	struct core_state *cs;
+	RTE_LCORE_VAR_FOREACH(cs, lcore_states)
+		count += cs->is_service_core;
+
 	return count;
 }
 
@@ -566,7 +573,8 @@ rte_service_lcore_list(uint32_t array[], uint32_t n)
 	uint32_t i;
 	uint32_t idx = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		struct core_state *cs = &lcore_states[i];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(i, lcore_states);
 		if (cs->is_service_core) {
 			array[idx] = i;
 			idx++;
@@ -582,7 +590,7 @@ rte_service_lcore_count_services(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -634,30 +642,31 @@ rte_service_start_with_defaults(void)
 static int32_t
 service_update(uint32_t sid, uint32_t lcore, uint32_t *set, uint32_t *enabled)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+
 	/* validate ID, or return error value */
 	if (!service_valid(sid) || lcore >= RTE_MAX_LCORE ||
-			!lcore_states[lcore].is_service_core)
+			!cs->is_service_core)
 		return -EINVAL;
 
 	uint64_t sid_mask = UINT64_C(1) << sid;
 	if (set) {
-		uint64_t lcore_mapped = lcore_states[lcore].service_mask &
-			sid_mask;
+		uint64_t lcore_mapped = cs->service_mask & sid_mask;
 
 		if (*set && !lcore_mapped) {
-			lcore_states[lcore].service_mask |= sid_mask;
+			cs->service_mask |= sid_mask;
 			rte_atomic_fetch_add_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 		if (!*set && lcore_mapped) {
-			lcore_states[lcore].service_mask &= ~(sid_mask);
+			cs->service_mask &= ~(sid_mask);
 			rte_atomic_fetch_sub_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 	}
 
 	if (enabled)
-		*enabled = !!(lcore_states[lcore].service_mask & (sid_mask));
+		*enabled = !!(cs->service_mask & (sid_mask));
 
 	return 0;
 }
@@ -685,13 +694,14 @@ set_lcore_state(uint32_t lcore, int32_t state)
 {
 	/* mark core state in hugepage backed config */
 	struct rte_config *cfg = rte_eal_get_configuration();
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 	cfg->lcore_role[lcore] = state;
 
 	/* mark state in process local lcore_config */
 	lcore_config[lcore].core_role = state;
 
 	/* update per-lcore optimized state tracking */
-	lcore_states[lcore].is_service_core = (state == ROLE_SERVICE);
+	cs->is_service_core = (state == ROLE_SERVICE);
 
 	rte_eal_trace_service_lcore_state_change(lcore, state);
 }
@@ -702,14 +712,16 @@ rte_service_lcore_reset_all(void)
 	/* loop over cores, reset all to mask 0 */
 	uint32_t i;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		if (lcore_states[i].is_service_core) {
-			lcore_states[i].service_mask = 0;
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(i, lcore_states);
+		if (cs->is_service_core) {
+			cs->service_mask = 0;
 			set_lcore_state(i, ROLE_RTE);
 			/* runstate act as guard variable Use
 			 * store-release memory order here to synchronize
 			 * with load-acquire in runstate read functions.
 			 */
-			rte_atomic_store_explicit(&lcore_states[i].runstate,
+			rte_atomic_store_explicit(&cs->runstate,
 				RUNSTATE_STOPPED, rte_memory_order_release);
 		}
 	}
@@ -725,17 +737,19 @@ rte_service_lcore_add(uint32_t lcore)
 {
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
-	if (lcore_states[lcore].is_service_core)
+
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+	if (cs->is_service_core)
 		return -EALREADY;
 
 	set_lcore_state(lcore, ROLE_SERVICE);
 
 	/* ensure that after adding a core the mask and state are defaults */
-	lcore_states[lcore].service_mask = 0;
+	cs->service_mask = 0;
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	return rte_eal_wait_lcore(lcore);
@@ -747,7 +761,7 @@ rte_service_lcore_del(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -771,7 +785,7 @@ rte_service_lcore_start(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -801,6 +815,8 @@ rte_service_lcore_start(uint32_t lcore)
 int32_t
 rte_service_lcore_stop(uint32_t lcore)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
@@ -808,12 +824,11 @@ rte_service_lcore_stop(uint32_t lcore)
 	 * memory order here to synchronize with store-release
 	 * in runstate update functions.
 	 */
-	if (rte_atomic_load_explicit(&lcore_states[lcore].runstate, rte_memory_order_acquire) ==
+	if (rte_atomic_load_explicit(&cs->runstate, rte_memory_order_acquire) ==
 			RUNSTATE_STOPPED)
 		return -EALREADY;
 
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
 	uint64_t service_mask = cs->service_mask;
 
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
@@ -834,7 +849,7 @@ rte_service_lcore_stop(uint32_t lcore)
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	rte_eal_trace_service_lcore_stop(lcore);
@@ -845,7 +860,7 @@ rte_service_lcore_stop(uint32_t lcore)
 static uint64_t
 lcore_attr_get_loops(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->loops, rte_memory_order_relaxed);
 }
@@ -853,7 +868,7 @@ lcore_attr_get_loops(unsigned int lcore)
 static uint64_t
 lcore_attr_get_cycles(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->cycles, rte_memory_order_relaxed);
 }
@@ -861,7 +876,7 @@ lcore_attr_get_cycles(unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].calls,
 		rte_memory_order_relaxed);
@@ -870,7 +885,7 @@ lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_cycles(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].cycles,
 		rte_memory_order_relaxed);
@@ -886,7 +901,10 @@ attr_get(uint32_t id, lcore_attr_get_fun lcore_attr_get)
 	uint64_t sum = 0;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		if (lcore_states[lcore].is_service_core)
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+
+		if (cs->is_service_core)
 			sum += lcore_attr_get(id, lcore);
 	}
 
@@ -930,12 +948,11 @@ int32_t
 rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 			   uint64_t *attr_value)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE || !attr_value)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -960,7 +977,8 @@ rte_service_attr_reset_all(uint32_t id)
 		return -EINVAL;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		struct core_state *cs = &lcore_states[lcore];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 		cs->service_stats[id] = (struct service_stats) {};
 	}
@@ -971,12 +989,11 @@ rte_service_attr_reset_all(uint32_t id)
 int32_t
 rte_service_lcore_attr_reset_all(uint32_t lcore)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -1011,7 +1028,7 @@ static void
 service_dump_calls_per_lcore(FILE *f, uint32_t lcore)
 {
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	fprintf(f, "%02d\t", lcore);
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
-- 
2.34.1



* RE: [RFC 1/5] eal: add static per-lcore memory allocation facility
  2024-02-08 18:16 ` [RFC 1/5] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-02-09  8:25   ` Morten Brørup
  2024-02-09 11:46     ` Mattias Rönnblom
  2024-02-19  9:40   ` [RFC v2 0/5] Lcore variables Mattias Rönnblom
  1 sibling, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-02-09  8:25 UTC (permalink / raw)
  To: Mattias Rönnblom, dev; +Cc: hofors, Stephen Hemminger

> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> Sent: Thursday, 8 February 2024 19.17
> 
> Introduce DPDK per-lcore id variables, or lcore variables for short.
> 
> An lcore variable has one value for every current and future lcore
> id-equipped thread.
> 
> The primary <rte_lcore_var.h> use case is for statically allocating
> small chunks of often-used data, which is related logically, but where
> there are performance benefits to reap from having updates being local
> to an lcore.
> 
> Lcore variables are similar to thread-local storage (TLS, e.g., C11
> _Thread_local), but decoupling the values' life time with that of the
> threads.
> 
> Lcore variables are also similar in terms of functionality provided by
> FreeBSD kernel's DPCPU_*() family of macros and the associated
> build-time machinery. DPCPU uses linker scripts, which effectively
> prevents the reuse of its, otherwise seemingly viable, approach.
> 
> The currently-prevailing way to solve the same problem as lcore
> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> lcore variables over this approach is that data related to the same
> lcore now is close (spatially, in memory), rather than data used by
> the same module, which in turn avoid excessive use of padding,
> polluting caches with unused data.
> 
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> ---

This looks very promising. :-)

Here's a bunch of comments, questions and suggestions.


* Question: Performance.
What is the cost of accessing an lcore variable vs a variable in TLS?
I suppose the relative cost diminishes if the variable is a larger struct, compared to a simple uint64_t.

Some of my suggestions below might also affect performance.


* Advantage: Provides direct access to worker thread variables.
With the current alternative (thread-local storage), the main thread cannot access the TLS variables of the worker threads,
unless worker threads publish global access pointers.
Lcore variables of any lcore thread can be directly accessed by any thread, which simplifies code.


* Advantage: Roadmap towards hugemem.
It would be nice if the lcore variable memory was allocated in hugemem, to reduce TLB misses.
The current alternative (thread-local storage) is also not using hugemem, so not a degradation.

Lcore variables are available very early at startup, so I guess the RTE memory allocator is not yet available.
Hugemem could be allocated using O/S allocation, so there is a possible road towards using hugemem.

Either way, using hugemem would require one more indirection (the pointer to the allocated hugemem).
I don't know which has better performance, using hugemem or avoiding the additional pointer dereferencing.


* Suggestion: Consider adding an entry for unregistered non-EAL threads.
Please consider making room for one more entry, shared by all unregistered non-EAL threads, i.e.
making the array size RTE_MAX_LCORE + 1 and indexing by (rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE).

It would be convenient for the use cases where a variable shared by the unregistered non-EAL threads doesn't need special treatment.

Obviously, this might affect performance.
If the performance cost is not negligible, the additional entry (and indexing branch) could be disabled at build time.
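For illustration, a minimal sketch of the suggested indexing, using a hypothetical helper name (nothing like it exists in the RFC):

#include <rte_lcore.h>

/* Hypothetical helper: unregistered non-EAL threads, for which
 * rte_lcore_id() returns LCORE_ID_ANY, all map to the extra slot at
 * index RTE_MAX_LCORE. */
static inline unsigned int
lcore_var_index(void)
{
	unsigned int lcore_id = rte_lcore_id();

	return lcore_id < RTE_MAX_LCORE ? lcore_id : RTE_MAX_LCORE;
}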


* Suggestion: Do not fix the alignment at 16 byte.
Pass an alignment parameter to rte_lcore_var_alloc() and use alignof() when calling it:

+#include <stdalign.h>
+
+#define RTE_LCORE_VAR_ALLOC(name)			\
+	name = rte_lcore_var_alloc(sizeof(*(name)), alignof(*(name)))
+
+#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, alignment)	\
+	name = rte_lcore_var_alloc(size, alignment)
+
+#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)	\
+	name = rte_lcore_var_alloc(size, RTE_LCORE_VAR_ALIGNMENT_DEFAULT)
+
+ +++ /config/rte_config.h
+#define RTE_LCORE_VAR_ALIGNMENT_DEFAULT 16
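As a usage sketch, a module could then get natural alignment for its per-lcore struct as below (the foo_* names are made up, and the macro semantics are the suggested ones, not the RFC's):

#include <stdint.h>
#include <rte_common.h>

struct foo_state {
	uint64_t counter;
};

static struct foo_state *foo_states;

RTE_INIT(foo_init)
{
	/* With the suggested macro, this expands to
	 * rte_lcore_var_alloc(sizeof(*foo_states), alignof(*foo_states)),
	 * i.e., 8-byte alignment here, rather than a fixed 16. */
	RTE_LCORE_VAR_ALLOC(foo_states);
}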


* Concern: RTE_LCORE_VAR_FOREACH() resembles RTE_LCORE_FOREACH(), but behaves differently.

> +/**
> + * Iterate over each lcore id's value for a lcore variable.
> + */
> +#define RTE_LCORE_VAR_FOREACH(var, name)				\
> +	for (unsigned int lcore_id =					\
> +		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);	\
> +	     lcore_id < RTE_MAX_LCORE;					\
> +	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
> +

The macro name RTE_LCORE_VAR_FOREACH() resembles RTE_LCORE_FOREACH(i), which only iterates on running cores.
You might want to give it a name that differs more.

If it wasn't for API breakage, I would suggest renaming RTE_LCORE_FOREACH() instead, but that's not realistic. ;-)

Small detail: "var" is a pointer, so consider renaming it to "ptr" and adding _PTR to the macro name.

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC 1/5] eal: add static per-lcore memory allocation facility
  2024-02-09  8:25   ` Morten Brørup
@ 2024-02-09 11:46     ` Mattias Rönnblom
  2024-02-09 13:04       ` Morten Brørup
  0 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-09 11:46 UTC (permalink / raw)
  To: Morten Brørup, Mattias Rönnblom, dev; +Cc: Stephen Hemminger

On 2024-02-09 09:25, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
>> Sent: Thursday, 8 February 2024 19.17
>>
>> Introduce DPDK per-lcore id variables, or lcore variables for short.
>>
>> An lcore variable has one value for every current and future lcore
>> id-equipped thread.
>>
>> The primary <rte_lcore_var.h> use case is for statically allocating
>> small chunks of often-used data, which is related logically, but where
>> there are performance benefits to reap from having updates being local
>> to an lcore.
>>
>> Lcore variables are similar to thread-local storage (TLS, e.g., C11
>> _Thread_local), but decoupling the values' life time with that of the
>> threads.
>>
>> Lcore variables are also similar in terms of functionality provided by
>> FreeBSD kernel's DPCPU_*() family of macros and the associated
>> build-time machinery. DPCPU uses linker scripts, which effectively
>> prevents the reuse of its, otherwise seemingly viable, approach.
>>
>> The currently-prevailing way to solve the same problem as lcore
>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
>> lcore variables over this approach is that data related to the same
>> lcore now is close (spatially, in memory), rather than data used by
>> the same module, which in turn avoids excessive use of padding,
>> polluting caches with unused data.
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> ---
> 
> This looks very promising. :-)
> 
> Here's a bunch of comments, questions and suggestions.
> 
> 
> * Question: Performance.
> What is the cost of accessing an lcore variable vs a variable in TLS?
> I suppose the relative cost diminishes if the variable is a larger struct, compared to a simple uint64_t.
> 

In case all the relevant data is available in a cache close to the core, 
both options carry quite low overhead.

Accessing a lcore variable will always require a TLS lookup, in the form 
of retrieving the lcore_id of the current thread. In that sense, there 
will likely be a number of extra instructions required to do the lcore 
variable address lookup (i.e., doing the load from rte_lcore_var table 
based on the lcore_id you just looked up, and adding the variable's offset).

A TLS lookup will incur an extra overhead of less than a clock cycle, 
compared to accessing a non-TLS static variable, in case static linking 
is used. For shared objects, TLS is much more expensive (something often 
visible in dynamically linked DPDK app flame graphs, in the form of the 
__tls_get_addr symbol). Then you need to add ~3 cc/access. This on a micro 
benchmark running on a x86_64 Raptor Lake P-core.

(To visualize the difference between shared object and not, one can use 
Compiler Explorer and -fPIC versus -fPIE.)
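A snippet as small as the following is enough to make the difference in the generated TLS access sequence visible when toggling between the two flags (illustrative):

static __thread unsigned int counter;

unsigned int
bump(void)
{
	return ++counter;
}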

Things get more complicated if you access the same variable in the same 
section of code, since then it can be left on the stack/in a register by 
the compiler, especially if LTO is used. In other words, if you do 
rte_lcore_id() several times in a row, only the first one will cost you 
anything. This happens fairly often in DPDK, with rte_lcore_id().
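To illustrate with a made-up example (the foo_* names are hypothetical):

#include <stdint.h>
#include <rte_lcore.h>

struct foo_lcore_data {
	uint64_t a;
	uint64_t b;
};

static struct foo_lcore_data foo_data[RTE_MAX_LCORE];

uint64_t
foo_sum(void)
{
	/* Typically compiles down to a single TLS lookup; the compiler
	 * can keep the lcore id in a register across both uses. */
	uint64_t a = foo_data[rte_lcore_id()].a;
	uint64_t b = foo_data[rte_lcore_id()].b;

	return a + b;
}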

Finally, if you do something like

diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index af9fffd81b..a65c30d27e 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -125,14 +125,7 @@ __rte_rand_lfsr258(struct rte_rand_state *state)
  static __rte_always_inline
  struct rte_rand_state *__rte_rand_get_state(void)
  {
-       unsigned int idx;
-
-       idx = rte_lcore_id();
-
-       if (unlikely(idx == LCORE_ID_ANY))
-               return &unregistered_rand_state;
-
-       return RTE_LCORE_VAR_PTR(rand_state);
+       return &unregistered_rand_state;
  }

  uint64_t

...and re-run the rand_perf_autotest, at least I see no difference at 
all (in a statically linked build). Both result in rte_rand() using ~11 
cc/call. What that suggests is that TLS overhead is very small, and that 
any extra instructions required by lcore variables don't add much, if 
anything at all, at least in this particular case.

> Some of my suggestions below might also affect performance.
> 
> 
> * Advantage: Provides direct access to worker thread variables.
> With the current alternative (thread-local storage), the main thread cannot access the TLS variables of the worker threads,
> unless worker threads publish global access pointers.
> Lcore variables of any lcore thread can be directly accessed by any thread, which simplifies code.
> 
> 
> * Advantage: Roadmap towards hugemem.
> It would be nice if the lcore variable memory was allocated in hugemem, to reduce TLB misses.
> The current alternative (thread-local storage) is also not using hugemem, so not a degradation.
> 

I agree, but the thing is it's hard to figure out how much memory is 
required for these kind of variables, given how DPDK is built and 
linked. In an OS kernel, you can just take all the symbols, put them in 
a special section, and size that section. Such a thing can't easily be 
done with DPDK, since shared object builds are supported, plus that this 
facility should be available not only to DPDK modules, but also the 
application, so relying on linker scripts isn't really feasible (and 
probably not even feasible for DPDK itself).

In that scenario, you want to size up the per-lcore buffer to be so 
large, you don't have to worry about overruns. That will waste memory. 
If you use huge page memory, paging can't help you to avoid 
pre-allocating actual physical memory.

That said, even large (by static per-lcore data standards) buffers are 
potentially small enough not to grow the amount of memory used by a DPDK 
process too much. You need to provision for RTE_MAX_LCORE of them though.

The value of lcore variables should be small, and thus incur few TLB 
misses, so you may not gain much from huge pages. In my world, it's more 
about "fitting often-used per-lcore data into L1 or L2 CPU caches", 
rather than the easier "fitting often-used per-lcore data into a working 
set size reasonably expected to be covered by hardware TLB/caches".

> Lcore variables are available very early at startup, so I guess the RTE memory allocator is not yet available.
> Hugemem could be allocated using O/S allocation, so there is a possible road towards using hugemem.
> 

With the current design, that's true. I'm not sure it's a strict 
requirement though, but it does make things simpler.

> Either way, using hugemem would require one more indirection (the pointer to the allocated hugemem).
> I don't know which has better performance, using hugemem or avoiding the additional pointer dereferencing.
> 
> 
> * Suggestion: Consider adding an entry for unregistered non-EAL threads.
> Please consider making room for one more entry, shared by all unregistered non-EAL threads, i.e.
> making the array size RTE_MAX_LCORE + 1 and indexing by (rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE).
> 
> It would be convenient for the use cases where a variable shared by the unregistered non-EAL threads doesn't need special treatment.
> 

I thought about this, but it would require a conditional in the lookup 
macro, as you show. More importantly, it would make the whole 
<rte_lcore_var.h> thing less elegant and harder to understand. It's bad 
enough that "per-lcore" is actually "per-lcore id" (or the equivalent 
"per-EAL thread and unregistered EAL-thread"). Adding a "btw it's <what 
I said before> + 1" is not an improvement.

But useful? Sure.

I think you may still need other data for dealing with unregistered 
threads, for example a mutex or spin lock to deal with concurrency 
issues that arise with shared data.

There may also be cases where you are best off by simply disallowing 
unregistered threads from calling into that API.
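A minimal sketch of such a guard, with a hypothetical function and an illustrative error convention:

#include <errno.h>
#include <rte_lcore.h>

int
foo_do_work(void)
{
	/* Unregistered non-EAL threads carry no lcore id; reject them. */
	if (rte_lcore_id() == LCORE_ID_ANY)
		return -ENOTSUP;

	/* ... per-lcore id fast path ... */
	return 0;
}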

> Obviously, this might affect performance.
> If the performance cost is not negligible, the additional entry (and indexing branch) could be disabled at build time.
> 
> 
> * Suggestion: Do not fix the alignment at 16 byte.
> Pass an alignment parameter to rte_lcore_var_alloc() and use alignof() when calling it:
> 
> +#include <stdalign.h>
> +
> +#define RTE_LCORE_VAR_ALLOC(name)			\
> +	name = rte_lcore_var_alloc(sizeof(*(name)), alignof(*(name)))
> +
> +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, alignment)	\
> +	name = rte_lcore_var_alloc(size, alignment)
> +
> +#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)	\
> +	name = rte_lcore_var_alloc(size, RTE_LCORE_VAR_ALIGNMENT_DEFAULT)
> +
> + +++ /config/rte_config.h
> +#define RTE_LCORE_VAR_ALIGNMENT_DEFAULT 16
> 
> 

That seems like a very good idea. I'll look into it.

> * Concern: RTE_LCORE_VAR_FOREACH() resembles RTE_LCORE_FOREACH(), but behaves differently.
> 
>> +/**
>> + * Iterate over each lcore id's value for a lcore variable.
>> + */
>> +#define RTE_LCORE_VAR_FOREACH(var, name)				\
>> +	for (unsigned int lcore_id =					\
>> +		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);	\
>> +	     lcore_id < RTE_MAX_LCORE;					\
>> +	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
>> +
> 
> The macro name RTE_LCORE_VAR_FOREACH() resembles RTE_LCORE_FOREACH(i), which only iterates on running cores.
> You might want to give it a name that differs more.
> 

True.

Maybe RTE_LCORE_VAR_FOREACH_VALUE() is better? Still room for confusion, 
for sure.

Being consistent with <rte_lcore.h> is not so easy, since it's not even 
consistent with itself. For example, rte_lcore_count() returns the 
number of lcores (EAL threads) *plus the number of registered non-EAL 
threads*, and RTE_LCORE_FOREACH() gives a different count. :)

> If it wasn't for API breakage, I would suggest renaming RTE_LCORE_FOREACH() instead, but that's not realistic. ;-)
> 
> Small detail: "var" is a pointer, so consider renaming it to "ptr" and adding _PTR to the macro name.

The "var" name comes from how <sys/queue.h> names things. I think I had 
it as "ptr" initially. I'll change it back.

Thanks a lot Morten.

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [RFC 1/5] eal: add static per-lcore memory allocation facility
  2024-02-09 11:46     ` Mattias Rönnblom
@ 2024-02-09 13:04       ` Morten Brørup
  2024-02-19  7:49         ` Mattias Rönnblom
  0 siblings, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-02-09 13:04 UTC (permalink / raw)
  To: Mattias Rönnblom, Mattias Rönnblom, dev; +Cc: Stephen Hemminger

> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> Sent: Friday, 9 February 2024 12.46
> 
> On 2024-02-09 09:25, Morten Brørup wrote:
> >> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> >> Sent: Thursday, 8 February 2024 19.17
> >>
> >> Introduce DPDK per-lcore id variables, or lcore variables for short.
> >>
> >> An lcore variable has one value for every current and future lcore
> >> id-equipped thread.
> >>
> >> The primary <rte_lcore_var.h> use case is for statically allocating
> >> small chunks of often-used data, which is related logically, but
> where
> >> there are performance benefits to reap from having updates being
> local
> >> to an lcore.
> >>
> >> Lcore variables are similar to thread-local storage (TLS, e.g., C11
> >> _Thread_local), but decoupling the values' life time with that of
> the
> >> threads.
> >>
> >> Lcore variables are also similar in terms of functionality provided
> by
> >> FreeBSD kernel's DPCPU_*() family of macros and the associated
> >> build-time machinery. DPCPU uses linker scripts, which effectively
> >> prevents the reuse of its, otherwise seemingly viable, approach.
> >>
> >> The currently-prevailing way to solve the same problem as lcore
> >> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-
> sized
> >> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> >> lcore variables over this approach is that data related to the same
> >> lcore now is close (spatially, in memory), rather than data used by
> >> the same module, which in turn avoids excessive use of padding,
> >> polluting caches with unused data.
> >>
> >> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >> ---
> >
> > This looks very promising. :-)
> >
> > Here's a bunch of comments, questions and suggestions.
> >
> >
> > * Question: Performance.
> > What is the cost of accessing an lcore variable vs a variable in TLS?
> > I suppose the relative cost diminishes if the variable is a larger
> struct, compared to a simple uint64_t.
> >
> 
> In case all the relevant data is available in a cache close to the
> core,
> both options carry quite low overhead.
> 
> Accessing a lcore variable will always require a TLS lookup, in the
> form
> of retrieving the lcore_id of the current thread. In that sense, there
> will likely be a number of extra instructions required to do the lcore
> variable address lookup (i.e., doing the load from rte_lcore_var table
> based on the lcore_id you just looked up, and adding the variable's
> offset).
> 
> A TLS lookup will incur an extra overhead of less than a clock cycle,
> compared to accessing a non-TLS static variable, in case static linking
> is used. For shared objects, TLS is much more expensive (something
> often
> visible in dynamically linked DPDK app flame graphs, in the form of the
> __tls_get_addr symbol). Then you need to add ~3 cc/access. This on a micro
> benchmark running on a x86_64 Raptor Lake P-core.
> 
> (To visualize the difference between shared object and not, one can use
> Compiler Explorer and -fPIC versus -fPIE.)
> 
> Things get more complicated if you access the same variable in the same
> section of code, since then it can be left on the stack/in a register by
> the compiler, especially if LTO is used. In other words, if you do
> rte_lcore_id() several times in a row, only the first one will cost you
> anything. This happens fairly often in DPDK, with rte_lcore_id().
> 
> Finally, if you do something like
> 
> diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
> index af9fffd81b..a65c30d27e 100644
> --- a/lib/eal/common/rte_random.c
> +++ b/lib/eal/common/rte_random.c
> @@ -125,14 +125,7 @@ __rte_rand_lfsr258(struct rte_rand_state *state)
>   static __rte_always_inline
>   struct rte_rand_state *__rte_rand_get_state(void)
>   {
> -       unsigned int idx;
> -
> -       idx = rte_lcore_id();
> -
> -       if (unlikely(idx == LCORE_ID_ANY))
> -               return &unregistered_rand_state;
> -
> -       return RTE_LCORE_VAR_PTR(rand_state);
> +       return &unregistered_rand_state;
>   }
> 
>   uint64_t
> 
> ...and re-run the rand_perf_autotest, at least I see no difference at
> all (in a statically linked build). Both result in rte_rand() using
> ~11
> cc/call. What that suggests is that TLS overhead is very small, and
> that
> any extra instructions required by lcore variables don't add much, if
> anything at all, at least in this particular case.

Excellent. Thank you for a thorough and detailed answer, Mattias.

> 
> > Some of my suggestions below might also affect performance.
> >
> >
> > * Advantage: Provides direct access to worker thread variables.
> > With the current alternative (thread-local storage), the main thread
> cannot access the TLS variables of the worker threads,
> > unless worker threads publish global access pointers.
> > Lcore variables of any lcore thread can be directly accessed by any
> thread, which simplifies code.
> >
> >
> > * Advantage: Roadmap towards hugemem.
> > It would be nice if the lcore variable memory was allocated in
> hugemem, to reduce TLB misses.
> > The current alternative (thread-local storage) is also not using
> hugemem, so not a degradation.
> >
> 
> I agree, but the thing is it's hard to figure out how much memory is
> required for these kind of variables, given how DPDK is built and
> linked. In an OS kernel, you can just take all the symbols, put them in
> a special section, and size that section. Such a thing can't easily be
> done with DPDK, since shared object builds are supported, plus that
> this
> facility should be available not only to DPDK modules, but also the
> application, so relying on linker scripts isn't really feasible (and
> probably not even feasible for DPDK itself).
> 
> In that scenario, you want to size up the per-lcore buffer to be so
> large, you don't have to worry about overruns. That will waste memory.
> If you use huge page memory, paging can't help you to avoid
> pre-allocating actual physical memory.

Good point.
I had noticed that RTE_MAX_LCORE_VAR was 1 MB (per RTE_MAX_LCORE), but I hadn't considered how paging helps us use less physical memory than that.

> 
> That said, even large (by static per-lcore data standards) buffers are
> potentially small enough not to grow the amount of memory used by a
> DPDK
> process too much. You need to provision for RTE_MAX_LCORE of them
> though.
> 
> The value of lcore variables should be small, and thus incur few TLB
> misses, so you may not gain much from huge pages. In my world, it's
> more
> about "fitting often-used per-lcore data into L1 or L2 CPU caches",
> rather than the easier "fitting often-used per-lcore data into a
> working
> set size reasonably expected to be covered by hardware TLB/caches".

Yes, I suppose that lcore variables are intended to be small, and large per-lcore structures should keep following the current design patterns for allocation and access.

Perhaps this guideline is worth mentioning in the documentation.

> 
> > Lcore variables are available very early at startup, so I guess the
> RTE memory allocator is not yet available.
> > Hugemem could be allocated using O/S allocation, so there is a
> possible road towards using hugemem.
> >
> 
> With the current design, that's true. I'm not sure it's a strict
> requirement though, but it does make things simpler.
> 
> > Either way, using hugemem would require one more indirection (the
> pointer to the allocated hugemem).
> > I don't know which has better performance, using hugemem or avoiding
> the additional pointer dereferencing.
> >
> >
> > * Suggestion: Consider adding an entry for unregistered non-EAL
> threads.
> > Please consider making room for one more entry, shared by all
> unregistered non-EAL threads, i.e.
> > making the array size RTE_MAX_LCORE + 1 and indexing by
> (rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE).
> >
> > It would be convenient for the use cases where a variable shared by
> the unregistered non-EAL threads doesn't need special treatment.
> >
> 
> I thought about this, but it would require a conditional in the lookup
> macro, as you show. More importantly, it would make the whole
> <rte_lcore_var.h> thing less elegant and harder to understand. It's bad
> enough that "per-lcore" is actually "per-lcore id" (or the equivalent
> "per-EAL thread and unregistered EAL-thread"). Adding a "btw it's <what
> I said before> + 1" is not an improvement.

We could promote the "one more entry for unregistered non-EAL threads" design pattern (for relevant use cases only!) by extending EAL with one more TLS variable, maintained like _thread_id, but set to RTE_MAX_LCORE when _thread_id is set to -1:

+++ eal_common_thread.c:
  RTE_DEFINE_PER_LCORE(int, _thread_id) = -1;
+ RTE_DEFINE_PER_LCORE(int, _lcore_idx) = RTE_MAX_LCORE;

and

+++ rte_lcore.h:
static inline unsigned
rte_lcore_id(void)
{
	return RTE_PER_LCORE(_lcore_id);
}
+ static inline unsigned
+ rte_lcore_idx(void)
+ {
+ 	return RTE_PER_LCORE(_lcore_idx);
+ }

That would eliminate the (rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE) conditional, also where it is currently used.
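A sketch of what a user of the extra-entry pattern could then look like, assuming the suggested rte_lcore_idx() above and made-up foo_* names:

#include <stdint.h>
#include <rte_lcore.h>

struct foo_state {
	uint64_t count;
};

/* One value per lcore id, plus one shared by all unregistered threads. */
static struct foo_state foo_states[RTE_MAX_LCORE + 1];

static inline struct foo_state *
foo_get_state(void)
{
	return &foo_states[rte_lcore_idx()]; /* branch-free lookup */
}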

> 
> But useful? Sure.
> 
> I think you may still need other data for dealing with unregistered
> threads, for example a mutex or spin lock to deal with concurrency
> issues that arise with shared data.

Adding the extra entry is only for the benefit of use cases where special handling is not required. It will make the code for those use cases much cleaner. I think it is useful.

Use cases requiring special handling should still do the special handling they do today.

> 
> There may also be cases where you are best off by simply disallowing
> unregistered threads from calling into that API.
> 
> > Obviously, this might affect performance.
> > If the performance cost is not negligible, the additional entry (and
> indexing branch) could be disabled at build time.
> >
> >
> > * Suggestion: Do not fix the alignment at 16 byte.
> > Pass an alignment parameter to rte_lcore_var_alloc() and use
> alignof() when calling it:
> >
> > +#include <stdalign.h>
> > +
> > +#define RTE_LCORE_VAR_ALLOC(name)			\
> > +	name = rte_lcore_var_alloc(sizeof(*(name)), alignof(*(name)))
> > +
> > +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, alignment)	\
> > +	name = rte_lcore_var_alloc(size, alignment)
> > +
> > +#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)	\
> > +	name = rte_lcore_var_alloc(size, RTE_LCORE_VAR_ALIGNMENT_DEFAULT)
> > +
> > + +++ /config/rte_config.h
> > +#define RTE_LCORE_VAR_ALIGNMENT_DEFAULT 16
> >
> >
> 
> That seems like a very good idea. I'll look into it.
> 
> > * Concern: RTE_LCORE_VAR_FOREACH() resembles RTE_LCORE_FOREACH(), but
> behaves differently.
> >
> >> +/**
> >> + * Iterate over each lcore id's value for a lcore variable.
> >> + */
> >> +#define RTE_LCORE_VAR_FOREACH(var, name)				\
> >> +	for (unsigned int lcore_id =					\
> >> +		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);	\
> >> +	     lcore_id < RTE_MAX_LCORE;					\
> >> +	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
> >> +
> >
> > The macro name RTE_LCORE_VAR_FOREACH() resembles
> RTE_LCORE_FOREACH(i), which only iterates on running cores.
> > You might want to give it a name that differs more.
> >
> 
> True.
> 
> Maybe RTE_LCORE_VAR_FOREACH_VALUE() is better? Still room for
> confusion,
> for sure.
> 
> Being consistent with <rte_lcore.h> is not so easy, since it's not even
> consistent with itself. For example, rte_lcore_count() returns the
> number of lcores (EAL threads) *plus the number of registered non-EAL
> threads*, and RTE_LCORE_FOREACH() gives a different count. :)

Naming is hard. I don't have a good name, and can only offer inspiration...

<rte_lcore.h> has RTE_LCORE_FOREACH() and its RTE_LCORE_FOREACH_WORKER() variant with _WORKER appended.

Perhaps RTE_LCORE_VAR_FOREACH_ALL(), with _ALL appended to indicate a variant.

> 
> > If it wasn't for API breakage, I would suggest renaming
> RTE_LCORE_FOREACH() instead, but that's not realistic. ;-)
> >
> > Small detail: "var" is a pointer, so consider renaming it to "ptr"
> and adding _PTR to the macro name.
> 
> The "var" name comes from how <sys/queue.h> names things. I think I had
> it as "ptr" initially. I'll change it back.

Thanks.

> 
> Thanks a lot Morten.

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC 1/5] eal: add static per-lcore memory allocation facility
  2024-02-09 13:04       ` Morten Brørup
@ 2024-02-19  7:49         ` Mattias Rönnblom
  2024-02-19 11:10           ` Morten Brørup
  0 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-19  7:49 UTC (permalink / raw)
  To: Morten Brørup, Mattias Rönnblom, dev; +Cc: Stephen Hemminger

On 2024-02-09 14:04, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
>> Sent: Friday, 9 February 2024 12.46
>>
>> On 2024-02-09 09:25, Morten Brørup wrote:
>>>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
>>>> Sent: Thursday, 8 February 2024 19.17
>>>>
>>>> Introduce DPDK per-lcore id variables, or lcore variables for short.
>>>>
>>>> An lcore variable has one value for every current and future lcore
>>>> id-equipped thread.
>>>>
>>>> The primary <rte_lcore_var.h> use case is for statically allocating
>>>> small chunks of often-used data, which is related logically, but
>> where
>>>> there are performance benefits to reap from having updates being
>> local
>>>> to an lcore.
>>>>
>>>> Lcore variables are similar to thread-local storage (TLS, e.g., C11
>>>> _Thread_local), but decoupling the values' life time with that of
>> the
>>>> threads.
>>>>
>>>> Lcore variables are also similar in terms of functionality provided
>> by
>>>> FreeBSD kernel's DPCPU_*() family of macros and the associated
>>>> build-time machinery. DPCPU uses linker scripts, which effectively
>>>> prevents the reuse of its, otherwise seemingly viable, approach.
>>>>
>>>> The currently-prevailing way to solve the same problem as lcore
>>>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-
>> sized
>>>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
>>>> lcore variables over this approach is that data related to the same
>>>> lcore now is close (spatially, in memory), rather than data used by
>>>> the same module, which in turn avoids excessive use of padding,
>>>> polluting caches with unused data.
>>>>
>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>>> ---
>>>
>>> This looks very promising. :-)
>>>
>>> Here's a bunch of comments, questions and suggestions.
>>>
>>>
>>> * Question: Performance.
>>> What is the cost of accessing an lcore variable vs a variable in TLS?
>>> I suppose the relative cost diminishes if the variable is a larger
>> struct, compared to a simple uint64_t.
>>>
>>
>> In case all the relevant data is available in a cache close to the
>> core,
>> both options carry quite low overhead.
>>
>> Accessing a lcore variable will always require a TLS lookup, in the
>> form
>> of retrieving the lcore_id of the current thread. In that sense, there
>> will likely be a number of extra instructions required to do the lcore
>> variable address lookup (i.e., doing the load from rte_lcore_var table
>> based on the lcore_id you just looked up, and adding the variable's
>> offset).
>>
>> A TLS lookup will incur an extra overhead of less than a clock cycle,
>> compared to accessing a non-TLS static variable, in case static linking
>> is used. For shared objects, TLS is much more expensive (something
>> often
>> visible in dynamically linked DPDK app flame graphs, in the form of the
>> __tls_get_addr symbol). Then you need to add ~3 cc/access. This on a micro
>> benchmark running on a x86_64 Raptor Lake P-core.
>>
>> (To visualize the difference between shared object and not, one can use
>> Compiler Explorer and -fPIC versus -fPIE.)
>>
>> Things get more complicated if you access the same variable in the same
>> section of code, since then it can be left on the stack/in a register by
>> the compiler, especially if LTO is used. In other words, if you do
>> rte_lcore_id() several times in a row, only the first one will cost you
>> anything. This happens fairly often in DPDK, with rte_lcore_id().
>>
>> Finally, if you do something like
>>
>> diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
>> index af9fffd81b..a65c30d27e 100644
>> --- a/lib/eal/common/rte_random.c
>> +++ b/lib/eal/common/rte_random.c
>> @@ -125,14 +125,7 @@ __rte_rand_lfsr258(struct rte_rand_state *state)
>>    static __rte_always_inline
>>    struct rte_rand_state *__rte_rand_get_state(void)
>>    {
>> -       unsigned int idx;
>> -
>> -       idx = rte_lcore_id();
>> -
>> -       if (unlikely(idx == LCORE_ID_ANY))
>> -               return &unregistered_rand_state;
>> -
>> -       return RTE_LCORE_VAR_PTR(rand_state);
>> +       return &unregistered_rand_state;
>>    }
>>
>>    uint64_t
>>
>> ...and re-run the rand_perf_autotest, at least I see no difference at
>> all (in a statically linked build). Both result in rte_rand() using
>> ~11
>> cc/call. What that suggests is that TLS overhead is very small, and
>> that
>> any extra instructions required by lcore variables don't add much, if
>> anything at all, at least in this particular case.
> 
> Excellent. Thank you for a thorough and detailed answer, Mattias.
> 
>>
>>> Some of my suggestions below might also affect performance.
>>>
>>>
>>> * Advantage: Provides direct access to worker thread variables.
>>> With the current alternative (thread-local storage), the main thread
>> cannot access the TLS variables of the worker threads,
>>> unless worker threads publish global access pointers.
>>> Lcore variables of any lcore thread can be directly accessed by any
>> thread, which simplifies code.
>>>
>>>
>>> * Advantage: Roadmap towards hugemem.
>>> It would be nice if the lcore variable memory was allocated in
>> hugemem, to reduce TLB misses.
>>> The current alternative (thread-local storage) is also not using
>> hugemem, so not a degradation.
>>>
>>
>> I agree, but the thing is it's hard to figure out how much memory is
>> required for these kind of variables, given how DPDK is built and
>> linked. In an OS kernel, you can just take all the symbols, put them in
>> a special section, and size that section. Such a thing can't easily be
>> done with DPDK, since shared object builds are supported, plus that
>> this
>> facility should be available not only to DPDK modules, but also the
>> application, so relying on linker scripts isn't really feasible (and
>> probably not even feasible for DPDK itself).
>>
>> In that scenario, you want to size up the per-lcore buffer to be so
>> large, you don't have to worry about overruns. That will waste memory.
>> If you use huge page memory, paging can't help you to avoid
>> pre-allocating actual physical memory.
> 
> Good point.
> I had noticed that RTE_MAX_LCORE_VAR was 1 MB (per RTE_MAX_LCORE), but I hadn't considered how paging helps us use less physical memory than that.
> 
>>
>> That said, even large (by static per-lcore data standards) buffers are
>> potentially small enough not to grow the amount of memory used by a
>> DPDK
>> process too much. You need to provision for RTE_MAX_LCORE of them
>> though.
>>
>> The value of lcore variables should be small, and thus incur few TLB
>> misses, so you may not gain much from huge pages. In my world, it's
>> more
>> about "fitting often-used per-lcore data into L1 or L2 CPU caches",
>> rather than the easier "fitting often-used per-lcore data into a
>> working
>> set size reasonably expected to be covered by hardware TLB/caches".
> 
> Yes, I suppose that lcore variables are intended to be small, and large per-lcore structures should keep following the current design patterns for allocation and access.
> 

It seems to me that support for per-lcore heaps should be the solution 
for supporting use cases requiring many, larger and/or dynamic objects 
on a per-lcore basis.

Ideally, you would design both that mechanism and lcore variables 
together, but if you couple enough improvements together 
you will never get anywhere. An instance of where perfect is the enemy 
of good, perhaps.

> Perhaps this guideline is worth mentioning in the documentation.
> 

What is missing, more specifically? The size limitation and the static 
nature of lcore variables are described, and what current design patterns 
they are expected to (partly) replace is also covered.

>>
>>> Lcore variables are available very early at startup, so I guess the
>> RTE memory allocator is not yet available.
>>> Hugemem could be allocated using O/S allocation, so there is a
>> possible road towards using hugemem.
>>>
>>
>> With the current design, that's true. I'm not sure it's a strict
>> requirement though, but it does make things simpler.
>>
>>> Either way, using hugemem would require one more indirection (the
>> pointer to the allocated hugemem).
>>> I don't know which has better performance, using hugemem or avoiding
>> the additional pointer dereferencing.
>>>
>>>
>>> * Suggestion: Consider adding an entry for unregistered non-EAL
>> threads.
>>> Please consider making room for one more entry, shared by all
>> unregistered non-EAL threads, i.e.
>>> making the array size RTE_MAX_LCORE + 1 and indexing by
>> (rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE).
>>>
>>> It would be convenient for the use cases where a variable shared by
>> the unregistered non-EAL threads doesn't need special treatment.
>>>
>>
>> I thought about this, but it would require a conditional in the lookup
>> macro, as you show. More importantly, it would make the whole
>> <rte_lcore_var.h> thing less elegant and harder to understand. It's bad
>> enough that "per-lcore" is actually "per-lcore id" (or the equivalent
>> "per-EAL thread and unregistered EAL-thread"). Adding a "btw it's <what
>> I said before> + 1" is not an improvement.
> 
> We could promote the "one more entry for unregistered non-EAL threads" design pattern (for relevant use cases only!) by extending EAL with one more TLS variable, maintained like _thread_id, but set to RTE_MAX_LCORE when _thread_id is set to -1:
> 
> +++ eal_common_thread.c:
>    RTE_DEFINE_PER_LCORE(int, _thread_id) = -1;
> + RTE_DEFINE_PER_LCORE(int, _lcore_idx) = RTE_MAX_LCORE;
> 
> and
> 
> +++ rte_lcore.h:
> static inline unsigned
> rte_lcore_id(void)
> {
> 	return RTE_PER_LCORE(_lcore_id);
> }
> + static inline unsigned
> + rte_lcore_idx(void)
> + {
> + 	return RTE_PER_LCORE(_lcore_idx);
> + }
> 
> That would eliminate the (rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE) conditional, also where it is currently used.
> 

Wouldn't that effectively give a shared lcore id to all unregistered 
threads?

We definitely shouldn't further complicate anything related to the DPDK 
threading model, in my opinion.

If a module needs one or more variable instances that aren't per lcore, 
use regular static allocation instead. I would favor clarity over 
convenience here, at least until we know better (see below as well).

>>
>> But useful? Sure.
>>
>> I think you may still need other data for dealing with unregistered
>> threads, for example a mutex or spin lock to deal with concurrency
>> issues that arise with shared data.
> 
> Adding the extra entry is only for the benefit of use cases where special handling is not required. It will make the code for those use cases much cleaner. I think it is useful.
> 

It will make it shorter, but not less clean, I would argue.

> Use cases requiring special handling should still do the special handling they do today.
> 

For DPDK modules that use lcore variables and treat unregistered 
threads as "full citizens", I expect special handling of unregistered 
threads to be the norm. Take rte_random.h as an example. The current API 
does not guarantee MT safety for concurrent calls from unregistered 
threads. It probably should, and it should probably be by means of a 
mutex (not spinlock).
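A hypothetical sketch of what that could look like inside rte_random.c (not an actual patch; the lock name is made up):

#include <pthread.h>

static struct rte_rand_state unregistered_rand_state;
static pthread_mutex_t unregistered_lock = PTHREAD_MUTEX_INITIALIZER;

static uint64_t
rand_unregistered(void)
{
	uint64_t res;

	/* A mutex, unlike a spinlock, doesn't burn cycles if the lock
	 * holder happens to be preempted mid-critical-section. */
	pthread_mutex_lock(&unregistered_lock);
	res = __rte_rand_lfsr258(&unregistered_rand_state);
	pthread_mutex_unlock(&unregistered_lock);

	return res;
}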

The reason I'm not running off to make an rte_random.c patch is that 
it's unclear to me what the role of unregistered threads in DPDK is. I'm 
reasonably comfortable with a model where there are many threads that 
basically don't interact with the DPDK APIs (except maybe some very 
narrow exposure, like the preemption-safe ring variant). One example of 
such a design would be big slow control plane which uses multi-threading 
and the Linux process scheduler for work scheduling, hosted in the same 
process as a DPDK data plane app.

What I find more strange is a scenario where there are unregistered 
threads which interact with a wide variety of DPDK APIs, do so 
at high rates/with high performance requirements, and are expected to be 
preemption-safe. So they are basically EAL threads without a lcore id.

Support for that latter scenario has also been voiced, in previous 
discussions, from what I recall.

I think it's hard to answer the question of an "unregistered thread 
spare" for lcore variables without first knowing what the future should 
look like for unregistered threads in DPDK, in terms of being able to 
call into DPDK APIs, preemption-safety guarantees, etc.

It seems that until you have a clearer picture of how generally to treat 
unregistered threads, you are best off with just a per-lcore id instance 
of lcore variables.

>>
>> There may also be cases where you are best off by simply disallowing
>> unregistered threads from calling into that API.
>>
>>> Obviously, this might affect performance.
>>> If the performance cost is not negligible, the additional entry (and
>> indexing branch) could be disabled at build time.
>>>
>>>
>>> * Suggestion: Do not fix the alignment at 16 byte.
>>> Pass an alignment parameter to rte_lcore_var_alloc() and use
>> alignof() when calling it:
>>>
>>> +#include <stdalign.h>
>>> +
>>> +#define RTE_LCORE_VAR_ALLOC(name)			\
>>> +	name = rte_lcore_var_alloc(sizeof(*(name)), alignof(*(name)))
>>> +
>>> +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, alignment)	\
>>> +	name = rte_lcore_var_alloc(size, alignment)
>>> +
>>> +#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)	\
>>> +	name = rte_lcore_var_alloc(size, RTE_LCORE_VAR_ALIGNMENT_DEFAULT)
>>> +
>>> + +++ /config/rte_config.h
>>> +#define RTE_LCORE_VAR_ALIGNMENT_DEFAULT 16
>>>
>>>
>>
>> That seems like a very good idea. I'll look into it.
>>
>>> * Concern: RTE_LCORE_VAR_FOREACH() resembles RTE_LCORE_FOREACH(), but
>> behaves differently.
>>>
>>>> +/**
>>>> + * Iterate over each lcore id's value for a lcore variable.
>>>> + */
>>>> +#define RTE_LCORE_VAR_FOREACH(var, name)				\
>>>> +	for (unsigned int lcore_id =					\
>>>> +		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);	\
>>>> +	     lcore_id < RTE_MAX_LCORE;					\
>>>> +	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
>>>> +
>>>
>>> The macro name RTE_LCORE_VAR_FOREACH() resembles
>> RTE_LCORE_FOREACH(i), which only iterates on running cores.
>>> You might want to give it a name that differs more.
>>>
>>
>> True.
>>
>> Maybe RTE_LCORE_VAR_FOREACH_VALUE() is better? Still room for
>> confusion,
>> for sure.
>>
>> Being consistent with <rte_lcore.h> is not so easy, since it's not even
>> consistent with itself. For example, rte_lcore_count() returns the
>> number of lcores (EAL threads) *plus the number of registered non-EAL
>> threads*, and RTE_LCORE_FOREACH() gives a different count. :)
> 
> Naming is hard. I don't have a good name, and can only offer inspiration...
> 
> <rte_lcore.h> has RTE_LCORE_FOREACH() and its RTE_LCORE_FOREACH_WORKER() variant with _WORKER appended.
> 
> Perhaps RTE_LCORE_VAR_FOREACH_ALL(), with _ALL appended to indicate a variant.
> 
>>
>>> If it wasn't for API breakage, I would suggest renaming
>> RTE_LCORE_FOREACH() instead, but that's not realistic. ;-)
>>>
>>> Small detail: "var" is a pointer, so consider renaming it to "ptr"
>> and adding _PTR to the macro name.
>>
>> The "var" name comes from how <sys/queue.h> names things. I think I had
>> it as "ptr" initially. I'll change it back.
> 
> Thanks.
> 
>>
>> Thanks a lot Morten.

^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v2 0/5] Lcore variables
  2024-02-08 18:16 ` [RFC 1/5] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-02-09  8:25   ` Morten Brørup
@ 2024-02-19  9:40   ` Mattias Rönnblom
  2024-02-19  9:40     ` [RFC v2 1/5] eal: add static per-lcore memory allocation facility Mattias Rönnblom
                       ` (4 more replies)
  1 sibling, 5 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-19  9:40 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

This RFC presents a new API <rte_lcore_var.h> for static per-lcore id
data allocation.

Please refer to the <rte_lcore_var.h> API documentation for both a
rationale for this new API, and a comparison to the alternatives
available.

The adoption of this API would affect many different DPDK modules, but
the author updated only a few, mostly to serve as examples in this
RFC, and to iron out some, but surely not all, wrinkles in the API.

The question on how to best allocate static per-lcore memory has been
up several times on the dev mailing list, for example in the thread on
"random: use per lcore state" RFC by Stephen Hemminger.

Lcore variables are surely not the answer to all your per-lcore-data
needs, since they only allow for more-or-less static allocation. In the
author's opinion, they do however provide a reasonably simple, clean
and seemingly very performant solution to a real problem.

One thing that is unclear to the author is how this API relates to a
potential future per-lcore dynamic allocator (e.g., a per-lcore heap).

Contrary to what the version.map edit suggests, this RFC is not meant
as a proposal for DPDK 24.03.

Mattias Rönnblom (5):
  eal: add static per-lcore memory allocation facility
  eal: add lcore variable test suite
  random: keep PRNG state in lcore variable
  power: keep per-lcore state in lcore variable
  service: keep per-lcore state in lcore variable

 app/test/meson.build                  |   1 +
 app/test/test_lcore_var.c             | 408 ++++++++++++++++++++++++++
 config/rte_config.h                   |   1 +
 doc/api/doxy-api-index.md             |   1 +
 lib/eal/common/eal_common_lcore_var.c |  82 ++++++
 lib/eal/common/meson.build            |   1 +
 lib/eal/common/rte_random.c           |  30 +-
 lib/eal/common/rte_service.c          | 119 ++++----
 lib/eal/include/meson.build           |   1 +
 lib/eal/include/rte_lcore_var.h       | 374 +++++++++++++++++++++++
 lib/eal/version.map                   |   4 +
 lib/power/rte_power_pmd_mgmt.c        |  27 +-
 12 files changed, 973 insertions(+), 76 deletions(-)
 create mode 100644 app/test/test_lcore_var.c
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v2 1/5] eal: add static per-lcore memory allocation facility
  2024-02-19  9:40   ` [RFC v2 0/5] Lcore variables Mattias Rönnblom
@ 2024-02-19  9:40     ` Mattias Rönnblom
  2024-02-20  8:49       ` [RFC v3 0/6] Lcore variables Mattias Rönnblom
  2024-02-19  9:40     ` [RFC v2 2/5] eal: add lcore variable test suite Mattias Rönnblom
                       ` (3 subsequent siblings)
  4 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-19  9:40 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Introduce DPDK per-lcore id variables, or lcore variables for short.

An lcore variable has one value for every current and future lcore
id-equipped thread.

The primary <rte_lcore_var.h> use case is for statically allocating
small chunks of often-used data, which is related logically, but where
there are performance benefits to reap from having updates being local
to an lcore.

Lcore variables are similar to thread-local storage (TLS, e.g., C11
_Thread_local), but decoupling the values' life time with that of the
threads.

Lcore variables are also similar in terms of functionality provided by
FreeBSD kernel's DPCPU_*() family of macros and the associated
build-time machinery. DPCPU uses linker scripts, which effectively
prevents the reuse of its, otherwise seemingly viable, approach.

The currently-prevailing way to solve the same problem as lcore
variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
lcore variables over this approach is that data related to the same
lcore now is close (spatially, in memory), rather than data used by
the same module, which in turn avoids excessive use of padding,
polluting caches with unused data.

RFC v2:
 * Use alignof to derive alignment requirements. (Morten Brørup)
 * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
   *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
 * Allow user-specified alignment, but limit max to cache line size.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 config/rte_config.h                   |   1 +
 doc/api/doxy-api-index.md             |   1 +
 lib/eal/common/eal_common_lcore_var.c |  82 ++++++
 lib/eal/common/meson.build            |   1 +
 lib/eal/include/meson.build           |   1 +
 lib/eal/include/rte_lcore_var.h       | 374 ++++++++++++++++++++++++++
 lib/eal/version.map                   |   4 +
 7 files changed, 464 insertions(+)
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

diff --git a/config/rte_config.h b/config/rte_config.h
index da265d7dd2..884482e473 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -30,6 +30,7 @@
 /* EAL defines */
 #define RTE_CACHE_GUARD_LINES 1
 #define RTE_MAX_HEAPS 32
+#define RTE_MAX_LCORE_VAR 1048576
 #define RTE_MAX_MEMSEG_LISTS 128
 #define RTE_MAX_MEMSEG_PER_LIST 8192
 #define RTE_MAX_MEM_MB_PER_LIST 32768
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index a6a768bd7c..bb06bb7ca1 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -98,6 +98,7 @@ The public API headers are grouped by topics:
   [interrupts](@ref rte_interrupts.h),
   [launch](@ref rte_launch.h),
   [lcore](@ref rte_lcore.h),
+  [lcore-variable](@ref rte_lcore_var.h),
   [per-lcore](@ref rte_per_lcore.h),
   [service cores](@ref rte_service.h),
   [keepalive](@ref rte_keepalive.h),
diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
new file mode 100644
index 0000000000..dfd11cbd0b
--- /dev/null
+++ b/lib/eal/common/eal_common_lcore_var.c
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_log.h>
+
+#include <rte_lcore_var.h>
+
+#include "eal_private.h"
+
+#define WARN_THRESHOLD 75
+
+/*
+ * Avoid using offset zero, since it would result in a NULL-value
+ * "handle" (offset) pointer, which in principle and per the API
+ * definition shouldn't be an issue, but may confuse some tools and
+ * users.
+ */
+#define INITIAL_OFFSET 1
+
+char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR] __rte_cache_aligned;
+
+static uintptr_t allocated = INITIAL_OFFSET;
+
+static void
+verify_allocation(uintptr_t new_allocated)
+{
+	static bool has_warned;
+
+	RTE_VERIFY(new_allocated < RTE_MAX_LCORE_VAR);
+
+	if (new_allocated > (WARN_THRESHOLD * RTE_MAX_LCORE_VAR) / 100 &&
+	    !has_warned) {
+		EAL_LOG(WARNING, "Per-lcore data usage has exceeded %d%% "
+			"of the maximum capacity (%d bytes)", WARN_THRESHOLD,
+			RTE_MAX_LCORE_VAR);
+		has_warned = true;
+	}
+}
+
+static void *
+lcore_var_alloc(size_t size, size_t align)
+{
+	uintptr_t new_allocated = RTE_ALIGN_CEIL(allocated, align);
+
+	void *offset = (void *)new_allocated;
+
+	new_allocated += size;
+
+	verify_allocation(new_allocated);
+
+	allocated = new_allocated;
+
+	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
+		"%"PRIuPTR"-byte alignment", size, align);
+
+	return offset;
+}
+
+void *
+rte_lcore_var_alloc(size_t size, size_t align)
+{
+	/* Having the per-lcore buffer size aligned on cache lines,
+	 * as well as having the base pointer cache-line aligned,
+	 * assures that aligned offsets also translate to aligned
+	 * pointers across all values.
+	 */
+	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
+	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
+
+	/* '0' means asking for worst-case alignment requirements */
+	if (align == 0)
+		align = alignof(max_align_t);
+
+	RTE_ASSERT(rte_is_power_of_2(align));
+
+	return lcore_var_alloc(size, align);
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 22a626ba6f..d41403680b 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -18,6 +18,7 @@ sources += files(
         'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
+        'eal_common_lcore_var.c',
         'eal_common_mcfg.c',
         'eal_common_memalloc.c',
         'eal_common_memory.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index e94b056d46..9449253e23 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -27,6 +27,7 @@ headers += files(
         'rte_keepalive.h',
         'rte_launch.h',
         'rte_lcore.h',
+        'rte_lcore_var.h',
         'rte_lock_annotations.h',
         'rte_malloc.h',
         'rte_mcslock.h',
diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
new file mode 100644
index 0000000000..4434fc21ef
--- /dev/null
+++ b/lib/eal/include/rte_lcore_var.h
@@ -0,0 +1,374 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#ifndef _RTE_LCORE_VAR_H_
+#define _RTE_LCORE_VAR_H_
+
+/**
+ * @file
+ *
+ * RTE Per-lcore id variables
+ *
+ * This API provides a mechanism to create and access per-lcore id
+ * variables in a space- and cycle-efficient manner.
+ *
+ * A per-lcore id variable (or lcore variable for short) has one value
+ * for each EAL thread and registered non-EAL thread. In other words,
+ * there's one copy of its value for each and every current and future
+ * lcore id-equipped thread, with the total number of copies amounting
+ * to \c RTE_MAX_LCORE.
+ *
+ * In order to access the values of an lcore variable, a handle is
+ * used. The type of the handle is a pointer to the value's type
+ * (e.g., for a \c uint32_t lcore variable, the handle is a
+ * <code>uint32_t *</code>). A handle may be passed between modules and
+ * threads just like any pointer, but its value is not the address of
+ * any particular object, but rather just an opaque identifier, stored
+ * in a typed pointer (to inform the access macros of the values' type).
+ *
+ * @b Creation
+ *
+ * An lcore variable is created in two steps:
+ *  1. Define a lcore variable handle by using \ref RTE_LCORE_VAR_HANDLE.
+ *  2. Allocate lcore variable storage and initialize the handle with
+ *     a unique identifier by \ref RTE_LCORE_VAR_ALLOC or
+ *     \ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the time of
+ *     module initialization, but may be done at any time.
+ *
+ * An lcore variable is not tied to the owning thread's lifetime. It's
+ * available for use by any thread immediately after having been
+ * allocated, and continues to be available throughout the lifetime of
+ * the EAL.
+ *
+ * Lcore variables cannot and need not be freed.
+ *
+ * @b Access
+ *
+ * The value of any lcore variable for any lcore id may be accessed
+ * from any thread (including unregistered threads), but it should
+ * generally only be *frequently* read from or written to by the owner.
+ *
+ * Values of the same lcore variable but owned by two different lcore
+ * ids *may* be frequently read or written by the owners without the
+ * risk of false sharing.
+ *
+ * An appropriate synchronization mechanism (e.g., atomics) should
+ * be employed to assure there are no data races between the owning
+ * thread and any non-owner threads accessing the same lcore variable
+ * instance.
+ *
+ * The value of the lcore variable for a particular lcore id may be
+ * retrieved with \ref RTE_LCORE_VAR_LCORE_GET. To get a pointer to the
+ * same object, use \ref RTE_LCORE_VAR_LCORE_PTR.
+ *
+ * To modify the value of an lcore variable for a particular lcore id,
+ * either access the object through the pointer retrieved by \ref
+ * RTE_LCORE_VAR_LCORE_PTR or, for primitive types, use \ref
+ * RTE_LCORE_VAR_LCORE_SET.
+ *
+ * The access macros each have a short-hand which may be used by an EAL
+ * thread or registered non-EAL thread to access the lcore variable
+ * instance of its own lcore id. Those are \ref RTE_LCORE_VAR_GET,
+ * \ref RTE_LCORE_VAR_PTR, and \ref RTE_LCORE_VAR_SET.
+ *
+ * Although the handle (as defined by \ref RTE_LCORE_VAR_HANDLE) is a
+ * pointer with the same type as the value, it may not be directly
+ * dereferenced and must be treated as an opaque identifier. The
+ * *identifier* value is common across all lcore ids.
+ *
+ * @b Storage
+ *
+ * An lcore variable's values may be of a primitive type like \c int,
+ * but would more typically be a \c struct. An application may choose
+ * to define an lcore variable which it then never goes on to
+ * allocate.
+ *
+ * The lcore variable handle introduces a per-variable (not
+ * per-value/per-lcore id) overhead of \c sizeof(void *) bytes, so
+ * there are some memory footprint gains to be made by organizing all
+ * per-lcore id data for a particular module as one lcore variable
+ * (e.g., as a struct).
+ *
+ * The sum of all lcore variables, plus any padding required, must be
+ * less than the DPDK build-time constant \c RTE_MAX_LCORE_VAR. A
+ * violation of this maximum results in the process being terminated.
+ *
+ * It's reasonable to expect that \c RTE_MAX_LCORE_VAR is on the
+ * same order of magnitude in size as a thread stack.
+ *
+ * The lcore variable storage buffers are kept in the BSS section in
+ * the resulting binary, where data generally isn't mapped in until
+ * it's accessed. This means that unused portions of the lcore
+ * variable storage area will not occupy any physical memory (with a
+ * granularity of the memory page size [usually 4 kB]).
+ *
+ * Lcore variables should generally *not* be \ref __rte_cache_aligned
+ * and need *not* include a \ref RTE_CACHE_GUARD field, since these
+ * constructs are designed to avoid false sharing. In the
+ * case of an lcore variable instance, all nearby data structures
+ * should almost-always be written to by a single thread (the lcore
+ * variable owner). Adding padding will increase the effective memory
+ * working set size, and potentially reducing performance.
+ *
+ * @b Example
+ *
+ * Below is an example of the use of an lcore variable:
+ *
+ * \code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ * };
+ *
+ * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
+ *
+ * long foo_get_a_plus_b(void)
+ * {
+ *         struct foo_lcore_state *state = RTE_LCORE_VAR_PTR(lcore_states);
+ *
+ *         return state->a + state->b;
+ * }
+ *
+ * RTE_INIT(rte_foo_init)
+ * {
+ *         struct foo_lcore_state *state;
+ *
+ *         RTE_LCORE_VAR_ALLOC(lcore_states);
+ *
+ *         RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
+ *                 (initialize 'state')
+ *         }
+ *
+ *         (other initialization)
+ * }
+ * \endcode
+ *
+ *
+ * @b Alternatives
+ *
+ * Lcore variables are designed to replace a pattern exemplified below:
+ * \code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ *         RTE_CACHE_GUARD;
+ * } __rte_cache_aligned;
+ *
+ * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
+ * \endcode
+ *
+ * This scheme is simple and effective, but has one drawback: the data
+ * is organized so that objects related to all lcores for a particular
+ * module are kept close in memory. At a bare minimum, this forces the
+ * use of cache-line alignment to avoid false sharing. With CPU
+ * hardware prefetching and memory loads resulting from speculative
+ * execution (functions which seemingly are getting more eager faster
+ * than they are getting more intelligent), one or more "guard" cache
+ * lines may be required to separate one lcore's data from another's.
+ *
+ * Lcore variables have the upside of working with, not against, the
+ * CPU's assumptions; a next-line prefetcher, for example, may well
+ * work the way its designers intended (i.e., to the benefit, not
+ * detriment, of system performance).
+ *
+ * Another alternative to \ref rte_lcore_var.h is the \ref
+ * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
+ * e.g., GCC __thread or C11 _Thread_local). The main differences
+ * between using the various forms of TLS (e.g., \ref
+ * RTE_DEFINE_PER_LCORE or _Thread_local) and lcore variables are the
+ * following (see also the sketch after this list):
+ *
+ *   * The existence and non-existence of a thread-local variable
+ *     instance follow that of the particular thread. The data cannot
+ *     be accessed before the thread has been created, nor after it
+ *     has exited. One effect of this is that thread-local variables
+ *     must be initialized in a "lazy" manner (e.g., at the point of
+ *     thread creation). Lcore variables may be accessed immediately
+ *     after having been allocated (which usually is prior to any
+ *     thread beyond the main thread running).
+ *   * A thread-local variable is duplicated across all threads in the
+ *     process, including unregistered non-EAL threads (i.e.,
+ *     "regular" threads). For DPDK applications heavily relying on
+ *     multi-threading (in conjunction with DPDK's "one thread per core"
+ *     pattern), either by having many concurrent threads or
+ *     creating/destroying threads at a high rate, an excessive use of
+ *     thread-local variables may cause inefficiencies (e.g.,
+ *     increased thread creation overhead due to thread-local storage
+ *     initialization or increased total RAM footprint). Lcore
+ *     variables *only* exist for threads with an lcore id, and thus
+ *     not for such "regular" threads.
+ *   * Whether data in thread-local storage may be shared between
+ *     threads (i.e., whether a pointer to a thread-local variable can
+ *     be passed to and successfully dereferenced by a non-owning
+ *     thread) depends on the details of the TLS implementation. With
+ *     GCC __thread and GCC _Thread_local, such data sharing is
+ *     supported. In the C11 standard, the result of accessing another
+ *     thread's _Thread_local object is implementation-defined. Lcore
+ *     variable instances may be accessed reliably by any thread.
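+ *
+ * To make the declaration-level difference concrete, below is a
+ * sketch of the two approaches; the \c my_state type and variable
+ * names are hypothetical:
+ *
+ * \code{.c}
+ * struct my_state {
+ *         long counter;
+ * };
+ *
+ * // TLS: one instance per thread, created and destroyed with it
+ * static RTE_DEFINE_PER_LCORE(struct my_state, tls_state);
+ *
+ * // Lcore variable: one instance per lcore id, available to all
+ * // threads once the RTE_INIT constructors have run
+ * static RTE_LCORE_VAR_HANDLE(struct my_state, lvar_state);
+ * RTE_LCORE_VAR_INIT(lvar_state);
+ * \endcode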
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stddef.h>
+#include <stdalign.h>
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_lcore.h>
+
+/**
+ * Given the lcore variable type, produces the type of the lcore
+ * variable handle.
+ */
+#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
+	type *
+
+/**
+ * Define an lcore variable handle.
+ *
+ * This macro defines a variable which is used as a handle to access
+ * the various per-lcore id instances of a per-lcore id variable.
+ *
+ * The aim with this macro is to make clear at the point of
+ * declaration that this is an lcore variable handle, rather than a
+ * regular pointer.
+ *
+ * Add @b static as a prefix in case the lcore variable is only to be
+ * accessed from a particular translation unit.
+ */
+#define RTE_LCORE_VAR_HANDLE(type, name)	\
+	RTE_LCORE_VAR_HANDLE_TYPE(type) name
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align)	\
+	name = rte_lcore_var_alloc(size, align)
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle,
+ * with values aligned for any type of object.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)	\
+	name = rte_lcore_var_alloc(size, 0)
+
+/**
+ * Allocate space for an lcore variable of the size and alignment requirements
+ * suggested by the handle pointer type, and initialize its handle.
+ */
+#define RTE_LCORE_VAR_ALLOC(name)					\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, sizeof(*(name)), alignof(*(name)))
+
+/**
+ * Allocate an explicitly-sized, explicitly-aligned lcore variable by
+ * means of a \ref RTE_INIT constructor.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
+	}
+
+/**
+ * Allocate an explicitly-sized lcore variable by means of a \ref
+ * RTE_INIT constructor.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
+	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
+
+/**
+ * Allocate an lcore variable by means of a \ref RTE_INIT constructor.
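+ *
+ * For example (a sketch; \c my_state is a hypothetical handle):
+ *
+ * \code{.c}
+ * struct my_module_state {
+ *         int a;
+ * };
+ *
+ * static RTE_LCORE_VAR_HANDLE(struct my_module_state, my_state);
+ *
+ * RTE_LCORE_VAR_INIT(my_state);
+ * \endcode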
+ */
+#define RTE_LCORE_VAR_INIT(name)					\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC(name);				\
+	}
+
+#define __RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)		\
+	((void *)(&rte_lcore_var[(lcore_id)][(uintptr_t)(name)]))
+
+/**
+ * Get pointer to lcore variable instance with the specified lcore id.
+ */
+#define RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)				\
+	((typeof(name))__RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
+
+/**
+ * Get the value of an lcore variable instance of the specified lcore id.
+ */
+#define RTE_LCORE_VAR_LCORE_GET(lcore_id, name)		\
+	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)))
+
+/**
+ * Set the value of an lcore variable instance of the specified lcore id.
+ */
+#define RTE_LCORE_VAR_LCORE_SET(lcore_id, name, value)		\
+	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)) = (value))
+
+/**
+ * Get pointer to lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_PTR(name) RTE_LCORE_VAR_LCORE_PTR(rte_lcore_id(), name)
+
+/**
+ * Get value of lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_GET(name) RTE_LCORE_VAR_LCORE_GET(rte_lcore_id(), name)
+
+/**
+ * Set value of lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_SET(name, value) \
+	RTE_LCORE_VAR_LCORE_SET(rte_lcore_id(), name, value)
+
+/**
+ * Iterate over each lcore id's value for an lcore variable.
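+ *
+ * For example (a sketch; \c my_state is a hypothetical handle,
+ * assumed to already have been allocated):
+ *
+ * \code{.c}
+ * struct my_module_state *state;
+ *
+ * RTE_LCORE_VAR_FOREACH_VALUE(state, my_state)
+ *         memset(state, 0, sizeof(*state));
+ * \endcode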
+ */
+#define RTE_LCORE_VAR_FOREACH_VALUE(var, name)				\
+	for (unsigned int lcore_id =					\
+		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);	\
+	     lcore_id < RTE_MAX_LCORE;					\
+	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
+
+extern char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR];
+
+/**
+ * Allocate space in the per-lcore id buffers for an lcore variable.
+ *
+ * The pointer returned is only an opaque identifier of the variable. To
+ * get an actual pointer to a particular instance of the variable use
+ * \ref RTE_LCORE_VAR_PTR or \ref RTE_LCORE_VAR_LCORE_PTR.
+ *
+ * The allocation is always successful, barring a fatal exhaustion of
+ * the per-lcore id buffer space.
+ *
+ * @param size
+ *   The size (in bytes) of the variable's per-lcore id value.
+ * @param align
+ *   If 0, the values will be suitably aligned for any kind of type
+ *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
+ *   on a multiple of *align*, which must be a power of 2 and equal or
+ *   less than \c RTE_CACHE_LINE_SIZE.
+ * @return
+ *   The id of the variable, stored in a void pointer value.
+ */
+__rte_experimental
+void *
+rte_lcore_var_alloc(size_t size, size_t align);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LCORE_VAR_H_ */
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 5e0cd47c82..e90b86115a 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -393,6 +393,10 @@ EXPERIMENTAL {
 	# added in 23.07
 	rte_memzone_max_get;
 	rte_memzone_max_set;
+
+	# added in 24.03
+	rte_lcore_var;
+	rte_lcore_var_alloc;
 };
 
 INTERNAL {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v2 2/5] eal: add lcore variable test suite
  2024-02-19  9:40   ` [RFC v2 0/5] Lcore variables Mattias Rönnblom
  2024-02-19  9:40     ` [RFC v2 1/5] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-02-19  9:40     ` Mattias Rönnblom
  2024-02-19  9:40     ` [RFC v2 3/5] random: keep PRNG state in lcore variable Mattias Rönnblom
                       ` (2 subsequent siblings)
  4 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-19  9:40 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

RFC v2:
 * Improve alignment-related test coverage.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 app/test/meson.build      |   1 +
 app/test/test_lcore_var.c | 408 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 409 insertions(+)
 create mode 100644 app/test/test_lcore_var.c

diff --git a/app/test/meson.build b/app/test/meson.build
index 6389ae83ee..93412cce51 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -101,6 +101,7 @@ source_file_deps = {
     'test_ipsec_sad.c': ['ipsec'],
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
+    'test_lcore_var.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var.c b/app/test/test_lcore_var.c
new file mode 100644
index 0000000000..310d32e10d
--- /dev/null
+++ b/app/test/test_lcore_var.c
@@ -0,0 +1,408 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_launch.h>
+#include <rte_lcore_var.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+#define MIN_LCORES 2
+
+RTE_LCORE_VAR_HANDLE(int, test_int);
+RTE_LCORE_VAR_HANDLE(char, test_char);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized);
+RTE_LCORE_VAR_HANDLE(short, test_short);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized_aligned);
+
+struct int_checker_state {
+	int old_value;
+	int new_value;
+	bool success;
+};
+
+static bool
+rand_bool(void)
+{
+	return rte_rand() & 1;
+}
+
+static void
+rand_blk(void *blk, size_t size)
+{
+	size_t i;
+
+	for (i = 0; i < size; i++)
+		((unsigned char *)blk)[i] = (unsigned char)rte_rand();
+}
+
+static bool
+is_ptr_aligned(const void *ptr, size_t align)
+{
+	return ptr != NULL ? (uintptr_t)ptr % align == 0 : false;
+}
+
+static int
+check_int(void *arg)
+{
+	struct int_checker_state *state = arg;
+
+	int *ptr = RTE_LCORE_VAR_PTR(test_int);
+
+	bool naturally_aligned = is_ptr_aligned(ptr, sizeof(int));
+
+	bool equal;
+
+	if (rand_bool())
+		equal = RTE_LCORE_VAR_GET(test_int) == state->old_value;
+	else
+		equal = *(RTE_LCORE_VAR_PTR(test_int)) == state->old_value;
+
+	state->success = equal && naturally_aligned;
+
+	if (rand_bool())
+		RTE_LCORE_VAR_SET(test_int, state->new_value);
+	else
+		*ptr = state->new_value;
+
+	return 0;
+}
+
+RTE_LCORE_VAR_INIT(test_int);
+RTE_LCORE_VAR_INIT(test_char);
+RTE_LCORE_VAR_INIT_SIZE(test_long_sized, 32);
+RTE_LCORE_VAR_INIT(test_short);
+RTE_LCORE_VAR_INIT_SIZE_ALIGN(test_long_sized_aligned, sizeof(long),
+			      RTE_CACHE_LINE_SIZE);
+
+static int
+test_int_lvar(void)
+{
+	unsigned int lcore_id;
+
+	struct int_checker_state states[RTE_MAX_LCORE] = {};
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+
+		state->old_value = (int)rte_rand();
+		state->new_value = (int)rte_rand();
+
+		RTE_LCORE_VAR_LCORE_SET(lcore_id, test_int, state->old_value);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_int, &states[lcore_id], lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+
+		TEST_ASSERT(state->success, "Unexpected value "
+			    "encountered on lcore %d", lcore_id);
+
+		TEST_ASSERT_EQUAL(state->new_value,
+				  RTE_LCORE_VAR_LCORE_GET(lcore_id, test_int),
+				  "Lcore %d failed to update int", lcore_id);
+	}
+
+	/* take the opportunity to test the foreach macro */
+	int *v;
+	lcore_id = 0;
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_int) {
+		TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
+				  "Unexpected value on lcore %d during "
+				  "iteration", lcore_id);
+		lcore_id++;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_sized_alignment(void)
+{
+	long *v;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized) {
+		TEST_ASSERT(is_ptr_aligned(v, alignof(long)),
+			    "Type-derived alignment failed");
+	}
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized_aligned) {
+		TEST_ASSERT(is_ptr_aligned(v, RTE_CACHE_LINE_SIZE),
+			    "Explicit alignment failed");
+	}
+
+	return TEST_SUCCESS;
+}
+
+/* private, larger, struct */
+#define TEST_STRUCT_DATA_SIZE 1234
+
+struct test_struct {
+	uint8_t data[TEST_STRUCT_DATA_SIZE];
+};
+
+static RTE_LCORE_VAR_HANDLE(char, before_struct);
+static RTE_LCORE_VAR_HANDLE(struct test_struct, test_struct);
+static RTE_LCORE_VAR_HANDLE(char, after_struct);
+
+struct struct_checker_state {
+	struct test_struct old_value;
+	struct test_struct new_value;
+	bool success;
+};
+
+static int
+check_struct(void *arg)
+{
+	struct struct_checker_state *state = arg;
+
+	struct test_struct *lcore_struct = RTE_LCORE_VAR_PTR(test_struct);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_struct, alignof(struct test_struct));
+
+	bool equal = memcmp(lcore_struct->data, state->old_value.data,
+			    TEST_STRUCT_DATA_SIZE) == 0;
+
+	state->success = equal && properly_aligned;
+
+	memcpy(lcore_struct->data, state->new_value.data,
+	       TEST_STRUCT_DATA_SIZE);
+
+	return 0;
+}
+
+static int
+test_struct_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_struct);
+	RTE_LCORE_VAR_ALLOC(test_struct);
+	RTE_LCORE_VAR_ALLOC(after_struct);
+
+	struct struct_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+
+		rand_blk(state->old_value.data, TEST_STRUCT_DATA_SIZE);
+		rand_blk(state->new_value.data, TEST_STRUCT_DATA_SIZE);
+
+		memcpy(RTE_LCORE_VAR_LCORE_PTR(lcore_id, test_struct)->data,
+		       state->old_value.data, TEST_STRUCT_DATA_SIZE);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_struct, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+		struct test_struct *lstruct =
+			RTE_LCORE_VAR_LCORE_PTR(lcore_id, test_struct);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = memcmp(lstruct->data, state->new_value.data,
+				    TEST_STRUCT_DATA_SIZE) == 0;
+
+		TEST_ASSERT(equal, "Lcore %d failed to update struct",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before = RTE_LCORE_VAR_LCORE_GET(lcore_id, before_struct);
+		char after = RTE_LCORE_VAR_LCORE_GET(lcore_id, after_struct);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "struct was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "struct was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define TEST_ARRAY_SIZE 99
+
+typedef uint16_t test_array_t[TEST_ARRAY_SIZE];
+
+static void
+test_array_init_rand(test_array_t a)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		a[i] = (uint16_t)rte_rand();
+}
+
+static bool
+test_array_equal(test_array_t a, test_array_t b)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++) {
+		if (a[i] != b[i])
+			return false;
+	}
+	return true;
+}
+
+static void
+test_array_copy(test_array_t dst, const test_array_t src)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		dst[i] = src[i];
+}
+
+static RTE_LCORE_VAR_HANDLE(char, before_array);
+static RTE_LCORE_VAR_HANDLE(test_array_t, test_array);
+static RTE_LCORE_VAR_HANDLE(char, after_array);
+
+struct array_checker_state {
+	test_array_t old_value;
+	test_array_t new_value;
+	bool success;
+};
+
+static int
+check_array(void *arg)
+{
+	struct array_checker_state *state = arg;
+
+	test_array_t *lcore_array = RTE_LCORE_VAR_PTR(test_array);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_array, alignof(test_array_t));
+
+	bool equal = test_array_equal(*lcore_array, state->old_value);
+
+	state->success = equal && properly_aligned;
+
+	test_array_copy(*lcore_array, state->new_value);
+
+	return 0;
+}
+
+static int
+test_array_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_array);
+	RTE_LCORE_VAR_ALLOC(test_array);
+	RTE_LCORE_VAR_ALLOC(after_array);
+
+	struct array_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+
+		test_array_init_rand(state->new_value);
+		test_array_init_rand(state->old_value);
+
+		test_array_copy(RTE_LCORE_VAR_LCORE_GET(lcore_id, test_array),
+				state->old_value);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_array, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+		test_array_t *larray =
+			RTE_LCORE_VAR_LCORE_PTR(lcore_id, test_array);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = test_array_equal(*larray, state->new_value);
+
+		TEST_ASSERT(equal, "Lcore %d failed to update array",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before = RTE_LCORE_VAR_LCORE_GET(lcore_id, before_array);
+		char after = RTE_LCORE_VAR_LCORE_GET(lcore_id, after_array);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "array was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "array was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define MANY_LVARS (RTE_MAX_LCORE_VAR / 2)
+
+static int
+test_many_lvars(void)
+{
+	void **handlers = malloc(sizeof(void *) * MANY_LVARS);
+	int i;
+
+	TEST_ASSERT(handlers != NULL, "Unable to allocate memory");
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+		void *handle = rte_lcore_var_alloc(1, 1);
+
+		/* write a known value to every lcore id's instance */
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint8_t *b = __RTE_LCORE_VAR_LCORE_PTR(lcore_id,
+							       handle);
+
+			*b = (uint8_t)i;
+		}
+
+		handlers[i] = handle;
+	}
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		RTE_LCORE_FOREACH_WORKER(lcore_id) {
+			uint8_t *b = __RTE_LCORE_VAR_LCORE_PTR(lcore_id,
+							       handlers[i]);
+			TEST_ASSERT_EQUAL((uint8_t)i, *b,
+					  "Unexpected lcore variable value.");
+		}
+	}
+
+	free(handlers);
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_int_lvar),
+		TEST_CASE(test_sized_alignment),
+		TEST_CASE(test_struct_lvar),
+		TEST_CASE(test_array_lvar),
+		TEST_CASE(test_many_lvars),
+		TEST_CASES_END()
+	},
+};
+
+static int
+test_lcore_var(void)
+{
+	if (rte_lcore_count() < MIN_LCORES) {
+		printf("Not enough cores for lcore_var_autotest; expecting at "
+		       "least %d.\n", MIN_LCORES);
+		return TEST_SKIPPED;
+	}
+
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
+REGISTER_FAST_TEST(lcore_var_autotest, true, false, test_lcore_var);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v2 3/5] random: keep PRNG state in lcore variable
  2024-02-19  9:40   ` [RFC v2 0/5] Lcore variables Mattias Rönnblom
  2024-02-19  9:40     ` [RFC v2 1/5] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-02-19  9:40     ` [RFC v2 2/5] eal: add lcore variable test suite Mattias Rönnblom
@ 2024-02-19  9:40     ` Mattias Rönnblom
  2024-02-19 11:22       ` Morten Brørup
  2024-02-19  9:40     ` [RFC v2 4/5] power: keep per-lcore " Mattias Rönnblom
  2024-02-19  9:40     ` [RFC v2 5/5] service: " Mattias Rönnblom
  4 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-19  9:40 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Replace keeping PRNG state in a RTE_MAX_LCORE-sized static array of
cache-aligned and RTE_CACHE_GUARDed struct instances with keeping the
same state in a more cache-friendly lcore variable.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 lib/eal/common/rte_random.c | 30 ++++++++++++++++++------------
 1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index 7709b8f2c6..af9fffd81b 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -11,6 +11,7 @@
 #include <rte_branch_prediction.h>
 #include <rte_cycles.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_random.h>
 
 struct rte_rand_state {
@@ -19,14 +20,12 @@ struct rte_rand_state {
 	uint64_t z3;
 	uint64_t z4;
 	uint64_t z5;
-	RTE_CACHE_GUARD;
-} __rte_cache_aligned;
+};
 
-/* One instance each for every lcore id-equipped thread, and one
- * additional instance to be shared by all others threads (i.e., all
- * unregistered non-EAL threads).
- */
-static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];
+RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);
+
+/* instance to be shared by all unregistered non-EAL threads */
+static struct rte_rand_state unregistered_rand_state __rte_cache_aligned;
 
 static uint32_t
 __rte_rand_lcg32(uint32_t *seed)
@@ -85,8 +84,14 @@ rte_srand(uint64_t seed)
 	unsigned int lcore_id;
 
 	/* add lcore_id to seed to avoid having the same sequence */
-	for (lcore_id = 0; lcore_id < RTE_DIM(rand_states); lcore_id++)
-		__rte_srand_lfsr258(seed + lcore_id, &rand_states[lcore_id]);
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		struct rte_rand_state *lcore_state =
+			RTE_LCORE_VAR_LCORE_PTR(lcore_id, rand_state);
+
+		__rte_srand_lfsr258(seed + lcore_id, lcore_state);
+	}
+
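+	/* lcore_id equals RTE_MAX_LCORE at this point, yielding one
+	 * more unique seed value for the shared instance
+	 */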
+	__rte_srand_lfsr258(seed + lcore_id, &unregistered_rand_state);
 }
 
 static __rte_always_inline uint64_t
@@ -124,11 +129,10 @@ struct rte_rand_state *__rte_rand_get_state(void)
 
 	idx = rte_lcore_id();
 
-	/* last instance reserved for unregistered non-EAL threads */
 	if (unlikely(idx == LCORE_ID_ANY))
-		idx = RTE_MAX_LCORE;
+		return &unregistered_rand_state;
 
-	return &rand_states[idx];
+	return RTE_LCORE_VAR_PTR(rand_state);
 }
 
 uint64_t
@@ -228,6 +232,8 @@ RTE_INIT(rte_rand_init)
 {
 	uint64_t seed;
 
+	RTE_LCORE_VAR_ALLOC(rand_state);
+
 	seed = __rte_random_initial_seed();
 
 	rte_srand(seed);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v2 4/5] power: keep per-lcore state in lcore variable
  2024-02-19  9:40   ` [RFC v2 0/5] Lcore variables Mattias Rönnblom
                       ` (2 preceding siblings ...)
  2024-02-19  9:40     ` [RFC v2 3/5] random: keep PRNG state in lcore variable Mattias Rönnblom
@ 2024-02-19  9:40     ` Mattias Rönnblom
  2024-02-19  9:40     ` [RFC v2 5/5] service: " Mattias Rönnblom
  4 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-19  9:40 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 lib/power/rte_power_pmd_mgmt.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index 591fc69f36..bb20e564de 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -5,6 +5,7 @@
 #include <stdlib.h>
 
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_cycles.h>
 #include <rte_cpuflags.h>
 #include <rte_malloc.h>
@@ -68,8 +69,8 @@ struct pmd_core_cfg {
 	/**< Number of queues ready to enter power optimized state */
 	uint64_t sleep_target;
 	/**< Prevent a queue from triggering sleep multiple times */
-} __rte_cache_aligned;
-static struct pmd_core_cfg lcore_cfgs[RTE_MAX_LCORE];
+};
+static RTE_LCORE_VAR_HANDLE(struct pmd_core_cfg, lcore_cfgs);
 
 static inline bool
 queue_equal(const union queue *l, const union queue *r)
@@ -252,12 +253,11 @@ clb_multiwait(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_PTR(lcore_cfgs);
 
 	/* early exit */
 	if (likely(!empty))
@@ -317,13 +317,12 @@ clb_pause(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 	uint32_t pause_duration = rte_power_pmd_mgmt_get_pause_duration();
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_PTR(lcore_cfgs);
 
 	if (likely(!empty))
 		/* early exit */
@@ -358,9 +357,8 @@ clb_scale_freq(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	const bool empty = nb_rx == 0;
-	struct pmd_core_cfg *lcore_conf = &lcore_cfgs[lcore];
+	struct pmd_core_cfg *lcore_conf = RTE_LCORE_VAR_PTR(lcore_cfgs);
 	struct queue_list_entry *queue_conf = arg;
 
 	if (likely(!empty)) {
@@ -518,7 +516,7 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 		goto end;
 	}
 
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_PTR(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -619,7 +617,7 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 	}
 
 	/* no need to check queue id as wrong queue id would not be enabled */
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_PTR(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -772,10 +770,13 @@ RTE_INIT(rte_power_ethdev_pmgmt_init) {
 	size_t i;
 	int j;
 
+	RTE_LCORE_VAR_ALLOC(lcore_cfgs);
+
 	/* initialize all tailqs */
-	for (i = 0; i < RTE_DIM(lcore_cfgs); i++) {
-		struct pmd_core_cfg *cfg = &lcore_cfgs[i];
-		TAILQ_INIT(&cfg->head);
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		struct pmd_core_cfg *lcore_cfg =
+			RTE_LCORE_VAR_LCORE_PTR(i, lcore_cfgs);
+		TAILQ_INIT(&lcore_cfg->head);
 	}
 
 	/* initialize config defaults */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v2 5/5] service: keep per-lcore state in lcore variable
  2024-02-19  9:40   ` [RFC v2 0/5] Lcore variables Mattias Rönnblom
                       ` (3 preceding siblings ...)
  2024-02-19  9:40     ` [RFC v2 4/5] power: keep per-lcore " Mattias Rönnblom
@ 2024-02-19  9:40     ` Mattias Rönnblom
  4 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-19  9:40 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 lib/eal/common/rte_service.c | 119 ++++++++++++++++++++---------------
 1 file changed, 68 insertions(+), 51 deletions(-)

diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index d959c91459..de205c5da5 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -11,6 +11,7 @@
 
 #include <eal_trace_internal.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
@@ -75,7 +76,7 @@ struct core_state {
 
 static uint32_t rte_service_count;
 static struct rte_service_spec_impl *rte_services;
-static struct core_state *lcore_states;
+static RTE_LCORE_VAR_HANDLE(struct core_state, lcore_states);
 static uint32_t rte_service_library_initialized;
 
 int32_t
@@ -101,11 +102,12 @@ rte_service_init(void)
 		goto fail_mem;
 	}
 
-	lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
-			sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
-	if (!lcore_states) {
-		EAL_LOG(ERR, "error allocating core states array");
-		goto fail_mem;
+	if (lcore_states == NULL) {
+		RTE_LCORE_VAR_ALLOC(lcore_states);
+	} else {
+		struct core_state *cs;
+		RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+			memset(cs, 0, sizeof(struct core_state));
 	}
 
 	int i;
@@ -122,7 +124,6 @@ rte_service_init(void)
 	return 0;
 fail_mem:
 	rte_free(rte_services);
-	rte_free(lcore_states);
 	return -ENOMEM;
 }
 
@@ -136,7 +137,6 @@ rte_service_finalize(void)
 	rte_eal_mp_wait_lcore();
 
 	rte_free(rte_services);
-	rte_free(lcore_states);
 
 	rte_service_library_initialized = 0;
 }
@@ -286,7 +286,6 @@ rte_service_component_register(const struct rte_service_spec *spec,
 int32_t
 rte_service_component_unregister(uint32_t id)
 {
-	uint32_t i;
 	struct rte_service_spec_impl *s;
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
 
@@ -294,9 +293,10 @@ rte_service_component_unregister(uint32_t id)
 
 	s->internal_flags &= ~(SERVICE_F_REGISTERED);
 
+	struct core_state *cs;
 	/* clear the run-bit in all cores */
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		lcore_states[i].service_mask &= ~(UINT64_C(1) << id);
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		cs->service_mask &= ~(UINT64_C(1) << id);
 
 	memset(&rte_services[id], 0, sizeof(struct rte_service_spec_impl));
 
@@ -454,7 +454,10 @@ rte_service_may_be_active(uint32_t id)
 		return -EINVAL;
 
 	for (i = 0; i < lcore_count; i++) {
-		if (lcore_states[ids[i]].service_active_on_lcore[id])
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(ids[i], lcore_states);
+
+		if (cs->service_active_on_lcore[id])
 			return 1;
 	}
 
@@ -464,7 +467,7 @@ rte_service_may_be_active(uint32_t id)
 int32_t
 rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)
 {
-	struct core_state *cs = &lcore_states[rte_lcore_id()];
+	struct core_state *cs =	RTE_LCORE_VAR_PTR(lcore_states);
 	struct rte_service_spec_impl *s;
 
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
@@ -486,8 +489,7 @@ service_runner_func(void *arg)
 {
 	RTE_SET_USED(arg);
 	uint8_t i;
-	const int lcore = rte_lcore_id();
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_PTR(lcore_states);
 
 	rte_atomic_store_explicit(&cs->thread_active, 1, rte_memory_order_seq_cst);
 
@@ -533,13 +535,16 @@ service_runner_func(void *arg)
 int32_t
 rte_service_lcore_may_be_active(uint32_t lcore)
 {
-	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
+	struct core_state *cs;
+
+	if (lcore >= RTE_MAX_LCORE)
+		return -EINVAL;
+
+	cs = RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+
+	if (!cs->is_service_core)
 		return -EINVAL;
 
 	/* Load thread_active using ACQUIRE to avoid instructions dependent on
 	 * the result being re-ordered before this load completes.
 	 */
-	return rte_atomic_load_explicit(&lcore_states[lcore].thread_active,
+	return rte_atomic_load_explicit(&cs->thread_active,
 			       rte_memory_order_acquire);
 }
 
@@ -547,9 +552,11 @@ int32_t
 rte_service_lcore_count(void)
 {
 	int32_t count = 0;
-	uint32_t i;
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		count += lcore_states[i].is_service_core;
+
+	struct core_state *cs;
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		count += cs->is_service_core;
+
 	return count;
 }
 
@@ -566,7 +573,8 @@ rte_service_lcore_list(uint32_t array[], uint32_t n)
 	uint32_t i;
 	uint32_t idx = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		struct core_state *cs = &lcore_states[i];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(i, lcore_states);
 		if (cs->is_service_core) {
 			array[idx] = i;
 			idx++;
@@ -582,7 +590,7 @@ rte_service_lcore_count_services(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -634,30 +642,31 @@ rte_service_start_with_defaults(void)
 static int32_t
 service_update(uint32_t sid, uint32_t lcore, uint32_t *set, uint32_t *enabled)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+
 	/* validate ID, or return error value */
 	if (!service_valid(sid) || lcore >= RTE_MAX_LCORE ||
-			!lcore_states[lcore].is_service_core)
+			!cs->is_service_core)
 		return -EINVAL;
 
 	uint64_t sid_mask = UINT64_C(1) << sid;
 	if (set) {
-		uint64_t lcore_mapped = lcore_states[lcore].service_mask &
-			sid_mask;
+		uint64_t lcore_mapped = cs->service_mask & sid_mask;
 
 		if (*set && !lcore_mapped) {
-			lcore_states[lcore].service_mask |= sid_mask;
+			cs->service_mask |= sid_mask;
 			rte_atomic_fetch_add_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 		if (!*set && lcore_mapped) {
-			lcore_states[lcore].service_mask &= ~(sid_mask);
+			cs->service_mask &= ~(sid_mask);
 			rte_atomic_fetch_sub_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 	}
 
 	if (enabled)
-		*enabled = !!(lcore_states[lcore].service_mask & (sid_mask));
+		*enabled = !!(cs->service_mask & (sid_mask));
 
 	return 0;
 }
@@ -685,13 +694,14 @@ set_lcore_state(uint32_t lcore, int32_t state)
 {
 	/* mark core state in hugepage backed config */
 	struct rte_config *cfg = rte_eal_get_configuration();
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 	cfg->lcore_role[lcore] = state;
 
 	/* mark state in process local lcore_config */
 	lcore_config[lcore].core_role = state;
 
 	/* update per-lcore optimized state tracking */
-	lcore_states[lcore].is_service_core = (state == ROLE_SERVICE);
+	cs->is_service_core = (state == ROLE_SERVICE);
 
 	rte_eal_trace_service_lcore_state_change(lcore, state);
 }
@@ -702,14 +712,16 @@ rte_service_lcore_reset_all(void)
 	/* loop over cores, reset all to mask 0 */
 	uint32_t i;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		if (lcore_states[i].is_service_core) {
-			lcore_states[i].service_mask = 0;
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(i, lcore_states);
+		if (cs->is_service_core) {
+			cs->service_mask = 0;
 			set_lcore_state(i, ROLE_RTE);
 			/* runstate act as guard variable Use
 			 * store-release memory order here to synchronize
 			 * with load-acquire in runstate read functions.
 			 */
-			rte_atomic_store_explicit(&lcore_states[i].runstate,
+			rte_atomic_store_explicit(&cs->runstate,
 				RUNSTATE_STOPPED, rte_memory_order_release);
 		}
 	}
@@ -725,17 +737,19 @@ rte_service_lcore_add(uint32_t lcore)
 {
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
-	if (lcore_states[lcore].is_service_core)
+
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+	if (cs->is_service_core)
 		return -EALREADY;
 
 	set_lcore_state(lcore, ROLE_SERVICE);
 
 	/* ensure that after adding a core the mask and state are defaults */
-	lcore_states[lcore].service_mask = 0;
+	cs->service_mask = 0;
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	return rte_eal_wait_lcore(lcore);
@@ -747,7 +761,7 @@ rte_service_lcore_del(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -771,7 +785,7 @@ rte_service_lcore_start(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -801,6 +815,8 @@ rte_service_lcore_start(uint32_t lcore)
 int32_t
 rte_service_lcore_stop(uint32_t lcore)
 {
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
+
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
@@ -808,12 +824,11 @@ rte_service_lcore_stop(uint32_t lcore)
 	 * memory order here to synchronize with store-release
 	 * in runstate update functions.
 	 */
-	if (rte_atomic_load_explicit(&lcore_states[lcore].runstate, rte_memory_order_acquire) ==
+	if (rte_atomic_load_explicit(&cs->runstate, rte_memory_order_acquire) ==
 			RUNSTATE_STOPPED)
 		return -EALREADY;
 
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
 	uint64_t service_mask = cs->service_mask;
 
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
@@ -834,7 +849,7 @@ rte_service_lcore_stop(uint32_t lcore)
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	rte_eal_trace_service_lcore_stop(lcore);
@@ -845,7 +860,7 @@ rte_service_lcore_stop(uint32_t lcore)
 static uint64_t
 lcore_attr_get_loops(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->loops, rte_memory_order_relaxed);
 }
@@ -853,7 +868,7 @@ lcore_attr_get_loops(unsigned int lcore)
 static uint64_t
 lcore_attr_get_cycles(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->cycles, rte_memory_order_relaxed);
 }
@@ -861,7 +876,7 @@ lcore_attr_get_cycles(unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].calls,
 		rte_memory_order_relaxed);
@@ -870,7 +885,7 @@ lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_cycles(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].cycles,
 		rte_memory_order_relaxed);
@@ -886,7 +901,10 @@ attr_get(uint32_t id, lcore_attr_get_fun lcore_attr_get)
 	uint64_t sum = 0;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		if (lcore_states[lcore].is_service_core)
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+
+		if (cs->is_service_core)
 			sum += lcore_attr_get(id, lcore);
 	}
 
@@ -930,12 +948,11 @@ int32_t
 rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 			   uint64_t *attr_value)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE || !attr_value)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -960,7 +977,8 @@ rte_service_attr_reset_all(uint32_t id)
 		return -EINVAL;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		struct core_state *cs = &lcore_states[lcore];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 		cs->service_stats[id] = (struct service_stats) {};
 	}
@@ -971,12 +989,11 @@ rte_service_attr_reset_all(uint32_t id)
 int32_t
 rte_service_lcore_attr_reset_all(uint32_t lcore)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -1011,7 +1028,7 @@ static void
 service_dump_calls_per_lcore(FILE *f, uint32_t lcore)
 {
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	fprintf(f, "%02d\t", lcore);
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [RFC 1/5] eal: add static per-lcore memory allocation facility
  2024-02-19  7:49         ` Mattias Rönnblom
@ 2024-02-19 11:10           ` Morten Brørup
  2024-02-19 14:31             ` Mattias Rönnblom
  0 siblings, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-02-19 11:10 UTC (permalink / raw)
  To: Mattias Rönnblom, Mattias Rönnblom, dev; +Cc: Stephen Hemminger

> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> Sent: Monday, 19 February 2024 08.49
> 
> On 2024-02-09 14:04, Morten Brørup wrote:
> >> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> >> Sent: Friday, 9 February 2024 12.46
> >>
> >> On 2024-02-09 09:25, Morten Brørup wrote:
> >>>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> >>>> Sent: Thursday, 8 February 2024 19.17
> >>>>
> >>>> Introduce DPDK per-lcore id variables, or lcore variables for
> short.
> >>>>
> >>>> An lcore variable has one value for every current and future lcore
> >>>> id-equipped thread.
> >>>>
> >>>> The primary <rte_lcore_var.h> use case is for statically
> allocating
> >>>> small chunks of often-used data, which is related logically, but
> >> where
> >>>> there are performance benefits to reap from having updates being
> >> local
> >>>> to an lcore.
> >>>>
> >>>> Lcore variables are similar to thread-local storage (TLS, e.g.,
> C11
> >>>> _Thread_local), but decoupling the values' life time with that of
> >> the
> >>>> threads.
> >>>>
> >>>> Lcore variables are also similar in terms of functionality
> provided
> >> by
> >>>> FreeBSD kernel's DPCPU_*() family of macros and the associated
> >>>> build-time machinery. DPCPU uses linker scripts, which effectively
> >>>> prevents the reuse of its, otherwise seemingly viable, approach.
> >>>>
> >>>> The currently-prevailing way to solve the same problem as lcore
> >>>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-
> >> sized
> >>>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> >>>> lcore variables over this approach is that data related to the
> same
> >>>> lcore now is close (spatially, in memory), rather than data used
> by
> >>>> the same module, which in turn avoid excessive use of padding,
> >>>> polluting caches with unused data.
> >>>>
> >>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >>>> ---
> >>>
> >>> This looks very promising. :-)
> >>>
> >>> Here's a bunch of comments, questions and suggestions.
> >>>
> >>>
> >>> * Question: Performance.
> >>> What is the cost of accessing an lcore variable vs a variable in
> TLS?
> >>> I suppose the relative cost diminishes if the variable is a larger
> >> struct, compared to a simple uint64_t.
> >>>
> >>
> >> In case all the relevant data is available in a cache close to the
> >> core,
> >> both options carry quite low overhead.
> >>
> >> Accessing a lcore variable will always require a TLS lookup, in the
> >> form
> >> of retrieving the lcore_id of the current thread. In that sense,
> there
> >> will likely be a number of extra instructions required to do the
> lcore
> >> variable address lookup (i.e., doing the load from rte_lcore_var
> table
> >> based on the lcore_id you just looked up, and adding the variable's
> >> offset).
> >>
> >> A TLS lookup will incur an extra overhead of less than a clock
> cycle,
> >> compared to accessing a non-TLS static variable, in case static
> linking
> >> is used. For shared objects, TLS is much more expensive (something
> >> often
> >> visible in dynamically linked DPDK app flame graphs, in the form of
> the
> >> __tls_addr symbol). Then you need to add ~3 cc/access. This on a
> micro
> >> benchmark running on a x86_64 Raptor Lake P-core.
> >>
> >> (To visualize the difference between shared object and not, one can
> use
> >> Compiler Explorer and -fPIC versus -fPIE.)
> >>
> >> Things get more complicated if you access the same variable in the
> same
> >> section code, since then it can be left on the stack/in a register
> by
> >> the compiler, especially if LTO is used. In other words, if you do
> >> rte_lcore_id() several times in a row, only the first one will cost
> you
> >> anything. This happens fairly often in DPDK, with rte_lcore_id().
> >>
> >> Finally, if you do something like
> >>
> >> diff --git a/lib/eal/common/rte_random.c
> b/lib/eal/common/rte_random.c
> >> index af9fffd81b..a65c30d27e 100644
> >> --- a/lib/eal/common/rte_random.c
> >> +++ b/lib/eal/common/rte_random.c
> >> @@ -125,14 +125,7 @@ __rte_rand_lfsr258(struct rte_rand_state
> *state)
> >>    static __rte_always_inline
> >>    struct rte_rand_state *__rte_rand_get_state(void)
> >>    {
> >> -       unsigned int idx;
> >> -
> >> -       idx = rte_lcore_id();
> >> -
> >> -       if (unlikely(idx == LCORE_ID_ANY))
> >> -               return &unregistered_rand_state;
> >> -
> >> -       return RTE_LCORE_VAR_PTR(rand_state);
> >> +       return &unregistered_rand_state;
> >>    }
> >>
> >>    uint64_t
> >>
> >> ...and re-run the rand_perf_autotest, at least I see no difference
> at
> >> all (in a statically linked build). Both results in rte_rand() using
> >> ~11
> >> cc/call. What that suggests is that TLS overhead is very small, and
> >> that
> >> any extra instructions required by lcore variables doesn't add much,
> if
> >> anything at all, at least in this particular case.
> >
> > Excellent. Thank you for a thorough and detailed answer, Mattias.
> >
> >>
> >>> Some of my suggestions below might also affect performance.
> >>>
> >>>
> >>> * Advantage: Provides direct access to worker thread variables.
> >>> With the current alternative (thread-local storage), the main
> thread
> >> cannot access the TLS variables of the worker threads,
> >>> unless worker threads publish global access pointers.
> >>> Lcore variables of any lcore thread can be directly accessed by
> any
> >> thread, which simplifies code.
> >>>
> >>>
> >>> * Advantage: Roadmap towards hugemem.
> >>> It would be nice if the lcore variable memory was allocated in
> >> hugemem, to reduce TLB misses.
> >>> The current alternative (thread-local storage) is also not using
> >> hugemem, so not a degradation.
> >>>
> >>
> >> I agree, but the thing is it's hard to figure out how much memory is
> >> required for these kind of variables, given how DPDK is built and
> >> linked. In an OS kernel, you can just take all the symbols, put them
> in
> >> a special section, and size that section. Such a thing can't easily
> be
> >> done with DPDK, since shared object builds are supported, plus that
> >> this
> >> facility should be available not only to DPDK modules, but also the
> >> application, so relying on linker scripts isn't really feasible (not
> >> probably not even feasible for DPDK itself).
> >>
> >> In that scenario, you want to size up the per-lcore buffer to be so
> >> large, you don't have to worry about overruns. That will waste
> memory.
> >> If you use huge page memory, paging can't help you to avoid
> >> pre-allocating actual physical memory.
> >
> > Good point.
> > I had noticed that RTE_MAX_LCORE_VAR was 1 MB (per RTE_MAX_LCORE),
> but I hadn't considered how paging helps us use less physical memory
> than that.
> >
> >>
> >> That said, even large (by static per-lcore data standards) buffers
> are
> >> potentially small enough not to grow the amount of memory used by a
> >> DPDK
> >> process too much. You need to provision for RTE_MAX_LCORE of them
> >> though.
> >>
> >> The value of lcore variables should be small, and thus incur few TLB
> >> misses, so you may not gain much from huge pages. In my world, it's
> >> more
> >> about "fitting often-used per-lcore data into L1 or L2 CPU caches",
> >> rather than the easier "fitting often-used per-lcore data into a
> >> working
> >> set size reasonably expected to be covered by hardware TLB/caches".
> >
> > Yes, I suppose that lcore variables are intended to be small, and
> large per-lcore structures should keep following the current design
> patterns for allocation and access.
> >
> 
> It seems to me that support for per-lcore heaps should be the solution
> for supporting use cases requiring many, larger and/or dynamic objects
> on a per-lcore basis.
> 
> Ideally, you would design both that mechanism and lcore variables
> together, but then if you couple enough amount of improvements together
> you will never get anywhere. An instance of where perfect is the enemy
> of good, perhaps.

So true. :-)

> 
> > Perhaps this guideline is worth mentioning in the documentation.
> >
> 
> What is missing, more specifically? The size limitation and the static
> nature of lcore variables is described, and what current design
> patterns
> they expected to (partly) replace is also covered.

Your documentation is fine, and nothing specific is missing here.
I was thinking out loud that the high level DPDK documentation should describe common design patterns.

> 
> >>
> >>> Lcore variables are available very early at startup, so I guess the
> >> RTE memory allocator is not yet available.
> >>> Hugemem could be allocated using O/S allocation, so there is a
> >> possible road towards using hugemem.
> >>>
> >>
> >> With the current design, that true. I'm not sure it's a strict
> >> requirement though, but it does makes things simpler.
> >>
> >>> Either way, using hugemem would require one more indirection (the
> >> pointer to the allocated hugemem).
> >>> I don't know which has better performance, using hugemem or
> avoiding
> >> the additional pointer dereferencing.
> >>>
> >>>
> >>> * Suggestion: Consider adding an entry for unregistered non-EAL
> >> threads.
> >>> Please consider making room for one more entry, shared by all
> >> unregistered non-EAL threads, i.e.
> >>> making the array size RTE_MAX_LCORE + 1 and indexing by
> >> (rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE).
> >>>
> >>> It would be convenient for the use cases where a variable shared by
> >> the unregistered non-EAL threads don't need special treatment.
> >>>
> >>
> >> I thought about this, but it would require a conditional in the
> lookup
> >> macro, as you show. More importantly, it would make the whole
> >> <rte_lcore_var.h> thing less elegant and harder to understand. It's
> bad
> >> enough that "per-lcore" is actually "per-lcore id" (or the
> equivalent
> >> "per-EAL thread and unregistered EAL-thread"). Adding a "btw it's
> <what
> >> I said before> + 1" is not an improvement.
> >
> > We could promote "one more entry for unregistered non-EAL threads"
> design pattern (for relevant use cases only!) by extending EAL with one
> more TLS variable, maintained like _thread_id, but set to RTE_MAX_LCORE
> when _thread_id is set to -1:
> >
> > +++ eal_common_thread.c:
> >    RTE_DEFINE_PER_LCORE(int, _thread_id) = -1;
> > + RTE_DEFINE_PER_LCORE(int, _thread_idx) = RTE_MAX_LCORE;

Ups... wrong reference! I meant to refer to _lcore_id, not _thread_id. Correction:

We could promote "one more entry for unregistered non-EAL threads" design pattern (for relevant use cases only!) by extending EAL with one more TLS variable, maintained like _lcore_id, but set to RTE_MAX_LCORE when _lcore_id is set to LCORE_ID_ANY:

+++ eal_common_thread.c:
  RTE_DEFINE_PER_LCORE(unsigned int, _lcore_id) = LCORE_ID_ANY;
+ RTE_DEFINE_PER_LCORE(unsigned int, _lcore_idx) = RTE_MAX_LCORE;

> >
> > and
> >
> > +++ rte_lcore.h:
> > static inline unsigned
> > rte_lcore_id(void)
> > {
> > 	return RTE_PER_LCORE(_lcore_id);
> > }
> > + static inline unsigned
> > + rte_lcore_idx(void)
> > + {
> > + 	return RTE_PER_LCORE(_lcore_idx);
> > + }
> >
> > That would eliminate the (rte_lcore_id() < RTE_MAX_LCORE ?
> rte_lcore_id() : RTE_MAX_LCORE) conditional, also where currently used.
> >
> 
> Wouldn't that effectively give a shared lcore id to all unregistered
> threads?

Yes, just like rte_lcore_id() is LCORE_ID_ANY (i.e. UINT32_MAX) for all unregistered threads; but it will be usable for array indexing, behaving as a shadow variable of RTE_PER_LCORE(_lcore_id) that optimizes away the "rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE" conditional when indexing.
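
To illustrate with a sketch (the states array and the my_state type are made up):

	static struct my_state states[RTE_MAX_LCORE + 1];

	/* today, with the extra entry: */
	struct my_state *s1 = &states[rte_lcore_id() < RTE_MAX_LCORE ?
			rte_lcore_id() : RTE_MAX_LCORE];

	/* with the proposed shadow variable: */
	struct my_state *s2 = &states[rte_lcore_idx()];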

> 
> We definitely shouldn't further complicate anything related to the DPDK
> threading model, in my opinion.
> 
> If a module needs one or more variable instances that aren't per lcore,
> use regular static allocation instead. I would favor clarity over
> convenience here, at least until we know better (see below as well).
> 
> >>
> >> But useful? Sure.
> >>
> >> I think you may still need other data for dealing with unregistered
> >> threads, for example a mutex or spin lock to deal with concurrency
> >> issues that arises with shared data.
> >
> > Adding the extra entry is only for the benefit of use cases where
> special handling is not required. It will make the code for those use
> cases much cleaner. I think it is useful.
> >
> 
> It will make it shorter, but not less clean, I would argue.
> 
> > Use cases requiring special handling should still do the special
> handling they do today.
> >
> 
> For DPDK modules using lcore variables and which treat unregistered
> threads as "full citizens", I expect special handling of unregistered
> threads to be the norm. Take rte_random.h as an example. Current API
> does not guarantee MT safety for concurrent calls of unregistered
> threads. It probably should, and it should probably be by means of a
> mutex (not spinlock).
> 
> The reason I'm not running off to make a rte_random.c patch is that's
> it's unclear to me what is the role of unregistered threads in DPDK.
> I'm
> reasonably comfortable with a model where there are many threads that
> basically don't interact with the DPDK APIs (except maybe some very
> narrow exposure, like the preemption-safe ring variant). One example of
> such a design would be big slow control plane which uses multi-
> threading
> and the Linux process scheduler for work scheduling, hosted in the same
> process as a DPDK data plane app.
> 
> What I find more strange is a scenario where there are unregistered
> threads which interact with a wide variety of DPDK APIs, do so
> at high rates/with high performance requirements, and are expected to be
> preemption-safe. So they are basically EAL threads without an lcore id.

Yes, this is happening in the wild.
E.g. our application has a mode where it uses fewer EAL threads, and processes more in non-EAL threads. In other words, the same work is processed either by an EAL thread or by a non-EAL thread, depending on the application's mode.
The extra array entry would be useful for such use cases.

> 
> Support for that latter scenario has also been voiced, in previous
> discussions, from what I recall.
> 
> I think it's hard to answer the question of a "unregistered thread
> spare" for lcore variables without first knowing what the future should
> look like for unregistered threads in DPDK, in terms of being able to
> call into DPDK APIs, preemption-safety guarantees, etc.
> 
> It seems that until you have a clearer picture of how generally to
> treat
> unregistered threads, you are best off with just a per-lcore id
> instance
> of lcore variables.

I get your point. It also reduces the risk of bugs caused by incorrect use of the additional entry.

I am arguing for a different angle: Providing the extra entry will help uncover relevant use cases.

> 
> >>
> >> There may also be cases where you are best off by simply disallowing
> >> unregistered threads from calling into that API.
> >>
> >>> Obviously, this might affect performance.
> >>> If the performance cost is not negligible, the additional entry (and
> >> indexing branch) could be disabled at build time.
> >>>
> >>>
> >>> * Suggestion: Do not fix the alignment at 16 byte.
> >>> Pass an alignment parameter to rte_lcore_var_alloc() and use
> >> alignof() when calling it:
> >>>
> >>> +#include <stdalign.h>
> >>> +
> >>> +#define RTE_LCORE_VAR_ALLOC(name)			\
> >>> +	name = rte_lcore_var_alloc(sizeof(*(name)), alignof(*(name)))
> >>> +
> >>> +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, alignment)
> 	\
> >>> +	name = rte_lcore_var_alloc(size, alignment)
> >>> +
> >>> +#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)	\
> >>> +	name = rte_lcore_var_alloc(size, RTE_LCORE_VAR_ALIGNMENT_DEFAULT)
> >>> +
> >>> + +++ /config/rte_config.h
> >>> +#define RTE_LCORE_VAR_ALIGNMENT_DEFAULT 16
> >>>
> >>>
> >>
> >> That seems like a very good idea. I'll look into it.
> >>
> >>> * Concern: RTE_LCORE_VAR_FOREACH() resembles RTE_LCORE_FOREACH(),
> but
> >> behaves differently.
> >>>
> >>>> +/**
> >>>> + * Iterate over each lcore id's value for a lcore variable.
> >>>> + */
> >>>> +#define RTE_LCORE_VAR_FOREACH(var, name)				\
> >>>> +	for (unsigned int lcore_id =					\
> >>>> +		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);
> 	\
> >>>> +	     lcore_id < RTE_MAX_LCORE;
> 	\
> >>>> +	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id,
> name))
> >>>> +
> >>>
> >>> The macro name RTE_LCORE_VAR_FOREACH() resembles
> >> RTE_LCORE_FOREACH(i), which only iterates on running cores.
> >>> You might want to give it a name that differs more.
> >>>
> >>
> >> True.
> >>
> >> Maybe RTE_LCORE_VAR_FOREACH_VALUE() is better? Still room for
> >> confusion,
> >> for sure.
> >>
> >> Being consistent with <rte_lcore.h> is not so easy, since it's not
> even
> >> consistent with itself. For example, rte_lcore_count() returns the
> >> number of lcores (EAL threads) *plus the number of registered non-
> EAL
> >> threads*, and RTE_LCORE_FOREACH() gives a different count. :)
> >
> > Naming is hard. I don't have a good name, and can only offer
> inspiration...
> >
> > <rte_lcore.h> has RTE_LCORE_FOREACH() and its
> RTE_LCORE_FOREACH_WORKER() variant with _WORKER appended.
> >
> > Perhaps RTE_LCORE_VAR_FOREACH_ALL(), with _ALL appended to indicate a
> variant.
> >
> >>
> >>> If it wasn't for API breakage, I would suggest renaming
> >> RTE_LCORE_FOREACH() instead, but that's not realistic. ;-)
> >>>
> >>> Small detail: "var" is a pointer, so consider renaming it to "ptr"
> >> and adding _PTR to the macro name.
> >>
> >> The "var" name comes from how <sys/queue.h> names things. I think I
> had
> >> it as "ptr" initially. I'll change it back.
> >
> > Thanks.
> >
> >>
> >> Thanks a lot Morten.

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [RFC v2 3/5] random: keep PRNG state in lcore variable
  2024-02-19  9:40     ` [RFC v2 3/5] random: keep PRNG state in lcore variable Mattias Rönnblom
@ 2024-02-19 11:22       ` Morten Brørup
  2024-02-19 14:04         ` Mattias Rönnblom
  0 siblings, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-02-19 11:22 UTC (permalink / raw)
  To: Mattias Rönnblom, dev; +Cc: hofors, Stephen Hemminger

> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> Sent: Monday, 19 February 2024 10.41
> 
> Replace keeping PRNG state in a RTE_MAX_LCORE-sized static array of
> cache-aligned and RTE_CACHE_GUARDed struct instances with keeping the
> same state in a more cache-friendly lcore variable.
> 
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> ---

[...]

> @@ -19,14 +20,12 @@ struct rte_rand_state {
>  	uint64_t z3;
>  	uint64_t z4;
>  	uint64_t z5;
> -	RTE_CACHE_GUARD;
> -} __rte_cache_aligned;
> +};
> 
> -/* One instance each for every lcore id-equipped thread, and one
> - * additional instance to be shared by all others threads (i.e., all
> - * unregistered non-EAL threads).
> - */
> -static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];
> +RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);
> +
> +/* instance to be shared by all unregistered non-EAL threads */
> +static struct rte_rand_state unregistered_rand_state
> __rte_cache_aligned;

The unregistered_rand_state instance is still __rte_cache_aligned; consider also adding an RTE_CACHE_GUARD to it.
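
One way the suggestion could be realized (a sketch only, not part of the posted patch):

/* Sketch: wrap the instance and a guard in an anonymous struct, so
 * trailing cache lines are not shared with other frequently-written
 * data. The state is then accessed as unregistered_rand.state.
 */
static struct {
	struct rte_rand_state state;
	RTE_CACHE_GUARD;
} unregistered_rand __rte_cache_aligned;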


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC v2 3/5] random: keep PRNG state in lcore variable
  2024-02-19 11:22       ` Morten Brørup
@ 2024-02-19 14:04         ` Mattias Rönnblom
  2024-02-19 15:10           ` Morten Brørup
  0 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-19 14:04 UTC (permalink / raw)
  To: Morten Brørup, Mattias Rönnblom, dev; +Cc: Stephen Hemminger

On 2024-02-19 12:22, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
>> Sent: Monday, 19 February 2024 10.41
>>
>> Replace keeping PRNG state in a RTE_MAX_LCORE-sized static array of
>> cache-aligned and RTE_CACHE_GUARDed struct instances with keeping the
>> same state in a more cache-friendly lcore variable.
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> ---
> 
> [...]
> 
>> @@ -19,14 +20,12 @@ struct rte_rand_state {
>>   	uint64_t z3;
>>   	uint64_t z4;
>>   	uint64_t z5;
>> -	RTE_CACHE_GUARD;
>> -} __rte_cache_aligned;
>> +};
>>
>> -/* One instance each for every lcore id-equipped thread, and one
>> - * additional instance to be shared by all others threads (i.e., all
>> - * unregistered non-EAL threads).
>> - */
>> -static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];
>> +RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);
>> +
>> +/* instance to be shared by all unregistered non-EAL threads */
>> +static struct rte_rand_state unregistered_rand_state
>> __rte_cache_aligned;
> 
> The unregistered_rand_state instance is still __rte_cache_aligned; consider also adding an RTE_CACHE_GUARD to it.
> 

It shouldn't be cache-line aligned. I'll remove it. Thanks.
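
I.e., the declaration would then become a plain static variable, along the lines of:

/* Sketch of the agreed fix: no cache-line alignment, no guard. */
static struct rte_rand_state unregistered_rand_state;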

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC 1/5] eal: add static per-lcore memory allocation facility
  2024-02-19 11:10           ` Morten Brørup
@ 2024-02-19 14:31             ` Mattias Rönnblom
  2024-02-19 15:04               ` Morten Brørup
  0 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-19 14:31 UTC (permalink / raw)
  To: Morten Brørup, Mattias Rönnblom, dev; +Cc: Stephen Hemminger

On 2024-02-19 12:10, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
>> Sent: Monday, 19 February 2024 08.49
>>
>> On 2024-02-09 14:04, Morten Brørup wrote:
>>>> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
>>>> Sent: Friday, 9 February 2024 12.46
>>>>
>>>> On 2024-02-09 09:25, Morten Brørup wrote:
>>>>>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
>>>>>> Sent: Thursday, 8 February 2024 19.17
>>>>>>
>>>>>> Introduce DPDK per-lcore id variables, or lcore variables for
>> short.
>>>>>>
>>>>>> An lcore variable has one value for every current and future lcore
>>>>>> id-equipped thread.
>>>>>>
>>>>>> The primary <rte_lcore_var.h> use case is for statically
>> allocating
>>>>>> small chunks of often-used data, which is related logically, but
>>>> where
>>>>>> there are performance benefits to reap from having updates being
>>>> local
>>>>>> to an lcore.
>>>>>>
>>>>>> Lcore variables are similar to thread-local storage (TLS, e.g.,
>> C11
>>>>>> _Thread_local), but decoupling the values' lifetime from that of
>>>> the
>>>>>> threads.
>>>>>>
>>>>>> Lcore variables are also similar in terms of functionality
>> provided
>>>> by
>>>>>> FreeBSD kernel's DPCPU_*() family of macros and the associated
>>>>>> build-time machinery. DPCPU uses linker scripts, which effectively
>>>>>> prevents the reuse of its, otherwise seemingly viable, approach.
>>>>>>
>>>>>> The currently-prevailing way to solve the same problem as lcore
>>>>>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-
>>>> sized
>>>>>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
>>>>>> lcore variables over this approach is that data related to the
>> same
>>>>>> lcore now is close (spatially, in memory), rather than data used
>> by
>>>>>> the same module, which in turn avoid excessive use of padding,
>>>>>> polluting caches with unused data.
>>>>>>
>>>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>>>>> ---
>>>>>
>>>>> This looks very promising. :-)
>>>>>
>>>>> Here's a bunch of comments, questions and suggestions.
>>>>>
>>>>>
>>>>> * Question: Performance.
>>>>> What is the cost of accessing an lcore variable vs a variable in
>> TLS?
>>>>> I suppose the relative cost diminishes if the variable is a larger
>>>> struct, compared to a simple uint64_t.
>>>>>
>>>>
>>>> In case all the relevant data is available in a cache close to the
>>>> core,
>>>> both options carry quite low overhead.
>>>>
>>>> Accessing a lcore variable will always require a TLS lookup, in the
>>>> form
>>>> of retrieving the lcore_id of the current thread. In that sense,
>> there
>>>> will likely be a number of extra instructions required to do the
>> lcore
>>>> variable address lookup (i.e., doing the load from rte_lcore_var
>> table
>>>> based on the lcore_id you just looked up, and adding the variable's
>>>> offset).
>>>>
>>>> A TLS lookup will incur an extra overhead of less than a clock
>> cycle,
>>>> compared to accessing a non-TLS static variable, in case static
>> linking
>>>> is used. For shared objects, TLS is much more expensive (something
>>>> often
>>>> visible in dynamically linked DPDK app flame graphs, in the form of
>> the
>>>> __tls_addr symbol). Then you need to add ~3 cc/access. This on a
>> micro
>>>> benchmark running on a x86_64 Raptor Lake P-core.
>>>>
>>>> (To visualize the difference between shared object and not, one can
>> use
>>>> Compiler Explorer and -fPIC versus -fPIE.)
>>>>
>>>> Things get more complicated if you access the same variable in the
>> same
>>>> section code, since then it can be left on the stack/in a register
>> by
>>>> the compiler, especially if LTO is used. In other words, if you do
>>>> rte_lcore_id() several times in a row, only the first one will cost
>> you
>>>> anything. This happens fairly often in DPDK, with rte_lcore_id().
>>>>
>>>> Finally, if you do something like
>>>>
>>>> diff --git a/lib/eal/common/rte_random.c
>> b/lib/eal/common/rte_random.c
>>>> index af9fffd81b..a65c30d27e 100644
>>>> --- a/lib/eal/common/rte_random.c
>>>> +++ b/lib/eal/common/rte_random.c
>>>> @@ -125,14 +125,7 @@ __rte_rand_lfsr258(struct rte_rand_state
>> *state)
>>>>     static __rte_always_inline
>>>>     struct rte_rand_state *__rte_rand_get_state(void)
>>>>     {
>>>> -       unsigned int idx;
>>>> -
>>>> -       idx = rte_lcore_id();
>>>> -
>>>> -       if (unlikely(idx == LCORE_ID_ANY))
>>>> -               return &unregistered_rand_state;
>>>> -
>>>> -       return RTE_LCORE_VAR_PTR(rand_state);
>>>> +       return &unregistered_rand_state;
>>>>     }
>>>>
>>>>     uint64_t
>>>>
>>>> ...and re-run the rand_perf_autotest, at least I see no difference
>> at
>>>> all (in a statically linked build). Both results in rte_rand() using
>>>> ~11
>>>> cc/call. What that suggests is that TLS overhead is very small, and
>>>> that
>>>> any extra instructions required by lcore variables doesn't add much,
>> if
>>>> anything at all, at least in this particular case.
>>>
>>> Excellent. Thank you for a thorough and detailed answer, Mattias.
>>>
>>>>
>>>>> Some of my suggestions below might also affect performance.
>>>>>
>>>>>
>>>>> * Advantage: Provides direct access to worker thread variables.
>>>>> With the current alternative (thread-local storage), the main
>> thread
>>>> cannot access the TLS variables of the worker threads,
>>>>> unless worker threads publish global access pointers.
>>>>> Lcore variables of any lcore thread can be directly accessed by
>> any
>>>> thread, which simplifies code.
>>>>>
>>>>>
>>>>> * Advantage: Roadmap towards hugemem.
>>>>> It would be nice if the lcore variable memory was allocated in
>>>> hugemem, to reduce TLB misses.
>>>>> The current alternative (thread-local storage) is also not using
>>>> hugemem, so not a degradation.
>>>>>
>>>>
>>>> I agree, but the thing is it's hard to figure out how much memory is
>>>> required for these kind of variables, given how DPDK is built and
>>>> linked. In an OS kernel, you can just take all the symbols, put them
>> in
>>>> a special section, and size that section. Such a thing can't easily
>> be
>>>> done with DPDK, since shared object builds are supported, plus that
>>>> this
>>>> facility should be available not only to DPDK modules, but also the
>>>> application, so relying on linker scripts isn't really feasible
>>>> (probably not even feasible for DPDK itself).
>>>>
>>>> In that scenario, you want to size up the per-lcore buffer to be so
>>>> large, you don't have to worry about overruns. That will waste
>> memory.
>>>> If you use huge page memory, paging can't help you to avoid
>>>> pre-allocating actual physical memory.
>>>
>>> Good point.
>>> I had noticed that RTE_MAX_LCORE_VAR was 1 MB (per RTE_MAX_LCORE),
>> but I hadn't considered how paging helps us use less physical memory
>> than that.
>>>
>>>>
>>>> That said, even large (by static per-lcore data standards) buffers
>> are
>>>> potentially small enough not to grow the amount of memory used by a
>>>> DPDK
>>>> process too much. You need to provision for RTE_MAX_LCORE of them
>>>> though.
>>>>
>>>> The value of lcore variables should be small, and thus incur few TLB
>>>> misses, so you may not gain much from huge pages. In my world, it's
>>>> more
>>>> about "fitting often-used per-lcore data into L1 or L2 CPU caches",
>>>> rather than the easier "fitting often-used per-lcore data into a
>>>> working
>>>> set size reasonably expected to be covered by hardware TLB/caches".
>>>
>>> Yes, I suppose that lcore variables are intended to be small, and
>> large per-lcore structures should keep following the current design
>> patterns for allocation and access.
>>>
>>
>> It seems to me that support for per-lcore heaps should be the solution
>> for supporting use cases requiring many, larger and/or dynamic objects
>> on a per-lcore basis.
>>
>> Ideally, you would design both that mechanism and lcore variables
>> together, but then if you couple enough improvements together
>> you will never get anywhere. An instance of where perfect is the enemy
>> of good, perhaps.
> 
> So true. :-)
> 
>>
>>> Perhaps this guideline is worth mentioning in the documentation.
>>>
>>
>> What is missing, more specifically? The size limitation and the static
>> nature of lcore variables are described, and what current design
>> patterns
>> they are expected to (partly) replace is also covered.
> 
> Your documentation is fine, and nothing specific is missing here.
> I was thinking out loud that the high level DPDK documentation should describe common design patterns.
> 
>>
>>>>
>>>>> Lcore variables are available very early at startup, so I guess the
>>>> RTE memory allocator is not yet available.
>>>>> Hugemem could be allocated using O/S allocation, so there is a
>>>> possible road towards using hugemem.
>>>>>
>>>>
>>>> With the current design, that's true. I'm not sure it's a strict
>>>> requirement though, but it does makes things simpler.
>>>>
>>>>> Either way, using hugemem would require one more indirection (the
>>>> pointer to the allocated hugemem).
>>>>> I don't know which has better performance, using hugemem or
>> avoiding
>>>> the additional pointer dereferencing.
>>>>>
>>>>>
>>>>> * Suggestion: Consider adding an entry for unregistered non-EAL
>>>> threads.
>>>>> Please consider making room for one more entry, shared by all
>>>> unregistered non-EAL threads, i.e.
>>>>> making the array size RTE_MAX_LCORE + 1 and indexing by
>>>> (rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE).
>>>>>
>>>>> It would be convenient for the use cases where a variable shared by
>>>> the unregistered non-EAL threads doesn't need special treatment.
>>>>>
>>>>
>>>> I thought about this, but it would require a conditional in the
>> lookup
>>>> macro, as you show. More importantly, it would make the whole
>>>> <rte_lcore_var.h> thing less elegant and harder to understand. It's
>> bad
>>>> enough that "per-lcore" is actually "per-lcore id" (or the
>> equivalent
>>>> "per-EAL thread and unregistered EAL-thread"). Adding a "btw it's
>> <what
>>>> I said before> + 1" is not an improvement.
>>>
>>> We could promote "one more entry for unregistered non-EAL threads"
>> design pattern (for relevant use cases only!) by extending EAL with one
>> more TLS variable, maintained like _thread_id, but set to RTE_MAX_LCORE
>> when _tread_id is set to -1:
>>>
>>> +++ eal_common_thread.c:
>>>     RTE_DEFINE_PER_LCORE(int, _thread_id) = -1;
>>> + RTE_DEFINE_PER_LCORE(int, _thread_idx) = RTE_MAX_LCORE;
> 
> Oops... wrong reference! I meant to refer to _lcore_id, not _thread_id. Correction:
> 

OK. I subconsciously ignored this mistake, and read it as "_lcore_id".

> We could promote the "one more entry for unregistered non-EAL threads" design pattern (for relevant use cases only!) by extending EAL with one more TLS variable, maintained like _lcore_id, but set to RTE_MAX_LCORE when _lcore_id is set to LCORE_ID_ANY:
> 
> +++ eal_common_thread.c:
>    RTE_DEFINE_PER_LCORE(unsigned int, _lcore_id) = LCORE_ID_ANY;
> + RTE_DEFINE_PER_LCORE(unsigned int, _lcore_idx) = RTE_MAX_LCORE;
> 
>>>
>>> and
>>>
>>> +++ rte_lcore.h:
>>> static inline unsigned
>>> rte_lcore_id(void)
>>> {
>>> 	return RTE_PER_LCORE(_lcore_id);
>>> }
>>> + static inline unsigned
>>> + rte_lcore_idx(void)
>>> + {
>>> + 	return RTE_PER_LCORE(_lcore_idx);
>>> + }
>>>
>>> That would eliminate the (rte_lcore_id() < RTE_MAX_LCORE ?
>> rte_lcore_id() : RTE_MAX_LCORE) conditional, also where currently used.
>>>
>>
>> Wouldn't that effectively give a shared lcore id to all unregistered
>> threads?
> 
> Yes, just like the rte_lcore_id() is LCORE_ID_ANY (i.e. UINT32_MAX) for all unregistered threads; but it will be usable for array indexing, behaving as a shadow variable of RTE_PER_LCORE(_lcore_id) for optimizing away the "rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE" when indexing.
> 
>>
>> We definitely shouldn't further complicate anything related to the DPDK
>> threading model, in my opinion.
>>
>> If a module needs one or more variable instances that aren't per lcore,
>> use regular static allocation instead. I would favor clarity over
>> convenience here, at least until we know better (see below as well).
>>
>>>>
>>>> But useful? Sure.
>>>>
>>>> I think you may still need other data for dealing with unregistered
>>>> threads, for example a mutex or spin lock to deal with concurrency
>>>> issues that arises with shared data.
>>>
>>> Adding the extra entry is only for the benefit of use cases where
>> special handling is not required. It will make the code for those use
>> cases much cleaner. I think it is useful.
>>>
>>
>> It will make it shorter, but not less clean, I would argue.
>>
>>> Use cases requiring special handling should still do the special
>> handling they do today.
>>>
>>
>> For DPDK modules using lcore variables and which treat unregistered
>> threads as "full citizens", I expect special handling of unregistered
>> threads to be the norm. Take rte_random.h as an example. Current API
>> does not guarantee MT safety for concurrent calls of unregistered
>> threads. It probably should, and it should probably be by means of a
>> mutex (not spinlock).
>>
>> The reason I'm not running off to make a rte_random.c patch is that
>> it's unclear to me what is the role of unregistered threads in DPDK.
>> I'm
>> reasonably comfortable with a model where there are many threads that
>> basically don't interact with the DPDK APIs (except maybe some very
>> narrow exposure, like the preemption-safe ring variant). One example of
>> such a design would be a big, slow control plane which uses multi-
>> threading
>> and the Linux process scheduler for work scheduling, hosted in the same
>> process as a DPDK data plane app.
>>
>> What I find more strange is a scenario where there are unregistered
>> threads which interact with a wide variety of DPDK APIs, do so
>> at high rates/with high performance requirements, and are expected to be
>> preemption-safe. So they are basically EAL threads without an lcore id.
> 
> Yes, this is happening in the wild.
> E.g., our application has a mode where it uses fewer EAL threads, and processes more in non-EAL threads. That is, the same work is processed either by an EAL thread or a non-EAL thread, depending on the application's mode.
> The extra array entry would be useful for such use cases.
> 

Is there some particular reason you can't register those non-EAL threads?

>>
>> Support for that latter scenario has also been voiced, in previous
>> discussions, from what I recall.
>>
>> I think it's hard to answer the question of a "unregistered thread
>> spare" for lcore variables without first knowing what the future should
>> look like for unregistered threads in DPDK, in terms of being able to
>> call into DPDK APIs, preemption-safety guarantees, etc.
>>
>> It seems that until you have a clearer picture of how generally to
>> treat
>> unregistered threads, you are best off with just a per-lcore id
>> instance
>> of lcore variables.
> 
> I get your point. It also reduces the risk of bugs caused by incorrect use of the additional entry.
> 
> I am arguing for a different angle: Providing the extra entry will help uncover relevant use cases.
> 

Maybe have two "spares" in case you find two new use cases? :)

No, adding spares doesn't work, unless you rework the API and rename it 
to fit the new purpose of not only providing per-lcore id variables, but 
per-something-else.

>>
>>>>
>>>> There may also be cases where you are best off by simply disallowing
>>>> unregistered threads from calling into that API.
>>>>
>>>>> Obviously, this might affect performance.
>>>>> If the performance cost is not negligible, the additional entry (and
>>>> indexing branch) could be disabled at build time.
>>>>>
>>>>>
>>>>> * Suggestion: Do not fix the alignment at 16 byte.
>>>>> Pass an alignment parameter to rte_lcore_var_alloc() and use
>>>> alignof() when calling it:
>>>>>
>>>>> +#include <stdalign.h>
>>>>> +
>>>>> +#define RTE_LCORE_VAR_ALLOC(name)			\
>>>>> +	name = rte_lcore_var_alloc(sizeof(*(name)), alignof(*(name)))
>>>>> +
>>>>> +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, alignment)
>> 	\
>>>>> +	name = rte_lcore_var_alloc(size, alignment)
>>>>> +
>>>>> +#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)	\
>>>>> +	name = rte_lcore_var_alloc(size, RTE_LCORE_VAR_ALIGNMENT_DEFAULT)
>>>>> +
>>>>> + +++ /config/rte_config.h
>>>>> +#define RTE_LCORE_VAR_ALIGNMENT_DEFAULT 16
>>>>>
>>>>>
>>>>
>>>> That seems like a very good idea. I'll look into it.
>>>>
>>>>> * Concern: RTE_LCORE_VAR_FOREACH() resembles RTE_LCORE_FOREACH(),
>> but
>>>> behaves differently.
>>>>>
>>>>>> +/**
>>>>>> + * Iterate over each lcore id's value for a lcore variable.
>>>>>> + */
>>>>>> +#define RTE_LCORE_VAR_FOREACH(var, name)				\
>>>>>> +	for (unsigned int lcore_id =					\
>>>>>> +		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);
>> 	\
>>>>>> +	     lcore_id < RTE_MAX_LCORE;
>> 	\
>>>>>> +	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id,
>> name))
>>>>>> +
>>>>>
>>>>> The macro name RTE_LCORE_VAR_FOREACH() resembles
>>>> RTE_LCORE_FOREACH(i), which only iterates on running cores.
>>>>> You might want to give it a name that differs more.
>>>>>
>>>>
>>>> True.
>>>>
>>>> Maybe RTE_LCORE_VAR_FOREACH_VALUE() is better? Still room for
>>>> confusion,
>>>> for sure.
>>>>
>>>> Being consistent with <rte_lcore.h> is not so easy, since it's not
>> even
>>>> consistent with itself. For example, rte_lcore_count() returns the
>>>> number of lcores (EAL threads) *plus the number of registered non-
>> EAL
>>>> threads*, and RTE_LCORE_FOREACH() gives a different count. :)
>>>
>>> Naming is hard. I don't have a good name, and can only offer
>> inspiration...
>>>
>>> <rte_lcore.h> has RTE_LCORE_FOREACH() and its
>> RTE_LCORE_FOREACH_WORKER() variant with _WORKER appended.
>>>
>>> Perhaps RTE_LCORE_VAR_FOREACH_ALL(), with _ALL appended to indicate a
>> variant.
>>>
>>>>
>>>>> If it wasn't for API breakage, I would suggest renaming
>>>> RTE_LCORE_FOREACH() instead, but that's not realistic. ;-)
>>>>>
>>>>> Small detail: "var" is a pointer, so consider renaming it to "ptr"
>>>> and adding _PTR to the macro name.
>>>>
>>>> The "var" name comes from how <sys/queue.h> names things. I think I
>> had
>>>> it as "ptr" initially. I'll change it back.
>>>
>>> Thanks.
>>>
>>>>
>>>> Thanks a lot Morten.

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [RFC 1/5] eal: add static per-lcore memory allocation facility
  2024-02-19 14:31             ` Mattias Rönnblom
@ 2024-02-19 15:04               ` Morten Brørup
  0 siblings, 0 replies; 185+ messages in thread
From: Morten Brørup @ 2024-02-19 15:04 UTC (permalink / raw)
  To: Mattias Rönnblom, Mattias Rönnblom, dev; +Cc: Stephen Hemminger

> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> Sent: Monday, 19 February 2024 15.32
> 
> On 2024-02-19 12:10, Morten Brørup wrote:
> >> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> >> Sent: Monday, 19 February 2024 08.49
> >>
> >> On 2024-02-09 14:04, Morten Brørup wrote:
> >>>> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> >>>> Sent: Friday, 9 February 2024 12.46
> >>>>
> >>>> On 2024-02-09 09:25, Morten Brørup wrote:
> >>>>>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> >>>>>> Sent: Thursday, 8 February 2024 19.17
> >>>>>>
> >>>>>> Introduce DPDK per-lcore id variables, or lcore variables for
> >> short.
> >>>>>>
> >>>>>> An lcore variable has one value for every current and future
> lcore
> >>>>>> id-equipped thread.
> >>>>>>
> >>>>>> The primary <rte_lcore_var.h> use case is for statically
> >> allocating
> >>>>>> small chunks of often-used data, which is related logically, but
> >>>> where
> >>>>>> there are performance benefits to reap from having updates being
> >>>> local
> >>>>>> to an lcore.
> >>>>>>
> >>>>>> Lcore variables are similar to thread-local storage (TLS, e.g.,
> >> C11
> >>>>>> _Thread_local), but decoupling the values' lifetime from that
> of
> >>>> the
> >>>>>> threads.
> >>>>>>
> >>>>>> Lcore variables are also similar in terms of functionality
> >> provided
> >>>> by
> >>>>>> FreeBSD kernel's DPCPU_*() family of macros and the associated
> >>>>>> build-time machinery. DPCPU uses linker scripts, which
> effectively
> >>>>>> prevents the reuse of its, otherwise seemingly viable, approach.
> >>>>>>
> >>>>>> The currently-prevailing way to solve the same problem as lcore
> >>>>>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-
> >>>> sized
> >>>>>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit
> of
> >>>>>> lcore variables over this approach is that data related to the
> >> same
> >>>>>> lcore now is close (spatially, in memory), rather than data used
> >> by
> >>>>>> the same module, which in turn avoids excessive use of padding,
> >>>>>> polluting caches with unused data.
> >>>>>>
> >>>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >>>>>> ---

[...]

> > Oops... wrong reference! I meant to refer to _lcore_id, not
> _thread_id. Correction:
> >
> 
> OK. I subconsciously ignored this mistake, and read it as "_lcore_id".

:-)

[...]

> >> For DPDK modules using lcore variables and which treat unregistered
> >> threads as "full citizens", I expect special handling of
> unregistered
> >> threads to be the norm. Take rte_random.h as an example. Current API
> >> does not guarantee MT safety for concurrent calls of unregistered
> >> threads. It probably should, and it should probably be by means of a
> >> mutex (not spinlock).
> >>
> >> The reason I'm not running off to make a rte_random.c patch is
> that
> >> it's unclear to me what is the role of unregistered threads in DPDK.
> >> I'm
> >> reasonably comfortable with a model where there are many threads
> that
> >> basically don't interact with the DPDK APIs (except maybe some very
> >> narrow exposure, like the preemption-safe ring variant). One example
> of
> >> such a design would be a big, slow control plane which uses multi-
> >> threading
> >> and the Linux process scheduler for work scheduling, hosted in the
> same
> >> process as a DPDK data plane app.
> >>
> >> What I find more strange is a scenario where there are unregistered
> >> threads which interact with a wide variety of DPDK APIs, do so
> >> at high rates/with high performance requirements, and are expected to
> be
> >> preemption-safe. So they are basically EAL threads without an lcore
> id.
> >
> > Yes, this is happening in the wild.
> > E.g., our application has a mode where it uses fewer EAL threads, and
> processes more in non-EAL threads. That is, the same work is
> processed either by an EAL thread or a non-EAL thread, depending on the
> application's mode.
> > The extra array entry would be useful for such use cases.
> >
> 
> Is there some particular reason you can't register those non-EAL
> threads?

Legacy. I suppose we could just do that instead.
Thanks for the suggestion!
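
For reference, registering such a thread is a small change; a sketch using the existing rte_thread_register()/rte_thread_unregister() API (worker_main is a made-up example function):

#include <rte_lcore.h>

/* Sketch: run at the start of a non-EAL worker thread, so the thread
 * is assigned a free lcore id and per-lcore state (including lcore
 * variables) becomes available to it.
 */
static void *
worker_main(void *arg)
{
	if (rte_thread_register() != 0)
		return NULL; /* no free lcore id, or EAL not initialized */

	/* ... do work; rte_lcore_id() != LCORE_ID_ANY here ... */

	rte_thread_unregister();
	return arg;
}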

> 
> >>
> >> Support for that latter scenario has also been voiced, in previous
> >> discussions, from what I recall.
> >>
> >> I think it's hard to answer the question of a "unregistered thread
> >> spare" for lcore variables without first knowing what the future
> should
> >> look like for unregistered threads in DPDK, in terms of being able
> to
> >> call into DPDK APIs, preemption-safety guarantees, etc.
> >>
> >> It seems that until you have a clearer picture of how generally to
> >> treat
> >> unregistered threads, you are best off with just a per-lcore id
> >> instance
> >> of lcore variables.
> >
> > I get your point. It also reduces the risk of bugs caused by
> incorrect use of the additional entry.
> >
> > I am arguing for a different angle: Providing the extra entry will
> help uncover relevant use cases.
> >
> 
> Maybe have two "spares" in case you find two new use cases? :)
> 
> No, adding spares doesn't work, unless you rework the API and rename it
> to fit the new purpose of not only providing per-lcore id variables,
> but per-something-else.
> 

OK. I'm convinced.


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [RFC v2 3/5] random: keep PRNG state in lcore variable
  2024-02-19 14:04         ` Mattias Rönnblom
@ 2024-02-19 15:10           ` Morten Brørup
  0 siblings, 0 replies; 185+ messages in thread
From: Morten Brørup @ 2024-02-19 15:10 UTC (permalink / raw)
  To: Mattias Rönnblom, Mattias Rönnblom, dev; +Cc: Stephen Hemminger

> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> Sent: Monday, 19 February 2024 15.04
> 
> On 2024-02-19 12:22, Morten Brørup wrote:
> >> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> >> Sent: Monday, 19 February 2024 10.41
> >>
> >> Replace keeping PRNG state in a RTE_MAX_LCORE-sized static array of
> >> cache-aligned and RTE_CACHE_GUARDed struct instances with keeping
> the
> >> same state in a more cache-friendly lcore variable.
> >>
> >> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >> ---
> >
> > [...]
> >
> >> @@ -19,14 +20,12 @@ struct rte_rand_state {
> >>   	uint64_t z3;
> >>   	uint64_t z4;
> >>   	uint64_t z5;
> >> -	RTE_CACHE_GUARD;
> >> -} __rte_cache_aligned;
> >> +};
> >>
> >> -/* One instance each for every lcore id-equipped thread, and one
> >> - * additional instance to be shared by all others threads (i.e.,
> all
> >> - * unregistered non-EAL threads).
> >> - */
> >> -static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];
> >> +RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);
> >> +
> >> +/* instance to be shared by all unregistered non-EAL threads */
> >> +static struct rte_rand_state unregistered_rand_state
> >> __rte_cache_aligned;
> >
> > The unregistered_rand_state instance is still __rte_cache_aligned;
> consider also adding an RTE_CACHE_GUARD to it.
> >
> 
> It shouldn't be cache-line aligned. I'll remove it. Thanks.

Agreed; that fix is just as good. Then,

Acked-by: Morten Brørup <mb@smartsharesystems.com>


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v3 0/6] Lcore variables
  2024-02-19  9:40     ` [RFC v2 1/5] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-02-20  8:49       ` Mattias Rönnblom
  2024-02-20  8:49         ` [RFC v3 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
                           ` (5 more replies)
  0 siblings, 6 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-20  8:49 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

This RFC presents a new API <rte_lcore_var.h> for static per-lcore id
data allocation.

Please refer to the <rte_lcore_var.h> API documentation for both a
rationale for this new API, and a comparison to the alternatives
available.

The adoption of this API would affect many different DPDK modules, but
the author updated only a few, mostly to serve as examples in this
RFC, and to iron out some, but surely not all, wrinkles in the API.

The question on how to best allocate static per-lcore memory has been
up several times on the dev mailing list, for example in the thread on
"random: use per lcore state" RFC by Stephen Hemminger.

Lcore variables are surely not the answer to all your per-lcore-data
needs, since they only allow for more-or-less static allocation. In the
author's opinion, they do however provide a reasonably simple, clean,
and seemingly very performant solution to a real problem.

One thing that is unclear to the author is how this API relates to a
potential future per-lcore dynamic allocator (e.g., a per-lcore heap).

Contrary to what the version.map edit suggests, this RFC is not meant
as a proposal for DPDK 24.03.

Mattias Rönnblom (6):
  eal: add static per-lcore memory allocation facility
  eal: add lcore variable test suite
  random: keep PRNG state in lcore variable
  power: keep per-lcore state in lcore variable
  service: keep per-lcore state in lcore variable
  eal: keep per-lcore power intrinsics state in lcore variable

 app/test/meson.build                  |   1 +
 app/test/test_lcore_var.c             | 407 ++++++++++++++++++++++++++
 config/rte_config.h                   |   1 +
 doc/api/doxy-api-index.md             |   1 +
 lib/eal/common/eal_common_lcore_var.c |  82 ++++++
 lib/eal/common/meson.build            |   1 +
 lib/eal/common/rte_random.c           |  30 +-
 lib/eal/common/rte_service.c          | 119 ++++----
 lib/eal/include/meson.build           |   1 +
 lib/eal/include/rte_lcore_var.h       | 375 ++++++++++++++++++++++++
 lib/eal/version.map                   |   4 +
 lib/eal/x86/rte_power_intrinsics.c    |  17 +-
 lib/power/rte_power_pmd_mgmt.c        |  36 ++-
 13 files changed, 987 insertions(+), 88 deletions(-)
 create mode 100644 app/test/test_lcore_var.c
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v3 1/6] eal: add static per-lcore memory allocation facility
  2024-02-20  8:49       ` [RFC v3 0/6] Lcore variables Mattias Rönnblom
@ 2024-02-20  8:49         ` Mattias Rönnblom
  2024-02-20  9:11           ` Bruce Richardson
                             ` (3 more replies)
  2024-02-20  8:49         ` [RFC v3 2/6] eal: add lcore variable test suite Mattias Rönnblom
                           ` (4 subsequent siblings)
  5 siblings, 4 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-20  8:49 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Introduce DPDK per-lcore id variables, or lcore variables for short.

An lcore variable has one value for every current and future lcore
id-equipped thread.

The primary <rte_lcore_var.h> use case is for statically allocating
small chunks of often-used data, which is related logically, but where
there are performance benefits to reap from having updates being local
to an lcore.

Lcore variables are similar to thread-local storage (TLS, e.g., C11
_Thread_local), but decoupling the values' lifetime from that of the
threads.

Lcore variables are also similar in terms of functionality provided by
FreeBSD kernel's DPCPU_*() family of macros and the associated
build-time machinery. DPCPU uses linker scripts, which effectively
prevents the reuse of its, otherwise seemingly viable, approach.

The currently-prevailing way to solve the same problem as lcore
variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
lcore variables over this approach is that data related to the same
lcore now is close (spatially, in memory), rather than data used by
the same module, which in turn avoids excessive use of padding,
polluting caches with unused data.

RFC v3:
 * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
 * Update example to reflect FOREACH macro name change (in RFC v2).

RFC v2:
 * Use alignof to derive alignment requirements. (Morten Brørup)
 * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
   *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
 * Allow user-specified alignment, but limit max to cache line size.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 config/rte_config.h                   |   1 +
 doc/api/doxy-api-index.md             |   1 +
 lib/eal/common/eal_common_lcore_var.c |  82 ++++++
 lib/eal/common/meson.build            |   1 +
 lib/eal/include/meson.build           |   1 +
 lib/eal/include/rte_lcore_var.h       | 375 ++++++++++++++++++++++++++
 lib/eal/version.map                   |   4 +
 7 files changed, 465 insertions(+)
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

diff --git a/config/rte_config.h b/config/rte_config.h
index da265d7dd2..884482e473 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -30,6 +30,7 @@
 /* EAL defines */
 #define RTE_CACHE_GUARD_LINES 1
 #define RTE_MAX_HEAPS 32
+#define RTE_MAX_LCORE_VAR 1048576
 #define RTE_MAX_MEMSEG_LISTS 128
 #define RTE_MAX_MEMSEG_PER_LIST 8192
 #define RTE_MAX_MEM_MB_PER_LIST 32768
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index a6a768bd7c..bb06bb7ca1 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -98,6 +98,7 @@ The public API headers are grouped by topics:
   [interrupts](@ref rte_interrupts.h),
   [launch](@ref rte_launch.h),
   [lcore](@ref rte_lcore.h),
+  [lcore-variable](@ref rte_lcore_var.h),
   [per-lcore](@ref rte_per_lcore.h),
   [service cores](@ref rte_service.h),
   [keepalive](@ref rte_keepalive.h),
diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
new file mode 100644
index 0000000000..dfd11cbd0b
--- /dev/null
+++ b/lib/eal/common/eal_common_lcore_var.c
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_log.h>
+
+#include <rte_lcore_var.h>
+
+#include "eal_private.h"
+
+#define WARN_THRESHOLD 75
+
+/*
+ * Avoid using offset zero, since it would result in a NULL-value
+ * "handle" (offset) pointer, which in principle and per the API
+ * definition shouldn't be an issue, but may confuse some tools and
+ * users.
+ */
+#define INITIAL_OFFSET 1
+
+char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR] __rte_cache_aligned;
+
+static uintptr_t allocated = INITIAL_OFFSET;
+
+static void
+verify_allocation(uintptr_t new_allocated)
+{
+	static bool has_warned;
+
+	RTE_VERIFY(new_allocated < RTE_MAX_LCORE_VAR);
+
+	if (new_allocated > (WARN_THRESHOLD * RTE_MAX_LCORE_VAR) / 100 &&
+	    !has_warned) {
+		EAL_LOG(WARNING, "Per-lcore data usage has exceeded %d%% "
+			"of the maximum capacity (%d bytes)", WARN_THRESHOLD,
+			RTE_MAX_LCORE_VAR);
+		has_warned = true;
+	}
+}
+
+static void *
+lcore_var_alloc(size_t size, size_t align)
+{
+	uintptr_t new_allocated = RTE_ALIGN_CEIL(allocated, align);
+
+	void *offset = (void *)new_allocated;
+
+	new_allocated += size;
+
+	verify_allocation(new_allocated);
+
+	allocated = new_allocated;
+
+	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
+		"%"PRIuPTR"-byte alignment", size, align);
+
+	return offset;
+}
+
+void *
+rte_lcore_var_alloc(size_t size, size_t align)
+{
+	/* Having the per-lcore buffer size aligned on cache lines,
+	 * as well as having the base pointer cache-line aligned,
+	 * assures that aligned offsets also translate to aligned
+	 * pointers across all values.
+	 */
+	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
+	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
+
+	/* '0' means asking for worst-case alignment requirements */
+	if (align == 0)
+		align = alignof(max_align_t);
+
+	RTE_ASSERT(rte_is_power_of_2(align));
+
+	return lcore_var_alloc(size, align);
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 22a626ba6f..d41403680b 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -18,6 +18,7 @@ sources += files(
         'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
+        'eal_common_lcore_var.c',
         'eal_common_mcfg.c',
         'eal_common_memalloc.c',
         'eal_common_memory.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index e94b056d46..9449253e23 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -27,6 +27,7 @@ headers += files(
         'rte_keepalive.h',
         'rte_launch.h',
         'rte_lcore.h',
+        'rte_lcore_var.h',
         'rte_lock_annotations.h',
         'rte_malloc.h',
         'rte_mcslock.h',
diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
new file mode 100644
index 0000000000..da49d48d7c
--- /dev/null
+++ b/lib/eal/include/rte_lcore_var.h
@@ -0,0 +1,375 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#ifndef _RTE_LCORE_VAR_H_
+#define _RTE_LCORE_VAR_H_
+
+/**
+ * @file
+ *
+ * RTE Per-lcore id variables
+ *
+ * This API provides a mechanism to create and access per-lcore id
+ * variables in a space- and cycle-efficient manner.
+ *
+ * A per-lcore id variable (or lcore variable for short) has one value
+ * for each EAL thread and registered non-EAL thread. In other words,
+ * there's one copy of its value for each and every current and future
+ * lcore id-equipped thread, with the total number of copies amounting
+ * to \c RTE_MAX_LCORE.
+ *
+ * In order to access the values of an lcore variable, a handle is
+ * used. The type of the handle is a pointer to the value's type
+ * (e.g., for a \c uint32_t lcore variable, the handle is a
+ * <code>uint32_t *</code>). A handle may be passed between modules and
+ * threads just like any pointer, but its value is not the address of
+ * any particular object, but rather just an opaque identifier, stored
+ * in a typed pointer (to inform the access macros of the values' type).
+ *
+ * @b Creation
+ *
+ * An lcore variable is created in two steps:
+ *  1. Define an lcore variable handle by using \ref RTE_LCORE_VAR_HANDLE.
+ *  2. Allocate lcore variable storage and initialize the handle with
+ *     a unique identifier by \ref RTE_LCORE_VAR_ALLOC or
+ *     \ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the time of
+ *     module initialization, but may be done at any time.
+ *
+ * An lcore variable is not tied to the owning thread's lifetime. It's
+ * available for use by any thread immediately after having been
+ * allocated, and continues to be available throughout the lifetime of
+ * the EAL.
+ *
+ * Lcore variables cannot and need not be freed.
+ *
+ * @b Access
+ *
+ * The value of any lcore variable for any lcore id may be accessed
+ * from any thread (including unregistered threads), but it should
+ * generally only be *frequently* read from or written to by the owner.
+ *
+ * Values of the same lcore variable but owned by two different lcore
+ * ids *may* be frequently read or written by the owners without the
+ * risk of false sharing.
+ *
+ * An appropriate synchronization mechanism (e.g., atomics) should
+ * be employed to assure there are no data races between the owning
+ * thread and any non-owner threads accessing the same lcore variable
+ * instance.
+ *
+ * The value of the lcore variable for a particular lcore id may be
+ * retrieved with \ref RTE_LCORE_VAR_LCORE_GET. To get a pointer to the
+ * same object, use \ref RTE_LCORE_VAR_LCORE_PTR.
+ *
+ * To modify the value of an lcore variable for a particular lcore id,
+ * either access the object through the pointer retrieved by \ref
+ * RTE_LCORE_VAR_LCORE_PTR or, for primitive types, use \ref
+ * RTE_LCORE_VAR_LCORE_SET.
+ *
+ * The access macros each have a short-hand which may be used by an EAL
+ * thread or registered non-EAL thread to access the lcore variable
+ * instance of its own lcore id. Those are \ref RTE_LCORE_VAR_GET,
+ * \ref RTE_LCORE_VAR_PTR, and \ref RTE_LCORE_VAR_SET.
+ *
+ * Although the handle (as defined by \ref RTE_LCORE_VAR_HANDLE) is a
+ * pointer with the same type as the value, it may not be directly
+ * dereferenced and must be treated as an opaque identifier. The
+ * *identifier* value is common across all lcore ids.
+ *
+ * @b Storage
+ *
+ * An lcore variable's values may be of a primitive type like \c int,
+ * but would more typically be a \c struct. An application may choose
+ * to define an lcore variable, which it then goes on to never
+ * allocate.
+ *
+ * The lcore variable handle introduces a per-variable (not
+ * per-value/per-lcore id) overhead of \c sizeof(void *) bytes, so
+ * there are some memory footprint gains to be made by organizing all
+ * per-lcore id data for a particular module as one lcore variable
+ * (e.g., as a struct).
+ *
+ * The sum of all lcore variables, plus any padding required, must be
+ * less than the DPDK build-time constant \c RTE_MAX_LCORE_VAR. A
+ * violation of this maximum results in the process being terminated.
+ *
+ * It's reasonable to expect that \c RTE_MAX_LCORE_VAR is on the
+ * same order of magnitude in size as a thread stack.
+ *
+ * The lcore variable storage buffers are kept in the BSS section in
+ * the resulting binary, where data generally isn't mapped in until
+ * it's accessed. This means that unused portions of the lcore
+ * variable storage area will not occupy any physical memory (with a
+ * granularity of the memory page size [usually 4 kB]).
+ *
+ * Lcore variables should generally *not* be \ref __rte_cache_aligned
+ * and need *not* include a \ref RTE_CACHE_GUARD field, since these
+ * constructs are designed to avoid false sharing. In the
+ * case of an lcore variable instance, all nearby data structures
+ * should almost-always be written to by a single thread (the lcore
+ * variable owner). Adding padding will increase the effective memory
+ * working set size, and potentially reduce performance.
+ *
+ * @b Example
+ *
+ * Below is an example of the use of an lcore variable:
+ *
+ * \code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ * };
+ *
+ * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
+ *
+ * long foo_get_a_plus_b(void)
+ * {
+ *         struct foo_lcore_state *state = RTE_LCORE_VAR_PTR(lcore_states);
+ *
+ *         return state->a + state->b;
+ * }
+ *
+ * RTE_INIT(rte_foo_init)
+ * {
+ *         unsigned int lcore_id;
+ *
+ *         RTE_LCORE_VAR_ALLOC(lcore_states);
+ *
+ *         struct foo_lcore_state *state;
+ *         RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
+ *                 (initialize 'state')
+ *         }
+ *
+ *         (other initialization)
+ * }
+ * \endcode
+ *
+ *
+ * @b Alternatives
+ *
+ * Lcore variables are designed to replace a pattern exemplified below:
+ * \code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ *         RTE_CACHE_GUARD;
+ * } __rte_cache_aligned;
+ *
+ * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
+ * \endcode
+ *
+ * This scheme is simple and effective, but has one drawback: the data
+ * is organized so that objects related to all lcores for a particular
+ * module are kept close in memory. At a bare minimum, this forces the
+ * use of cache-line alignment to avoid false sharing. With CPU
+ * hardware prefetching and memory loads resulting from speculative
+ * execution (functions which seemingly are getting more eager faster
+ * than they are getting more intelligent), one or more "guard" cache
+ * lines may be required to separate one lcore's data from another's.
+ *
+ * Lcore variables have the upside of working with, not against, the
+ * CPU's assumptions, and for example next-line prefetchers may well
+ * work the way their designers intended (i.e., to the benefit, not
+ * detriment, of system performance).
+ *
+ * Another alternative to \ref rte_lcore_var.h is the \ref
+ * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
+ * e.g., GCC __thread or C11 _Thread_local). The main differences
+ * between using the various forms of TLS (e.g., \ref
+ * RTE_DEFINE_PER_LCORE or _Thread_local) and the use of lcore
+ * variables are:
+ *
+ *   * The existence and non-existence of a thread-local variable
+ *     instance follows that of the particular thread. The data cannot
+ *     be accessed before the thread has been created, nor after it has
+ *     exited. One effect of this is that thread-local variables must be
+ *     initialized in a "lazy" manner (e.g., at the point of thread
+ *     creation). Lcore variables may be accessed immediately after
+ *     having been allocated (which is usually prior to any thread beyond
+ *     the main thread is running).
+ *   * A thread-local variable is duplicated across all threads in the
+ *     process, including unregistered non-EAL threads (i.e.,
+ *     "regular" threads). For DPDK applications heavily relying on
+ *     multi-threading (in conjunction with DPDK's "one thread per core"
+ *     pattern), either by having many concurrent threads or
+ *     creating/destroying threads at a high rate, an excessive use of
+ *     thread-local variables may cause inefficiencies (e.g.,
+ *     increased thread creation overhead due to thread-local storage
+ *     initialization or increased total RAM footprint usage). Lcore
+ *     variables *only* exist for threads with an lcore id, and thus
+ *     not for such "regular" threads.
+ *   * Whether data in thread-local storage may be shared between threads
+ *     (i.e., whether a pointer to a thread-local variable can be passed
+ *     to and successfully dereferenced by a non-owning thread) depends on
+ *     the details of the TLS implementation. With GCC __thread and
+ *     GCC _Thread_local, such data sharing is supported. In the C11
+ *     standard, the result of accessing another thread's
+ *     _Thread_local object is implementation-defined. Lcore variable
+ *     instances may be accessed reliably by any thread.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stddef.h>
+#include <stdalign.h>
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_lcore.h>
+
+/**
+ * Given the lcore variable type, produces the type of the lcore
+ * variable handle.
+ */
+#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
+	type *
+
+/**
+ * Define a lcore variable handle.
+ *
+ * This macro defines a variable which is used as a handle to access
+ * the various per-lcore id instances of a per-lcore id variable.
+ *
+ * The aim with this macro is to make clear at the point of
+ * declaration that this is an lcore handler, rather than a regular
+ * pointer.
+ *
+ * Add @b static as a prefix in case the lcore variable is only to be
+ * accessed from a particular translation unit.
+ */
+#define RTE_LCORE_VAR_HANDLE(type, name)	\
+	RTE_LCORE_VAR_HANDLE_TYPE(type) name
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align)	\
+	name = rte_lcore_var_alloc(size, align)
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle,
+ * with values aligned for any type of object.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)	\
+	name = rte_lcore_var_alloc(size, 0)
+
+/**
+ * Allocate space for an lcore variable of the size and alignment requirements
+ * suggested by the handle pointer type, and initialize its handle.
+ */
+#define RTE_LCORE_VAR_ALLOC(name)					\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, sizeof(*(name)),		\
+				       alignof(typeof(*(name))))
+
+/**
+ * Allocate an explicitly-sized, explicitly-aligned lcore variable by
+ * means of a \ref RTE_INIT constructor.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
+	}
+
+/**
+ * Allocate an explicitly-sized lcore variable by means of a \ref
+ * RTE_INIT constructor.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
+	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
+
+/**
+ * Allocate an lcore variable by means of a \ref RTE_INIT constructor.
+ */
+#define RTE_LCORE_VAR_INIT(name)					\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC(name);				\
+	}
+
+#define __RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)		\
+	((void *)(&rte_lcore_var[lcore_id][(uintptr_t)(name)]))
+
+/**
+ * Get pointer to lcore variable instance with the specified lcore id.
+ */
+#define RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)				\
+	((typeof(name))__RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
+
+/**
+ * Get value of a lcore variable instance of the specified lcore id.
+ */
+#define RTE_LCORE_VAR_LCORE_GET(lcore_id, name)		\
+	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)))
+
+/**
+ * Set the value of a lcore variable instance of the specified lcore id.
+ */
+#define RTE_LCORE_VAR_LCORE_SET(lcore_id, name, value)		\
+	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)) = (value))
+
+/**
+ * Get pointer to lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_PTR(name) RTE_LCORE_VAR_LCORE_PTR(rte_lcore_id(), name)
+
+/**
+ * Get value of lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_GET(name) RTE_LCORE_VAR_LCORE_GET(rte_lcore_id(), name)
+
+/**
+ * Set value of lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_SET(name, value) \
+	RTE_LCORE_VAR_LCORE_SET(rte_lcore_id(), name, value)
+
+/**
+ * Iterate over each lcore id's value for a lcore variable.
+ */
+#define RTE_LCORE_VAR_FOREACH_VALUE(var, name)				\
+	for (unsigned int lcore_id =					\
+		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);	\
+	     lcore_id < RTE_MAX_LCORE;					\
+	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
+
+extern char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR];
+
+/**
+ * Allocate space in the per-lcore id buffers for an lcore variable.
+ *
+ * The pointer returned is only an opaque identifier of the variable. To
+ * get an actual pointer to a particular instance of the variable use
+ * \ref RTE_LCORE_VAR_PTR or \ref RTE_LCORE_VAR_LCORE_PTR.
+ *
+ * The allocation is always successful, barring a fatal exhaustion of
+ * the per-lcore id buffer space.
+ *
+ * @param size
+ *   The size (in bytes) of the variable's per-lcore id value.
+ * @param align
+ *   If 0, the values will be suitably aligned for any kind of type
+ *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
+ *   on a multiple of *align*, which must be a power of 2 and equal or
+ *   less than \c RTE_CACHE_LINE_SIZE.
+ * @return
+ *   The id of the variable, stored in a void pointer value.
+ */
+__rte_experimental
+void *
+rte_lcore_var_alloc(size_t size, size_t align);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LCORE_VAR_H_ */
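
To make the API above concrete, here is a minimal usage sketch (an
illustration only, not part of the patch; the foo module and its names
are invented for the example):

	#include <rte_lcore.h>
	#include <rte_lcore_var.h>

	struct foo_lcore_state {
		int a;
		long b;
	};

	static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);

	/* allocate the variable in a constructor, before any use */
	RTE_LCORE_VAR_INIT(lcore_states);

	/* datapath: update this lcore's own instance */
	static void
	foo_update(int a)
	{
		struct foo_lcore_state *state = RTE_LCORE_VAR_PTR(lcore_states);

		state->a = a;
	}

	/* control path: aggregate the values of all lcore ids */
	static long
	foo_total(void)
	{
		struct foo_lcore_state *state;
		long total = 0;

		RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states)
			total += state->a;

		return total;
	}
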
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 5e0cd47c82..e90b86115a 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -393,6 +393,10 @@ EXPERIMENTAL {
 	# added in 23.07
 	rte_memzone_max_get;
 	rte_memzone_max_set;
+
+	# added in 24.03
+	rte_lcore_var_alloc;
+	rte_lcore_var;
 };
 
 INTERNAL {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v3 2/6] eal: add lcore variable test suite
  2024-02-20  8:49       ` [RFC v3 0/6] Lcore variables Mattias Rönnblom
  2024-02-20  8:49         ` [RFC v3 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-02-20  8:49         ` Mattias Rönnblom
  2024-02-20  8:49         ` [RFC v3 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
                           ` (3 subsequent siblings)
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-20  8:49 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Add test suite to exercise the <rte_lcore_var.h> API.

RFC v2:
 * Improve alignment-related test coverage.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 app/test/meson.build      |   1 +
 app/test/test_lcore_var.c | 407 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 408 insertions(+)
 create mode 100644 app/test/test_lcore_var.c

diff --git a/app/test/meson.build b/app/test/meson.build
index 6389ae83ee..93412cce51 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -101,6 +101,7 @@ source_file_deps = {
     'test_ipsec_sad.c': ['ipsec'],
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
+    'test_lcore_var.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var.c b/app/test/test_lcore_var.c
new file mode 100644
index 0000000000..27084e91e9
--- /dev/null
+++ b/app/test/test_lcore_var.c
@@ -0,0 +1,407 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_launch.h>
+#include <rte_lcore_var.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+#define MIN_LCORES 2
+
+RTE_LCORE_VAR_HANDLE(int, test_int);
+RTE_LCORE_VAR_HANDLE(char, test_char);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized);
+RTE_LCORE_VAR_HANDLE(short, test_short);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized_aligned);
+
+struct int_checker_state {
+	int old_value;
+	int new_value;
+	bool success;
+};
+
+static bool
+rand_bool(void)
+{
+	return rte_rand() & 1;
+}
+
+static void
+rand_blk(void *blk, size_t size)
+{
+	size_t i;
+
+	for (i = 0; i < size; i++)
+		((unsigned char *)blk)[i] = (unsigned char)rte_rand();
+}
+
+static bool
+is_ptr_aligned(const void *ptr, size_t align)
+{
+	return ptr != NULL ? (uintptr_t)ptr % align == 0 : false;
+}
+
+static int
+check_int(void *arg)
+{
+	struct int_checker_state *state = arg;
+
+	int *ptr = RTE_LCORE_VAR_PTR(test_int);
+
+	bool naturally_aligned = is_ptr_aligned(ptr, sizeof(int));
+
+	bool equal;
+
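+	/* exercise both the GET/SET macros and plain pointer access, at random */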
+	if (rand_bool())
+		equal = RTE_LCORE_VAR_GET(test_int) == state->old_value;
+	else
+		equal = *(RTE_LCORE_VAR_PTR(test_int)) == state->old_value;
+
+	state->success = equal && naturally_aligned;
+
+	if (rand_bool())
+		RTE_LCORE_VAR_SET(test_int, state->new_value);
+	else
+		*ptr = state->new_value;
+
+	return 0;
+}
+
+RTE_LCORE_VAR_INIT(test_int);
+RTE_LCORE_VAR_INIT(test_char);
+RTE_LCORE_VAR_INIT_SIZE(test_long_sized, 32);
+RTE_LCORE_VAR_INIT(test_short);
+RTE_LCORE_VAR_INIT_SIZE_ALIGN(test_long_sized_aligned, sizeof(long),
+			      RTE_CACHE_LINE_SIZE);
+
+static int
+test_int_lvar(void)
+{
+	unsigned int lcore_id;
+
+	struct int_checker_state states[RTE_MAX_LCORE] = {};
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+
+		state->old_value = (int)rte_rand();
+		state->new_value = (int)rte_rand();
+
+		RTE_LCORE_VAR_LCORE_SET(lcore_id, test_int, state->old_value);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_int, &states[lcore_id], lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+
+		TEST_ASSERT(state->success, "Unexpected value "
+			    "encountered on lcore %d", lcore_id);
+
+		TEST_ASSERT_EQUAL(state->new_value,
+				  RTE_LCORE_VAR_LCORE_GET(lcore_id, test_int),
+				  "Lcore %d failed to update int", lcore_id);
+	}
+
+	/* take the opportunity to test the foreach macro */
+	int *v;
+	lcore_id = 0;
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_int) {
+		TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
+				  "Unexpected value on lcore %d during "
+				  "iteration", lcore_id);
+		lcore_id++;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_sized_alignment(void)
+{
+	long *v;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized) {
+		TEST_ASSERT(is_ptr_aligned(v, alignof(long)),
+			    "Type-derived alignment failed");
+	}
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized_aligned) {
+		TEST_ASSERT(is_ptr_aligned(v, RTE_CACHE_LINE_SIZE),
+			    "Explicit alignment failed");
+	}
+
+	return TEST_SUCCESS;
+}
+
+/* private, larger, struct */
+#define TEST_STRUCT_DATA_SIZE 1234
+
+struct test_struct {
+	uint8_t data[TEST_STRUCT_DATA_SIZE];
+};
+
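+/*
+ * One-byte lcore variables allocated immediately before and after the
+ * struct variable, serving as canaries to detect out-of-bounds writes.
+ */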
+static RTE_LCORE_VAR_HANDLE(char, before_struct);
+static RTE_LCORE_VAR_HANDLE(struct test_struct, test_struct);
+static RTE_LCORE_VAR_HANDLE(char, after_struct);
+
+struct struct_checker_state {
+	struct test_struct old_value;
+	struct test_struct new_value;
+	bool success;
+};
+
+static int
+check_struct(void *arg)
+{
+	struct struct_checker_state *state = arg;
+
+	struct test_struct *lcore_struct = RTE_LCORE_VAR_PTR(test_struct);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_struct, alignof(struct test_struct));
+
+	bool equal = memcmp(lcore_struct->data, state->old_value.data,
+			    TEST_STRUCT_DATA_SIZE) == 0;
+
+	state->success = equal && properly_aligned;
+
+	memcpy(lcore_struct->data, state->new_value.data,
+	       TEST_STRUCT_DATA_SIZE);
+
+	return 0;
+}
+
+static int
+test_struct_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_struct);
+	RTE_LCORE_VAR_ALLOC(test_struct);
+	RTE_LCORE_VAR_ALLOC(after_struct);
+
+	struct struct_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+
+		rand_blk(state->old_value.data, TEST_STRUCT_DATA_SIZE);
+		rand_blk(state->new_value.data, TEST_STRUCT_DATA_SIZE);
+
+		memcpy(RTE_LCORE_VAR_LCORE_PTR(lcore_id, test_struct)->data,
+		       state->old_value.data, TEST_STRUCT_DATA_SIZE);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_struct, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+		struct test_struct *lstruct =
+			RTE_LCORE_VAR_LCORE_PTR(lcore_id, test_struct);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = memcmp(lstruct->data, state->new_value.data,
+				    TEST_STRUCT_DATA_SIZE) == 0;
+
+		TEST_ASSERT(equal, "Lcore %d failed to update struct",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before = RTE_LCORE_VAR_LCORE_GET(lcore_id, before_struct);
+		char after = RTE_LCORE_VAR_LCORE_GET(lcore_id, after_struct);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "struct was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "struct was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define TEST_ARRAY_SIZE 99
+
+typedef uint16_t test_array_t[TEST_ARRAY_SIZE];
+
+static void
+test_array_init_rand(test_array_t a)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		a[i] = (uint16_t)rte_rand();
+}
+
+static bool
+test_array_equal(test_array_t a, test_array_t b)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++) {
+		if (a[i] != b[i])
+			return false;
+	}
+	return true;
+}
+
+static void
+test_array_copy(test_array_t dst, const test_array_t src)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		dst[i] = src[i];
+}
+
+static RTE_LCORE_VAR_HANDLE(char, before_array);
+static RTE_LCORE_VAR_HANDLE(test_array_t, test_array);
+static RTE_LCORE_VAR_HANDLE(char, after_array);
+
+struct array_checker_state {
+	test_array_t old_value;
+	test_array_t new_value;
+	bool success;
+};
+
+static int
+check_array(void *arg)
+{
+	struct array_checker_state *state = arg;
+
+	test_array_t *lcore_array = RTE_LCORE_VAR_PTR(test_array);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_array, alignof(test_array_t));
+
+	bool equal = test_array_equal(*lcore_array, state->old_value);
+
+	state->success = equal && properly_aligned;
+
+	test_array_copy(*lcore_array, state->new_value);
+
+	return 0;
+}
+
+static int
+test_array_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_array);
+	RTE_LCORE_VAR_ALLOC(test_array);
+	RTE_LCORE_VAR_ALLOC(after_array);
+
+	struct array_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+
+		test_array_init_rand(state->new_value);
+		test_array_init_rand(state->old_value);
+
+		test_array_copy(RTE_LCORE_VAR_LCORE_GET(lcore_id, test_array),
+				state->old_value);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_array, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+		test_array_t *larray =
+			RTE_LCORE_VAR_LCORE_PTR(lcore_id, test_array);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = test_array_equal(*larray, state->new_value);
+
+		TEST_ASSERT(equal, "Lcore %d failed to update array",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before = RTE_LCORE_VAR_LCORE_GET(lcore_id, before_array);
+		char after = RTE_LCORE_VAR_LCORE_GET(lcore_id, after_array);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "array was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "array was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
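+/*
+ * Number of one-byte variables to allocate: half the total per-lcore
+ * buffer capacity, so that all allocations are guaranteed to succeed.
+ */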
+#define MANY_LVARS (RTE_MAX_LCORE_VAR / 2)
+
+static int
+test_many_lvars(void)
+{
+	void **handlers = malloc(sizeof(void *) * MANY_LVARS);
+	int i;
+
+	TEST_ASSERT(handlers != NULL, "Unable to allocate memory");
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		void *handle = rte_lcore_var_alloc(1, 1);
+
+		uint8_t *b = __RTE_LCORE_VAR_LCORE_PTR(rte_lcore_id(), handle);
+
+		*b = (uint8_t)i;
+
+		handlers[i] = handle;
+	}
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		RTE_LCORE_FOREACH_WORKER(lcore_id) {
+			uint8_t *b = __RTE_LCORE_VAR_LCORE_PTR(rte_lcore_id(),
+							       handlers[i]);
+			TEST_ASSERT_EQUAL((uint8_t)i, *b,
+					  "Unexpected lcore variable value.");
+		}
+	}
+
+	free(handlers);
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_int_lvar),
+		TEST_CASE(test_sized_alignment),
+		TEST_CASE(test_struct_lvar),
+		TEST_CASE(test_array_lvar),
+		TEST_CASE(test_many_lvars),
+		TEST_CASES_END()
+	},
+};
+
+static int
+test_lcore_var(void)
+{
+	if (rte_lcore_count() < MIN_LCORES) {
+		printf("Not enough cores for lcore_var_autotest; expecting at "
+		       "least %d.\n", MIN_LCORES);
+		return TEST_SKIPPED;
+	}
+
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
+REGISTER_FAST_TEST(lcore_var_autotest, true, false, test_lcore_var);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v3 3/6] random: keep PRNG state in lcore variable
  2024-02-20  8:49       ` [RFC v3 0/6] Lcore variables Mattias Rönnblom
  2024-02-20  8:49         ` [RFC v3 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-02-20  8:49         ` [RFC v3 2/6] eal: add lcore variable test suite Mattias Rönnblom
@ 2024-02-20  8:49         ` Mattias Rönnblom
  2024-02-20 15:31           ` Morten Brørup
  2024-02-20  8:49         ` [RFC v3 4/6] power: keep per-lcore " Mattias Rönnblom
                           ` (2 subsequent siblings)
  5 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-20  8:49 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Replace keeping PRNG state in a RTE_MAX_LCORE-sized static array of
cache-aligned and RTE_CACHE_GUARDed struct instances with keeping the
same state in a more cache-friendly lcore variable.
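
For orientation, the essence of the new lookup path, condensed from the
diff below into a self-contained sketch (in the actual patch, the
allocation happens in the existing rte_rand_init() constructor):

	#include <stdint.h>

	#include <rte_branch_prediction.h>
	#include <rte_lcore.h>
	#include <rte_lcore_var.h>

	struct rte_rand_state { uint64_t z1, z2, z3, z4, z5; };

	RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);
	RTE_LCORE_VAR_INIT(rand_state);

	/* one shared instance for all unregistered non-EAL threads */
	static struct rte_rand_state unregistered_rand_state;

	static struct rte_rand_state *
	get_rand_state(void)
	{
		if (unlikely(rte_lcore_id() == LCORE_ID_ANY))
			return &unregistered_rand_state;

		return RTE_LCORE_VAR_PTR(rand_state);
	}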

RFC v3:
 * Remove cache alignment on unregistered threads' rte_rand_state.
   (Morten Brørup)

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/common/rte_random.c | 30 ++++++++++++++++++------------
 1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index 7709b8f2c6..adbbf13f0e 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -11,6 +11,7 @@
 #include <rte_branch_prediction.h>
 #include <rte_cycles.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_random.h>
 
 struct rte_rand_state {
@@ -19,14 +20,12 @@ struct rte_rand_state {
 	uint64_t z3;
 	uint64_t z4;
 	uint64_t z5;
-	RTE_CACHE_GUARD;
-} __rte_cache_aligned;
+};
 
-/* One instance each for every lcore id-equipped thread, and one
- * additional instance to be shared by all others threads (i.e., all
- * unregistered non-EAL threads).
- */
-static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];
+RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);
+
+/* instance to be shared by all unregistered non-EAL threads */
+static struct rte_rand_state unregistered_rand_state;
 
 static uint32_t
 __rte_rand_lcg32(uint32_t *seed)
@@ -85,8 +84,14 @@ rte_srand(uint64_t seed)
 	unsigned int lcore_id;
 
 	/* add lcore_id to seed to avoid having the same sequence */
-	for (lcore_id = 0; lcore_id < RTE_DIM(rand_states); lcore_id++)
-		__rte_srand_lfsr258(seed + lcore_id, &rand_states[lcore_id]);
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		struct rte_rand_state *lcore_state =
+			RTE_LCORE_VAR_LCORE_PTR(lcore_id, rand_state);
+
+		__rte_srand_lfsr258(seed + lcore_id, lcore_state);
+	}
+
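+	/* also seed the state shared by unregistered non-EAL threads */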
+	__rte_srand_lfsr258(seed + lcore_id, &unregistered_rand_state);
 }
 
 static __rte_always_inline uint64_t
@@ -124,11 +129,10 @@ struct rte_rand_state *__rte_rand_get_state(void)
 
 	idx = rte_lcore_id();
 
-	/* last instance reserved for unregistered non-EAL threads */
 	if (unlikely(idx == LCORE_ID_ANY))
-		idx = RTE_MAX_LCORE;
+		return &unregistered_rand_state;
 
-	return &rand_states[idx];
+	return RTE_LCORE_VAR_PTR(rand_state);
 }
 
 uint64_t
@@ -228,6 +232,8 @@ RTE_INIT(rte_rand_init)
 {
 	uint64_t seed;
 
+	RTE_LCORE_VAR_ALLOC(rand_state);
+
 	seed = __rte_random_initial_seed();
 
 	rte_srand(seed);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v3 4/6] power: keep per-lcore state in lcore variable
  2024-02-20  8:49       ` [RFC v3 0/6] Lcore variables Mattias Rönnblom
                           ` (2 preceding siblings ...)
  2024-02-20  8:49         ` [RFC v3 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
@ 2024-02-20  8:49         ` Mattias Rönnblom
  2024-02-20  8:49         ` [RFC v3 5/6] service: " Mattias Rönnblom
  2024-02-20  8:49         ` [RFC v3 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-20  8:49 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

RFC v3:
 * Replace for loop with FOREACH macro.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 lib/power/rte_power_pmd_mgmt.c | 36 ++++++++++++++++------------------
 1 file changed, 17 insertions(+), 19 deletions(-)

diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index 591fc69f36..ea30454895 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -5,6 +5,7 @@
 #include <stdlib.h>
 
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_cycles.h>
 #include <rte_cpuflags.h>
 #include <rte_malloc.h>
@@ -68,8 +69,8 @@ struct pmd_core_cfg {
 	/**< Number of queues ready to enter power optimized state */
 	uint64_t sleep_target;
 	/**< Prevent a queue from triggering sleep multiple times */
-} __rte_cache_aligned;
-static struct pmd_core_cfg lcore_cfgs[RTE_MAX_LCORE];
+};
+static RTE_LCORE_VAR_HANDLE(struct pmd_core_cfg, lcore_cfgs);
 
 static inline bool
 queue_equal(const union queue *l, const union queue *r)
@@ -252,12 +253,11 @@ clb_multiwait(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_PTR(lcore_cfgs);
 
 	/* early exit */
 	if (likely(!empty))
@@ -317,13 +317,12 @@ clb_pause(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 	uint32_t pause_duration = rte_power_pmd_mgmt_get_pause_duration();
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_PTR(lcore_cfgs);
 
 	if (likely(!empty))
 		/* early exit */
@@ -358,9 +357,8 @@ clb_scale_freq(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	const bool empty = nb_rx == 0;
-	struct pmd_core_cfg *lcore_conf = &lcore_cfgs[lcore];
+	struct pmd_core_cfg *lcore_conf = RTE_LCORE_VAR_PTR(lcore_cfgs);
 	struct queue_list_entry *queue_conf = arg;
 
 	if (likely(!empty)) {
@@ -518,7 +516,7 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 		goto end;
 	}
 
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_PTR(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -619,7 +617,7 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 	}
 
 	/* no need to check queue id as wrong queue id would not be enabled */
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_PTR(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -769,21 +767,21 @@ rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore)
 }
 
 RTE_INIT(rte_power_ethdev_pmgmt_init) {
-	size_t i;
-	int j;
+	struct pmd_core_cfg *lcore_cfg;
+	int i;
+
+	RTE_LCORE_VAR_ALLOC(lcore_cfgs);
 
 	/* initialize all tailqs */
-	for (i = 0; i < RTE_DIM(lcore_cfgs); i++) {
-		struct pmd_core_cfg *cfg = &lcore_cfgs[i];
-		TAILQ_INIT(&cfg->head);
-	}
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_cfg, lcore_cfgs)
+		TAILQ_INIT(&lcore_cfg->head);
 
 	/* initialize config defaults */
 	emptypoll_max = 512;
 	pause_duration = 1;
 	/* scaling defaults out of range to ensure not used unless set by user or app */
-	for (j = 0; j < RTE_MAX_LCORE; j++) {
-		scale_freq_min[j] = 0;
-		scale_freq_max[j] = UINT32_MAX;
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		scale_freq_min[i] = 0;
+		scale_freq_max[i] = UINT32_MAX;
 	}
 }
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v3 5/6] service: keep per-lcore state in lcore variable
  2024-02-20  8:49       ` [RFC v3 0/6] Lcore variables Mattias Rönnblom
                           ` (3 preceding siblings ...)
  2024-02-20  8:49         ` [RFC v3 4/6] power: keep per-lcore " Mattias Rönnblom
@ 2024-02-20  8:49         ` Mattias Rönnblom
  2024-02-22  9:42           ` Morten Brørup
  2024-02-20  8:49         ` [RFC v3 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  5 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-20  8:49 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 lib/eal/common/rte_service.c | 119 ++++++++++++++++++++---------------
 1 file changed, 68 insertions(+), 51 deletions(-)

diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index d959c91459..de205c5da5 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -11,6 +11,7 @@
 
 #include <eal_trace_internal.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
@@ -75,7 +76,7 @@ struct core_state {
 
 static uint32_t rte_service_count;
 static struct rte_service_spec_impl *rte_services;
-static struct core_state *lcore_states;
+static RTE_LCORE_VAR_HANDLE(struct core_state, lcore_states);
 static uint32_t rte_service_library_initialized;
 
 int32_t
@@ -101,11 +102,12 @@ rte_service_init(void)
 		goto fail_mem;
 	}
 
-	lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
-			sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
-	if (!lcore_states) {
-		EAL_LOG(ERR, "error allocating core states array");
-		goto fail_mem;
+	if (lcore_states == NULL)
+		RTE_LCORE_VAR_ALLOC(lcore_states);
+	else {
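+		/* lcore variables cannot be freed; on re-init, clear the old state */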
+		struct core_state *cs;
+		RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+			memset(cs, 0, sizeof(struct core_state));
 	}
 
 	int i;
@@ -122,7 +124,6 @@ rte_service_init(void)
 	return 0;
 fail_mem:
 	rte_free(rte_services);
-	rte_free(lcore_states);
 	return -ENOMEM;
 }
 
@@ -136,7 +137,6 @@ rte_service_finalize(void)
 	rte_eal_mp_wait_lcore();
 
 	rte_free(rte_services);
-	rte_free(lcore_states);
 
 	rte_service_library_initialized = 0;
 }
@@ -286,7 +286,6 @@ rte_service_component_register(const struct rte_service_spec *spec,
 int32_t
 rte_service_component_unregister(uint32_t id)
 {
-	uint32_t i;
 	struct rte_service_spec_impl *s;
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
 
@@ -294,9 +293,10 @@ rte_service_component_unregister(uint32_t id)
 
 	s->internal_flags &= ~(SERVICE_F_REGISTERED);
 
+	struct core_state *cs;
 	/* clear the run-bit in all cores */
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		lcore_states[i].service_mask &= ~(UINT64_C(1) << id);
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		cs->service_mask &= ~(UINT64_C(1) << id);
 
 	memset(&rte_services[id], 0, sizeof(struct rte_service_spec_impl));
 
@@ -454,7 +454,10 @@ rte_service_may_be_active(uint32_t id)
 		return -EINVAL;
 
 	for (i = 0; i < lcore_count; i++) {
-		if (lcore_states[ids[i]].service_active_on_lcore[id])
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(ids[i], lcore_states);
+
+		if (cs->service_active_on_lcore[id])
 			return 1;
 	}
 
@@ -464,7 +467,7 @@ rte_service_may_be_active(uint32_t id)
 int32_t
 rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)
 {
-	struct core_state *cs = &lcore_states[rte_lcore_id()];
+	struct core_state *cs =	RTE_LCORE_VAR_PTR(lcore_states);
 	struct rte_service_spec_impl *s;
 
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
@@ -486,8 +489,7 @@ service_runner_func(void *arg)
 {
 	RTE_SET_USED(arg);
 	uint8_t i;
-	const int lcore = rte_lcore_id();
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_PTR(lcore_states);
 
 	rte_atomic_store_explicit(&cs->thread_active, 1, rte_memory_order_seq_cst);
 
@@ -533,13 +535,16 @@ service_runner_func(void *arg)
 int32_t
 rte_service_lcore_may_be_active(uint32_t lcore)
 {
-	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
+	struct core_state *cs =
+		RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+
+	if (lcore >= RTE_MAX_LCORE || !cs->is_service_core)
 		return -EINVAL;
 
 	/* Load thread_active using ACQUIRE to avoid instructions dependent on
 	 * the result being re-ordered before this load completes.
 	 */
-	return rte_atomic_load_explicit(&lcore_states[lcore].thread_active,
+	return rte_atomic_load_explicit(&cs->thread_active,
 			       rte_memory_order_acquire);
 }
 
@@ -547,9 +552,11 @@ int32_t
 rte_service_lcore_count(void)
 {
 	int32_t count = 0;
-	uint32_t i;
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		count += lcore_states[i].is_service_core;
+
+	struct core_state *cs;
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		count += cs->is_service_core;
+
 	return count;
 }
 
@@ -566,7 +573,8 @@ rte_service_lcore_list(uint32_t array[], uint32_t n)
 	uint32_t i;
 	uint32_t idx = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		struct core_state *cs = &lcore_states[i];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(i, lcore_states);
 		if (cs->is_service_core) {
 			array[idx] = i;
 			idx++;
@@ -582,7 +590,7 @@ rte_service_lcore_count_services(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -634,30 +642,31 @@ rte_service_start_with_defaults(void)
 static int32_t
 service_update(uint32_t sid, uint32_t lcore, uint32_t *set, uint32_t *enabled)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+
 	/* validate ID, or return error value */
 	if (!service_valid(sid) || lcore >= RTE_MAX_LCORE ||
-			!lcore_states[lcore].is_service_core)
+			!cs->is_service_core)
 		return -EINVAL;
 
 	uint64_t sid_mask = UINT64_C(1) << sid;
 	if (set) {
-		uint64_t lcore_mapped = lcore_states[lcore].service_mask &
-			sid_mask;
+		uint64_t lcore_mapped = cs->service_mask & sid_mask;
 
 		if (*set && !lcore_mapped) {
-			lcore_states[lcore].service_mask |= sid_mask;
+			cs->service_mask |= sid_mask;
 			rte_atomic_fetch_add_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 		if (!*set && lcore_mapped) {
-			lcore_states[lcore].service_mask &= ~(sid_mask);
+			cs->service_mask &= ~(sid_mask);
 			rte_atomic_fetch_sub_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 	}
 
 	if (enabled)
-		*enabled = !!(lcore_states[lcore].service_mask & (sid_mask));
+		*enabled = !!(cs->service_mask & (sid_mask));
 
 	return 0;
 }
@@ -685,13 +694,14 @@ set_lcore_state(uint32_t lcore, int32_t state)
 {
 	/* mark core state in hugepage backed config */
 	struct rte_config *cfg = rte_eal_get_configuration();
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 	cfg->lcore_role[lcore] = state;
 
 	/* mark state in process local lcore_config */
 	lcore_config[lcore].core_role = state;
 
 	/* update per-lcore optimized state tracking */
-	lcore_states[lcore].is_service_core = (state == ROLE_SERVICE);
+	cs->is_service_core = (state == ROLE_SERVICE);
 
 	rte_eal_trace_service_lcore_state_change(lcore, state);
 }
@@ -702,14 +712,16 @@ rte_service_lcore_reset_all(void)
 	/* loop over cores, reset all to mask 0 */
 	uint32_t i;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		if (lcore_states[i].is_service_core) {
-			lcore_states[i].service_mask = 0;
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(i, lcore_states);
+		if (cs->is_service_core) {
+			cs->service_mask = 0;
 			set_lcore_state(i, ROLE_RTE);
 			/* runstate act as guard variable Use
 			 * store-release memory order here to synchronize
 			 * with load-acquire in runstate read functions.
 			 */
-			rte_atomic_store_explicit(&lcore_states[i].runstate,
+			rte_atomic_store_explicit(&cs->runstate,
 				RUNSTATE_STOPPED, rte_memory_order_release);
 		}
 	}
@@ -725,17 +737,19 @@ rte_service_lcore_add(uint32_t lcore)
 {
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
-	if (lcore_states[lcore].is_service_core)
+
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+	if (cs->is_service_core)
 		return -EALREADY;
 
 	set_lcore_state(lcore, ROLE_SERVICE);
 
 	/* ensure that after adding a core the mask and state are defaults */
-	lcore_states[lcore].service_mask = 0;
+	cs->service_mask = 0;
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	return rte_eal_wait_lcore(lcore);
@@ -747,7 +761,7 @@ rte_service_lcore_del(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -771,7 +785,7 @@ rte_service_lcore_start(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -801,6 +815,8 @@ rte_service_lcore_start(uint32_t lcore)
 int32_t
 rte_service_lcore_stop(uint32_t lcore)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
@@ -808,12 +824,11 @@ rte_service_lcore_stop(uint32_t lcore)
 	 * memory order here to synchronize with store-release
 	 * in runstate update functions.
 	 */
-	if (rte_atomic_load_explicit(&lcore_states[lcore].runstate, rte_memory_order_acquire) ==
+	if (rte_atomic_load_explicit(&cs->runstate, rte_memory_order_acquire) ==
 			RUNSTATE_STOPPED)
 		return -EALREADY;
 
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
 	uint64_t service_mask = cs->service_mask;
 
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
@@ -834,7 +849,7 @@ rte_service_lcore_stop(uint32_t lcore)
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	rte_eal_trace_service_lcore_stop(lcore);
@@ -845,7 +860,7 @@ rte_service_lcore_stop(uint32_t lcore)
 static uint64_t
 lcore_attr_get_loops(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->loops, rte_memory_order_relaxed);
 }
@@ -853,7 +868,7 @@ lcore_attr_get_loops(unsigned int lcore)
 static uint64_t
 lcore_attr_get_cycles(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->cycles, rte_memory_order_relaxed);
 }
@@ -861,7 +876,7 @@ lcore_attr_get_cycles(unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].calls,
 		rte_memory_order_relaxed);
@@ -870,7 +885,7 @@ lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_cycles(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].cycles,
 		rte_memory_order_relaxed);
@@ -886,7 +901,10 @@ attr_get(uint32_t id, lcore_attr_get_fun lcore_attr_get)
 	uint64_t sum = 0;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		if (lcore_states[lcore].is_service_core)
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+
+		if (cs->is_service_core)
 			sum += lcore_attr_get(id, lcore);
 	}
 
@@ -930,12 +948,11 @@ int32_t
 rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 			   uint64_t *attr_value)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE || !attr_value)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -960,7 +977,8 @@ rte_service_attr_reset_all(uint32_t id)
 		return -EINVAL;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		struct core_state *cs = &lcore_states[lcore];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 		cs->service_stats[id] = (struct service_stats) {};
 	}
@@ -971,12 +989,11 @@ rte_service_attr_reset_all(uint32_t id)
 int32_t
 rte_service_lcore_attr_reset_all(uint32_t lcore)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -1011,7 +1028,7 @@ static void
 service_dump_calls_per_lcore(FILE *f, uint32_t lcore)
 {
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	fprintf(f, "%02d\t", lcore);
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v3 6/6] eal: keep per-lcore power intrinsics state in lcore variable
  2024-02-20  8:49       ` [RFC v3 0/6] Lcore variables Mattias Rönnblom
                           ` (4 preceding siblings ...)
  2024-02-20  8:49         ` [RFC v3 5/6] service: " Mattias Rönnblom
@ 2024-02-20  8:49         ` Mattias Rönnblom
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-20  8:49 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Keep per-lcore power intrinsics state in an lcore variable to reduce
cache working set size and avoid any CPU next-line-prefetching causing
false sharing.
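
For context: with a cache-aligned static array, lcore N's cache line is
physically adjacent to lcore N+1's, so a next-line hardware prefetcher
can speculatively pull a neighbour's line and cause false sharing. A
condensed before/after sketch of the layouts involved (illustration
only, not part of the patch):

	#include <rte_common.h>
	#include <rte_lcore.h>
	#include <rte_lcore_var.h>
	#include <rte_spinlock.h>

	struct power_wait_status {
		rte_spinlock_t lock;
		volatile void *monitor_addr;
	};

	/* before: one cache-aligned slot per lcore; the neighbouring
	 * lines belong to other lcores
	 */
	static struct power_wait_status wait_status_old[RTE_MAX_LCORE]
		__rte_cache_aligned;

	/* after: each lcore's instance lives in that lcore's own buffer,
	 * surrounded by that same lcore's other lcore variables
	 */
	RTE_LCORE_VAR_HANDLE(struct power_wait_status, wait_status);
	RTE_LCORE_VAR_INIT(wait_status);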

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 lib/eal/x86/rte_power_intrinsics.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index 532a2e646b..f4659af77e 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -4,6 +4,7 @@
 
 #include <rte_common.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_rtm.h>
 #include <rte_spinlock.h>
 
@@ -12,10 +13,14 @@
 /*
  * Per-lcore structure holding current status of C0.2 sleeps.
  */
-static struct power_wait_status {
+struct power_wait_status {
 	rte_spinlock_t lock;
 	volatile void *monitor_addr; /**< NULL if not currently sleeping */
-} __rte_cache_aligned wait_status[RTE_MAX_LCORE];
+};
+
+RTE_LCORE_VAR_HANDLE(struct power_wait_status, wait_status);
+
+RTE_LCORE_VAR_INIT(wait_status);
 
 /*
  * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
@@ -170,7 +175,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 	if (pmc->fn == NULL)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_PTR(lcore_id, wait_status);
 
 	/* update sleep address */
 	rte_spinlock_lock(&s->lock);
@@ -262,7 +267,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 	if (lcore_id >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_PTR(lcore_id, wait_status);
 
 	/*
 	 * There is a race condition between sleep, wakeup and locking, but we
@@ -301,8 +306,8 @@ int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
 {
-	const unsigned int lcore_id = rte_lcore_id();
-	struct power_wait_status *s = &wait_status[lcore_id];
+	struct power_wait_status *s = RTE_LCORE_VAR_PTR(wait_status);
+
 	uint32_t i, rc;
 
 	/* check if supported */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC v3 1/6] eal: add static per-lcore memory allocation facility
  2024-02-20  8:49         ` [RFC v3 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-02-20  9:11           ` Bruce Richardson
  2024-02-20 10:47             ` Mattias Rönnblom
  2024-02-21  9:43           ` Jerin Jacob
                             ` (2 subsequent siblings)
  3 siblings, 1 reply; 185+ messages in thread
From: Bruce Richardson @ 2024-02-20  9:11 UTC (permalink / raw)
  To: Mattias Rönnblom; +Cc: dev, hofors, Morten Brørup, Stephen Hemminger

On Tue, Feb 20, 2024 at 09:49:03AM +0100, Mattias Rönnblom wrote:
> Introduce DPDK per-lcore id variables, or lcore variables for short.
> 
> An lcore variable has one value for every current and future lcore
> id-equipped thread.
> 
> The primary <rte_lcore_var.h> use case is for statically allocating
> small chunks of often-used data, which is related logically, but where
> there are performance benefits to reap from having updates being local
> to an lcore.
> 
> Lcore variables are similar to thread-local storage (TLS, e.g., C11
> _Thread_local), but decoupling the values' life time with that of the
> threads.
> 
> Lcore variables are also similar in terms of functionality provided by
> FreeBSD kernel's DPCPU_*() family of macros and the associated
> build-time machinery. DPCPU uses linker scripts, which effectively
> prevents the reuse of its, otherwise seemingly viable, approach.
> 
> The currently-prevailing way to solve the same problem as lcore
> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> lcore variables over this approach is that data related to the same
> lcore now is close (spatially, in memory), rather than data used by
> the same module, which in turn avoid excessive use of padding,
> polluting caches with unused data.
> 
> RFC v3:
>  * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
>  * Update example to reflect FOREACH macro name change (in RFC v2).
> 
> RFC v2:
>  * Use alignof to derive alignment requirements. (Morten Brørup)
>  * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
>    *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
>  * Allow user-specified alignment, but limit max to cache line size.
> 
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> ---
>  config/rte_config.h                   |   1 +
>  doc/api/doxy-api-index.md             |   1 +
>  lib/eal/common/eal_common_lcore_var.c |  82 ++++++
>  lib/eal/common/meson.build            |   1 +
>  lib/eal/include/meson.build           |   1 +
>  lib/eal/include/rte_lcore_var.h       | 375 ++++++++++++++++++++++++++
>  lib/eal/version.map                   |   4 +
>  7 files changed, 465 insertions(+)
>  create mode 100644 lib/eal/common/eal_common_lcore_var.c
>  create mode 100644 lib/eal/include/rte_lcore_var.h
> 
> diff --git a/config/rte_config.h b/config/rte_config.h
> index da265d7dd2..884482e473 100644
> --- a/config/rte_config.h
> +++ b/config/rte_config.h
> @@ -30,6 +30,7 @@
>  /* EAL defines */
>  #define RTE_CACHE_GUARD_LINES 1
>  #define RTE_MAX_HEAPS 32
> +#define RTE_MAX_LCORE_VAR 1048576
>  #define RTE_MAX_MEMSEG_LISTS 128
>  #define RTE_MAX_MEMSEG_PER_LIST 8192
>  #define RTE_MAX_MEM_MB_PER_LIST 32768
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
> index a6a768bd7c..bb06bb7ca1 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -98,6 +98,7 @@ The public API headers are grouped by topics:
>    [interrupts](@ref rte_interrupts.h),
>    [launch](@ref rte_launch.h),
>    [lcore](@ref rte_lcore.h),
> +  [lcore-varible](@ref rte_lcore_var.h),
>    [per-lcore](@ref rte_per_lcore.h),
>    [service cores](@ref rte_service.h),
>    [keepalive](@ref rte_keepalive.h),
> diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
> new file mode 100644
> index 0000000000..dfd11cbd0b
> --- /dev/null
> +++ b/lib/eal/common/eal_common_lcore_var.c
> @@ -0,0 +1,82 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2024 Ericsson AB
> + */
> +
> +#include <inttypes.h>
> +
> +#include <rte_common.h>
> +#include <rte_debug.h>
> +#include <rte_log.h>
> +
> +#include <rte_lcore_var.h>
> +
> +#include "eal_private.h"
> +
> +#define WARN_THRESHOLD 75
> +
> +/*
> + * Avoid using offset zero, since it would result in a NULL-value
> + * "handle" (offset) pointer, which in principle and per the API
> + * definition shouldn't be an issue, but may confuse some tools and
> + * users.
> + */
> +#define INITIAL_OFFSET 1
> +
> +char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR] __rte_cache_aligned;
> +

While I like the idea of improved handling for per-core variables, my main
concern with this set is this definition here, which adds yet another
dependency on the compile-time defined RTE_MAX_LCORE value.

I believe we already have an issue with this #define where it's impossible
to come up with a single value that works for all, or nearly all cases. The
current default is still 128, yet DPDK needs to support systems where the
number of cores is well into the hundreds, requiring workarounds of core
mappings or customized builds of DPDK. Upping the value fixes those issues
at the cost of a memory footprint explosion for smaller systems.

I'm therefore nervous about putting more dependencies on this value, when I
feel we should be moving away from its use, to allow more runtime
configurability of cores.

For this set/feature, would it be possible to have a run-time allocated
(and sized) array rather than using the RTE_MAX_LCORE value?

Thanks, (and apologies for the mini-rant!)

/Bruce

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC v3 1/6] eal: add static per-lcore memory allocation facility
  2024-02-20  9:11           ` Bruce Richardson
@ 2024-02-20 10:47             ` Mattias Rönnblom
  2024-02-20 11:39               ` Bruce Richardson
  0 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-20 10:47 UTC (permalink / raw)
  To: Bruce Richardson, Mattias Rönnblom
  Cc: dev, Morten Brørup, Stephen Hemminger

On 2024-02-20 10:11, Bruce Richardson wrote:
> On Tue, Feb 20, 2024 at 09:49:03AM +0100, Mattias Rönnblom wrote:
>> Introduce DPDK per-lcore id variables, or lcore variables for short.
>>
>> An lcore variable has one value for every current and future lcore
>> id-equipped thread.
>>
>> The primary <rte_lcore_var.h> use case is for statically allocating
>> small chunks of often-used data, which is related logically, but where
>> there are performance benefits to reap from having updates being local
>> to an lcore.
>>
>> Lcore variables are similar to thread-local storage (TLS, e.g., C11
>> _Thread_local), but decoupling the values' life time with that of the
>> threads.
>>
>> Lcore variables are also similar in terms of functionality provided by
>> FreeBSD kernel's DPCPU_*() family of macros and the associated
>> build-time machinery. DPCPU uses linker scripts, which effectively
>> prevents the reuse of its, otherwise seemingly viable, approach.
>>
>> The currently-prevailing way to solve the same problem as lcore
>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
>> lcore variables over this approach is that data related to the same
>> lcore now is close (spatially, in memory), rather than data used by
>> the same module, which in turn avoid excessive use of padding,
>> polluting caches with unused data.
>>
>> RFC v3:
>>   * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
>>   * Update example to reflect FOREACH macro name change (in RFC v2).
>>
>> RFC v2:
>>   * Use alignof to derive alignment requirements. (Morten Brørup)
>>   * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
>>     *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
>>   * Allow user-specified alignment, but limit max to cache line size.
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> ---
>>   config/rte_config.h                   |   1 +
>>   doc/api/doxy-api-index.md             |   1 +
>>   lib/eal/common/eal_common_lcore_var.c |  82 ++++++
>>   lib/eal/common/meson.build            |   1 +
>>   lib/eal/include/meson.build           |   1 +
>>   lib/eal/include/rte_lcore_var.h       | 375 ++++++++++++++++++++++++++
>>   lib/eal/version.map                   |   4 +
>>   7 files changed, 465 insertions(+)
>>   create mode 100644 lib/eal/common/eal_common_lcore_var.c
>>   create mode 100644 lib/eal/include/rte_lcore_var.h
>>
>> diff --git a/config/rte_config.h b/config/rte_config.h
>> index da265d7dd2..884482e473 100644
>> --- a/config/rte_config.h
>> +++ b/config/rte_config.h
>> @@ -30,6 +30,7 @@
>>   /* EAL defines */
>>   #define RTE_CACHE_GUARD_LINES 1
>>   #define RTE_MAX_HEAPS 32
>> +#define RTE_MAX_LCORE_VAR 1048576
>>   #define RTE_MAX_MEMSEG_LISTS 128
>>   #define RTE_MAX_MEMSEG_PER_LIST 8192
>>   #define RTE_MAX_MEM_MB_PER_LIST 32768
>> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
>> index a6a768bd7c..bb06bb7ca1 100644
>> --- a/doc/api/doxy-api-index.md
>> +++ b/doc/api/doxy-api-index.md
>> @@ -98,6 +98,7 @@ The public API headers are grouped by topics:
>>     [interrupts](@ref rte_interrupts.h),
>>     [launch](@ref rte_launch.h),
>>     [lcore](@ref rte_lcore.h),
>> +  [lcore-varible](@ref rte_lcore_var.h),
>>     [per-lcore](@ref rte_per_lcore.h),
>>     [service cores](@ref rte_service.h),
>>     [keepalive](@ref rte_keepalive.h),
>> diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
>> new file mode 100644
>> index 0000000000..dfd11cbd0b
>> --- /dev/null
>> +++ b/lib/eal/common/eal_common_lcore_var.c
>> @@ -0,0 +1,82 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2024 Ericsson AB
>> + */
>> +
>> +#include <inttypes.h>
>> +
>> +#include <rte_common.h>
>> +#include <rte_debug.h>
>> +#include <rte_log.h>
>> +
>> +#include <rte_lcore_var.h>
>> +
>> +#include "eal_private.h"
>> +
>> +#define WARN_THRESHOLD 75
>> +
>> +/*
>> + * Avoid using offset zero, since it would result in a NULL-value
>> + * "handle" (offset) pointer, which in principle and per the API
>> + * definition shouldn't be an issue, but may confuse some tools and
>> + * users.
>> + */
>> +#define INITIAL_OFFSET 1
>> +
>> +char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR] __rte_cache_aligned;
>> +
> 
> While I like the idea of improved handling for per-core variables, my main
> concern with this set is this definition here, which adds yet another
> dependency on the compile-time defined RTE_MAX_LCORE value.
> 

Lcore variables replace one RTE_MAX_LCORE-dependent pattern with another.

You could even argue the dependency on RTE_MAX_LCORE is reduced with 
lcore variables, if you look at where/in how many places in the code 
base this macro is being used. Centralizing per-lcore data management 
may also provide some opportunity in the future for extending the API to 
cope with some more dynamic RTE_MAX_LCORE variant. Not without ABI 
breakage of course, but we are not ever going to change anything related 
to RTE_MAX_LCORE without breaking the ABI, since this constant is 
everywhere, including compiled into the application itself.

> I believe we already have an issue with this #define where it's impossible
> to come up with a single value that works for all, or nearly all cases. The
> current default is still 128, yet DPDK needs to support systems where the
> number of cores is well into the hundreds, requiring workarounds of core
> mappings or customized builds of DPDK. Upping the value fixes those issues
> at the cost to memory footprint explosion for smaller systems.
> 

I agree this is an issue.

RTE_MAX_LCORE also needs to be sized to accommodate not only all cores 
used, but the sum of all EAL threads and registered non-EAL threads.

So, there is no reliable way to discover what RTE_MAX_LCORE is on a 
particular piece of hardware, since the actual number of lcore ids 
needed is up to the application.

Why is the default set so low? Linux has NR_CPUS, which serves the same 
purpose and is set to 4096 by default, if I recall correctly. 
Shouldn't we at least be able to increase it to 256?

> I'm therefore nervous about putting more dependencies on this value, when I
> feel we should be moving away from its use, to allow more runtime
> configurability of cores.
> 

What more specifically do you have in mind?

Maybe I'm overly pessimistic, but supporting lcores without any upper 
bound and also allowing them to be added and removed at any point during 
run time seems far-fetched, given where DPDK is today.

Including an actual upper bound, set during DPDK run-time 
initialization and lower than RTE_MAX_LCORE, seems easier. I think 
there is some equivalent in the Linux kernel. Again, you would need to 
accommodate future rte_thread_register() calls.

<rte_lcore_var.h> could be extended with user-specified lcore variable 
init/free callbacks, to allow lazy or late initialization.
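
A hypothetical sketch of what such an extension could look like
(nothing like this exists in the RFC; the function name and signature
are invented):

	/* hypothetical: init is invoked on each lcore id's value at
	 * allocation time (or lazily, on first use), fini if the
	 * variable is ever retired
	 */
	void *
	rte_lcore_var_alloc_cb(size_t size, size_t align,
			       void (*init)(void *value),
			       void (*fini)(void *value));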

If one had a way to retrieve the maximum possible number of lcore ids 
*for a particular DPDK process* (as opposed to a particular build), it 
would be possible to avoid touching the per-lcore buffers for lcore ids 
that would never be used. With data in BSS, such memory would never be 
mapped/allocated.

An issue with BSS data is that very RT-sensitive applications may 
decide to lock all memory into RAM, to avoid latency jitter caused by 
paging, and such applications would suffer from a large rte_lcore_var 
(or all the current static arrays). Lcore variables make this worse, 
since rte_lcore_var is larger than the sum of today's static arrays, 
and must be so, with some margin, since there is no way to figure out 
ahead of time how much memory is actually going to be needed.

> For this set/feature, would it be possible to have a run-time allocated
> (and sized) array rather than using the RTE_MAX_LCORE value?
> 

What I explored was having the per-lcore buffers dynamically allocated. 
What I ran into was that I saw no apparent benefit, and with dynamic 
allocation there were new problems to solve. One was to ensure lcore 
variable buffers were allocated before they were used. In particular, 
if you want to use huge page memory, lcore variables may become 
available only once that machinery is ready to accept requests.

Also, with huge page memory, you won't get the benefit you get from 
demand paging and BSS (i.e., only used memory is actually allocated).

With malloc(), I believe you generally do get that same benefit, if 
the allocation is sufficiently large.

I also considered just allocating chunks, fitting (say) 64 kB worth of 
lcore variables in each. That turned out more complex, and to no 
benefit other than reducing the footprint for mlockall()-type apps, 
which seemed like a corner case.

I never considered a fully dynamic RTE_MAX_LCORE with no upper bound.

> Thanks, (and apologies for the mini-rant!)
> 
> /Bruce

Thanks for the comments. This was nowhere near a rant.


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC v3 1/6] eal: add static per-lcore memory allocation facility
  2024-02-20 10:47             ` Mattias Rönnblom
@ 2024-02-20 11:39               ` Bruce Richardson
  2024-02-20 13:37                 ` Morten Brørup
  2024-02-20 16:26                 ` Mattias Rönnblom
  0 siblings, 2 replies; 185+ messages in thread
From: Bruce Richardson @ 2024-02-20 11:39 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: Mattias Rönnblom, dev, Morten Brørup, Stephen Hemminger

On Tue, Feb 20, 2024 at 11:47:14AM +0100, Mattias Rönnblom wrote:
> On 2024-02-20 10:11, Bruce Richardson wrote:
> > On Tue, Feb 20, 2024 at 09:49:03AM +0100, Mattias Rönnblom wrote:
> > > Introduce DPDK per-lcore id variables, or lcore variables for short.
> > > 
> > > An lcore variable has one value for every current and future lcore
> > > id-equipped thread.
> > > 
> > > The primary <rte_lcore_var.h> use case is for statically allocating
> > > small chunks of often-used data, which is related logically, but where
> > > there are performance benefits to reap from having updates being local
> > > to an lcore.
> > > 
> > > Lcore variables are similar to thread-local storage (TLS, e.g., C11
> > > _Thread_local), but decoupling the values' life time with that of the
> > > threads.

<snip>

> > > +/*
> > > + * Avoid using offset zero, since it would result in a NULL-value
> > > + * "handle" (offset) pointer, which in principle and per the API
> > > + * definition shouldn't be an issue, but may confuse some tools and
> > > + * users.
> > > + */
> > > +#define INITIAL_OFFSET 1
> > > +
> > > +char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR] __rte_cache_aligned;
> > > +
> > 
> > While I like the idea of improved handling for per-core variables, my main
> > concern with this set is this definition here, which adds yet another
> > dependency on the compile-time defined RTE_MAX_LCORE value.
> > 
> 
> lcore variables replaces one RTE_MAX_LCORE-dependent pattern with another.
> 
> You could even argue the dependency on RTE_MAX_LCORE is reduced with lcore
> variables, if you look at where/in how many places in the code base this
> macro is being used. Centralizing per-lcore data management may also provide
> some opportunity in the future for extending the API to cope with some more
> dynamic RTE_MAX_LCORE variant. Not without ABI breakage of course, but we
> are not ever going to change anything related to RTE_MAX_LCORE without
> breaking the ABI, since this constant is everywhere, including compiled into
> the application itself.
> 

Yep, that is true if it's widely used.

> > I believe we already have an issue with this #define where it's impossible
> > to come up with a single value that works for all, or nearly all cases. The
> > current default is still 128, yet DPDK needs to support systems where the
> > number of cores is well into the hundreds, requiring workarounds of core
> > mappings or customized builds of DPDK. Upping the value fixes those issues
> > at the cost to memory footprint explosion for smaller systems.
> > 
> 
> I agree this is an issue.
> 
> RTE_MAX_LCORE also need to be sized to accommodate not only all cores used,
> but the sum of all EAL threads and registered non-EAL threads.
> 
> So, there is no reliable way to discover what RTE_MAX_LCORE is on a
> particular piece of hardware, since the actual number of lcore ids needed is
> up to the application.
> 
> Why is the default set so low? Linux has MAX_CPUS, which serves the same
> purpose, which is set to 4096 by default, if I recall correctly. Shouldn't
> we at least be able to increase it to 256?

The default is so low because of the mempool caches. These are an array of
buffer pointers with 512 (IIRC) entries per core up to RTE_MAX_LCORE.

> 
> > I'm therefore nervous about putting more dependencies on this value, when I
> > feel we should be moving away from its use, to allow more runtime
> > configurability of cores.
> > 
> 
> What more specifically do you have in mind?
> 

I don't think having a dynamically scaling RTE_MAX_LCORE is feasible, but
what I would like to see is a runtime-specified value. For example, you
could run DPDK with EAL parameter "--max-lcores=1024" for large systems or
"--max-lcores=32" for small ones. That would then be used at init-time to
scale all internal data structures appropriately.
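
As a sketch of what that could look like on the library side (assuming
a hypothetical rte_max_lcores() accessor returning the EAL
init-specified value):

  #include <errno.h>
  #include <stdlib.h>

  struct foo_lcore_state { int a; };

  static struct foo_lcore_state *lcore_states;

  int
  foo_init(void)
  {
          /* size for the EAL-configured maximum, not RTE_MAX_LCORE */
          lcore_states = calloc(rte_max_lcores(), sizeof(*lcore_states));

          return lcore_states != NULL ? 0 : -ENOMEM;
  }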

/Bruce

<snip for brevity>

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [RFC v3 1/6] eal: add static per-lcore memory allocation facility
  2024-02-20 11:39               ` Bruce Richardson
@ 2024-02-20 13:37                 ` Morten Brørup
  2024-02-20 16:26                 ` Mattias Rönnblom
  1 sibling, 0 replies; 185+ messages in thread
From: Morten Brørup @ 2024-02-20 13:37 UTC (permalink / raw)
  To: Bruce Richardson, Mattias Rönnblom
  Cc: Mattias Rönnblom, dev, Stephen Hemminger

> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> Sent: Tuesday, 20 February 2024 12.39
> 
> On Tue, Feb 20, 2024 at 11:47:14AM +0100, Mattias Rönnblom wrote:
> > On 2024-02-20 10:11, Bruce Richardson wrote:
> > > On Tue, Feb 20, 2024 at 09:49:03AM +0100, Mattias Rönnblom wrote:
> > > > Introduce DPDK per-lcore id variables, or lcore variables for
> short.
> > > >
> > > > An lcore variable has one value for every current and future
> lcore
> > > > id-equipped thread.
> > > >
> > > > The primary <rte_lcore_var.h> use case is for statically
> allocating
> > > > small chunks of often-used data, which is related logically, but
> where
> > > > there are performance benefits to reap from having updates being
> local
> > > > to an lcore.
> > > >
> > > > Lcore variables are similar to thread-local storage (TLS, e.g.,
> C11
> > > > _Thread_local), but decouple the values' lifetime from that of
> the
> > > > threads.
> 
> <snip>
> 
> > > > +/*
> > > > + * Avoid using offset zero, since it would result in a NULL-
> value
> > > > + * "handle" (offset) pointer, which in principle and per the API
> > > > + * definition shouldn't be an issue, but may confuse some tools
> and
> > > > + * users.
> > > > + */
> > > > +#define INITIAL_OFFSET 1
> > > > +
> > > > +char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR]
> __rte_cache_aligned;
> > > > +
> > >
> > > While I like the idea of improved handling for per-core variables,
> my main
> > > concern with this set is this definition here, which adds yet
> another
> > > dependency on the compile-time defined RTE_MAX_LCORE value.
> > >
> >
> > lcore variables replace one RTE_MAX_LCORE-dependent pattern with
> another.
> >
> > You could even argue the dependency on RTE_MAX_LCORE is reduced with
> lcore
> > variables, if you look at where/in how many places in the code base
> this
> > macro is being used. Centralizing per-lcore data management may also
> provide
> > some opportunity in the future for extending the API to cope with
> some more
> > dynamic RTE_MAX_LCORE variant. Not without ABI breakage of course,
> but we
> > are not ever going to change anything related to RTE_MAX_LCORE
> without
> > breaking the ABI, since this constant is everywhere, including
> compiled into
> > the application itself.
> >
> 
> Yep, that is true if it's widely used.
> 
> > > I believe we already have an issue with this #define where it's
> impossible
> > > to come up with a single value that works for all, or nearly all
> cases. The
> > > current default is still 128, yet DPDK needs to support systems
> where the
> > > number of cores is well into the hundreds, requiring workarounds of
> core
> > > mappings or customized builds of DPDK. Upping the value fixes those
> issues
> > > at the cost of memory footprint explosion for smaller systems.
> > >
> >
> > I agree this is an issue.
> >
> > RTE_MAX_LCORE also needs to be sized to accommodate not only all cores
> used,
> > but the sum of all EAL threads and registered non-EAL threads.
> >
> > So, there is no reliable way to discover what RTE_MAX_LCORE is on a
> > particular piece of hardware, since the actual number of lcore ids
> needed is
> > up to the application.
> >
> > Why is the default set so low? Linux has NR_CPUS, which serves the
> same
> > purpose and is set to 4096 by default, if I recall correctly.
> Shouldn't
> > we at least be able to increase it to 256?

I recall a recent techboard meeting where the default was discussed. The default was agreed to be this low because it suffices for the vast majority of hardware out there, and applications for bigger platforms can be expected to build DPDK with a different configuration themselves. And as Bruce also mentions, it's a tradeoff against memory consumption.

> 
> The default is so low because of the mempool caches. These are an array
> of
> buffer pointers with 512 (IIRC) entries per core up to RTE_MAX_LCORE.

The decision had to be made quickly, so we used narrow guesstimates rather than a broader memory consumption analysis.

If we really cared about default memory consumption, we should reduce the default RTE_MAX_QUEUES_PER_PORT from 1024 too. It has quite an effect.

Having hard data about which build time configuration parameters have the biggest effect on memory consumption would be extremely useful for tweaking the parameters for resource limited hardware.
It's a mix of static and dynamic allocation, so it's not obvious which scalable data structures consume the most memory.

> 
> >
> > > I'm therefore nervous about putting more dependencies on this
> value, when I
> > > feel we should be moving away from its use, to allow more runtime
> > > configurability of cores.
> > >
> >
> > What more specifically do you have in mind?
> >
> 
> I don't think having a dynamically scaling RTE_MAX_LCORE is feasible,
> but
> what I would like to see is a runtime-specified value. For example, you
> could run DPDK with EAL parameter "--max-lcores=1024" for large systems
> or
> "--max-lcores=32" for small ones. That would then be used at init-time
> to
> scale all internal data structures appropriately.
> 

I agree 100 % that a better long-term solution should be on the general road map.
Memory is a precious resource, but few seem to care about it.

A mix could provide an easy migration path:
Having RTE_MAX_LCORE as the hard upper limit (and default value) for a runtime-specified max number ("rte_max_lcores").
With this, the goal would be for modules with very small data sets to continue using RTE_MAX_LCORE-sized fixed arrays, and for modules with larger data sets to migrate to dynamically sized arrays of rte_max_lcores entries, e.g. as sketched below.
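
A sketch of the mix (with rte_max_lcores being the hypothetical
runtime value):

  #include <errno.h>
  #include <stdlib.h>

  /* small per-lcore state: fixed size is fine */
  struct small_state { int a; };
  static struct small_state small[RTE_MAX_LCORE];

  /* large per-lcore state: sized at init time */
  struct big_state { char data[1024]; };
  static struct big_state *big;

  /* called after rte_eal_init(), once rte_max_lcores is known */
  int
  big_state_init(void)
  {
          big = calloc(rte_max_lcores, sizeof(*big));
          return big != NULL ? 0 : -ENOMEM;
  }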

I am opposed to blocking a new patch series only because it adds another RTE_MAX_LCORE-sized array. We already have plenty of those.
It can be migrated towards a dynamically sized array at a later time, just like the other modules with RTE_MAX_LCORE-sized arrays.
Perhaps "fixing" an existing module would free up more memory than fixing this module. Let's spend development resources where they have the biggest impact.


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [RFC v3 3/6] random: keep PRNG state in lcore variable
  2024-02-20  8:49         ` [RFC v3 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
@ 2024-02-20 15:31           ` Morten Brørup
  0 siblings, 0 replies; 185+ messages in thread
From: Morten Brørup @ 2024-02-20 15:31 UTC (permalink / raw)
  To: Mattias Rönnblom, dev; +Cc: hofors, Stephen Hemminger

> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> Sent: Tuesday, 20 February 2024 09.49
> 

[...]

> @@ -124,11 +129,10 @@ struct rte_rand_state *__rte_rand_get_state(void)
> 
>  	idx = rte_lcore_id();
> 
> -	/* last instance reserved for unregistered non-EAL threads */
>  	if (unlikely(idx == LCORE_ID_ANY))

idx is now only used here, so you could get rid of it by comparing directly to rte_lcore_id() instead.

Minor detail only; don't spin the patch for it.

> -		idx = RTE_MAX_LCORE;
> +		return &unregistered_rand_state;
> 
> -	return &rand_states[idx];
> +	return RTE_LCORE_VAR_PTR(rand_state);
>  }
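
For clarity, the function with that suggestion applied would read
roughly as follows (a sketch based on the diff above):

  struct rte_rand_state *
  __rte_rand_get_state(void)
  {
          if (unlikely(rte_lcore_id() == LCORE_ID_ANY))
                  return &unregistered_rand_state;

          return RTE_LCORE_VAR_PTR(rand_state);
  }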


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC v3 1/6] eal: add static per-lcore memory allocation facility
  2024-02-20 11:39               ` Bruce Richardson
  2024-02-20 13:37                 ` Morten Brørup
@ 2024-02-20 16:26                 ` Mattias Rönnblom
  1 sibling, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-20 16:26 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: Mattias Rönnblom, dev, Morten Brørup, Stephen Hemminger

On 2024-02-20 12:39, Bruce Richardson wrote:
> On Tue, Feb 20, 2024 at 11:47:14AM +0100, Mattias Rönnblom wrote:
>> On 2024-02-20 10:11, Bruce Richardson wrote:
>>> On Tue, Feb 20, 2024 at 09:49:03AM +0100, Mattias Rönnblom wrote:
>>>> Introduce DPDK per-lcore id variables, or lcore variables for short.
>>>>
>>>> An lcore variable has one value for every current and future lcore
>>>> id-equipped thread.
>>>>
>>>> The primary <rte_lcore_var.h> use case is for statically allocating
>>>> small chunks of often-used data, which is related logically, but where
>>>> there are performance benefits to reap from having updates being local
>>>> to an lcore.
>>>>
>>>> Lcore variables are similar to thread-local storage (TLS, e.g., C11
>>>> _Thread_local), but decouple the values' lifetime from that of the
>>>> threads.
> 
> <snip>
> 
>>>> +/*
>>>> + * Avoid using offset zero, since it would result in a NULL-value
>>>> + * "handle" (offset) pointer, which in principle and per the API
>>>> + * definition shouldn't be an issue, but may confuse some tools and
>>>> + * users.
>>>> + */
>>>> +#define INITIAL_OFFSET 1
>>>> +
>>>> +char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR] __rte_cache_aligned;
>>>> +
>>>
>>> While I like the idea of improved handling for per-core variables, my main
>>> concern with this set is this definition here, which adds yet another
>>> dependency on the compile-time defined RTE_MAX_LCORE value.
>>>
>>
>> lcore variables replace one RTE_MAX_LCORE-dependent pattern with another.
>>
>> You could even argue the dependency on RTE_MAX_LCORE is reduced with lcore
>> variables, if you look at where/in how many places in the code base this
>> macro is being used. Centralizing per-lcore data management may also provide
>> some opportunity in the future for extending the API to cope with some more
>> dynamic RTE_MAX_LCORE variant. Not without ABI breakage of course, but we
>> are not ever going to change anything related to RTE_MAX_LCORE without
>> breaking the ABI, since this constant is everywhere, including compiled into
>> the application itself.
>>
> 
> Yep, that is true if it's widely used.
> 
>>> I believe we already have an issue with this #define where it's impossible
>>> to come up with a single value that works for all, or nearly all cases. The
>>> current default is still 128, yet DPDK needs to support systems where the
>>> number of cores is well into the hundreds, requiring workarounds of core
>>> mappings or customized builds of DPDK. Upping the value fixes those issues
>>> at the cost of memory footprint explosion for smaller systems.
>>>
>>
>> I agree this is an issue.
>>
>> RTE_MAX_LCORE also needs to be sized to accommodate not only all cores used,
>> but the sum of all EAL threads and registered non-EAL threads.
>>
>> So, there is no reliable way to discover what RTE_MAX_LCORE is on a
>> particular piece of hardware, since the actual number of lcore ids needed is
>> up to the application.
>>
>> Why is the default set so low? Linux has NR_CPUS, which serves the same
>> purpose and is set to 4096 by default, if I recall correctly. Shouldn't
>> we at least be able to increase it to 256?
> 
> The default is so low because of the mempool caches. These are an array of
> buffer pointers with 512 (IIRC) entries per core up to RTE_MAX_LCORE.
> 
>>
>>> I'm therefore nervous about putting more dependencies on this value, when I
>>> feel we should be moving away from its use, to allow more runtime
>>> configurability of cores.
>>>
>>
>> What more specifically do you have in mind?
>>
> 
> I don't think having a dynamically scaling RTE_MAX_LCORE is feasible, but
> what I would like to see is a runtime-specified value. For example, you
> could run DPDK with EAL parameter "--max-lcores=1024" for large systems or
> "--max-lcores=32" for small ones. That would then be used at init-time to
> scale all internal data structures appropriately.
> 

Sounds reasonable to me, especially if you would take a gradual approach.

By gradual I mean something like adding a function 
rte_lcore_max_possible(), or something like that, returning the EAL 
init-specified value. DPDK libraries/PMDs could then gradually be made 
aware and take advantage of knowing that lcore ids will always be 
below a certain threshold, usually significantly lower than RTE_MAX_LCORE.

The only change required for lcore variables would be that the FOREACH 
macro would use the run-time-max value, rather than RTE_MAX_LCORE, which 
in turn would leave all the higher-numbered lcore id buffers 
untouched/unmapped.
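
In concrete terms, only the loop bound would change (a sketch,
assuming the hypothetical rte_lcore_max_possible() above; the body is
otherwise as in the patch):

  #define RTE_LCORE_VAR_FOREACH_VALUE(var, name)                        \
          for (unsigned int lcore_id =                                  \
                       (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0); \
               lcore_id < rte_lcore_max_possible();                     \
               lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))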

The set of possible lcore ids could also be expressed as a bitset, if 
you have a machine with a huge number of cores, running many small DPDK 
instances.

> /Bruce
> 
> <snip for brevity>

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC v3 1/6] eal: add static per-lcore memory allocation facility
  2024-02-20  8:49         ` [RFC v3 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-02-20  9:11           ` Bruce Richardson
@ 2024-02-21  9:43           ` Jerin Jacob
  2024-02-21 10:31             ` Morten Brørup
  2024-02-21 14:26             ` Mattias Rönnblom
  2024-02-22  9:22           ` Morten Brørup
  2024-02-25 15:03           ` [RFC v4 0/6] Lcore variables Mattias Rönnblom
  3 siblings, 2 replies; 185+ messages in thread
From: Jerin Jacob @ 2024-02-21  9:43 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, hofors, Morten Brørup, Stephen Hemminger, Tomasz Duszynski

On Tue, Feb 20, 2024 at 2:35 PM Mattias Rönnblom
<mattias.ronnblom@ericsson.com> wrote:
>
> Introduce DPDK per-lcore id variables, or lcore variables for short.
>
> An lcore variable has one value for every current and future lcore
> id-equipped thread.
>
> The primary <rte_lcore_var.h> use case is for statically allocating
> small chunks of often-used data, which is related logically, but where
> there are performance benefits to reap from having updates being local
> to an lcore.

I think, in order to quantify the gain, we must add a performance test
case to measure the access cycles with the lcore variables scheme vs the
existing scheme.
Other PMU counters (cache misses) may be interesting, but we don't have
means in DPDK to do self-monitoring now, like
https://patches.dpdk.org/project/dpdk/patch/20221213104350.3218167-1-tduszynski@marvell.com/
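
Something like the below could be a starting point (a rough sketch,
names are illustrative only; the handle must have been
RTE_LCORE_VAR_ALLOC'd first):

  static RTE_LCORE_VAR_HANDLE(uint64_t, test_counter);

  static void
  measure_lcore_var_access(void)
  {
          const unsigned int n = 10000000;
          uint64_t start = rte_rdtsc();
          unsigned int i;

          /* repeatedly update this lcore's instance of the variable */
          for (i = 0; i < n; i++)
                  (*RTE_LCORE_VAR_PTR(test_counter))++;

          printf("%.1f cycles/access\n",
                 (double)(rte_rdtsc() - start) / n);
  }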

>
> Lcore variables are similar to thread-local storage (TLS, e.g., C11
> _Thread_local), but decouple the values' lifetime from that of the
> threads.
>
> Lcore variables are also similar in terms of functionality to that provided
> by the FreeBSD kernel's DPCPU_*() family of macros and the associated
> build-time machinery. DPCPU uses linker scripts, which effectively
> prevents the reuse of its, otherwise seemingly viable, approach.
>
> The currently-prevailing way to solve the same problem as lcore
> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> lcore variables over this approach is that data related to the same
> lcore now is close (spatially, in memory), rather than data used by
> the same module, which in turn avoids excessive use of padding,
> polluting caches with unused data.
>
> RFC v3:
>  * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
>  * Update example to reflect FOREACH macro name change (in RFC v2).
>
> RFC v2:
>  * Use alignof to derive alignment requirements. (Morten Brørup)
>  * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
>    *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
>  * Allow user-specified alignment, but limit max to cache line size.
>
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> ---
>  config/rte_config.h                   |   1 +
>  doc/api/doxy-api-index.md             |   1 +
>  lib/eal/common/eal_common_lcore_var.c |  82 ++++++
>  lib/eal/common/meson.build            |   1 +
>  lib/eal/include/meson.build           |   1 +
>  lib/eal/include/rte_lcore_var.h       | 375 ++++++++++++++++++++++++++
>  lib/eal/version.map                   |   4 +
>  7 files changed, 465 insertions(+)
>  create mode 100644 lib/eal/common/eal_common_lcore_var.c
>  create mode 100644 lib/eal/include/rte_lcore_var.h
>
> diff --git a/config/rte_config.h b/config/rte_config.h
> index da265d7dd2..884482e473 100644
> --- a/config/rte_config.h
> +++ b/config/rte_config.h
> @@ -30,6 +30,7 @@
>  /* EAL defines */
>  #define RTE_CACHE_GUARD_LINES 1
>  #define RTE_MAX_HEAPS 32
> +#define RTE_MAX_LCORE_VAR 1048576
>  #define RTE_MAX_MEMSEG_LISTS 128
>  #define RTE_MAX_MEMSEG_PER_LIST 8192
>  #define RTE_MAX_MEM_MB_PER_LIST 32768
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
> index a6a768bd7c..bb06bb7ca1 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -98,6 +98,7 @@ The public API headers are grouped by topics:
>    [interrupts](@ref rte_interrupts.h),
>    [launch](@ref rte_launch.h),
>    [lcore](@ref rte_lcore.h),
> +  [lcore-variable](@ref rte_lcore_var.h),
>    [per-lcore](@ref rte_per_lcore.h),
>    [service cores](@ref rte_service.h),
>    [keepalive](@ref rte_keepalive.h),
> diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
> new file mode 100644
> index 0000000000..dfd11cbd0b
> --- /dev/null
> +++ b/lib/eal/common/eal_common_lcore_var.c
> @@ -0,0 +1,82 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2024 Ericsson AB
> + */
> +
> +#include <inttypes.h>
> +
> +#include <rte_common.h>
> +#include <rte_debug.h>
> +#include <rte_log.h>
> +
> +#include <rte_lcore_var.h>
> +
> +#include "eal_private.h"
> +
> +#define WARN_THRESHOLD 75
> +
> +/*
> + * Avoid using offset zero, since it would result in a NULL-value
> + * "handle" (offset) pointer, which in principle and per the API
> + * definition shouldn't be an issue, but may confuse some tools and
> + * users.
> + */
> +#define INITIAL_OFFSET 1
> +
> +char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR] __rte_cache_aligned;
> +
> +static uintptr_t allocated = INITIAL_OFFSET;
> +
> +static void
> +verify_allocation(uintptr_t new_allocated)
> +{
> +       static bool has_warned;
> +
> +       RTE_VERIFY(new_allocated < RTE_MAX_LCORE_VAR);
> +
> +       if (new_allocated > (WARN_THRESHOLD * RTE_MAX_LCORE_VAR) / 100 &&
> +           !has_warned) {
> +               EAL_LOG(WARNING, "Per-lcore data usage has exceeded %d%% "
> +                       "of the maximum capacity (%d bytes)", WARN_THRESHOLD,
> +                       RTE_MAX_LCORE_VAR);
> +               has_warned = true;
> +       }
> +}
> +
> +static void *
> +lcore_var_alloc(size_t size, size_t align)
> +{
> +       uintptr_t new_allocated = RTE_ALIGN_CEIL(allocated, align);
> +
> +       void *offset = (void *)new_allocated;
> +
> +       new_allocated += size;
> +
> +       verify_allocation(new_allocated);
> +
> +       allocated = new_allocated;
> +
> +       EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
> +               "%"PRIuPTR"-byte alignment", size, align);
> +
> +       return offset;
> +}
> +
> +void *
> +rte_lcore_var_alloc(size_t size, size_t align)
> +{
> +       /* Having the per-lcore buffer size aligned on cache lines,
> +        * as well as having the base pointer cache-line aligned,
> +        * assures that aligned offsets also translate to aligned
> +        * pointers across all values.
> +        */
> +       RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
> +       RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
> +
> +       /* '0' means asking for worst-case alignment requirements */
> +       if (align == 0)
> +               align = alignof(max_align_t);
> +
> +       RTE_ASSERT(rte_is_power_of_2(align));
> +
> +       return lcore_var_alloc(size, align);
> +}
> diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
> index 22a626ba6f..d41403680b 100644
> --- a/lib/eal/common/meson.build
> +++ b/lib/eal/common/meson.build
> @@ -18,6 +18,7 @@ sources += files(
>          'eal_common_interrupts.c',
>          'eal_common_launch.c',
>          'eal_common_lcore.c',
> +        'eal_common_lcore_var.c',
>          'eal_common_mcfg.c',
>          'eal_common_memalloc.c',
>          'eal_common_memory.c',
> diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
> index e94b056d46..9449253e23 100644
> --- a/lib/eal/include/meson.build
> +++ b/lib/eal/include/meson.build
> @@ -27,6 +27,7 @@ headers += files(
>          'rte_keepalive.h',
>          'rte_launch.h',
>          'rte_lcore.h',
> +        'rte_lcore_var.h',
>          'rte_lock_annotations.h',
>          'rte_malloc.h',
>          'rte_mcslock.h',
> diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
> new file mode 100644
> index 0000000000..da49d48d7c
> --- /dev/null
> +++ b/lib/eal/include/rte_lcore_var.h
> @@ -0,0 +1,375 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2024 Ericsson AB
> + */
> +
> +#ifndef _RTE_LCORE_VAR_H_
> +#define _RTE_LCORE_VAR_H_
> +
> +/**
> + * @file
> + *
> + * RTE Per-lcore id variables
> + *
> + * This API provides a mechanism to create and access per-lcore id
> + * variables in a space- and cycle-efficient manner.
> + *
> + * A per-lcore id variable (or lcore variable for short) has one value
> + * for each EAL thread and registered non-EAL thread. In other words,
> + * there's one copy of its value for each and every current and future
> + * lcore id-equipped thread, with the total number of copies amounting
> + * to \c RTE_MAX_LCORE.
> + *
> + * In order to access the values of an lcore variable, a handle is
> + * used. The type of the handle is a pointer to the value's type
> + * (e.g., for \c uint32_t lcore variable, the handle is a
> + * <code>uint32_t *</code>. A handle may be passed between modules and
> + * threads just like any pointer, but its value is not the address of
> + * any particular object, but rather just an opaque identifier, stored
> + * in a typed pointer (to inform the access macro the type of values).
> + *
> + * @b Creation
> + *
> + * An lcore variable is created in two steps:
> + *  1. Define a lcore variable handle by using \ref RTE_LCORE_VAR_HANDLE.
> + *  2. Allocate lcore variable storage and initialize the handle with
> + *     a unique identifier by \ref RTE_LCORE_VAR_ALLOC or
> + *     \ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the time of
> + *     module initialization, but may be done at any time.
> + *
> + * An lcore variable is not tied to the owning thread's lifetime. It's
> + * available for use by any thread immediately after having been
> + * allocated, and continues to be available throughout the lifetime of
> + * the EAL.
> + *
> + * Lcore variables cannot and need not be freed.
> + *
> + * @b Access
> + *
> + * The value of any lcore variable for any lcore id may be accessed
> + * from any thread (including unregistered threads), but it should
> + * generally only be *frequently* read from or written to by the owner.
> + *
> + * Values of the same lcore variable but owned by different lcore
> + * ids *may* be frequently read or written by the owners without the
> + * risk of false sharing.
> + *
> + * An appropriate synchronization mechanism (e.g., atomics) should
> + * be employed to assure there are no data races between the owning
> + * thread and any non-owner threads accessing the same lcore variable
> + * instance.
> + *
> + * The value of the lcore variable for a particular lcore id may be
> + * retrieved with \ref RTE_LCORE_VAR_LCORE_GET. To get a pointer to the
> + * same object, use \ref RTE_LCORE_VAR_LCORE_PTR.
> + *
> + * To modify the value of an lcore variable for a particular lcore id,
> + * either access the object through the pointer retrieved by \ref
> + * RTE_LCORE_VAR_LCORE_PTR or, for primitive types, use \ref
> + * RTE_LCORE_VAR_LCORE_SET.
> + *
> + * The access macros each have a short-hand which may be used by an EAL
> + * thread or registered non-EAL thread to access the lcore variable
> + * instance of its own lcore id. Those are \ref RTE_LCORE_VAR_GET,
> + * \ref RTE_LCORE_VAR_PTR, and \ref RTE_LCORE_VAR_SET.
> + *
> + * Although the handle (as defined by \ref RTE_LCORE_VAR_HANDLE) is a
> + * pointer with the same type as the value, it may not be directly
> + * dereferenced and must be treated as an opaque identifier. The
> + * *identifier* value is common across all lcore ids.
> + *
> + * @b Storage
> + *
> + * An lcore variable's values may be of a primitive type like \c int,
> + * but would more typically be a \c struct. An application may choose
> + * to define an lcore variable, which it then goes on to never
> + * allocate.
> + *
> + * The lcore variable handle introduces a per-variable (not
> + * per-value/per-lcore id) overhead of \c sizeof(void *) bytes, so
> + * there are some memory footprint gains to be made by organizing all
> + * per-lcore id data for a particular module as one lcore variable
> + * (e.g., as a struct).
> + *
> + * The sum of all lcore variables, plus any padding required, must be
> + * less than the DPDK build-time constant \c RTE_MAX_LCORE_VAR. A
> + * violation of this maximum results in the process being terminated.
> + *
> + * It's reasonable to expect that \c RTE_MAX_LCORE_VAR is on the
> + * same order of magnitude in size as a thread stack.
> + *
> + * The lcore variable storage buffers are kept in the BSS section in
> + * the resulting binary, where data generally isn't mapped in until
> + * it's accessed. This means that unused portions of the lcore
> + * variable storage area will not occupy any physical memory (with a
> + * granularity of the memory page size [usually 4 kB]).
> + *
> + * Lcore variables should generally *not* be \ref __rte_cache_aligned
> + * and need *not* include a \ref RTE_CACHE_GUARD field, since the use
> + * of these constructs is designed to avoid false sharing. In the
> + * case of an lcore variable instance, all nearby data structures
> + * should almost-always be written to by a single thread (the lcore
> + * variable owner). Adding padding will increase the effective memory
> + * working set size, potentially reducing performance.
> + *
> + * @b Example
> + *
> + * Below is an example of the use of an lcore variable:
> + *
> + * \code{.c}
> + * struct foo_lcore_state {
> + *         int a;
> + *         long b;
> + * };
> + *
> + * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
> + *
> + * long foo_get_a_plus_b(void)
> + * {
> + *         struct foo_lcore_state *state = RTE_LCORE_VAR_PTR(lcore_states);
> + *
> + *         return state->a + state->b;
> + * }
> + *
> + * RTE_INIT(rte_foo_init)
> + * {
> + *         RTE_LCORE_VAR_ALLOC(lcore_states);
> + *
> + *         struct foo_lcore_state *state;
> + *         RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
> + *                 (initialize 'state')
> + *         }
> + *
> + *         (other initialization)
> + * }
> + * \endcode
> + *
> + *
> + * @b Alternatives
> + *
> + * Lcore variables are designed to replace a pattern exemplified below:
> + * \code{.c}
> + * struct foo_lcore_state {
> + *         int a;
> + *         long b;
> + *         RTE_CACHE_GUARD;
> + * } __rte_cache_aligned;
> + *
> + * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
> + * \endcode
> + *
> + * This scheme is simple and effective, but has one drawback: the data
> + * is organized so that objects related to all lcores for a particular
> + * module are kept close in memory. At a bare minimum, this forces the
> + * use of cache-line alignment to avoid false sharing. With CPU
> + * hardware prefetching and memory loads resulting from speculative
> + * execution (functions which seemingly are getting more eager faster
> + * than they are getting more intelligent), one or more "guard" cache
> + * lines may be required to separate one lcore's data from another's.
> + *
> + * Lcore variables have the upside of working with, not against, the
> + * CPU's assumptions and for example next-line prefetchers may well
> + * work the way their designers intended (i.e., to the benefit, not
> + * detriment, of system performance).
> + *
> + * Another alternative to \ref rte_lcore_var.h is the \ref
> + * rte_per_lcore.h API, which make use of thread-local storage (TLS,
> + * e.g., GCC __thread or C11 _Thread_local). The main differences
> + * between by using the various forms of TLS (e.g., \ref
> + * RTE_DEFINE_PER_LCORE or _Thread_local) and the use of lcore
> + * variables are:
> + *
> + *   * The existence and non-existence of a thread-local variable
> + *     instance follows that of the particular thread. The data cannot be
> + *     accessed before the thread has been created, nor after it has
> + *     exited. One effect of this is that thread-local variables must be
> + *     initialized in a "lazy" manner (e.g., at the point of thread
> + *     creation). Lcore variables may be accessed immediately after
> + *     having been allocated (which is usually prior to any thread beyond
> + *     the main thread is running).
> + *   * A thread-local variable is duplicated across all threads in the
> + *     process, including unregistered non-EAL threads (i.e.,
> + *     "regular" threads). For DPDK applications heavily relying on
> + *     multi-threading (in conjunction to DPDK's "one thread per core"
> + *     pattern), either by having many concurrent threads or
> + *     creating/destroying threads at a high rate, an excessive use of
> + *     thread-local variables may cause inefficiencies (e.g.,
> + *     increased thread creation overhead due to thread-local storage
> + *     initialization or increased total RAM footprint usage). Lcore
> + *     variables *only* exist for threads with an lcore id, and thus
> + *     not for such "regular" threads.
> + *   * Whether data in thread-local storage may be shared between threads
> + *     (i.e., whether a pointer to a thread-local variable can be passed to
> + *     and successfully dereferenced by a non-owning thread) depends on
> + *     the details of the TLS implementation. With GCC __thread and
> + *     GCC _Thread_local, such data sharing is supported. In the C11
> + *     standard, the result of accessing another thread's
> + *     _Thread_local object is implementation-defined. Lcore variable
> + *     instances may be accessed reliably by any thread.
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <stddef.h>
> +#include <stdalign.h>
> +
> +#include <rte_common.h>
> +#include <rte_config.h>
> +#include <rte_lcore.h>
> +
> +/**
> + * Given the lcore variable type, produces the type of the lcore
> + * variable handle.
> + */
> +#define RTE_LCORE_VAR_HANDLE_TYPE(type)                \
> +       type *
> +
> +/**
> + * Define a lcore variable handle.
> + *
> + * This macro defines a variable which is used as a handle to access
> + * the various per-lcore id instances of a per-lcore id variable.
> + *
> + * The aim with this macro is to make clear at the point of
> + * declaration that this is an lcore handle, rather than a regular
> + * pointer.
> + *
> + * Add @b static as a prefix in case the lcore variable is only to be
> + * accessed from a particular translation unit.
> + */
> +#define RTE_LCORE_VAR_HANDLE(type, name)       \
> +       RTE_LCORE_VAR_HANDLE_TYPE(type) name
> +
> +/**
> + * Allocate space for an lcore variable, and initialize its handle.
> + */
> +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align)      \
> +       name = rte_lcore_var_alloc(size, align)
> +
> +/**
> + * Allocate space for an lcore variable, and initialize its handle,
> + * with values aligned for any type of object.
> + */
> +#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)   \
> +       name = rte_lcore_var_alloc(size, 0)
> +
> +/**
> + * Allocate space for an lcore variable of the size and alignment requirements
> + * suggested by the handler pointer type, and initialize its handle.
> + */
> +#define RTE_LCORE_VAR_ALLOC(name)                                      \
> +       RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, sizeof(*(name)),           \
> +                                      alignof(typeof(*(name))))
> +
> +/**
> + * Allocate an explicitly-sized, explicitly-aligned lcore variable by
> + * means of a \ref RTE_INIT constructor.
> + */
> +#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)               \
> +       RTE_INIT(rte_lcore_var_init_ ## name)                           \
> +       {                                                               \
> +               RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);      \
> +       }
> +
> +/**
> + * Allocate an explicitly-sized lcore variable by means of a \ref
> + * RTE_INIT constructor.
> + */
> +#define RTE_LCORE_VAR_INIT_SIZE(name, size)            \
> +       RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
> +
> +/**
> + * Allocate an lcore variable by means of a \ref RTE_INIT constructor.
> + */
> +#define RTE_LCORE_VAR_INIT(name)                                       \
> +       RTE_INIT(rte_lcore_var_init_ ## name)                           \
> +       {                                                               \
> +               RTE_LCORE_VAR_ALLOC(name);                              \
> +       }
> +
> +#define __RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)              \
> +       ((void *)(&rte_lcore_var[lcore_id][(uintptr_t)(name)]))
> +
> +/**
> + * Get pointer to lcore variable instance with the specified lcore id.
> + */
> +#define RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)                                \
> +       ((typeof(name))__RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
> +
> +/**
> + * Get value of a lcore variable instance of the specified lcore id.
> + */
> +#define RTE_LCORE_VAR_LCORE_GET(lcore_id, name)                \
> +       (*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)))
> +
> +/**
> + * Set the value of a lcore variable instance of the specified lcore id.
> + */
> +#define RTE_LCORE_VAR_LCORE_SET(lcore_id, name, value)         \
> +       (*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)) = (value))
> +
> +/**
> + * Get pointer to lcore variable instance of the current thread.
> + *
> + * May only be used by EAL threads and registered non-EAL threads.
> + */
> +#define RTE_LCORE_VAR_PTR(name) RTE_LCORE_VAR_LCORE_PTR(rte_lcore_id(), name)
> +
> +/**
> + * Get value of lcore variable instance of the current thread.
> + *
> + * May only be used by EAL threads and registered non-EAL threads.
> + */
> +#define RTE_LCORE_VAR_GET(name) RTE_LCORE_VAR_LCORE_GET(rte_lcore_id(), name)
> +
> +/**
> + * Set value of lcore variable instance of the current thread.
> + *
> + * May only be used by EAL threads and registered non-EAL threads.
> + */
> +#define RTE_LCORE_VAR_SET(name, value) \
> +       RTE_LCORE_VAR_LCORE_SET(rte_lcore_id(), name, value)
> +
> +/**
> + * Iterate over each lcore id's value for a lcore variable.
> + */
> +#define RTE_LCORE_VAR_FOREACH_VALUE(var, name)                         \
> +       for (unsigned int lcore_id =                                    \
> +                    (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);   \
> +            lcore_id < RTE_MAX_LCORE;                                  \
> +            lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
> +
> +extern char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR];
> +
> +/**
> + * Allocate space in the per-lcore id buffers for a lcore variable.
> + *
> + * The pointer returned is only an opaque identifier of the variable. To
> + * get an actual pointer to a particular instance of the variable use
> + * \ref RTE_LCORE_VAR_PTR or \ref RTE_LCORE_VAR_LCORE_PTR.
> + *
> + * The allocation is always successful, barring a fatal exhaustion of
> + * the per-lcore id buffer space.
> + *
> + * @param size
> + *   The size (in bytes) of the variable's per-lcore id value.
> + * @param align
> + *   If 0, the values will be suitably aligned for any kind of type
> + *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
> + *   on a multiple of *align*, which must be a power of 2 and equal to or
> + *   less than \c RTE_CACHE_LINE_SIZE.
> + * @return
> + *   The id of the variable, stored in a void pointer value.
> + */
> +__rte_experimental
> +void *
> +rte_lcore_var_alloc(size_t size, size_t align);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_LCORE_VAR_H_ */
> diff --git a/lib/eal/version.map b/lib/eal/version.map
> index 5e0cd47c82..e90b86115a 100644
> --- a/lib/eal/version.map
> +++ b/lib/eal/version.map
> @@ -393,6 +393,10 @@ EXPERIMENTAL {
>         # added in 23.07
>         rte_memzone_max_get;
>         rte_memzone_max_set;
> +
> +       # added in 24.03
> +       rte_lcore_var_alloc;
> +       rte_lcore_var;
>  };
>
>  INTERNAL {
> --
> 2.34.1
>

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [RFC v3 1/6] eal: add static per-lcore memory allocation facility
  2024-02-21  9:43           ` Jerin Jacob
@ 2024-02-21 10:31             ` Morten Brørup
  2024-02-21 14:26             ` Mattias Rönnblom
  1 sibling, 0 replies; 185+ messages in thread
From: Morten Brørup @ 2024-02-21 10:31 UTC (permalink / raw)
  To: Jerin Jacob, Mattias Rönnblom
  Cc: dev, hofors, Stephen Hemminger, Tomasz Duszynski

> From: Jerin Jacob [mailto:jerinjacobk@gmail.com]
> Sent: Wednesday, 21 February 2024 10.44
> 
> On Tue, Feb 20, 2024 at 2:35 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
> >
> > Introduce DPDK per-lcore id variables, or lcore variables for short.
> >
> > An lcore variable has one value for every current and future lcore
> > id-equipped thread.
> >
> > The primary <rte_lcore_var.h> use case is for statically allocating
> > small chunks of often-used data, which is related logically, but
> where
> > there are performance benefits to reap from having updates being
> local
> > to an lcore.
> 
> I think, in order to quantify the gain, we must add a performance test
> case to measure the access cycles with the lcore variables scheme vs the
> existing scheme.
> Other PMU counters (cache misses) may be interesting, but we don't have
> means in DPDK to do self-monitoring now, like
> https://patches.dpdk.org/project/dpdk/patch/20221213104350.3218167-1-
> tduszynski@marvell.com/
> 
> >
> > Lcore variables are similar to thread-local storage (TLS, e.g., C11
> > _Thread_local), but decouple the values' lifetime from that of the
> > threads.

Lcore variables can be accessed by other threads, unlike TLS variables.

If a TLS variable needs to be accessed by other threads, there must also be an RTE_MAX_LCORE-sized array of pointers to the TLS variable, where each worker thread must initialize the entry pointing to its TLS variable.
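
I.e., a sketch of that TLS pattern (illustrative names only):

  struct foo_state { int a; };

  static RTE_DEFINE_PER_LCORE(struct foo_state, foo_tls);
  static struct foo_state *foo_states[RTE_MAX_LCORE];

  /* run by each EAL/registered thread to publish its TLS instance */
  static void
  foo_thread_init(void)
  {
          foo_states[rte_lcore_id()] = &RTE_PER_LCORE(foo_tls);
  }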

> >
> > Lcore variables are also similar in terms of functionality to that provided
> by
> > the FreeBSD kernel's DPCPU_*() family of macros and the associated
> > build-time machinery. DPCPU uses linker scripts, which effectively
> > prevents the reuse of its, otherwise seemingly viable, approach.
> >
> > The currently-prevailing way to solve the same problem as lcore
> > variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
> > array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> > lcore variables over this approach is that data related to the same
> > lcore now is close (spatially, in memory), rather than data used by
> > the same module, which in turn avoids excessive use of padding,
> > polluting caches with unused data.
> >

There are 3 ways to implement per-lcore variables:
1. Thread-local storage, available via RTE_DEFINE_PER_LCORE(type, name).
2. RTE_MAX_LCORE-sized arrays.
3. Lcore variables, as provided by this patch series.

Perhaps an overview of differences and performance numbers would help understand the benefits of this patch series.

The advantages of packing more variables into the same cache line may be hard to measure without PMU counters, and could perhaps be described or estimated instead.
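
As a rough illustration of the packing effect (assuming 64-byte cache
lines): with a 16-byte per-lcore struct, the cache-aligned array
pattern pads each element to 64 bytes (plus any RTE_CACHE_GUARD
lines), so 75 % of each touched line is padding; with lcore variables,
four such structs from different modules can share one line, all of it
useful to the owning lcore.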


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC v3 1/6] eal: add static per-lcore memory allocation facility
  2024-02-21  9:43           ` Jerin Jacob
  2024-02-21 10:31             ` Morten Brørup
@ 2024-02-21 14:26             ` Mattias Rönnblom
  1 sibling, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-21 14:26 UTC (permalink / raw)
  To: Jerin Jacob, Mattias Rönnblom
  Cc: dev, Morten Brørup, Stephen Hemminger, Tomasz Duszynski

On 2024-02-21 10:43, Jerin Jacob wrote:
> On Tue, Feb 20, 2024 at 2:35 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
>>
>> Introduce DPDK per-lcore id variables, or lcore variables for short.
>>
>> An lcore variable has one value for every current and future lcore
>> id-equipped thread.
>>
>> The primary <rte_lcore_var.h> use case is for statically allocating
>> small chunks of often-used data, which is related logically, but where
>> there are performance benefits to reap from having updates being local
>> to an lcore.
> 
> I think, in order to quantify the gain, we must add a performance test
> case to measure the access cycles with the lcore variables scheme vs the
> existing scheme.

As I might have mentioned elsewhere in the thread, the micro benchmarks 
are already there, in the form of the service and random perf tests.

The service perf tests don't show any difference, and the rand perf 
tests seem to indicate lcore variables add one (1) core clock cycle per 
rte_rand() call (measured on Raptor Lake E- and P-cores).

The effects on a real-world app would be highly dependent on which DPDK 
services it uses that themselves use static per-lcore data, and 
to what extent the app itself uses per-lcore data.

Provided lcore variables perform as well as the cache-aligned static 
array pattern in micro benchmarks, lcore variables should always 
be as good or better in a real-world app, because the cache working set 
size will always be smaller (no padding).

That said, I don't think lcore variables will result in a noticeable 
performance gain for the typical app. If you do see large gains, I 
suspect it will be on systems with next-N-lines prefetchers where the 
lcore data wasn't RTE_CACHE_GUARDed.

> Other PMU counters (cache misses) may be interesting, but we don't have
> means in DPDK to do self-monitoring now, like
> https://patches.dpdk.org/project/dpdk/patch/20221213104350.3218167-1-tduszynski@marvell.com/
> 
>>
>> Lcore variables are similar to thread-local storage (TLS, e.g., C11
>> _Thread_local), but decouple the values' lifetime from that of the
>> threads.
>>
>> Lcore variables are also similar in terms of functionality to that provided
>> by the FreeBSD kernel's DPCPU_*() family of macros and the associated
>> build-time machinery. DPCPU uses linker scripts, which effectively
>> prevents the reuse of its, otherwise seemingly viable, approach.
>>
>> The currently-prevailing way to solve the same problem as lcore
>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
>> lcore variables over this approach is that data related to the same
>> lcore now is close (spatially, in memory), rather than data used by
>> the same module, which in turn avoids excessive use of padding,
>> polluting caches with unused data.
>>
>> RFC v3:
>>   * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
>>   * Update example to reflect FOREACH macro name change (in RFC v2).
>>
>> RFC v2:
>>   * Use alignof to derive alignment requirements. (Morten Brørup)
>>   * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
>>     *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
>>   * Allow user-specified alignment, but limit max to cache line size.
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> ---
>>   config/rte_config.h                   |   1 +
>>   doc/api/doxy-api-index.md             |   1 +
>>   lib/eal/common/eal_common_lcore_var.c |  82 ++++++
>>   lib/eal/common/meson.build            |   1 +
>>   lib/eal/include/meson.build           |   1 +
>>   lib/eal/include/rte_lcore_var.h       | 375 ++++++++++++++++++++++++++
>>   lib/eal/version.map                   |   4 +
>>   7 files changed, 465 insertions(+)
>>   create mode 100644 lib/eal/common/eal_common_lcore_var.c
>>   create mode 100644 lib/eal/include/rte_lcore_var.h
>>
>> diff --git a/config/rte_config.h b/config/rte_config.h
>> index da265d7dd2..884482e473 100644
>> --- a/config/rte_config.h
>> +++ b/config/rte_config.h
>> @@ -30,6 +30,7 @@
>>   /* EAL defines */
>>   #define RTE_CACHE_GUARD_LINES 1
>>   #define RTE_MAX_HEAPS 32
>> +#define RTE_MAX_LCORE_VAR 1048576
>>   #define RTE_MAX_MEMSEG_LISTS 128
>>   #define RTE_MAX_MEMSEG_PER_LIST 8192
>>   #define RTE_MAX_MEM_MB_PER_LIST 32768
>> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
>> index a6a768bd7c..bb06bb7ca1 100644
>> --- a/doc/api/doxy-api-index.md
>> +++ b/doc/api/doxy-api-index.md
>> @@ -98,6 +98,7 @@ The public API headers are grouped by topics:
>>     [interrupts](@ref rte_interrupts.h),
>>     [launch](@ref rte_launch.h),
>>     [lcore](@ref rte_lcore.h),
>> +  [lcore-variable](@ref rte_lcore_var.h),
>>     [per-lcore](@ref rte_per_lcore.h),
>>     [service cores](@ref rte_service.h),
>>     [keepalive](@ref rte_keepalive.h),
>> diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
>> new file mode 100644
>> index 0000000000..dfd11cbd0b
>> --- /dev/null
>> +++ b/lib/eal/common/eal_common_lcore_var.c
>> @@ -0,0 +1,82 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2024 Ericsson AB
>> + */
>> +
>> +#include <inttypes.h>
>> +
>> +#include <rte_common.h>
>> +#include <rte_debug.h>
>> +#include <rte_log.h>
>> +
>> +#include <rte_lcore_var.h>
>> +
>> +#include "eal_private.h"
>> +
>> +#define WARN_THRESHOLD 75
>> +
>> +/*
>> + * Avoid using offset zero, since it would result in a NULL-value
>> + * "handle" (offset) pointer, which in principle and per the API
>> + * definition shouldn't be an issue, but may confuse some tools and
>> + * users.
>> + */
>> +#define INITIAL_OFFSET 1
>> +
>> +char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR] __rte_cache_aligned;
>> +
>> +static uintptr_t allocated = INITIAL_OFFSET;
>> +
>> +static void
>> +verify_allocation(uintptr_t new_allocated)
>> +{
>> +       static bool has_warned;
>> +
>> +       RTE_VERIFY(new_allocated < RTE_MAX_LCORE_VAR);
>> +
>> +       if (new_allocated > (WARN_THRESHOLD * RTE_MAX_LCORE_VAR) / 100 &&
>> +           !has_warned) {
>> +               EAL_LOG(WARNING, "Per-lcore data usage has exceeded %d%% "
>> +                       "of the maximum capacity (%d bytes)", WARN_THRESHOLD,
>> +                       RTE_MAX_LCORE_VAR);
>> +               has_warned = true;
>> +       }
>> +}
>> +
>> +static void *
>> +lcore_var_alloc(size_t size, size_t align)
>> +{
>> +       uintptr_t new_allocated = RTE_ALIGN_CEIL(allocated, align);
>> +
>> +       void *offset = (void *)new_allocated;
>> +
>> +       new_allocated += size;
>> +
>> +       verify_allocation(new_allocated);
>> +
>> +       allocated = new_allocated;
>> +
>> +       EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
>> +               "%"PRIuPTR"-byte alignment", size, align);
>> +
>> +       return offset;
>> +}
>> +
>> +void *
>> +rte_lcore_var_alloc(size_t size, size_t align)
>> +{
>> +       /* Having the per-lcore buffer size aligned on cache lines,
>> +        * as well as having the base pointer cache-line aligned,
>> +        * assures that aligned offsets also translate to aligned
>> +        * pointers across all values.
>> +        */
>> +       RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
>> +       RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
>> +
>> +       /* '0' means asking for worst-case alignment requirements */
>> +       if (align == 0)
>> +               align = alignof(max_align_t);
>> +
>> +       RTE_ASSERT(rte_is_power_of_2(align));
>> +
>> +       return lcore_var_alloc(size, align);
>> +}
>> diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
>> index 22a626ba6f..d41403680b 100644
>> --- a/lib/eal/common/meson.build
>> +++ b/lib/eal/common/meson.build
>> @@ -18,6 +18,7 @@ sources += files(
>>           'eal_common_interrupts.c',
>>           'eal_common_launch.c',
>>           'eal_common_lcore.c',
>> +        'eal_common_lcore_var.c',
>>           'eal_common_mcfg.c',
>>           'eal_common_memalloc.c',
>>           'eal_common_memory.c',
>> diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
>> index e94b056d46..9449253e23 100644
>> --- a/lib/eal/include/meson.build
>> +++ b/lib/eal/include/meson.build
>> @@ -27,6 +27,7 @@ headers += files(
>>           'rte_keepalive.h',
>>           'rte_launch.h',
>>           'rte_lcore.h',
>> +        'rte_lcore_var.h',
>>           'rte_lock_annotations.h',
>>           'rte_malloc.h',
>>           'rte_mcslock.h',
>> diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
>> new file mode 100644
>> index 0000000000..da49d48d7c
>> --- /dev/null
>> +++ b/lib/eal/include/rte_lcore_var.h
>> @@ -0,0 +1,375 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2024 Ericsson AB
>> + */
>> +
>> +#ifndef _RTE_LCORE_VAR_H_
>> +#define _RTE_LCORE_VAR_H_
>> +
>> +/**
>> + * @file
>> + *
>> + * RTE Per-lcore id variables
>> + *
>> + * This API provides a mechanism to create and access per-lcore id
>> + * variables in a space- and cycle-efficient manner.
>> + *
>> + * A per-lcore id variable (or lcore variable for short) has one value
>> + * for each EAL thread and registered non-EAL thread. In other words,
>> + * there's one copy of its value for each and every current and future
>> + * lcore id-equipped thread, with the total number of copies amounting
>> + * to \c RTE_MAX_LCORE.
>> + *
>> + * In order to access the values of an lcore variable, a handle is
>> + * used. The type of the handle is a pointer to the value's type
>> + * (e.g., for a \c uint32_t lcore variable, the handle is a
>> + * <code>uint32_t *</code>). A handle may be passed between modules and
>> + * threads just like any pointer, but its value is not the address of
>> + * any particular object, but rather just an opaque identifier, stored
>> + * in a typed pointer (to inform the access macro the type of values).
>> + *
>> + * @b Creation
>> + *
>> + * An lcore variable is created in two steps:
>> + *  1. Define a lcore variable handle by using \ref RTE_LCORE_VAR_HANDLE.
>> + *  2. Allocate lcore variable storage and initialize the handle with
>> + *     a unique identifier by \ref RTE_LCORE_VAR_ALLOC or
>> + *     \ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the time of
>> + *     module initialization, but may be done at any time.
>> + *
>> + * An lcore variable is not tied to the owning thread's lifetime. It's
>> + * available for use by any thread immediately after having been
>> + * allocated, and continues to be available throughout the lifetime of
>> + * the EAL.
>> + *
>> + * Lcore variables cannot and need not be freed.
>> + *
>> + * @b Access
>> + *
>> + * The value of any lcore variable for any lcore id may be accessed
>> + * from any thread (including unregistered threads), but it should
>> + * generally only be *frequently* read from or written to by the owner.
>> + *
>> + * Values of the same lcore variable but owned by different lcore
>> + * ids *may* be frequently read or written by the owners without the
>> + * risk of false sharing.
>> + *
>> + * An appropriate synchronization mechanism (e.g., atomics) should
>> + * be employed to assure there are no data races between the owning
>> + * thread and any non-owner threads accessing the same lcore variable
>> + * instance.
>> + *
>> + * The value of the lcore variable for a particular lcore id may be
>> + * retrieved with \ref RTE_LCORE_VAR_LCORE_GET. To get a pointer to the
>> + * same object, use \ref RTE_LCORE_VAR_LCORE_PTR.
>> + *
>> + * To modify the value of an lcore variable for a particular lcore id,
>> + * either access the object through the pointer retrieved by \ref
>> + * RTE_LCORE_VAR_LCORE_PTR or, for primitive types, use \ref
>> + * RTE_LCORE_VAR_LCORE_SET.
>> + *
>> + * The access macros each have a short-hand which may be used by an EAL
>> + * thread or registered non-EAL thread to access the lcore variable
>> + * instance of its own lcore id. Those are \ref RTE_LCORE_VAR_GET,
>> + * \ref RTE_LCORE_VAR_PTR, and \ref RTE_LCORE_VAR_SET.
>> + *
>> + * Although the handle (as defined by \ref RTE_LCORE_VAR_HANDLE) is a
>> + * pointer with the same type as the value, it may not be directly
>> + * dereferenced and must be treated as an opaque identifier. The
>> + * *identifier* value is common across all lcore ids.
>> + *
>> + * @b Storage
>> + *
>> + * An lcore variable's values may be of a primitive type like \c int,
>> + * but would more typically be a \c struct. An application may choose
>> + * to define an lcore variable, which it then goes on to never
>> + * allocate.
>> + *
>> + * The lcore variable handle introduces a per-variable (not
>> + * per-value/per-lcore id) overhead of \c sizeof(void *) bytes, so
>> + * there are some memory footprint gains to be made by organizing all
>> + * per-lcore id data for a particular module as one lcore variable
>> + * (e.g., as a struct).
>> + *
>> + * The sum of all lcore variables, plus any padding required, must be
>> + * less than the DPDK build-time constant \c RTE_MAX_LCORE_VAR. A
>> + * violation of this maximum results in the process being terminated.
>> + *
>> + * It's reasonable to expect that \c RTE_MAX_LCORE_VAR is on the
>> + * same order of magnitude in size as a thread stack.
>> + *
>> + * The lcore variable storage buffers are kept in the BSS section in
>> + * the resulting binary, where data generally isn't mapped in until
>> + * it's accessed. This means that unused portions of the lcore
>> + * variable storage area will not occupy any physical memory (with a
>> + * granularity of the memory page size [usually 4 kB]).
>> + *
>> + * Lcore variables should generally *not* be \ref __rte_cache_aligned
>> + * and need *not* include a \ref RTE_CACHE_GUARD field, since the use
>> + * of these constructs is designed to avoid false sharing. In the
>> + * case of an lcore variable instance, all nearby data structures
>> + * should almost-always be written to by a single thread (the lcore
>> + * variable owner). Adding padding will increase the effective memory
>> + * working set size, potentially reducing performance.
>> + *
>> + * @b Example
>> + *
>> + * Below is an example of the use of an lcore variable:
>> + *
>> + * \code{.c}
>> + * struct foo_lcore_state {
>> + *         int a;
>> + *         long b;
>> + * };
>> + *
>> + * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
>> + *
>> + * long foo_get_a_plus_b(void)
>> + * {
>> + *         struct foo_lcore_state *state = RTE_LCORE_VAR_PTR(lcore_states);
>> + *
>> + *         return state->a + state->b;
>> + * }
>> + *
>> + * RTE_INIT(rte_foo_init)
>> + * {
>> + *         RTE_LCORE_VAR_ALLOC(lcore_states);
>> + *
>> + *         struct foo_lcore_state *state;
>> + *         RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
>> + *                 (initialize 'state')
>> + *         }
>> + *
>> + *         (other initialization)
>> + * }
>> + * \endcode
>> + *
>> + *
>> + * @b Alternatives
>> + *
>> + * Lcore variables are designed to replace a pattern exemplified below:
>> + * \code{.c}
>> + * struct foo_lcore_state {
>> + *         int a;
>> + *         long b;
>> + *         RTE_CACHE_GUARD;
>> + * } __rte_cache_aligned;
>> + *
>> + * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
>> + * \endcode
>> + *
>> + * This scheme is simple and effective, but has one drawback: the data
>> + * is organized so that objects related to all lcores for a particular
>> + * module are kept close in memory. At a bare minimum, this forces the
>> + * use of cache-line alignment to avoid false sharing. With CPU
>> + * hardware prefetching and memory loads resulting from speculative
>> + * execution (functions which seemingly are getting more eager faster
>> + * than they are getting more intelligent), one or more "guard" cache
>> + * lines may be required to separate one lcore's data from another's.
>> + *
>> + * Lcore variables have the upside of working with, not against, the
>> + * CPU's assumptions, and for example next-line prefetchers may well
>> + * work the way their designers intended (i.e., to the benefit, not
>> + * detriment, of system performance).
>> + *
>> + * Another alternative to \ref rte_lcore_var.h is the \ref
>> + * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
>> + * e.g., GCC __thread or C11 _Thread_local). The main differences
>> + * between using the various forms of TLS (e.g., \ref
>> + * RTE_DEFINE_PER_LCORE or _Thread_local) and the use of lcore
>> + * variables are:
>> + *
>> + *   * The existence and non-existence of a thread-local variable
>> + *     instance follow that of the particular thread. The data cannot
>> + *     be accessed before the thread has been created, nor after it
>> + *     has exited. One effect of this is that thread-local variables
>> + *     must be initialized in a "lazy" manner (e.g., at the point of
>> + *     thread creation). Lcore variables may be accessed immediately
>> + *     after having been allocated (which usually occurs before any
>> + *     thread beyond the main thread is running).
>> + *   * A thread-local variable is duplicated across all threads in the
>> + *     process, including unregistered non-EAL threads (i.e.,
>> + *     "regular" threads). For DPDK applications heavily relying on
>> + *     multi-threading (in conjunction with DPDK's "one thread per core"
>> + *     pattern), either by having many concurrent threads or
>> + *     creating/destroying threads at a high rate, an excessive use of
>> + *     thread-local variables may cause inefficiencies (e.g.,
>> + *     increased thread creation overhead due to thread-local storage
>> + *     initialization or increased total RAM footprint usage). Lcore
>> + *     variables *only* exist for threads with an lcore id, and thus
>> + *     not for such "regular" threads.
>> + *   * Whether data in thread-local storage may be shared between
>> + *     threads (i.e., whether a pointer to a thread-local variable
>> + *     may be passed to and successfully dereferenced by a
>> + *     non-owning thread) depends on
>> + *     the details of the TLS implementation. With GCC __thread and
>> + *     GCC _Thread_local, such data sharing is supported. In the C11
>> + *     standard, the result of accessing another thread's
>> + *     _Thread_local object is implementation-defined. Lcore variable
>> + *     instances may be accessed reliably by any thread.
>> + */
>> +
>> +#ifdef __cplusplus
>> +extern "C" {
>> +#endif
>> +
>> +#include <stddef.h>
>> +#include <stdalign.h>
>> +
>> +#include <rte_common.h>
>> +#include <rte_config.h>
>> +#include <rte_lcore.h>
>> +
>> +/**
>> + * Given the lcore variable type, produces the type of the lcore
>> + * variable handle.
>> + */
>> +#define RTE_LCORE_VAR_HANDLE_TYPE(type)                \
>> +       type *
>> +
>> +/**
>> + * Define a lcore variable handle.
>> + *
>> + * This macro defines a variable which is used as a handle to access
>> + * the various per-lcore id instances of a per-lcore id variable.
>> + *
>> + * The aim with this macro is to make clear at the point of
>> + * declaration that this is an lcore handle, rather than a regular
>> + * pointer.
>> + *
>> + * Add @b static as a prefix in case the lcore variable is only to be
>> + * accessed from a particular translation unit.
>> + */
>> +#define RTE_LCORE_VAR_HANDLE(type, name)       \
>> +       RTE_LCORE_VAR_HANDLE_TYPE(type) name
>> +
>> +/**
>> + * Allocate space for an lcore variable, and initialize its handle.
>> + */
>> +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align)      \
>> +       name = rte_lcore_var_alloc(size, align)
>> +
>> +/**
>> + * Allocate space for an lcore variable, and initialize its handle,
>> + * with values aligned for any type of object.
>> + */
>> +#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)   \
>> +       name = rte_lcore_var_alloc(size, 0)
>> +
>> +/**
>> + * Allocate space for an lcore variable of the size and alignment requirements
>> + * suggested by the handle pointer type, and initialize its handle.
>> + */
>> +#define RTE_LCORE_VAR_ALLOC(name)                                      \
>> +       RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, sizeof(*(name)),           \
>> +                                      alignof(typeof(*(name))))
>> +
>> +/**
>> + * Allocate an explicitly-sized, explicitly-aligned lcore variable by
>> + * means of a \ref RTE_INIT constructor.
>> + */
>> +#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)               \
>> +       RTE_INIT(rte_lcore_var_init_ ## name)                           \
>> +       {                                                               \
>> +               RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);      \
>> +       }
>> +
>> +/**
>> + * Allocate an explicitly-sized lcore variable by means of a \ref
>> + * RTE_INIT constructor.
>> + */
>> +#define RTE_LCORE_VAR_INIT_SIZE(name, size)            \
>> +       RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
>> +
>> +/**
>> + * Allocate an lcore variable by means of a \ref RTE_INIT constructor.
>> + */
>> +#define RTE_LCORE_VAR_INIT(name)                                       \
>> +       RTE_INIT(rte_lcore_var_init_ ## name)                           \
>> +       {                                                               \
>> +               RTE_LCORE_VAR_ALLOC(name);                              \
>> +       }
>> +
>> +#define __RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)              \
>> +       ((void *)(&rte_lcore_var[lcore_id][(uintptr_t)(name)]))
>> +
>> +/**
>> + * Get pointer to lcore variable instance with the specified lcore id.
>> + */
>> +#define RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)                                \
>> +       ((typeof(name))__RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
>> +
>> +/**
>> + * Get value of a lcore variable instance of the specified lcore id.
>> + */
>> +#define RTE_LCORE_VAR_LCORE_GET(lcore_id, name)                \
>> +       (*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)))
>> +
>> +/**
>> + * Set the value of a lcore variable instance of the specified lcore id.
>> + */
>> +#define RTE_LCORE_VAR_LCORE_SET(lcore_id, name, value)         \
>> +       (*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)) = (value))
>> +
>> +/**
>> + * Get pointer to lcore variable instance of the current thread.
>> + *
>> + * May only be used by EAL threads and registered non-EAL threads.
>> + */
>> +#define RTE_LCORE_VAR_PTR(name) RTE_LCORE_VAR_LCORE_PTR(rte_lcore_id(), name)
>> +
>> +/**
>> + * Get value of lcore variable instance of the current thread.
>> + *
>> + * May only be used by EAL threads and registered non-EAL threads.
>> + */
>> +#define RTE_LCORE_VAR_GET(name) RTE_LCORE_VAR_LCORE_GET(rte_lcore_id(), name)
>> +
>> +/**
>> + * Set value of lcore variable instance of the current thread.
>> + *
>> + * May only be used by EAL threads and registered non-EAL threads.
>> + */
>> +#define RTE_LCORE_VAR_SET(name, value) \
>> +       RTE_LCORE_VAR_LCORE_SET(rte_lcore_id(), name, value)
>> +
>> +/**
>> + * Iterate over each lcore id's value for a lcore variable.
>> + */
>> +#define RTE_LCORE_VAR_FOREACH_VALUE(var, name)                         \
>> +       for (unsigned int lcore_id =                                    \
>> +                    (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);   \
>> +            lcore_id < RTE_MAX_LCORE;                                  \
>> +            lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
>> +
>> +extern char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR];
>> +
>> +/**
>> + * Allocate space in the per-lcore id buffers for a lcore variable.
>> + *
>> + * The pointer returned is only an opaque identifier of the variable. To
>> + * get an actual pointer to a particular instance of the variable use
>> + * \ref RTE_LCORE_VAR_PTR or \ref RTE_LCORE_VAR_LCORE_PTR.
>> + *
>> + * The allocation is always successful, barring a fatal exhaustion of
>> + * the per-lcore id buffer space.
>> + *
>> + * @param size
>> + *   The size (in bytes) of the variable's per-lcore id value.
>> + * @param align
>> + *   If 0, the values will be suitably aligned for any kind of type
>> + *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
>> + *   on a multiple of *align*, which must be a power of 2 and equal or
>> + *   less than \c RTE_CACHE_LINE_SIZE.
>> + * @return
>> + *   The id of the variable, stored in a void pointer value.
>> + */
>> +__rte_experimental
>> +void *
>> +rte_lcore_var_alloc(size_t size, size_t align);
>> +
>> +#ifdef __cplusplus
>> +}
>> +#endif
>> +
>> +#endif /* _RTE_LCORE_VAR_H_ */
>> diff --git a/lib/eal/version.map b/lib/eal/version.map
>> index 5e0cd47c82..e90b86115a 100644
>> --- a/lib/eal/version.map
>> +++ b/lib/eal/version.map
>> @@ -393,6 +393,10 @@ EXPERIMENTAL {
>>          # added in 23.07
>>          rte_memzone_max_get;
>>          rte_memzone_max_set;
>> +
>> +       # added in 24.03
>> +       rte_lcore_var_alloc;
>> +       rte_lcore_var;
>>   };
>>
>>   INTERNAL {
>> --
>> 2.34.1
>>

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [RFC v3 1/6] eal: add static per-lcore memory allocation facility
  2024-02-20  8:49         ` [RFC v3 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-02-20  9:11           ` Bruce Richardson
  2024-02-21  9:43           ` Jerin Jacob
@ 2024-02-22  9:22           ` Morten Brørup
  2024-02-23 10:12             ` Mattias Rönnblom
  2024-02-25 15:03           ` [RFC v4 0/6] Lcore variables Mattias Rönnblom
  3 siblings, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-02-22  9:22 UTC (permalink / raw)
  To: Mattias Rönnblom, dev; +Cc: hofors, Stephen Hemminger

> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> Sent: Tuesday, 20 February 2024 09.49
> 
> Introduce DPDK per-lcore id variables, or lcore variables for short.
> 
> An lcore variable has one value for every current and future lcore
> id-equipped thread.
> 
> The primary <rte_lcore_var.h> use case is for statically allocating
> small chunks of often-used data, which is related logically, but where
> there are performance benefits to reap from having updates being local
> to an lcore.
> 
> Lcore variables are similar to thread-local storage (TLS, e.g., C11
> _Thread_local), but decoupling the values' life time with that of the
> threads.
> 
> Lcore variables are also similar in terms of functionality provided by
> FreeBSD kernel's DPCPU_*() family of macros and the associated
> build-time machinery. DPCPU uses linker scripts, which effectively
> prevents the reuse of its, otherwise seemingly viable, approach.
> 
> The currently-prevailing way to solve the same problem as lcore
> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> lcore variables over this approach is that data related to the same
> lcore now is close (spatially, in memory), rather than data used by
> the same module, which in turn avoid excessive use of padding,
> polluting caches with unused data.
> 
> RFC v3:
>  * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
>  * Update example to reflect FOREACH macro name change (in RFC v2).
> 
> RFC v2:
>  * Use alignof to derive alignment requirements. (Morten Brørup)
>  * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
>    *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
>  * Allow user-specified alignment, but limit max to cache line size.
> 
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> ---
>  config/rte_config.h                   |   1 +
>  doc/api/doxy-api-index.md             |   1 +
>  lib/eal/common/eal_common_lcore_var.c |  82 ++++++
>  lib/eal/common/meson.build            |   1 +
>  lib/eal/include/meson.build           |   1 +
>  lib/eal/include/rte_lcore_var.h       | 375 ++++++++++++++++++++++++++
>  lib/eal/version.map                   |   4 +
>  7 files changed, 465 insertions(+)
>  create mode 100644 lib/eal/common/eal_common_lcore_var.c
>  create mode 100644 lib/eal/include/rte_lcore_var.h
> 
> diff --git a/config/rte_config.h b/config/rte_config.h
> index da265d7dd2..884482e473 100644
> --- a/config/rte_config.h
> +++ b/config/rte_config.h
> @@ -30,6 +30,7 @@
>  /* EAL defines */
>  #define RTE_CACHE_GUARD_LINES 1
>  #define RTE_MAX_HEAPS 32
> +#define RTE_MAX_LCORE_VAR 1048576
>  #define RTE_MAX_MEMSEG_LISTS 128
>  #define RTE_MAX_MEMSEG_PER_LIST 8192
>  #define RTE_MAX_MEM_MB_PER_LIST 32768
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
> index a6a768bd7c..bb06bb7ca1 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -98,6 +98,7 @@ The public API headers are grouped by topics:
>    [interrupts](@ref rte_interrupts.h),
>    [launch](@ref rte_launch.h),
>    [lcore](@ref rte_lcore.h),
> +  [lcore-variable](@ref rte_lcore_var.h),
>    [per-lcore](@ref rte_per_lcore.h),
>    [service cores](@ref rte_service.h),
>    [keepalive](@ref rte_keepalive.h),
> diff --git a/lib/eal/common/eal_common_lcore_var.c
> b/lib/eal/common/eal_common_lcore_var.c
> new file mode 100644
> index 0000000000..dfd11cbd0b
> --- /dev/null
> +++ b/lib/eal/common/eal_common_lcore_var.c
> @@ -0,0 +1,82 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2024 Ericsson AB
> + */
> +
> +#include <inttypes.h>
> +
> +#include <rte_common.h>
> +#include <rte_debug.h>
> +#include <rte_log.h>
> +
> +#include <rte_lcore_var.h>
> +
> +#include "eal_private.h"
> +
> +#define WARN_THRESHOLD 75

It's not an error condition, so 75 % seems like a low threshold for WARNING.
Consider increasing it to 95 %, or change the level to NOTICE.
Or both.
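
Rendered concretely, both changes combined would look something like
this (a sketch only):

#define NOTICE_THRESHOLD 95

	if (new_allocated > (NOTICE_THRESHOLD * RTE_MAX_LCORE_VAR) / 100 &&
	    !has_warned) {
		EAL_LOG(NOTICE, "Per-lcore data usage has exceeded %d%% "
			"of the maximum capacity (%d bytes)",
			NOTICE_THRESHOLD, RTE_MAX_LCORE_VAR);
		has_warned = true;
	}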

> +
> +/*
> + * Avoid using offset zero, since it would result in a NULL-value
> + * "handle" (offset) pointer, which in principle and per the API
> + * definition shouldn't be an issue, but may confuse some tools and
> + * users.
> + */
> +#define INITIAL_OFFSET 1
> +
> +char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR] __rte_cache_aligned;
> +
> +static uintptr_t allocated = INITIAL_OFFSET;

Please add an API to get the amount of allocated lcore variable memory.
The easy option is to make the above variable public (with a proper name, e.g. rte_lcore_var_allocated).

The total amount of lcore variable memory is already public: RTE_MAX_LCORE_VAR.
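
E.g., a minimal sketch of such an accessor (the function name
rte_lcore_var_mem_used is made up here for illustration, and it would
also need a version.map entry):

/* in eal_common_lcore_var.c; 'allocated' is the static counter above */
size_t
rte_lcore_var_mem_used(void)
{
	return (size_t)allocated;
}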

> +
> +static void
> +verify_allocation(uintptr_t new_allocated)
> +{
> +	static bool has_warned;
> +
> +	RTE_VERIFY(new_allocated < RTE_MAX_LCORE_VAR);
> +
> +	if (new_allocated > (WARN_THRESHOLD * RTE_MAX_LCORE_VAR) / 100 &&
> +	    !has_warned) {
> +		EAL_LOG(WARNING, "Per-lcore data usage has exceeded %d%% "
> +			"of the maximum capacity (%d bytes)", WARN_THRESHOLD,
> +			RTE_MAX_LCORE_VAR);
> +		has_warned = true;
> +	}
> +}
> +
> +static void *
> +lcore_var_alloc(size_t size, size_t align)
> +{
> +	uintptr_t new_allocated = RTE_ALIGN_CEIL(allocated, align);
> +
> +	void *offset = (void *)new_allocated;
> +
> +	new_allocated += size;
> +
> +	verify_allocation(new_allocated);
> +
> +	allocated = new_allocated;
> +
> +	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
> +		"%"PRIuPTR"-byte alignment", size, align);
> +
> +	return offset;
> +}
> +
> +void *
> +rte_lcore_var_alloc(size_t size, size_t align)
> +{
> +	/* Having the per-lcore buffer size aligned on cache lines,
> +	 * as well as having the base pointer aligned on cache line
> +	 * size, assures that aligned offsets also translate to
> +	 * aligned pointers across all values.
> +	 */
> +	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
> +	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
> +
> +	/* '0' means asking for worst-case alignment requirements */
> +	if (align == 0)
> +		align = alignof(max_align_t);
> +
> +	RTE_ASSERT(rte_is_power_of_2(align));
> +
> +	return lcore_var_alloc(size, align);
> +}
> diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
> index 22a626ba6f..d41403680b 100644
> --- a/lib/eal/common/meson.build
> +++ b/lib/eal/common/meson.build
> @@ -18,6 +18,7 @@ sources += files(
>          'eal_common_interrupts.c',
>          'eal_common_launch.c',
>          'eal_common_lcore.c',
> +        'eal_common_lcore_var.c',
>          'eal_common_mcfg.c',
>          'eal_common_memalloc.c',
>          'eal_common_memory.c',
> diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
> index e94b056d46..9449253e23 100644
> --- a/lib/eal/include/meson.build
> +++ b/lib/eal/include/meson.build
> @@ -27,6 +27,7 @@ headers += files(
>          'rte_keepalive.h',
>          'rte_launch.h',
>          'rte_lcore.h',
> +        'rte_lcore_var.h',
>          'rte_lock_annotations.h',
>          'rte_malloc.h',
>          'rte_mcslock.h',
> diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
> new file mode 100644
> index 0000000000..da49d48d7c
> --- /dev/null
> +++ b/lib/eal/include/rte_lcore_var.h
> @@ -0,0 +1,375 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2024 Ericsson AB
> + */
> +
> +#ifndef _RTE_LCORE_VAR_H_
> +#define _RTE_LCORE_VAR_H_
> +
> +/**
> + * @file
> + *
> + * RTE Per-lcore id variables
> + *
> + * This API provides a mechanism to create and access per-lcore id
> + * variables in a space- and cycle-efficient manner.
> + *
> + * A per-lcore id variable (or lcore variable for short) has one value
> + * for each EAL thread and registered non-EAL thread. In other words,
> + * there's one copy of its value for each and every current and future
> + * lcore id-equipped thread, with the total number of copies amounting
> + * to \c RTE_MAX_LCORE.
> + *
> + * In order to access the values of an lcore variable, a handle is
> + * used. The type of the handle is a pointer to the value's type
> + * (e.g., for a \c uint32_t lcore variable, the handle is a
> + * <code>uint32_t *</code>). A handle may be passed between modules and
> + * threads just like any pointer, but its value is not the address of
> + * any particular object, but rather just an opaque identifier, stored
> + * in a typed pointer (to inform the access macros of the values' type).
> + *
> + * @b Creation
> + *
> + * An lcore variable is created in two steps:
> + *  1. Define a lcore variable handle by using \ref RTE_LCORE_VAR_HANDLE.
> + *  2. Allocate lcore variable storage and initialize the handle with
> + *     a unique identifier by \ref RTE_LCORE_VAR_ALLOC or
> + *     \ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the time of
> + *     module initialization, but may be done at any time.
> + *
> + * An lcore variable is not tied to the owning thread's lifetime. It's
> + * available for use by any thread immediately after having been
> + * allocated, and continues to be available throughout the lifetime of
> + * the EAL.
> + *
> + * Lcore variables cannot and need not be freed.
> + *
> + * @b Access
> + *
> + * The value of any lcore variable for any lcore id may be accessed
> + * from any thread (including unregistered threads), but it should
> + * generally only be *frequently* read from or written to by the owner.
> + *
> + * Values of the same lcore variable but owned by different lcore
> + * ids *may* be frequently read or written by the owners without the
> + * risk of false sharing.
> + *
> + * An appropriate synchronization mechanism (e.g., atomics) should
> + * be employed to assure there are no data races between the owning
> + * thread and any non-owner threads accessing the same lcore variable
> + * instance.
> + *
> + * The value of the lcore variable for a particular lcore id may be
> + * retrieved with \ref RTE_LCORE_VAR_LCORE_GET. To get a pointer to the
> + * same object, use \ref RTE_LCORE_VAR_LCORE_PTR.
> + *
> + * To modify the value of an lcore variable for a particular lcore id,
> + * either access the object through the pointer retrieved by \ref
> + * RTE_LCORE_VAR_LCORE_PTR or, for primitive types, use \ref
> + * RTE_LCORE_VAR_LCORE_SET.
> + *
> + * The access macros each have a short-hand which may be used by an EAL
> + * thread or registered non-EAL thread to access the lcore variable
> + * instance of its own lcore id. Those are \ref RTE_LCORE_VAR_GET,
> + * \ref RTE_LCORE_VAR_PTR, and \ref RTE_LCORE_VAR_SET.
> + *
> + * Although the handle (as defined by \ref RTE_LCORE_VAR_HANDLE) is a
> + * pointer with the same type as the value, it may not be directly
> + * dereferenced and must be treated as an opaque identifier. The
> + * *identifier* value is common across all lcore ids.
> + *
> + * @b Storage
> + *
> + * An lcore variable's values may be of a primitive type like \c int,
> + * but would more typically be a \c struct. An application may choose
> + * to define an lcore variable, which it then goes on to never
> + * allocate.
> + *
> + * The lcore variable handle introduces a per-variable (not
> + * per-value/per-lcore id) overhead of \c sizeof(void *) bytes, so
> + * there are some memory footprint gains to be made by organizing all
> + * per-lcore id data for a particular module as one lcore variable
> + * (e.g., as a struct).
> + *
> + * The sum of all lcore variables, plus any padding required, must be
> + * less than the DPDK build-time constant \c RTE_MAX_LCORE_VAR. A
> + * violation of this maximum results in the process being terminated.
> + *
> + * It's reasonable to expect that \c RTE_MAX_LCORE_VAR is on the
> + * same order of magnitude in size as a thread stack.
> + *
> + * The lcore variable storage buffers are kept in the BSS section in
> + * the resulting binary, where data generally isn't mapped in until
> + * it's accessed. This means that unused portions of the lcore
> + * variable storage area will not occupy any physical memory (with a
> + * granularity of the memory page size [usually 4 kB]).
> + *
> + * Lcore variables should generally *not* be \ref __rte_cache_aligned
> + * and need *not* include a \ref RTE_CACHE_GUARD field, since these
> + * constructs are designed to avoid false sharing. In the case of an
> + * lcore variable instance, all nearby data structures should almost
> + * always be written to by a single thread (the lcore variable
> + * owner). Adding padding will increase the effective memory working
> + * set size, potentially reducing performance.
> + *
> + * @b Example
> + *
> + * Below is an example of the use of an lcore variable:
> + *
> + * \code{.c}
> + * struct foo_lcore_state {
> + *         int a;
> + *         long b;
> + * };
> + *
> + * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
> + *
> + * long foo_get_a_plus_b(void)
> + * {
> + *         struct foo_lcore_state *state = RTE_LCORE_VAR_PTR(lcore_states);
> + *
> + *         return state->a + state->b;
> + * }
> + *
> + * RTE_INIT(rte_foo_init)
> + * {
> + *         unsigned int lcore_id;

This variable is part of RTE_LCORE_VAR_FOREACH_VALUE(), and can be removed from here.

> + *
> + *         RTE_LCORE_VAR_ALLOC(foo_state);

Typo: foo_state -> lcore_states

> + *
> + *         struct foo_lcore_state *state;
> + *         RTE_LCORE_VAR_FOREACH_VALUE(lcore_states) {

Typo:
RTE_LCORE_VAR_FOREACH_VALUE(lcore_states)
->
RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states)

> + *                 (initialize 'state')
> + *         }
> + *
> + *         (other initialization)
> + * }
> + * \endcode
> + *
> + *
> + * @b Alternatives
> + *
> + * Lcore variables are designed to replace a pattern exemplified below:
> + * \code{.c}
> + * struct foo_lcore_state {
> + *         int a;
> + *         long b;
> + *         RTE_CACHE_GUARD;
> + * } __rte_cache_aligned;
> + *
> + * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
> + * \endcode
> + *
> + * This scheme is simple and effective, but has one drawback: the data
> + * is organized so that objects related to all lcores for a particular
> + * module are kept close in memory. At a bare minimum, this forces the
> + * use of cache-line alignment to avoid false sharing. With CPU
> + * hardware prefetching and memory loads resulting from speculative
> + * execution (functions which seemingly are getting more eager faster
> + * than they are getting more intelligent), one or more "guard" cache
> + * lines may be required to separate one lcore's data from another's.
> + *
> + * Lcore variables have the upside of working with, not against, the
> + * CPU's assumptions, and for example next-line prefetchers may well
> + * work the way their designers intended (i.e., to the benefit, not
> + * detriment, of system performance).
> + *
> + * Another alternative to \ref rte_lcore_var.h is the \ref
> + * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
> + * e.g., GCC __thread or C11 _Thread_local). The main differences
> + * between using the various forms of TLS (e.g., \ref
> + * RTE_DEFINE_PER_LCORE or _Thread_local) and the use of lcore
> + * variables are:
> + *
> + *   * The existence and non-existence of a thread-local variable
> + *     instance follow that of the particular thread. The data cannot
> + *     be accessed before the thread has been created, nor after it
> + *     has exited. One effect of this is that thread-local variables
> + *     must be initialized in a "lazy" manner (e.g., at the point of
> + *     thread creation). Lcore variables may be accessed immediately
> + *     after having been allocated (which usually occurs before any
> + *     thread beyond the main thread is running).
> + *   * A thread-local variable is duplicated across all threads in the
> + *     process, including unregistered non-EAL threads (i.e.,
> + *     "regular" threads). For DPDK applications heavily relying on
> + *     multi-threading (in conjunction with DPDK's "one thread per core"
> + *     pattern), either by having many concurrent threads or
> + *     creating/destroying threads at a high rate, an excessive use of
> + *     thread-local variables may cause inefficiencies (e.g.,
> + *     increased thread creation overhead due to thread-local storage
> + *     initialization or increased total RAM footprint usage). Lcore
> + *     variables *only* exist for threads with an lcore id, and thus
> + *     not for such "regular" threads.
> + *   * Whether data in thread-local storage may be shared between
> + *     threads (i.e., whether a pointer to a thread-local variable
> + *     may be passed to and successfully dereferenced by a
> + *     non-owning thread) depends on
> + *     the details of the TLS implementation. With GCC __thread and
> + *     GCC _Thread_local, such data sharing is supported. In the C11
> + *     standard, the result of accessing another thread's
> + *     _Thread_local object is implementation-defined. Lcore variable
> + *     instances may be accessed reliably by any thread.
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <stddef.h>
> +#include <stdalign.h>
> +
> +#include <rte_common.h>
> +#include <rte_config.h>
> +#include <rte_lcore.h>
> +
> +/**
> + * Given the lcore variable type, produces the type of the lcore
> + * variable handle.
> + */
> +#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
> +	type *

This macro seems superfluous.
In RTE_LCORE_VAR_HANDLE(type, name) just use:
 type * name
Are there other use cases for it?

> +
> +/**
> + * Define a lcore variable handle.
> + *
> + * This macro defines a variable which is used as a handle to access
> + * the various per-lcore id instances of a per-lcore id variable.
> + *
> + * The aim with this macro is to make clear at the point of
> + * declaration that this is an lcore handle, rather than a regular
> + * pointer.
> + *
> + * Add @b static as a prefix in case the lcore variable is only to be
> + * accessed from a particular translation unit.
> + */
> +#define RTE_LCORE_VAR_HANDLE(type, name)	\
> +	RTE_LCORE_VAR_HANDLE_TYPE(type) name

Thinking out loud here...
Consider if this name should be more similar with RTE_DEFINE_PER_LCORE(type, name), e.g. RTE_DEFINE_LCORE_VAR(type, name) or RTE_LCORE_VAR_DEFINE(type, name).
Using the common prefix RTE_LCORE_VAR is preferable.
Using the term "handle" indicates that it is opaque and needs to be allocated by an allocation function.
On the other hand, the "handle" is not unique per thread, so it's not really a "handle".
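
Expressed as code, the suggested naming would be something like this
(a sketch only, with the same body as the current macro):

#define RTE_LCORE_VAR_DEFINE(type, name)	\
	type *name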

> +
> +/**
> + * Allocate space for an lcore variable, and initialize its handle.
> + */
> +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align)	\
> +	name = rte_lcore_var_alloc(size, align)
> +
> +/**
> + * Allocate space for an lcore variable, and initialize its handle,
> + * with values aligned for any type of object.
> + */
> +#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)	\
> +	name = rte_lcore_var_alloc(size, 0)
> +
> +/**
> + * Allocate space for an lcore variable of the size and alignment
> + * requirements suggested by the handle pointer type, and initialize
> + * its handle.
> + */
> +#define RTE_LCORE_VAR_ALLOC(name)					\
> +	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, sizeof(*(name)),		\
> +				       alignof(typeof(*(name))))
> +
> +/**
> + * Allocate an explicitly-sized, explicitly-aligned lcore variable by
> + * means of a \ref RTE_INIT constructor.
> + */
> +#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
> +	RTE_INIT(rte_lcore_var_init_ ## name)				\
> +	{								\
> +		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
> +	}
> +
> +/**
> + * Allocate an explicitly-sized lcore variable by means of a \ref
> + * RTE_INIT constructor.
> + */
> +#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
> +	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
> +
> +/**
> + * Allocate an lcore variable by means of a \ref RTE_INIT constructor.
> + */
> +#define RTE_LCORE_VAR_INIT(name)					\
> +	RTE_INIT(rte_lcore_var_init_ ## name)				\
> +	{								\
> +		RTE_LCORE_VAR_ALLOC(name);				\
> +	}
> +
> +#define __RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)		\
> +	((void *)(&rte_lcore_var[lcore_id][(uintptr_t)(name)]))

This macro also seems superfluous.
Doesn't RTE_LCORE_VAR_LCORE_PTR() suffice?

> +
> +/**
> + * Get pointer to lcore variable instance with the specified lcore id.
> + */
> +#define RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)				\
> +	((typeof(name))__RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))

This uses type casting.
I wonder if additional build-time type checking would be possible...
Nice to have: The compiler should fail if name is not a pointer, but a struct or an uint64_t, or even an uintptr_t.
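
A partial check is possible, since only pointer (and array) operands
may be dereferenced: adding a dummy, unevaluated dereference makes the
macro fail to compile for struct or integer handles (a sketch, not
part of the patch; a void * handle would still slip through on GCC):

#define RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)			\
	((typeof(name))((void)sizeof(*(name)),				\
		__RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)))

The same trick would cover the "var" argument of the FOREACH macro
mentioned further down.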

> +
> +/**
> + * Get value of a lcore variable instance of the specified lcore id.
> + */
> +#define RTE_LCORE_VAR_LCORE_GET(lcore_id, name)		\
> +	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)))

The four accessor functions, RTE_LCORE_VAR[_LCORE]_GET/SET(), seem superfluous.
They make the API seem more complex than just using RTE_LCORE_VAR[_LCORE]_PTR() for access.
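
For example (a sketch; "handle" being a hypothetical int-typed lcore
variable handle), the PTR variant already expresses both operations:

	int v = *RTE_LCORE_VAR_PTR(handle);	/* instead of _GET */
	*RTE_LCORE_VAR_PTR(handle) = 17;	/* instead of _SET */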

> +
> +/**
> + * Set the value of a lcore variable instance of the specified lcore id.
> + */
> +#define RTE_LCORE_VAR_LCORE_SET(lcore_id, name, value)		\
> +	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)) = (value))
> +
> +/**
> + * Get pointer to lcore variable instance of the current thread.
> + *
> + * May only be used by EAL threads and registered non-EAL threads.
> + */
> +#define RTE_LCORE_VAR_PTR(name) RTE_LCORE_VAR_LCORE_PTR(rte_lcore_id(), name)
> +
> +/**
> + * Get value of lcore variable instance of the current thread.
> + *
> + * May only be used by EAL threads and registered non-EAL threads.
> + */
> +#define RTE_LCORE_VAR_GET(name) RTE_LCORE_VAR_LCORE_GET(rte_lcore_id(), name)
> +
> +/**
> + * Set value of lcore variable instance of the current thread.
> + *
> + * May only be used by EAL threads and registered non-EAL threads.
> + */
> +#define RTE_LCORE_VAR_SET(name, value) \
> +	RTE_LCORE_VAR_LCORE_SET(rte_lcore_id(), name, value)
> +
> +/**
> + * Iterate over each lcore id's value for a lcore variable.
> + */
> +#define RTE_LCORE_VAR_FOREACH_VALUE(var, name)				\
> +	for (unsigned int lcore_id =					\
> +		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);	\
> +	     lcore_id < RTE_MAX_LCORE;					\
> +	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))

RTE_LCORE_VAR_FOREACH_PTR(ptr, name) would be an even better name; considering that "var" is really a pointer.

I also wonder about build-time type checking here...
Nice to have: The compiler should fail if "ptr" is not a pointer.

> +
> +extern char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR];
> +
> +/**
> + * Allocate space in the per-lcore id buffers for a lcore variable.
> + *
> + * The pointer returned is only an opaque identifier of the variable. To
> + * get an actual pointer to a particular instance of the variable use
> + * \ref RTE_LCORE_VAR_PTR or \ref RTE_LCORE_VAR_LCORE_PTR.
> + *
> + * The allocation is always successful, barring a fatal exhaustion of
> + * the per-lcore id buffer space.
> + *
> + * @param size
> + *   The size (in bytes) of the variable's per-lcore id value.
> + * @param align
> + *   If 0, the values will be suitably aligned for any kind of type
> + *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
> + *   on a multiple of *align*, which must be a power of 2 and equal or
> + *   less than \c RTE_CACHE_LINE_SIZE.
> + * @return
> + *   The id of the variable, stored in a void pointer value.
> + */
> +__rte_experimental
> +void *
> +rte_lcore_var_alloc(size_t size, size_t align);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_LCORE_VAR_H_ */
> diff --git a/lib/eal/version.map b/lib/eal/version.map
> index 5e0cd47c82..e90b86115a 100644
> --- a/lib/eal/version.map
> +++ b/lib/eal/version.map
> @@ -393,6 +393,10 @@ EXPERIMENTAL {
>  	# added in 23.07
>  	rte_memzone_max_get;
>  	rte_memzone_max_set;
> +
> +	# added in 24.03
> +	rte_lcore_var_alloc;
> +	rte_lcore_var;
>  };
> 
>  INTERNAL {
> --
> 2.34.1

Acked-by: Morten Brørup <mb@smartsharesystems.com>


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [RFC v3 5/6] service: keep per-lcore state in lcore variable
  2024-02-20  8:49         ` [RFC v3 5/6] service: " Mattias Rönnblom
@ 2024-02-22  9:42           ` Morten Brørup
  2024-02-23 10:19             ` Mattias Rönnblom
  0 siblings, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-02-22  9:42 UTC (permalink / raw)
  To: Mattias Rönnblom, dev; +Cc: hofors, Stephen Hemminger

> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> Sent: Tuesday, 20 February 2024 09.49
> 
> Replace static array of cache-aligned structs with an lcore variable,
> to slightly benefit code simplicity and performance.
> 
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> ---


> @@ -486,8 +489,7 @@ service_runner_func(void *arg)
>  {
>  	RTE_SET_USED(arg);
>  	uint8_t i;
> -	const int lcore = rte_lcore_id();
> -	struct core_state *cs = &lcore_states[lcore];
> +	struct core_state *cs =	RTE_LCORE_VAR_PTR(lcore_states);

Typo: TAB -> SPACE.

> 
>  	rte_atomic_store_explicit(&cs->thread_active, 1,
> rte_memory_order_seq_cst);
> 
> @@ -533,13 +535,16 @@ service_runner_func(void *arg)
>  int32_t
>  rte_service_lcore_may_be_active(uint32_t lcore)
>  {
> -	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
> +	struct core_state *cs =
> +		RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
> +
> +	if (lcore >= RTE_MAX_LCORE || !cs->is_service_core)
>  		return -EINVAL;

This comment is mostly related to patch 1 in the series...

You are setting cs = RTE_LCORE_VAR_LCORE_PTR(lcore, ...) before validating that lcore < RTE_MAX_LCORE. I wondered if that potentially was an overrun bug.

It is obvious when looking at the RTE_LCORE_VAR_LCORE_PTR() macro implementation, but perhaps its description could mention that it is safe to use with an "invalid" lcore_id, as long as the resulting pointer is not dereferenced.
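
Something along these lines in its doc comment would capture that
(suggested wording only):

/**
 * Get pointer to lcore variable instance with the specified lcore id.
 *
 * The pointer is computed by pure address arithmetic; no memory is
 * accessed. It is thus safe to pass an out-of-range lcore id, as long
 * as the resulting pointer is never dereferenced.
 */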


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC v3 1/6] eal: add static per-lcore memory allocation facility
  2024-02-22  9:22           ` Morten Brørup
@ 2024-02-23 10:12             ` Mattias Rönnblom
  0 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-23 10:12 UTC (permalink / raw)
  To: Morten Brørup, Mattias Rönnblom, dev; +Cc: Stephen Hemminger

On 2024-02-22 10:22, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
>> Sent: Tuesday, 20 February 2024 09.49
>>
>> Introduce DPDK per-lcore id variables, or lcore variables for short.
>>
>> An lcore variable has one value for every current and future lcore
>> id-equipped thread.
>>
>> The primary <rte_lcore_var.h> use case is for statically allocating
>> small chunks of often-used data, which is related logically, but where
>> there are performance benefits to reap from having updates being local
>> to an lcore.
>>
>> Lcore variables are similar to thread-local storage (TLS, e.g., C11
>> _Thread_local), but decoupling the values' life time with that of the
>> threads.
>>
>> Lcore variables are also similar in terms of functionality provided by
>> FreeBSD kernel's DPCPU_*() family of macros and the associated
>> build-time machinery. DPCPU uses linker scripts, which effectively
>> prevents the reuse of its, otherwise seemingly viable, approach.
>>
>> The currently-prevailing way to solve the same problem as lcore
>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
>> lcore variables over this approach is that data related to the same
>> lcore now is close (spatially, in memory), rather than data used by
>> the same module, which in turn avoid excessive use of padding,
>> polluting caches with unused data.
>>
>> RFC v3:
>>   * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
>>   * Update example to reflect FOREACH macro name change (in RFC v2).
>>
>> RFC v2:
>>   * Use alignof to derive alignment requirements. (Morten Brørup)
>>   * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
>>     *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
>>   * Allow user-specified alignment, but limit max to cache line size.
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> ---
>>   config/rte_config.h                   |   1 +
>>   doc/api/doxy-api-index.md             |   1 +
>>   lib/eal/common/eal_common_lcore_var.c |  82 ++++++
>>   lib/eal/common/meson.build            |   1 +
>>   lib/eal/include/meson.build           |   1 +
>>   lib/eal/include/rte_lcore_var.h       | 375 ++++++++++++++++++++++++++
>>   lib/eal/version.map                   |   4 +
>>   7 files changed, 465 insertions(+)
>>   create mode 100644 lib/eal/common/eal_common_lcore_var.c
>>   create mode 100644 lib/eal/include/rte_lcore_var.h
>>
>> diff --git a/config/rte_config.h b/config/rte_config.h
>> index da265d7dd2..884482e473 100644
>> --- a/config/rte_config.h
>> +++ b/config/rte_config.h
>> @@ -30,6 +30,7 @@
>>   /* EAL defines */
>>   #define RTE_CACHE_GUARD_LINES 1
>>   #define RTE_MAX_HEAPS 32
>> +#define RTE_MAX_LCORE_VAR 1048576
>>   #define RTE_MAX_MEMSEG_LISTS 128
>>   #define RTE_MAX_MEMSEG_PER_LIST 8192
>>   #define RTE_MAX_MEM_MB_PER_LIST 32768
>> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
>> index a6a768bd7c..bb06bb7ca1 100644
>> --- a/doc/api/doxy-api-index.md
>> +++ b/doc/api/doxy-api-index.md
>> @@ -98,6 +98,7 @@ The public API headers are grouped by topics:
>>     [interrupts](@ref rte_interrupts.h),
>>     [launch](@ref rte_launch.h),
>>     [lcore](@ref rte_lcore.h),
>> +  [lcore-variable](@ref rte_lcore_var.h),
>>     [per-lcore](@ref rte_per_lcore.h),
>>     [service cores](@ref rte_service.h),
>>     [keepalive](@ref rte_keepalive.h),
>> diff --git a/lib/eal/common/eal_common_lcore_var.c
>> b/lib/eal/common/eal_common_lcore_var.c
>> new file mode 100644
>> index 0000000000..dfd11cbd0b
>> --- /dev/null
>> +++ b/lib/eal/common/eal_common_lcore_var.c
>> @@ -0,0 +1,82 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2024 Ericsson AB
>> + */
>> +
>> +#include <inttypes.h>
>> +
>> +#include <rte_common.h>
>> +#include <rte_debug.h>
>> +#include <rte_log.h>
>> +
>> +#include <rte_lcore_var.h>
>> +
>> +#include "eal_private.h"
>> +
>> +#define WARN_THRESHOLD 75
> 
> It's not an error condition, so 75 % seems like a low threshold for WARNING.
> Consider increasing it to 95 %, or change the level to NOTICE.
> Or both.
> 

I'll make an attempt at a variant which uses the libc heap instead of 
BSS, and does so dynamically. Then one need not worry about a fixed-size 
upper bound, barring heap allocation failures (which you are best off 
making fatal in the lcore variables case).

The glibc heap is available early (as early as the earliest RTE_INIT()).

You also avoid the headache of thinking about what happens if indeed all 
of the rte_lcore_var array is backed by actual memory. That could be due 
to mlockall(), huge page use for BSS, or systems where BSS is not 
on-demand mapped. I have no idea how paging works on Windows NT, for 
example.
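
A rough sketch of what such a heap-backed allocator could look like
(everything below is illustrative, not the actual follow-up patch):
the handle becomes a real pointer into the most recently allocated
buffer, with the values for the different lcore ids spaced
RTE_MAX_LCORE_VAR bytes apart.

static void *lcore_buffer;
/* force a buffer allocation on the first call */
static size_t offset = RTE_MAX_LCORE_VAR;

static void *
lcore_var_alloc(size_t size, size_t align)
{
	void *handle;

	offset = RTE_ALIGN_CEIL(offset, align);

	if (offset + size > RTE_MAX_LCORE_VAR) {
		/* aligned_alloc() from <stdlib.h>; the total size is a
		 * multiple of the cache line size, as required.
		 */
		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
					     RTE_MAX_LCORE * RTE_MAX_LCORE_VAR);
		RTE_VERIFY(lcore_buffer != NULL);
		offset = 0;
	}

	handle = RTE_PTR_ADD(lcore_buffer, offset);
	offset += size;

	return handle;
}

The access macros would then add lcore_id * RTE_MAX_LCORE_VAR to the
handle, instead of indexing into the BSS array.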

>> +
>> +/*
>> + * Avoid using offset zero, since it would result in a NULL-value
>> + * "handle" (offset) pointer, which in principle and per the API
>> + * definition shouldn't be an issue, but may confuse some tools and
>> + * users.
>> + */
>> +#define INITIAL_OFFSET 1
>> +
>> +char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR] __rte_cache_aligned;
>> +
>> +static uintptr_t allocated = INITIAL_OFFSET;
> 
> Please add an API to get the amount of allocated lcore variable memory.
> The easy option is to make the above variable public (with a proper name, e.g. rte_lcore_var_allocated).
> 
> The total amount of lcore variable memory is already public: RTE_MAX_LCORE_VAR.
> 

Makes sense with the RFC v3 design.

If you eliminate the fixed upper bound and use the heap, there shouldn't 
be any particular need to track lcore variable memory use separately 
from other heap users.

>> +
>> +static void
>> +verify_allocation(uintptr_t new_allocated)
>> +{
>> +	static bool has_warned;
>> +
>> +	RTE_VERIFY(new_allocated < RTE_MAX_LCORE_VAR);
>> +
>> +	if (new_allocated > (WARN_THRESHOLD * RTE_MAX_LCORE_VAR) / 100 &&
>> +	    !has_warned) {
>> +		EAL_LOG(WARNING, "Per-lcore data usage has exceeded %d%% "
>> +			"of the maximum capacity (%d bytes)", WARN_THRESHOLD,
>> +			RTE_MAX_LCORE_VAR);
>> +		has_warned = true;
>> +	}
>> +}
>> +
>> +static void *
>> +lcore_var_alloc(size_t size, size_t align)
>> +{
>> +	uintptr_t new_allocated = RTE_ALIGN_CEIL(allocated, align);
>> +
>> +	void *offset = (void *)new_allocated;
>> +
>> +	new_allocated += size;
>> +
>> +	verify_allocation(new_allocated);
>> +
>> +	allocated = new_allocated;
>> +
>> +	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
>> +		"%"PRIuPTR"-byte alignment", size, align);
>> +
>> +	return offset;
>> +}
>> +
>> +void *
>> +rte_lcore_var_alloc(size_t size, size_t align)
>> +{
>> +	/* Having the per-lcore buffer size aligned on cache lines,
>> +	 * as well as having the base pointer aligned on cache line
>> +	 * size, assures that aligned offsets also translate to
>> +	 * aligned pointers across all values.
>> +	 */
>> +	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
>> +	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
>> +
>> +	/* '0' means asking for worst-case alignment requirements */
>> +	if (align == 0)
>> +		align = alignof(max_align_t);
>> +
>> +	RTE_ASSERT(rte_is_power_of_2(align));
>> +
>> +	return lcore_var_alloc(size, align);
>> +}
>> diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
>> index 22a626ba6f..d41403680b 100644
>> --- a/lib/eal/common/meson.build
>> +++ b/lib/eal/common/meson.build
>> @@ -18,6 +18,7 @@ sources += files(
>>           'eal_common_interrupts.c',
>>           'eal_common_launch.c',
>>           'eal_common_lcore.c',
>> +        'eal_common_lcore_var.c',
>>           'eal_common_mcfg.c',
>>           'eal_common_memalloc.c',
>>           'eal_common_memory.c',
>> diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
>> index e94b056d46..9449253e23 100644
>> --- a/lib/eal/include/meson.build
>> +++ b/lib/eal/include/meson.build
>> @@ -27,6 +27,7 @@ headers += files(
>>           'rte_keepalive.h',
>>           'rte_launch.h',
>>           'rte_lcore.h',
>> +        'rte_lcore_var.h',
>>           'rte_lock_annotations.h',
>>           'rte_malloc.h',
>>           'rte_mcslock.h',
>> diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
>> new file mode 100644
>> index 0000000000..da49d48d7c
>> --- /dev/null
>> +++ b/lib/eal/include/rte_lcore_var.h
>> @@ -0,0 +1,375 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2024 Ericsson AB
>> + */
>> +
>> +#ifndef _RTE_LCORE_VAR_H_
>> +#define _RTE_LCORE_VAR_H_
>> +
>> +/**
>> + * @file
>> + *
>> + * RTE Per-lcore id variables
>> + *
>> + * This API provides a mechanism to create and access per-lcore id
>> + * variables in a space- and cycle-efficient manner.
>> + *
>> + * A per-lcore id variable (or lcore variable for short) has one value
>> + * for each EAL thread and registered non-EAL thread. In other words,
>> + * there's one copy of its value for each and every current and future
>> + * lcore id-equipped thread, with the total number of copies amounting
>> + * to \c RTE_MAX_LCORE.
>> + *
>> + * In order to access the values of an lcore variable, a handle is
>> + * used. The type of the handle is a pointer to the value's type
>> + * (e.g., for a \c uint32_t lcore variable, the handle is a
>> + * <code>uint32_t *</code>). A handle may be passed between modules and
>> + * threads just like any pointer, but its value is not the address of
>> + * any particular object, but rather just an opaque identifier, stored
>> + * in a typed pointer (to inform the access macros of the values' type).
>> + *
>> + * @b Creation
>> + *
>> + * An lcore variable is created in two steps:
>> + *  1. Define a lcore variable handle by using \ref RTE_LCORE_VAR_HANDLE.
>> + *  2. Allocate lcore variable storage and initialize the handle with
>> + *     a unique identifier by \ref RTE_LCORE_VAR_ALLOC or
>> + *     \ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the time of
>> + *     module initialization, but may be done at any time.
>> + *
>> + * An lcore variable is not tied to the owning thread's lifetime. It's
>> + * available for use by any thread immediately after having been
>> + * allocated, and continues to be available throughout the lifetime of
>> + * the EAL.
>> + *
>> + * Lcore variables cannot and need not be freed.
>> + *
>> + * @b Access
>> + *
>> + * The value of any lcore variable for any lcore id may be accessed
>> + * from any thread (including unregistered threads), but it should
>> + * generally only be *frequently* read from or written to by the owner.
>> + *
>> + * Values of the same lcore variable but owned by different lcore
>> + * ids *may* be frequently read or written by the owners without the
>> + * risk of false sharing.
>> + *
>> + * An appropriate synchronization mechanism (e.g., atomics) should
>> + * be employed to assure there are no data races between the owning
>> + * thread and any non-owner threads accessing the same lcore variable
>> + * instance.
>> + *
>> + * The value of the lcore variable for a particular lcore id may be
>> + * retrieved with \ref RTE_LCORE_VAR_LCORE_GET. To get a pointer to the
>> + * same object, use \ref RTE_LCORE_VAR_LCORE_PTR.
>> + *
>> + * To modify the value of an lcore variable for a particular lcore id,
>> + * either access the object through the pointer retrieved by \ref
>> + * RTE_LCORE_VAR_LCORE_PTR or, for primitive types, use \ref
>> + * RTE_LCORE_VAR_LCORE_SET.
>> + *
>> + * The access macros each have a short-hand which may be used by an EAL
>> + * thread or registered non-EAL thread to access the lcore variable
>> + * instance of its own lcore id. Those are \ref RTE_LCORE_VAR_GET,
>> + * \ref RTE_LCORE_VAR_PTR, and \ref RTE_LCORE_VAR_SET.
>> + *
>> + * Although the handle (as defined by \ref RTE_LCORE_VAR_HANDLE) is a
>> + * pointer with the same type as the value, it may not be directly
>> + * dereferenced and must be treated as an opaque identifier. The
>> + * *identifier* value is common across all lcore ids.
>> + *
>> + * @b Storage
>> + *
>> + * An lcore variable's values may be of a primitive type like \c int,
>> + * but would more typically be a \c struct. An application may choose
>> + * to define an lcore variable, which it then goes on to never
>> + * allocate.
>> + *
>> + * The lcore variable handle introduces a per-variable (not
>> + * per-value/per-lcore id) overhead of \c sizeof(void *) bytes, so
>> + * there are some memory footprint gains to be made by organizing all
>> + * per-lcore id data for a particular module as one lcore variable
>> + * (e.g., as a struct).
>> + *
>> + * The sum of all lcore variables, plus any padding required, must be
>> + * less than the DPDK build-time constant \c RTE_MAX_LCORE_VAR. A
>> + * violation of this maximum results in the process being terminated.
>> + *
>> + * It's reasonable to expect that \c RTE_MAX_LCORE_VAR is on the
>> + * same order of magnitude in size as a thread stack.
>> + *
>> + * The lcore variable storage buffers are kept in the BSS section in
>> + * the resulting binary, where data generally isn't mapped in until
>> + * it's accessed. This means that unused portions of the lcore
>> + * variable storage area will not occupy any physical memory (with a
>> + * granularity of the memory page size [usually 4 kB]).
>> + *
>> + * Lcore variables should generally *not* be \ref __rte_cache_aligned
>> + * and need *not* include a \ref RTE_CACHE_GUARD field, since these
>> + * constructs are designed to avoid false sharing. In the case of an
>> + * lcore variable instance, all nearby data structures should almost
>> + * always be written to by a single thread (the lcore variable
>> + * owner). Adding padding will increase the effective memory working
>> + * set size, potentially reducing performance.
>> + *
>> + * @b Example
>> + *
>> + * Below is an example of the use of an lcore variable:
>> + *
>> + * \code{.c}
>> + * struct foo_lcore_state {
>> + *         int a;
>> + *         long b;
>> + * };
>> + *
>> + * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
>> + *
>> + * long foo_get_a_plus_b(void)
>> + * {
>> + *         struct foo_lcore_state *state = RTE_LCORE_VAR_PTR(lcore_states);
>> + *
>> + *         return state->a + state->b;
>> + * }
>> + *
>> + * RTE_INIT(rte_foo_init)
>> + * {
>> + *         unsigned int lcore_id;
> 
> This variable is part of RTE_LCORE_VAR_FOREACH_VALUE(), and can be removed from here.
> 
>> + *
>> + *         RTE_LCORE_VAR_ALLOC(foo_state);
> 
> Typo: foo_state -> lcore_states
> 

Will fix.

>> + *
>> + *         struct foo_lcore_state *state;
>> + *         RTE_LCORE_VAR_FOREACH_VALUE(lcore_states) {
> 
> Typo:
> RTE_LCORE_VAR_FOREACH_VALUE(lcore_states)
> ->
> RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states)
> 

Will fix.

>> + *                 (initialize 'state')
>> + *         }
>> + *
>> + *         (other initialization)
>> + * }
>> + * \endcode
>> + *
>> + *
>> + * @b Alternatives
>> + *
>> + * Lcore variables are designed to replace a pattern exemplified below:
>> + * \code{.c}
>> + * struct foo_lcore_state {
>> + *         int a;
>> + *         long b;
>> + *         RTE_CACHE_GUARD;
>> + * } __rte_cache_aligned;
>> + *
>> + * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
>> + * \endcode
>> + *
>> + * This scheme is simple and effective, but has one drawback: the data
>> + * is organized so that objects related to all lcores for a particular
>> + * module are kept close in memory. At a bare minimum, this forces the
>> + * use of cache-line alignment to avoid false sharing. With CPU
>> + * hardware prefetching and memory loads resulting from speculative
>> + * execution (functions which seemingly are getting more eager faster
>> + * than they are getting more intelligent), one or more "guard" cache
>> + * lines may be required to separate one lcore's data from another's.
>> + *
>> + * Lcore variables have the upside of working with, not against, the
>> + * CPU's assumptions, and for example next-line prefetchers may well
>> + * work the way their designers intended (i.e., to the benefit, not
>> + * detriment, of system performance).
>> + *
>> + * Another alternative to \ref rte_lcore_var.h is the \ref
>> + * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
>> + * e.g., GCC __thread or C11 _Thread_local). The main differences
>> + * between using the various forms of TLS (e.g., \ref
>> + * RTE_DEFINE_PER_LCORE or _Thread_local) and the use of lcore
>> + * variables are:
>> + *
>> + *   * The existence and non-existence of a thread-local variable
>> + *     instance follow that of the particular thread. The data cannot
>> + *     be accessed before the thread has been created, nor after it
>> + *     has exited. One effect of this is that thread-local variables
>> + *     must be initialized in a "lazy" manner (e.g., at the point of
>> + *     thread creation). Lcore variables may be accessed immediately
>> + *     after having been allocated (which usually occurs before any
>> + *     thread beyond the main thread is running).
>> + *   * A thread-local variable is duplicated across all threads in the
>> + *     process, including unregistered non-EAL threads (i.e.,
>> + *     "regular" threads). For DPDK applications heavily relying on
>> + *     multi-threading (in conjunction with DPDK's "one thread per core"
>> + *     pattern), either by having many concurrent threads or
>> + *     creating/destroying threads at a high rate, an excessive use of
>> + *     thread-local variables may cause inefficiencies (e.g.,
>> + *     increased thread creation overhead due to thread-local storage
>> + *     initialization or increased total RAM footprint usage). Lcore
>> + *     variables *only* exist for threads with an lcore id, and thus
>> + *     not for such "regular" threads.
>> + *   * Whether data in thread-local storage may be shared between
>> + *     threads (i.e., whether a pointer to a thread-local variable
>> + *     may be passed to and successfully dereferenced by a
>> + *     non-owning thread) depends on
>> + *     the details of the TLS implementation. With GCC __thread and
>> + *     GCC _Thread_local, such data sharing is supported. In the C11
>> + *     standard, the result of accessing another thread's
>> + *     _Thread_local object is implementation-defined. Lcore variable
>> + *     instances may be accessed reliably by any thread.
>> + */
>> +
>> +#ifdef __cplusplus
>> +extern "C" {
>> +#endif
>> +
>> +#include <stddef.h>
>> +#include <stdalign.h>
>> +
>> +#include <rte_common.h>
>> +#include <rte_config.h>
>> +#include <rte_lcore.h>
>> +
>> +/**
>> + * Given the lcore variable type, produces the type of the lcore
>> + * variable handle.
>> + */
>> +#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
>> +	type *
> 
> This macro seems superfluous.
> In RTE_LCORE_VAR_HANDLE(type, name) just use:
>   type * name
> Are there other use cases for it?
> 

It's just a marker, like RTE_LCORE_VAR_HANDLE(), to indicate this is not 
your average pointer type.

It's not obvious these marker macros make things more clear. One could 
just say in the API docs that lcore handles are opaque pointers to the 
lcore variable's type, and make clear they may only be dereferenced 
through the provided macros.

>> +
>> +/**
>> + * Define a lcore variable handle.
>> + *
>> + * This macro defines a variable which is used as a handle to access
>> + * the various per-lcore id instances of a per-lcore id variable.
>> + *
>> + * The aim with this macro is to make clear at the point of
>> + * declaration that this is an lcore handle, rather than a regular
>> + * pointer.
>> + *
>> + * Add @b static as a prefix in case the lcore variable is only to be
>> + * accessed from a particular translation unit.
>> + */
>> +#define RTE_LCORE_VAR_HANDLE(type, name)	\
>> +	RTE_LCORE_VAR_HANDLE_TYPE(type) name
> 
> Thinking out loud here...
> Consider if this name should be more similar with RTE_DEFINE_PER_LCORE(type, name), e.g. RTE_DEFINE_LCORE_VAR(type, name) or RTE_LCORE_VAR_DEFINE(type, name).
> Using the common prefix RTE_LCORE_VAR is preferable.
> Using the term "handle" indicates that it is opaque and needs to be allocated by an allocation function.
> On the other hand, the "handle" is not unique per thread, so it's nor really a "handle".
> 

It's a handle to a variable, not a handle to a particular instance of 
its values.
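
In other words (a sketch; "counter" is a made-up name, assumed to have
been allocated):

static RTE_LCORE_VAR_HANDLE(int, counter); /* one handle... */

/* ...used to reach RTE_MAX_LCORE distinct value instances: */
int *v0 = RTE_LCORE_VAR_LCORE_PTR(0, counter);
int *v1 = RTE_LCORE_VAR_LCORE_PTR(1, counter);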

>> +
>> +/**
>> + * Allocate space for an lcore variable, and initialize its handle.
>> + */
>> +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align)	\
>> +	name = rte_lcore_var_alloc(size, align)
>> +
>> +/**
>> + * Allocate space for an lcore variable, and initialize its handle,
>> + * with values aligned for any type of object.
>> + */
>> +#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)	\
>> +	name = rte_lcore_var_alloc(size, 0)
>> +
>> +/**
>> + * Allocate space for an lcore variable of the size and alignment
>> + * requirements suggested by the handle pointer type, and initialize
>> + * its handle.
>> + */
>> +#define RTE_LCORE_VAR_ALLOC(name)					\
>> +	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, sizeof(*(name)),		\
>> +				       alignof(typeof(*(name))))
>> +
>> +/**
>> + * Allocate an explicitly-sized, explicitly-aligned lcore variable by
>> + * means of a \ref RTE_INIT constructor.
>> + */
>> +#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
>> +	RTE_INIT(rte_lcore_var_init_ ## name)				\
>> +	{								\
>> +		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
>> +	}
>> +
>> +/**
>> + * Allocate an explicitly-sized lcore variable by means of a \ref
>> + * RTE_INIT constructor.
>> + */
>> +#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
>> +	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
>> +
>> +/**
>> + * Allocate an lcore variable by means of a \ref RTE_INIT constructor.
>> + */
>> +#define RTE_LCORE_VAR_INIT(name)					\
>> +	RTE_INIT(rte_lcore_var_init_ ## name)				\
>> +	{								\
>> +		RTE_LCORE_VAR_ALLOC(name);				\
>> +	}
>> +
>> +#define __RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)		\
>> +	((void *)(&rte_lcore_var[lcore_id][(uintptr_t)(name)]))
> 
> This macro also seems superfluous.
> Doesn't RTE_LCORE_VAR_LCORE_PTR() suffice?
> 

It's just functional decomposition (but for macros). To make the whole 
thing a little more readable.

Maybe I should change "name" to "handle" in this and other instances 
(e.g., RTE_LCORE_VAR_LCORE_PTR).

>> +
>> +/**
>> + * Get pointer to lcore variable instance with the specified lcore id.
>> + */
>> +#define RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)				\
>> +	((typeof(name))__RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
> 
> This uses type casting.
> I wonder if additional build-time type checking would be possible...
> Nice to have: The compiler should fail if name is not a pointer, but a struct or an uint64_t, or even an uintptr_t.
> 
There is no way to compare the type of the lcore variable (at the point
of declaration) with the type of the handle pointer at the point of
handle "dereferencing" (which essentially is what this macro does).

You can't cast a struct to a pointer. You could assure it's a pointer by 
replacing the __RTE_LCORE_VAR_LCORE_PTR() with

static inline void *
__rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
{
	return (void *)&rte_lcore_var[lcore_id][(uintptr_t)handle];
}

(Bad practice to use a macro when a function can do the job anyway.)

Maybe this function shouldn't even have the "__" prefix. There could
well be valid use cases where you want void *-typed access to an lcore
variable value.
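
One such use case (hypothetical; "lvar_hexdump" is a made-up name)
could be a type-agnostic debug routine, built on the function above
and rte_hexdump() from <rte_hexdump.h>:

static void
lvar_hexdump(void *handle, size_t size)
{
	unsigned int lcore_id;

	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
		/* compute each lcore's instance without knowing its type */
		void *value = __rte_lcore_var_lcore_ptr(lcore_id, handle);

		rte_hexdump(stdout, "lcore value", value, size);
	}
}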

I'll use a function in the next RFC version.

>> +
>> +/**
>> + * Get value of a lcore variable instance of the specified lcore id.
>> + */
>> +#define RTE_LCORE_VAR_LCORE_GET(lcore_id, name)		\
>> +	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)))
> 
> The four accessor functions, RTE_LCORE_VAR[_LCORE]_GET/SET(), seem superfluous.
> They make the API seem more complex than just using RTE_LCORE_VAR[_LCORE]_PTR() for access.
> 

They are (somewhat) useful when the value is a primitive type.

RTE_LCORE_VAR_SET(my_int, 17);

versus

*RTE_LCORE_VAR_PTR(my_int) = 17;

Former is slightly more readable, imo, but I agree with you that these 
macros do clutter up the API.

>> +
>> +/**
>> + * Set the value of a lcore variable instance of the specified lcore id.
>> + */
>> +#define RTE_LCORE_VAR_LCORE_SET(lcore_id, name, value)		\
>> +	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)) = (value))
>> +
>> +/**
>> + * Get pointer to lcore variable instance of the current thread.
>> + *
>> + * May only be used by EAL threads and registered non-EAL threads.
>> + */
>> +#define RTE_LCORE_VAR_PTR(name) RTE_LCORE_VAR_LCORE_PTR(rte_lcore_id(), name)
>> +
>> +/**
>> + * Get value of lcore variable instance of the current thread.
>> + *
>> + * May only be used by EAL threads and registered non-EAL threads.
>> + */
>> +#define RTE_LCORE_VAR_GET(name) RTE_LCORE_VAR_LCORE_GET(rte_lcore_id(), name)
>> +
>> +/**
>> + * Set value of lcore variable instance of the current thread.
>> + *
>> + * May only be used by EAL threads and registered non-EAL threads.
>> + */
>> +#define RTE_LCORE_VAR_SET(name, value) \
>> +	RTE_LCORE_VAR_LCORE_SET(rte_lcore_id(), name, value)
>> +
>> +/**
>> + * Iterate over each lcore id's value for a lcore variable.
>> + */
>> +#define RTE_LCORE_VAR_FOREACH_VALUE(var, name)				\
>> +	for (unsigned int lcore_id =					\
>> +		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);	\
>> +	     lcore_id < RTE_MAX_LCORE;					\
>> +	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
> 
> RTE_LCORE_VAR_FOREACH_PTR(ptr, name) would be an even better name; considering that "var" is really a pointer.
> 

No, it's for each value, referenced via the pointer.

RTE_LCORE_VAR_FOREACH_VALUE_PTR() is too long.

I'll change "var" -> "ptr".
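
For reference, typical use of the macro as it stands (a sketch;
"counters" is a made-up handle, assumed to be allocated elsewhere):

static RTE_LCORE_VAR_HANDLE(uint64_t, counters);

static uint64_t
count_total(void)
{
	uint64_t total = 0;
	uint64_t *count;

	/* visit every lcore id's value, via a pointer to it */
	RTE_LCORE_VAR_FOREACH_VALUE(count, counters)
		total += *count;

	return total;
}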

> I also wonder about build-time type checking here...
> Nice to have: The compiler should fail if "ptr" is not a pointer.
> 

I agree.

>> +
>> +extern char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR];
>> +
>> +/**
>> + * Allocate space in the per-lcore id buffers for a lcore variable.
>> + *
>> + * The pointer returned is only an opaque identifier of the variable. To
>> + * get an actual pointer to a particular instance of the variable use
>> + * \ref RTE_LCORE_VAR_PTR or \ref RTE_LCORE_VAR_LCORE_PTR.
>> + *
>> + * The allocation is always successful, barring a fatal exhaustion of
>> + * the per-lcore id buffer space.
>> + *
>> + * @param size
>> + *   The size (in bytes) of the variable's per-lcore id value.
>> + * @param align
>> + *   If 0, the values will be suitably aligned for any kind of type
>> + *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
>> + *   on a multiple of *align*, which must be a power of 2 and equal or
>> + *   less than \c RTE_CACHE_LINE_SIZE.
>> + * @return
>> + *   The id of the variable, stored in a void pointer value.
>> + */
>> +__rte_experimental
>> +void *
>> +rte_lcore_var_alloc(size_t size, size_t align);
>> +
>> +#ifdef __cplusplus
>> +}
>> +#endif
>> +
>> +#endif /* _RTE_LCORE_VAR_H_ */
>> diff --git a/lib/eal/version.map b/lib/eal/version.map
>> index 5e0cd47c82..e90b86115a 100644
>> --- a/lib/eal/version.map
>> +++ b/lib/eal/version.map
>> @@ -393,6 +393,10 @@ EXPERIMENTAL {
>>   	# added in 23.07
>>   	rte_memzone_max_get;
>>   	rte_memzone_max_set;
>> +
>> +	# added in 24.03
>> +	rte_lcore_var_alloc;
>> +	rte_lcore_var;
>>   };
>>
>>   INTERNAL {
>> --
>> 2.34.1
> 
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> 

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC v3 5/6] service: keep per-lcore state in lcore variable
  2024-02-22  9:42           ` Morten Brørup
@ 2024-02-23 10:19             ` Mattias Rönnblom
  0 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-23 10:19 UTC (permalink / raw)
  To: Morten Brørup, Mattias Rönnblom, dev; +Cc: Stephen Hemminger

On 2024-02-22 10:42, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
>> Sent: Tuesday, 20 February 2024 09.49
>>
>> Replace static array of cache-aligned structs with an lcore variable,
>> to slightly benefit code simplicity and performance.
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> ---
> 
> 
>> @@ -486,8 +489,7 @@ service_runner_func(void *arg)
>>   {
>>   	RTE_SET_USED(arg);
>>   	uint8_t i;
>> -	const int lcore = rte_lcore_id();
>> -	struct core_state *cs = &lcore_states[lcore];
>> +	struct core_state *cs =	RTE_LCORE_VAR_PTR(lcore_states);
> 
> Typo: TAB -> SPACE.
> 

Will fix.

>>
>>   	rte_atomic_store_explicit(&cs->thread_active, 1,
>> rte_memory_order_seq_cst);
>>
>> @@ -533,13 +535,16 @@ service_runner_func(void *arg)
>>   int32_t
>>   rte_service_lcore_may_be_active(uint32_t lcore)
>>   {
>> -	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
>> +	struct core_state *cs =
>> +		RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
>> +
>> +	if (lcore >= RTE_MAX_LCORE || !cs->is_service_core)
>>   		return -EINVAL;
> 
> This comment is mostly related to patch 1 in the series...
> 
> You are setting cs = RTE_LCORE_VAR_LCORE_PTR(lcore, ...) before validating that lcore < RTE_MAX_LCORE. I wondered if that potentially was an overrun bug.
> 
> It is obvious when looking at the RTE_LCORE_VAR_LCORE_PTR() macro implementation, but perhaps its description could mention that it is safe to use with an "invalid" lcore_id, as long as the result is not dereferenced.
> 

I thought about adding something equivalent to an RTE_ASSERT() on 
lcore_id in the dereferencing macros, but then I thought that maybe it 
is a valid use case to pass invalid lcore ids.

Invalid ids being OK or not, I think the above code should do "cs =
/../" *after* the lcore id check. As it stands, it looks strange and
forces the reader to consider whether this is valid or not, for no
good reason.
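
Something like this, in other words (a sketch of the reordered
function):

int32_t
rte_service_lcore_may_be_active(uint32_t lcore)
{
	struct core_state *cs;

	if (lcore >= RTE_MAX_LCORE)
		return -EINVAL;

	cs = RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);

	if (!cs->is_service_core)
		return -EINVAL;

	/* ... */
}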

The lcore variable API docs should probably explicitly allow invalid
lcore ids in the macros.

^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v4 0/6] Lcore variables
  2024-02-20  8:49         ` [RFC v3 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
                             ` (2 preceding siblings ...)
  2024-02-22  9:22           ` Morten Brørup
@ 2024-02-25 15:03           ` Mattias Rönnblom
  2024-02-25 15:03             ` [RFC v4 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
                               ` (5 more replies)
  3 siblings, 6 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-25 15:03 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

This RFC presents a new API <rte_lcore_var.h> for static per-lcore id
data allocation.

Please refer to the <rte_lcore_var.h> API documentation for both a
rationale for this new API, and a comparison to the alternatives
available.

The adoption of this API would affect many different DPDK modules, but
the author updated only a few, mostly to serve as examples in this
RFC, and to iron out some, but surely not all, wrinkles in the API.

The question on how to best allocate static per-lcore memory has been
up several times on the dev mailing list, for example in the thread on
"random: use per lcore state" RFC by Stephen Hemminger.

Lcore variables are surely not the answer to all your per-lcore-data
needs, since they only allow for more-or-less static allocation. In the
author's opinion, they do however provide a reasonably simple, clean,
and seemingly very performant solution to a real problem.

One thing that is unclear to the author is how this API relates to a
potential future per-lcore dynamic allocator (e.g., a per-lcore heap).

Contrary to what the version.map edit suggests, this RFC is not meant
as a proposal for DPDK 24.03.

Mattias Rönnblom (6):
  eal: add static per-lcore memory allocation facility
  eal: add lcore variable test suite
  random: keep PRNG state in lcore variable
  power: keep per-lcore state in lcore variable
  service: keep per-lcore state in lcore variable
  eal: keep per-lcore power intrinsics state in lcore variable

 app/test/meson.build                  |   1 +
 app/test/test_lcore_var.c             | 439 ++++++++++++++++++++++++++
 config/rte_config.h                   |   1 +
 doc/api/doxy-api-index.md             |   1 +
 lib/eal/common/eal_common_lcore_var.c |  68 ++++
 lib/eal/common/meson.build            |   1 +
 lib/eal/common/rte_random.c           |  30 +-
 lib/eal/common/rte_service.c          | 120 ++++---
 lib/eal/include/meson.build           |   1 +
 lib/eal/include/rte_lcore_var.h       | 375 ++++++++++++++++++++++
 lib/eal/version.map                   |   4 +
 lib/eal/x86/rte_power_intrinsics.c    |  17 +-
 lib/power/rte_power_pmd_mgmt.c        |  36 +--
 13 files changed, 1006 insertions(+), 88 deletions(-)
 create mode 100644 app/test/test_lcore_var.c
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v4 1/6] eal: add static per-lcore memory allocation facility
  2024-02-25 15:03           ` [RFC v4 0/6] Lcore variables Mattias Rönnblom
@ 2024-02-25 15:03             ` Mattias Rönnblom
  2024-02-27  9:58               ` Morten Brørup
  2024-02-28 10:09               ` [RFC v5 0/6] Lcore variables Mattias Rönnblom
  2024-02-25 15:03             ` [RFC v4 2/6] eal: add lcore variable test suite Mattias Rönnblom
                               ` (4 subsequent siblings)
  5 siblings, 2 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-25 15:03 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Introduce DPDK per-lcore id variables, or lcore variables for short.

An lcore variable has one value for every current and future lcore
id-equipped thread.

The primary <rte_lcore_var.h> use case is for statically allocating
small chunks of often-used data, which are related logically, but where
there are performance benefits to reap from keeping updates local to
an lcore.

Lcore variables are similar to thread-local storage (TLS, e.g., C11
_Thread_local), but with the values' lifetime decoupled from that of
the threads.

Lcore variables are also similar in terms of functionality to the
FreeBSD kernel's DPCPU_*() family of macros and the associated
build-time machinery. DPCPU uses linker scripts, which effectively
prevents the reuse of its otherwise seemingly viable approach.

The currently-prevailing way to solve the same problem as lcore
variables is to keep a module's per-lcore data as an
RTE_MAX_LCORE-sized array of cache-aligned, RTE_CACHE_GUARDed structs.
The benefit of lcore variables over this approach is that data related
to the same lcore is now close (spatially, in memory), rather than
data used by the same module, which in turn avoids excessive use of
padding, polluting caches with unused data.

RFC v4:
 * Replace large static array with libc heap-allocated memory. One
   implication of this change is that there no longer exists a fixed
   upper bound for the total amount of memory used by lcore variables.
   RTE_MAX_LCORE_VAR has changed meaning, and now represents the
   maximum size of any individual lcore variable value.
 * Fix issues in example. (Morten Brørup)
 * Improve access macro type checking. (Morten Brørup)
 * Refer to the lcore variable handle as "handle" and not "name" in
   various macros.
 * Document lack of thread safety in rte_lcore_var_alloc().
 * Provide API-level assurance the lcore variable handle is
   always non-NULL, to allow applications to use NULL to mean
   "not yet allocated".
 * Note zero-sized allocations are not allowed.
 * Give API-level guarantee the lcore variable values are zeroed.

RFC v3:
 * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
 * Update example to reflect FOREACH macro name change (in RFC v2).

RFC v2:
 * Use alignof to derive alignment requirements. (Morten Brørup)
 * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
   *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
 * Allow user-specified alignment, but limit max to cache line size.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 config/rte_config.h                   |   1 +
 doc/api/doxy-api-index.md             |   1 +
 lib/eal/common/eal_common_lcore_var.c |  68 +++++
 lib/eal/common/meson.build            |   1 +
 lib/eal/include/meson.build           |   1 +
 lib/eal/include/rte_lcore_var.h       | 375 ++++++++++++++++++++++++++
 lib/eal/version.map                   |   4 +
 7 files changed, 451 insertions(+)
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

diff --git a/config/rte_config.h b/config/rte_config.h
index d743a5c3d3..0dac33d3b9 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -41,6 +41,7 @@
 /* EAL defines */
 #define RTE_CACHE_GUARD_LINES 1
 #define RTE_MAX_HEAPS 32
+#define RTE_MAX_LCORE_VAR 1048576
 #define RTE_MAX_MEMSEG_LISTS 128
 #define RTE_MAX_MEMSEG_PER_LIST 8192
 #define RTE_MAX_MEM_MB_PER_LIST 32768
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 8c1eb8fafa..a3b8391570 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -99,6 +99,7 @@ The public API headers are grouped by topics:
   [interrupts](@ref rte_interrupts.h),
   [launch](@ref rte_launch.h),
   [lcore](@ref rte_lcore.h),
+  [lcore-variable](@ref rte_lcore_var.h),
   [per-lcore](@ref rte_per_lcore.h),
   [service cores](@ref rte_service.h),
   [keepalive](@ref rte_keepalive.h),
diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
new file mode 100644
index 0000000000..5c353ebd46
--- /dev/null
+++ b/lib/eal/common/eal_common_lcore_var.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_log.h>
+
+#include <rte_lcore_var.h>
+
+#include "eal_private.h"
+
+#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
+
+static void *lcore_buffer;
+static size_t offset = RTE_MAX_LCORE_VAR;
+
+static void *
+lcore_var_alloc(size_t size, size_t align)
+{
+	void *handle;
+	void *value;
+
+	offset = RTE_ALIGN_CEIL(offset, align);
+
+	if (offset + size > RTE_MAX_LCORE_VAR) {
+		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
+					     LCORE_BUFFER_SIZE);
+		RTE_VERIFY(lcore_buffer != NULL);
+
+		offset = 0;
+	}
+
+	handle = RTE_PTR_ADD(lcore_buffer, offset);
+
+	offset += size;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
+		memset(value, 0, size);
+
+	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
+		"%"PRIuPTR"-byte alignment", size, align);
+
+	return handle;
+}
+
+void *
+rte_lcore_var_alloc(size_t size, size_t align)
+{
+	/* Having the per-lcore buffer size aligned on cache lines, as
+	 * well as having the base pointer aligned on a cache line,
+	 * assures that aligned offsets also translate to aligned
+	 * pointers across all values.
+	 */
+	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
+	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
+	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
+
+	/* '0' means asking for worst-case alignment requirements */
+	if (align == 0)
+		align = alignof(max_align_t);
+
+	RTE_ASSERT(rte_is_power_of_2(align));
+
+	return lcore_var_alloc(size, align);
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 22a626ba6f..d41403680b 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -18,6 +18,7 @@ sources += files(
         'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
+        'eal_common_lcore_var.c',
         'eal_common_mcfg.c',
         'eal_common_memalloc.c',
         'eal_common_memory.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index e94b056d46..9449253e23 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -27,6 +27,7 @@ headers += files(
         'rte_keepalive.h',
         'rte_launch.h',
         'rte_lcore.h',
+        'rte_lcore_var.h',
         'rte_lock_annotations.h',
         'rte_malloc.h',
         'rte_mcslock.h',
diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
new file mode 100644
index 0000000000..09a7c7d4f6
--- /dev/null
+++ b/lib/eal/include/rte_lcore_var.h
@@ -0,0 +1,375 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#ifndef _RTE_LCORE_VAR_H_
+#define _RTE_LCORE_VAR_H_
+
+/**
+ * @file
+ *
+ * RTE Per-lcore id variables
+ *
+ * This API provides a mechanism to create and access per-lcore id
+ * variables in a space- and cycle-efficient manner.
+ *
+ * A per-lcore id variable (or lcore variable for short) has one value
+ * for each EAL thread and registered non-EAL thread. In other words,
+ * there's one copy of its value for each and every current and future
+ * lcore id-equipped thread, with the total number of copies amounting
+ * to \c RTE_MAX_LCORE.
+ *
+ * In order to access the values of an lcore variable, a handle is
+ * used. The type of the handle is a pointer to the value's type
+ * (e.g., for an \c uint32_t lcore variable, the handle is a
+ * <code>uint32_t *</code>). A handle may be passed between modules and
+ * threads just like any pointer, but its value is not the address of
+ * any particular object, but rather just an opaque identifier, stored
+ * in a typed pointer (to inform the access macros of the values' type).
+ *
+ * @b Creation
+ *
+ * An lcore variable is created in two steps:
+ *  1. Define a lcore variable handle by using \ref RTE_LCORE_VAR_HANDLE.
+ *  2. Allocate lcore variable storage and initialize the handle with
+ *     a unique identifier by \ref RTE_LCORE_VAR_ALLOC or
+ *     \ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the
+ *     time of module initialization, but may be done at any time.
+ *
+ * An lcore variable is not tied to the owning thread's lifetime. It's
+ * available for use by any thread immediately after having been
+ * allocated, and continues to be available throughout the lifetime of
+ * the EAL.
+ *
+ * Lcore variables cannot and need not be freed.
+ *
+ * @b Access
+ *
+ * The value of any lcore variable for any lcore id may be accessed
+ * from any thread (including unregistered threads), but it should
+ * generally only be *frequently* read from or written to by the owner.
+ *
+ * Values of the same lcore variable but owned by two different lcore
+ * ids *may* be frequently read or written by the owners without the
+ * risk of false sharing.
+ *
+ * An appropriate synchronization mechanism (e.g., atomics) should be
+ * employed to assure there are no data races between the owning
+ * thread and any non-owner threads accessing the same lcore variable
+ * instance.
+ *
+ * The value of the lcore variable for a particular lcore id may be
+ * retrieved with \ref RTE_LCORE_VAR_LCORE_GET. To get a pointer to the
+ * same object, use \ref RTE_LCORE_VAR_LCORE_PTR.
+ *
+ * To modify the value of an lcore variable for a particular lcore id,
+ * either access the object through the pointer retrieved by \ref
+ * RTE_LCORE_VAR_LCORE_PTR or, for primitive types, use \ref
+ * RTE_LCORE_VAR_LCORE_SET.
+ *
+ * The access macros each have a short-hand which may be used by an EAL
+ * thread or registered non-EAL thread to access the lcore variable
+ * instance of its own lcore id. Those are \ref RTE_LCORE_VAR_GET,
+ * \ref RTE_LCORE_VAR_PTR, and \ref RTE_LCORE_VAR_SET.
+ *
+ * Although the handle (as defined by \ref RTE_LCORE_VAR_HANDLE) is a
+ * pointer with the same type as the value, it may not be directly
+ * dereferenced and must be treated as an opaque identifier. The
+ * *identifier* value is common across all lcore ids.
+ *
+ * @b Storage
+ *
+ * An lcore variable's values may be of a primitive type like \c int,
+ * but would more typically be a \c struct. An application may choose
+ * to define an lcore variable, which it then goes on to never
+ * allocate.
+ *
+ * The lcore variable handle introduces a per-variable (not
+ * per-value/per-lcore id) overhead of \c sizeof(void *) bytes, so
+ * there are some memory footprint gains to be made by organizing all
+ * per-lcore id data for a particular module as one lcore variable
+ * (e.g., as a struct).
+ *
+ * The size of an lcore variable's value must not exceed the DPDK
+ * build-time constant \c RTE_MAX_LCORE_VAR.
+ *
+ * The lcore variables are stored in a series of lcore buffers, which
+ * are allocated from the libc heap. Heap allocation failures are
+ * treated as fatal.
+ *
+ * Lcore variables should generally *not* be \ref __rte_cache_aligned
+ * and need *not* include a \ref RTE_CACHE_GUARD field, since these
+ * constructs are designed to avoid false sharing. In the case of an
+ * lcore variable instance, all nearby data structures should
+ * almost-always be written to by a single thread (the lcore variable
+ * owner). Adding padding will increase the effective memory working
+ * set size, potentially reducing performance.
+ *
+ * @b Example
+ *
+ * Below is an example of the use of an lcore variable:
+ *
+ * \code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ * };
+ *
+ * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
+ *
+ * long foo_get_a_plus_b(void)
+ * {
+ *         struct foo_lcore_state *state = RTE_LCORE_VAR_PTR(lcore_states);
+ *
+ *         return state->a + state->b;
+ * }
+ *
+ * RTE_INIT(rte_foo_init)
+ * {
+ *         RTE_LCORE_VAR_ALLOC(lcore_states);
+ *
+ *         struct foo_lcore_state *state;
+ *         RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
+ *                 (initialize 'state')
+ *         }
+ *
+ *         (other initialization)
+ * }
+ * \endcode
+ *
+ *
+ * @b Alternatives
+ *
+ * Lcore variables are designed to replace a pattern exemplified below:
+ * \code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ *         RTE_CACHE_GUARD;
+ * } __rte_cache_aligned;
+ *
+ * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
+ * \endcode
+ *
+ * This scheme is simple and effective, but has one drawback: the data
+ * is organized so that objects related to all lcores for a particular
+ * module are kept close in memory. At a bare minimum, this forces the
+ * use of cache-line alignment to avoid false sharing. With CPU
+ * hardware prefetching and memory loads resulting from speculative
+ * execution (functions which seemingly are getting more eager faster
+ * than they are getting more intelligent), one or more "guard" cache
+ * lines may be required to separate one lcore's data from another's.
+ *
+ * Lcore variables have the upside of working with, not against, the
+ * CPU's assumptions. For example, next-line prefetchers may well
+ * work the way their designers intended (i.e., to the benefit, not
+ * detriment, of system performance).
+ *
+ * Another alternative to \ref rte_lcore_var.h is the \ref
+ * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
+ * e.g., GCC __thread or C11 _Thread_local). The main differences
+ * between using the various forms of TLS (e.g., \ref
+ * RTE_DEFINE_PER_LCORE or _Thread_local) and using lcore
+ * variables are:
+ *
+ *   * The existence and non-existence of a thread-local variable
+ *     instance follows that of the particular thread. The data cannot
+ *     be accessed before the thread has been created, nor after it
+ *     has exited. One effect of this is that thread-local variables
+ *     must be initialized in a "lazy" manner (e.g., at the point of
+ *     thread creation). Lcore variables may be accessed immediately
+ *     after having been allocated (which is usually before any thread
+ *     other than the main thread is running).
+ *   * A thread-local variable is duplicated across all threads in the
+ *     process, including unregistered non-EAL threads (i.e.,
+ *     "regular" threads). For DPDK applications heavily relying on
+ *     multi-threading (in conjunction with DPDK's "one thread per core"
+ *     pattern), either by having many concurrent threads or
+ *     creating/destroying threads at a high rate, an excessive use of
+ *     thread-local variables may cause inefficiencies (e.g.,
+ *     increased thread creation overhead due to thread-local storage
+ *     initialization or increased total RAM footprint usage). Lcore
+ *     variables *only* exist for threads with an lcore id, and thus
+ *     not for such "regular" threads.
+ *   * Whether data in thread-local storage may be shared between
+ *     threads (i.e., whether a pointer to a thread-local variable can
+ *     be passed to and successfully dereferenced by a non-owning
+ *     thread) depends on the details of the TLS implementation. With
+ *     GCC __thread and _Thread_local as implemented by GCC, such data
+ *     sharing is supported. In the C11 standard, the result of
+ *     accessing another thread's _Thread_local object is
+ *     implementation-defined. Lcore variable instances may be
+ *     accessed reliably by any thread.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stddef.h>
+#include <stdalign.h>
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_lcore.h>
+
+/**
+ * Given the lcore variable type, produces the type of the lcore
+ * variable handle.
+ */
+#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
+	type *
+
+/**
+ * Define a lcore variable handle.
+ *
+ * This macro defines a variable which is used as a handle to access
+ * the various per-lcore id instances of a per-lcore id variable.
+ *
+ * The aim with this macro is to make clear at the point of
+ * declaration that this is an lcore handle, rather than a regular
+ * pointer.
+ *
+ * Add @b static as a prefix in case the lcore variable is only to be
+ * accessed from a particular translation unit.
+ */
+#define RTE_LCORE_VAR_HANDLE(type, name)	\
+	RTE_LCORE_VAR_HANDLE_TYPE(type) name
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, align)	\
+	handle = rte_lcore_var_alloc(size, align)
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle,
+ * with values aligned for any type of object.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE(handle, size)	\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, 0)
+
+/**
+ * Allocate space for an lcore variable of the size and alignment requirements
+ * suggested by the handle pointer type, and initialize its handle.
+ */
+#define RTE_LCORE_VAR_ALLOC(handle)					\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, sizeof(*(handle)),	\
+				       alignof(typeof(*(handle))))
+
+/**
+ * Allocate an explicitly-sized, explicitly-aligned lcore variable by
+ * means of a \ref RTE_INIT constructor.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
+	}
+
+/**
+ * Allocate an explicitly-sized lcore variable by means of a \ref
+ * RTE_INIT constructor.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
+	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
+
+/**
+ * Allocate an lcore variable by means of a \ref RTE_INIT constructor.
+ */
+#define RTE_LCORE_VAR_INIT(name)					\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC(name);				\
+	}
+
+static inline void *
+__rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
+{
+	return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
+}
+
+/**
+ * Get pointer to lcore variable instance with the specified lcore id.
+ */
+#define RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)			\
+	((typeof(handle))__rte_lcore_var_lcore_ptr(lcore_id, handle))
+
+/**
+ * Get value of a lcore variable instance of the specified lcore id.
+ */
+#define RTE_LCORE_VAR_LCORE_GET(lcore_id, handle)	\
+	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)))
+
+/**
+ * Set the value of a lcore variable instance of the specified lcore id.
+ */
+#define RTE_LCORE_VAR_LCORE_SET(lcore_id, handle, value)		\
+	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)) = (value))
+
+/**
+ * Get pointer to lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_PTR(handle) \
+	RTE_LCORE_VAR_LCORE_PTR(rte_lcore_id(), handle)
+
+/**
+ * Get value of lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_GET(handle) \
+	RTE_LCORE_VAR_LCORE_GET(rte_lcore_id(), handle)
+
+/**
+ * Set value of lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_SET(handle, value) \
+	RTE_LCORE_VAR_LCORE_SET(rte_lcore_id(), handle, value)
+
+/**
+ * Iterate over each lcore id's value for a lcore variable.
+ */
+#define RTE_LCORE_VAR_FOREACH_VALUE(var, handle)			\
+	for (unsigned int lcore_id =					\
+		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, handle)), 0);	\
+	     lcore_id < RTE_MAX_LCORE;					\
+	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle))
+
+/**
+ * Allocate space in the per-lcore id buffers for a lcore variable.
+ *
+ * The pointer returned is only an opaque identifier of the variable. To
+ * get an actual pointer to a particular instance of the variable use
+ * \ref RTE_LCORE_VAR_PTR or \ref RTE_LCORE_VAR_LCORE_PTR.
+ *
+ * The lcore variable values' memory is set to zero.
+ *
+ * The allocation is always successful, barring a fatal exhaustion of
+ * the per-lcore id buffer space.
+ *
+ * rte_lcore_var_alloc() is not multi-thread safe.
+ *
+ * @param size
+ *   The size (in bytes) of the variable's per-lcore id value. Must be > 0.
+ * @param align
+ *   If 0, the values will be suitably aligned for any kind of type
+ *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
+ *   on a multiple of *align*, which must be a power of 2 and equal or
+ *   less than \c RTE_CACHE_LINE_SIZE.
+ * @return
+ *   The id of the variable, stored in a void pointer value. The value
+ *   is always non-NULL.
+ */
+__rte_experimental
+void *
+rte_lcore_var_alloc(size_t size, size_t align);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LCORE_VAR_H_ */
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 5e0cd47c82..e90b86115a 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -393,6 +393,10 @@ EXPERIMENTAL {
 	# added in 23.07
 	rte_memzone_max_get;
 	rte_memzone_max_set;
+
+	# added in 24.03
+	rte_lcore_var_alloc;
+	rte_lcore_var;
 };
 
 INTERNAL {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v4 2/6] eal: add lcore variable test suite
  2024-02-25 15:03           ` [RFC v4 0/6] Lcore variables Mattias Rönnblom
  2024-02-25 15:03             ` [RFC v4 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-02-25 15:03             ` Mattias Rönnblom
  2024-02-25 15:03             ` [RFC v4 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
                               ` (3 subsequent siblings)
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-25 15:03 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Add test suite to exercise the <rte_lcore_var.h> API.

RFC v4:
 * Check all lcore id's values for all variables in the many variables
   test case.
 * Introduce test case for max-sized lcore variables.
RFC v2:
 * Improve alignment-related test coverage.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 app/test/meson.build      |   1 +
 app/test/test_lcore_var.c | 439 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 440 insertions(+)
 create mode 100644 app/test/test_lcore_var.c

diff --git a/app/test/meson.build b/app/test/meson.build
index 7d909039ae..846affa98c 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -103,6 +103,7 @@ source_file_deps = {
     'test_ipsec_sad.c': ['ipsec'],
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
+    'test_lcore_var.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var.c b/app/test/test_lcore_var.c
new file mode 100644
index 0000000000..d24403b0f7
--- /dev/null
+++ b/app/test/test_lcore_var.c
@@ -0,0 +1,439 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_launch.h>
+#include <rte_lcore_var.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+#define MIN_LCORES 2
+
+RTE_LCORE_VAR_HANDLE(int, test_int);
+RTE_LCORE_VAR_HANDLE(char, test_char);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized);
+RTE_LCORE_VAR_HANDLE(short, test_short);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized_aligned);
+
+struct int_checker_state {
+	int old_value;
+	int new_value;
+	bool success;
+};
+
+static bool
+rand_bool(void)
+{
+	return rte_rand() & 1;
+}
+
+static void
+rand_blk(void *blk, size_t size)
+{
+	size_t i;
+
+	for (i = 0; i < size; i++)
+		((unsigned char *)blk)[i] = (unsigned char)rte_rand();
+}
+
+static bool
+is_ptr_aligned(const void *ptr, size_t align)
+{
+	return ptr != NULL ? (uintptr_t)ptr % align == 0 : false;
+}
+
+static int
+check_int(void *arg)
+{
+	struct int_checker_state *state = arg;
+
+	int *ptr = RTE_LCORE_VAR_PTR(test_int);
+
+	bool naturally_aligned = is_ptr_aligned(ptr, sizeof(int));
+
+	bool equal;
+
+	if (rand_bool())
+		equal = RTE_LCORE_VAR_GET(test_int) == state->old_value;
+	else
+		equal = *(RTE_LCORE_VAR_PTR(test_int)) == state->old_value;
+
+	state->success = equal && naturally_aligned;
+
+	if (rand_bool())
+		RTE_LCORE_VAR_SET(test_int, state->new_value);
+	else
+		*ptr = state->new_value;
+
+	return 0;
+}
+
+RTE_LCORE_VAR_INIT(test_int);
+RTE_LCORE_VAR_INIT(test_char);
+RTE_LCORE_VAR_INIT_SIZE(test_long_sized, 32);
+RTE_LCORE_VAR_INIT(test_short);
+RTE_LCORE_VAR_INIT_SIZE_ALIGN(test_long_sized_aligned, sizeof(long),
+			      RTE_CACHE_LINE_SIZE);
+
+static int
+test_int_lvar(void)
+{
+	unsigned int lcore_id;
+
+	struct int_checker_state states[RTE_MAX_LCORE] = {};
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+
+		state->old_value = (int)rte_rand();
+		state->new_value = (int)rte_rand();
+
+		RTE_LCORE_VAR_LCORE_SET(lcore_id, test_int, state->old_value);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_int, &states[lcore_id], lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+
+		TEST_ASSERT(state->success, "Unexpected value "
+			    "encountered on lcore %d", lcore_id);
+
+		TEST_ASSERT_EQUAL(state->new_value,
+				  RTE_LCORE_VAR_LCORE_GET(lcore_id, test_int),
+				  "Lcore %d failed to update int", lcore_id);
+	}
+
+	/* take the opportunity to test the foreach macro */
+	int *v;
+	lcore_id = 0;
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_int) {
+		TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
+				  "Unexpected value on lcore %d during "
+				  "iteration", lcore_id);
+		lcore_id++;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_sized_alignment(void)
+{
+	long *v;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized) {
+		TEST_ASSERT(is_ptr_aligned(v, alignof(long)),
+			    "Type-derived alignment failed");
+	}
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized_aligned) {
+		TEST_ASSERT(is_ptr_aligned(v, RTE_CACHE_LINE_SIZE),
+			    "Explicit alignment failed");
+	}
+
+	return TEST_SUCCESS;
+}
+
+/* private, larger, struct */
+#define TEST_STRUCT_DATA_SIZE 1234
+
+struct test_struct {
+	uint8_t data[TEST_STRUCT_DATA_SIZE];
+};
+
+static RTE_LCORE_VAR_HANDLE(char, before_struct);
+static RTE_LCORE_VAR_HANDLE(struct test_struct, test_struct);
+static RTE_LCORE_VAR_HANDLE(char, after_struct);
+
+struct struct_checker_state {
+	struct test_struct old_value;
+	struct test_struct new_value;
+	bool success;
+};
+
+static int check_struct(void *arg)
+{
+	struct struct_checker_state *state = arg;
+
+	struct test_struct *lcore_struct = RTE_LCORE_VAR_PTR(test_struct);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_struct, alignof(struct test_struct));
+
+	bool equal = memcmp(lcore_struct->data, state->old_value.data,
+			    TEST_STRUCT_DATA_SIZE) == 0;
+
+	state->success = equal && properly_aligned;
+
+	memcpy(lcore_struct->data, state->new_value.data,
+	       TEST_STRUCT_DATA_SIZE);
+
+	return 0;
+}
+
+static int
+test_struct_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_struct);
+	RTE_LCORE_VAR_ALLOC(test_struct);
+	RTE_LCORE_VAR_ALLOC(after_struct);
+
+	struct struct_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+
+		rand_blk(state->old_value.data, TEST_STRUCT_DATA_SIZE);
+		rand_blk(state->new_value.data, TEST_STRUCT_DATA_SIZE);
+
+		memcpy(RTE_LCORE_VAR_LCORE_PTR(lcore_id, test_struct)->data,
+		       state->old_value.data, TEST_STRUCT_DATA_SIZE);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_struct, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+		struct test_struct *lstruct =
+			RTE_LCORE_VAR_LCORE_PTR(lcore_id, test_struct);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = memcmp(lstruct->data, state->new_value.data,
+				    TEST_STRUCT_DATA_SIZE) == 0;
+
+		TEST_ASSERT(equal, "Lcore %d failed to update struct",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before = RTE_LCORE_VAR_LCORE_GET(lcore_id, before_struct);
+		char after = RTE_LCORE_VAR_LCORE_GET(lcore_id, after_struct);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "struct was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "struct was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define TEST_ARRAY_SIZE 99
+
+typedef uint16_t test_array_t[TEST_ARRAY_SIZE];
+
+static void test_array_init_rand(test_array_t a)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		a[i] = (uint16_t)rte_rand();
+}
+
+static bool test_array_equal(test_array_t a, test_array_t b)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++) {
+		if (a[i] != b[i])
+			return false;
+	}
+	return true;
+}
+
+static void test_array_copy(test_array_t dst, const test_array_t src)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		dst[i] = src[i];
+}
+
+static RTE_LCORE_VAR_HANDLE(char, before_array);
+static RTE_LCORE_VAR_HANDLE(test_array_t, test_array);
+static RTE_LCORE_VAR_HANDLE(char, after_array);
+
+struct array_checker_state {
+	test_array_t old_value;
+	test_array_t new_value;
+	bool success;
+};
+
+static int check_array(void *arg)
+{
+	struct array_checker_state *state = arg;
+
+	test_array_t *lcore_array = RTE_LCORE_VAR_PTR(test_array);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_array, alignof(test_array_t));
+
+	bool equal = test_array_equal(*lcore_array, state->old_value);
+
+	state->success = equal && properly_aligned;
+
+	test_array_copy(*lcore_array, state->new_value);
+
+	return 0;
+}
+
+static int
+test_array_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_array);
+	RTE_LCORE_VAR_ALLOC(test_array);
+	RTE_LCORE_VAR_ALLOC(after_array);
+
+	struct array_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+
+		test_array_init_rand(state->new_value);
+		test_array_init_rand(state->old_value);
+
+		test_array_copy(RTE_LCORE_VAR_LCORE_GET(lcore_id, test_array),
+				state->old_value);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_array, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+		test_array_t *larray =
+			RTE_LCORE_VAR_LCORE_PTR(lcore_id, test_array);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = test_array_equal(*larray, state->new_value);
+
+		TEST_ASSERT(equal, "Lcore %d failed to update array",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before = RTE_LCORE_VAR_LCORE_GET(lcore_id, before_array);
+		char after = RTE_LCORE_VAR_LCORE_GET(lcore_id, after_array);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "array was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "array was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define MANY_LVARS (2 * RTE_MAX_LCORE_VAR / sizeof(uint32_t))
+
+static int
+test_many_lvars(void)
+{
+	uint32_t **handlers = malloc(sizeof(uint32_t *) * MANY_LVARS);
+	int i;
+
+	TEST_ASSERT(handlers != NULL, "Unable to allocate memory");
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		RTE_LCORE_VAR_ALLOC(handlers[i]);
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t *v =
+				RTE_LCORE_VAR_LCORE_PTR(lcore_id, handlers[i]);
+			*v = (uint32_t)(i * lcore_id);
+		}
+	}
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t v = RTE_LCORE_VAR_LCORE_GET(lcore_id,
+							     handlers[i]);
+			TEST_ASSERT_EQUAL((uint32_t)(i * lcore_id), v,
+					  "Unexpected lcore variable value on "
+					  "lcore %d", lcore_id);
+		}
+	}
+
+	free(handlers);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_large_lvar(void)
+{
+	RTE_LCORE_VAR_HANDLE(unsigned char, large);
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC_SIZE(large, RTE_MAX_LCORE_VAR);
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_PTR(lcore_id, large);
+
+		memset(ptr, (unsigned char)lcore_id, RTE_MAX_LCORE_VAR);
+	}
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_PTR(lcore_id, large);
+		size_t i;
+
+		for (i = 0; i < RTE_MAX_LCORE_VAR; i++)
+			TEST_ASSERT_EQUAL(ptr[i], (unsigned char)lcore_id,
+					  "Large lcore variable value is "
+					  "corrupted on lcore %d.",
+					  lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_int_lvar),
+		TEST_CASE(test_sized_alignment),
+		TEST_CASE(test_struct_lvar),
+		TEST_CASE(test_array_lvar),
+		TEST_CASE(test_many_lvars),
+		TEST_CASE(test_large_lvar),
+		TEST_CASES_END()
+	},
+};
+
+static int test_lcore_var(void)
+{
+	if (rte_lcore_count() < MIN_LCORES) {
+		printf("Not enough cores for lcore_var_autotest; expecting at "
+		       "least %d.\n", MIN_LCORES);
+		return TEST_SKIPPED;
+	}
+
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
+REGISTER_FAST_TEST(lcore_var_autotest, true, false, test_lcore_var);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v4 3/6] random: keep PRNG state in lcore variable
  2024-02-25 15:03           ` [RFC v4 0/6] Lcore variables Mattias Rönnblom
  2024-02-25 15:03             ` [RFC v4 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-02-25 15:03             ` [RFC v4 2/6] eal: add lcore variable test suite Mattias Rönnblom
@ 2024-02-25 15:03             ` Mattias Rönnblom
  2024-02-25 15:03             ` [RFC v4 4/6] power: keep per-lcore " Mattias Rönnblom
                               ` (2 subsequent siblings)
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-25 15:03 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Replace keeping PRNG state in a RTE_MAX_LCORE-sized static array of
cache-aligned and RTE_CACHE_GUARDed struct instances with keeping the
same state in a more cache-friendly lcore variable.

RFC v3:
 * Remove cache alignment on unregistered threads' rte_rand_state.
   (Morten Brørup)

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/common/rte_random.c | 30 ++++++++++++++++++------------
 1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index 7709b8f2c6..adbbf13f0e 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -11,6 +11,7 @@
 #include <rte_branch_prediction.h>
 #include <rte_cycles.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_random.h>
 
 struct rte_rand_state {
@@ -19,14 +20,12 @@ struct rte_rand_state {
 	uint64_t z3;
 	uint64_t z4;
 	uint64_t z5;
-	RTE_CACHE_GUARD;
-} __rte_cache_aligned;
+};
 
-/* One instance each for every lcore id-equipped thread, and one
- * additional instance to be shared by all others threads (i.e., all
- * unregistered non-EAL threads).
- */
-static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];
+RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);
+
+/* instance to be shared by all unregistered non-EAL threads */
+static struct rte_rand_state unregistered_rand_state;
 
 static uint32_t
 __rte_rand_lcg32(uint32_t *seed)
@@ -85,8 +84,14 @@ rte_srand(uint64_t seed)
 	unsigned int lcore_id;
 
 	/* add lcore_id to seed to avoid having the same sequence */
-	for (lcore_id = 0; lcore_id < RTE_DIM(rand_states); lcore_id++)
-		__rte_srand_lfsr258(seed + lcore_id, &rand_states[lcore_id]);
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		struct rte_rand_state *lcore_state =
+			RTE_LCORE_VAR_LCORE_PTR(lcore_id, rand_state);
+
+		__rte_srand_lfsr258(seed + lcore_id, lcore_state);
+	}
+
+	__rte_srand_lfsr258(seed + lcore_id, &unregistered_rand_state);
 }
 
 static __rte_always_inline uint64_t
@@ -124,11 +129,10 @@ struct rte_rand_state *__rte_rand_get_state(void)
 
 	idx = rte_lcore_id();
 
-	/* last instance reserved for unregistered non-EAL threads */
 	if (unlikely(idx == LCORE_ID_ANY))
-		idx = RTE_MAX_LCORE;
+		return &unregistered_rand_state;
 
-	return &rand_states[idx];
+	return RTE_LCORE_VAR_PTR(rand_state);
 }
 
 uint64_t
@@ -228,6 +232,8 @@ RTE_INIT(rte_rand_init)
 {
 	uint64_t seed;
 
+	RTE_LCORE_VAR_ALLOC(rand_state);
+
 	seed = __rte_random_initial_seed();
 
 	rte_srand(seed);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v4 4/6] power: keep per-lcore state in lcore variable
  2024-02-25 15:03           ` [RFC v4 0/6] Lcore variables Mattias Rönnblom
                               ` (2 preceding siblings ...)
  2024-02-25 15:03             ` [RFC v4 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
@ 2024-02-25 15:03             ` Mattias Rönnblom
  2024-02-25 15:03             ` [RFC v4 5/6] service: " Mattias Rönnblom
  2024-02-25 15:03             ` [RFC v4 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-25 15:03 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

RFC v3:
 * Replace for loop with FOREACH macro.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 lib/power/rte_power_pmd_mgmt.c | 36 ++++++++++++++++------------------
 1 file changed, 17 insertions(+), 19 deletions(-)

diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index 591fc69f36..ea30454895 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -5,6 +5,7 @@
 #include <stdlib.h>
 
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_cycles.h>
 #include <rte_cpuflags.h>
 #include <rte_malloc.h>
@@ -68,8 +69,8 @@ struct pmd_core_cfg {
 	/**< Number of queues ready to enter power optimized state */
 	uint64_t sleep_target;
 	/**< Prevent a queue from triggering sleep multiple times */
-} __rte_cache_aligned;
-static struct pmd_core_cfg lcore_cfgs[RTE_MAX_LCORE];
+};
+static RTE_LCORE_VAR_HANDLE(struct pmd_core_cfg, lcore_cfgs);
 
 static inline bool
 queue_equal(const union queue *l, const union queue *r)
@@ -252,12 +253,11 @@ clb_multiwait(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_PTR(lcore_cfgs);
 
 	/* early exit */
 	if (likely(!empty))
@@ -317,13 +317,12 @@ clb_pause(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 	uint32_t pause_duration = rte_power_pmd_mgmt_get_pause_duration();
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_PTR(lcore_cfgs);
 
 	if (likely(!empty))
 		/* early exit */
@@ -358,9 +357,8 @@ clb_scale_freq(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	const bool empty = nb_rx == 0;
-	struct pmd_core_cfg *lcore_conf = &lcore_cfgs[lcore];
+	struct pmd_core_cfg *lcore_conf = RTE_LCORE_VAR_PTR(lcore_cfgs);
 	struct queue_list_entry *queue_conf = arg;
 
 	if (likely(!empty)) {
@@ -518,7 +516,7 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 		goto end;
 	}
 
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_PTR(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -619,7 +617,7 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 	}
 
 	/* no need to check queue id as wrong queue id would not be enabled */
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_PTR(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -769,21 +767,21 @@ rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore)
 }
 
 RTE_INIT(rte_power_ethdev_pmgmt_init) {
-	size_t i;
-	int j;
+	struct pmd_core_cfg *lcore_cfg;
+	int i;
+
+	RTE_LCORE_VAR_ALLOC(lcore_cfgs);
 
 	/* initialize all tailqs */
-	for (i = 0; i < RTE_DIM(lcore_cfgs); i++) {
-		struct pmd_core_cfg *cfg = &lcore_cfgs[i];
-		TAILQ_INIT(&cfg->head);
-	}
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_cfg, lcore_cfgs)
+		TAILQ_INIT(&lcore_cfg->head);
 
 	/* initialize config defaults */
 	emptypoll_max = 512;
 	pause_duration = 1;
 	/* scaling defaults out of range to ensure not used unless set by user or app */
-	for (j = 0; j < RTE_MAX_LCORE; j++) {
-		scale_freq_min[j] = 0;
-		scale_freq_max[j] = UINT32_MAX;
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		scale_freq_min[i] = 0;
+		scale_freq_max[i] = UINT32_MAX;
 	}
 }
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v4 5/6] service: keep per-lcore state in lcore variable
  2024-02-25 15:03           ` [RFC v4 0/6] Lcore variables Mattias Rönnblom
                               ` (3 preceding siblings ...)
  2024-02-25 15:03             ` [RFC v4 4/6] power: keep per-lcore " Mattias Rönnblom
@ 2024-02-25 15:03             ` Mattias Rönnblom
  2024-02-25 16:28               ` Mattias Rönnblom
  2024-02-25 15:03             ` [RFC v4 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  5 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-25 15:03 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

RFC v4:
 * Remove strange-looking lcore value lookup potentially containing
   invalid lcore id. (Morten Brørup)
 * Replace misplaced tab with space. (Morten Brørup)

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 lib/eal/common/rte_service.c | 120 ++++++++++++++++++++---------------
 1 file changed, 69 insertions(+), 51 deletions(-)

diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index d959c91459..7fbae704ed 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -11,6 +11,7 @@
 
 #include <eal_trace_internal.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
@@ -75,7 +76,7 @@ struct core_state {
 
 static uint32_t rte_service_count;
 static struct rte_service_spec_impl *rte_services;
-static struct core_state *lcore_states;
+static RTE_LCORE_VAR_HANDLE(struct core_state, lcore_states);
 static uint32_t rte_service_library_initialized;
 
 int32_t
@@ -101,11 +102,12 @@ rte_service_init(void)
 		goto fail_mem;
 	}
 
-	lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
-			sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
-	if (!lcore_states) {
-		EAL_LOG(ERR, "error allocating core states array");
-		goto fail_mem;
+	if (lcore_states == NULL)
+		RTE_LCORE_VAR_ALLOC(lcore_states);
+	else {
+		struct core_state *cs;
+		RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+			memset(cs, 0, sizeof(struct core_state));
 	}
 
 	int i;
@@ -122,7 +124,6 @@ rte_service_init(void)
 	return 0;
 fail_mem:
 	rte_free(rte_services);
-	rte_free(lcore_states);
 	return -ENOMEM;
 }
 
@@ -136,7 +137,6 @@ rte_service_finalize(void)
 	rte_eal_mp_wait_lcore();
 
 	rte_free(rte_services);
-	rte_free(lcore_states);
 
 	rte_service_library_initialized = 0;
 }
@@ -286,7 +286,6 @@ rte_service_component_register(const struct rte_service_spec *spec,
 int32_t
 rte_service_component_unregister(uint32_t id)
 {
-	uint32_t i;
 	struct rte_service_spec_impl *s;
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
 
@@ -294,9 +293,10 @@ rte_service_component_unregister(uint32_t id)
 
 	s->internal_flags &= ~(SERVICE_F_REGISTERED);
 
+	struct core_state *cs;
 	/* clear the run-bit in all cores */
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		lcore_states[i].service_mask &= ~(UINT64_C(1) << id);
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		cs->service_mask &= ~(UINT64_C(1) << id);
 
 	memset(&rte_services[id], 0, sizeof(struct rte_service_spec_impl));
 
@@ -454,7 +454,10 @@ rte_service_may_be_active(uint32_t id)
 		return -EINVAL;
 
 	for (i = 0; i < lcore_count; i++) {
-		if (lcore_states[ids[i]].service_active_on_lcore[id])
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(ids[i], lcore_states);
+
+		if (cs->service_active_on_lcore[id])
 			return 1;
 	}
 
@@ -464,7 +467,7 @@ rte_service_may_be_active(uint32_t id)
 int32_t
 rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)
 {
-	struct core_state *cs = &lcore_states[rte_lcore_id()];
+	struct core_state *cs =	RTE_LCORE_VAR_PTR(lcore_states);
 	struct rte_service_spec_impl *s;
 
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
@@ -486,8 +489,7 @@ service_runner_func(void *arg)
 {
 	RTE_SET_USED(arg);
 	uint8_t i;
-	const int lcore = rte_lcore_id();
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_PTR(lcore_states);
 
 	rte_atomic_store_explicit(&cs->thread_active, 1, rte_memory_order_seq_cst);
 
@@ -533,13 +535,17 @@ service_runner_func(void *arg)
 int32_t
 rte_service_lcore_may_be_active(uint32_t lcore)
 {
-	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
+	struct core_state *cs;
+
+	if (lcore >= RTE_MAX_LCORE || !cs->is_service_core)
 		return -EINVAL;
 
+	cs = RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+
 	/* Load thread_active using ACQUIRE to avoid instructions dependent on
 	 * the result being re-ordered before this load completes.
 	 */
-	return rte_atomic_load_explicit(&lcore_states[lcore].thread_active,
+	return rte_atomic_load_explicit(&cs->thread_active,
 			       rte_memory_order_acquire);
 }
 
@@ -547,9 +553,11 @@ int32_t
 rte_service_lcore_count(void)
 {
 	int32_t count = 0;
-	uint32_t i;
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		count += lcore_states[i].is_service_core;
+
+	struct core_state *cs;
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		count += cs->is_service_core;
+
 	return count;
 }
 
@@ -566,7 +574,8 @@ rte_service_lcore_list(uint32_t array[], uint32_t n)
 	uint32_t i;
 	uint32_t idx = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		struct core_state *cs = &lcore_states[i];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(i, lcore_states);
 		if (cs->is_service_core) {
 			array[idx] = i;
 			idx++;
@@ -582,7 +591,7 @@ rte_service_lcore_count_services(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -634,30 +643,31 @@ rte_service_start_with_defaults(void)
 static int32_t
 service_update(uint32_t sid, uint32_t lcore, uint32_t *set, uint32_t *enabled)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+
 	/* validate ID, or return error value */
 	if (!service_valid(sid) || lcore >= RTE_MAX_LCORE ||
-			!lcore_states[lcore].is_service_core)
+			!cs->is_service_core)
 		return -EINVAL;
 
 	uint64_t sid_mask = UINT64_C(1) << sid;
 	if (set) {
-		uint64_t lcore_mapped = lcore_states[lcore].service_mask &
-			sid_mask;
+		uint64_t lcore_mapped = cs->service_mask & sid_mask;
 
 		if (*set && !lcore_mapped) {
-			lcore_states[lcore].service_mask |= sid_mask;
+			cs->service_mask |= sid_mask;
 			rte_atomic_fetch_add_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 		if (!*set && lcore_mapped) {
-			lcore_states[lcore].service_mask &= ~(sid_mask);
+			cs->service_mask &= ~(sid_mask);
 			rte_atomic_fetch_sub_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 	}
 
 	if (enabled)
-		*enabled = !!(lcore_states[lcore].service_mask & (sid_mask));
+		*enabled = !!(cs->service_mask & (sid_mask));
 
 	return 0;
 }
@@ -685,13 +695,14 @@ set_lcore_state(uint32_t lcore, int32_t state)
 {
 	/* mark core state in hugepage backed config */
 	struct rte_config *cfg = rte_eal_get_configuration();
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 	cfg->lcore_role[lcore] = state;
 
 	/* mark state in process local lcore_config */
 	lcore_config[lcore].core_role = state;
 
 	/* update per-lcore optimized state tracking */
-	lcore_states[lcore].is_service_core = (state == ROLE_SERVICE);
+	cs->is_service_core = (state == ROLE_SERVICE);
 
 	rte_eal_trace_service_lcore_state_change(lcore, state);
 }
@@ -702,14 +713,16 @@ rte_service_lcore_reset_all(void)
 	/* loop over cores, reset all to mask 0 */
 	uint32_t i;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		if (lcore_states[i].is_service_core) {
-			lcore_states[i].service_mask = 0;
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(i, lcore_states);
+		if (cs->is_service_core) {
+			cs->service_mask = 0;
 			set_lcore_state(i, ROLE_RTE);
 			/* runstate act as guard variable Use
 			 * store-release memory order here to synchronize
 			 * with load-acquire in runstate read functions.
 			 */
-			rte_atomic_store_explicit(&lcore_states[i].runstate,
+			rte_atomic_store_explicit(&cs->runstate,
 				RUNSTATE_STOPPED, rte_memory_order_release);
 		}
 	}
@@ -725,17 +738,19 @@ rte_service_lcore_add(uint32_t lcore)
 {
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
-	if (lcore_states[lcore].is_service_core)
+
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+	if (cs->is_service_core)
 		return -EALREADY;
 
 	set_lcore_state(lcore, ROLE_SERVICE);
 
 	/* ensure that after adding a core the mask and state are defaults */
-	lcore_states[lcore].service_mask = 0;
+	cs->service_mask = 0;
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	return rte_eal_wait_lcore(lcore);
@@ -747,7 +762,7 @@ rte_service_lcore_del(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -771,7 +786,7 @@ rte_service_lcore_start(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -801,6 +816,8 @@ rte_service_lcore_start(uint32_t lcore)
 int32_t
 rte_service_lcore_stop(uint32_t lcore)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
@@ -808,12 +825,11 @@ rte_service_lcore_stop(uint32_t lcore)
 	 * memory order here to synchronize with store-release
 	 * in runstate update functions.
 	 */
-	if (rte_atomic_load_explicit(&lcore_states[lcore].runstate, rte_memory_order_acquire) ==
+	if (rte_atomic_load_explicit(&cs->runstate, rte_memory_order_acquire) ==
 			RUNSTATE_STOPPED)
 		return -EALREADY;
 
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
 	uint64_t service_mask = cs->service_mask;
 
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
@@ -834,7 +850,7 @@ rte_service_lcore_stop(uint32_t lcore)
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	rte_eal_trace_service_lcore_stop(lcore);
@@ -845,7 +861,7 @@ rte_service_lcore_stop(uint32_t lcore)
 static uint64_t
 lcore_attr_get_loops(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->loops, rte_memory_order_relaxed);
 }
@@ -853,7 +869,7 @@ lcore_attr_get_loops(unsigned int lcore)
 static uint64_t
 lcore_attr_get_cycles(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->cycles, rte_memory_order_relaxed);
 }
@@ -861,7 +877,7 @@ lcore_attr_get_cycles(unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].calls,
 		rte_memory_order_relaxed);
@@ -870,7 +886,7 @@ lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_cycles(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].cycles,
 		rte_memory_order_relaxed);
@@ -886,7 +902,10 @@ attr_get(uint32_t id, lcore_attr_get_fun lcore_attr_get)
 	uint64_t sum = 0;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		if (lcore_states[lcore].is_service_core)
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
+
+		if (cs->is_service_core)
 			sum += lcore_attr_get(id, lcore);
 	}
 
@@ -930,12 +949,11 @@ int32_t
 rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 			   uint64_t *attr_value)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE || !attr_value)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -960,7 +978,8 @@ rte_service_attr_reset_all(uint32_t id)
 		return -EINVAL;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		struct core_state *cs = &lcore_states[lcore];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 		cs->service_stats[id] = (struct service_stats) {};
 	}
@@ -971,12 +990,11 @@ rte_service_attr_reset_all(uint32_t id)
 int32_t
 rte_service_lcore_attr_reset_all(uint32_t lcore)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -1011,7 +1029,7 @@ static void
 service_dump_calls_per_lcore(FILE *f, uint32_t lcore)
 {
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);
 
 	fprintf(f, "%02d\t", lcore);
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v4 6/6] eal: keep per-lcore power intrinsics state in lcore variable
  2024-02-25 15:03           ` [RFC v4 0/6] Lcore variables Mattias Rönnblom
                               ` (4 preceding siblings ...)
  2024-02-25 15:03             ` [RFC v4 5/6] service: " Mattias Rönnblom
@ 2024-02-25 15:03             ` Mattias Rönnblom
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-25 15:03 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Keep per-lcore power intrinsics state in an lcore variable to reduce
cache working set size and avoid any CPU next-line prefetching causing
false sharing.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 lib/eal/x86/rte_power_intrinsics.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index 532a2e646b..f4659af77e 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -4,6 +4,7 @@
 
 #include <rte_common.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_rtm.h>
 #include <rte_spinlock.h>
 
@@ -12,10 +13,14 @@
 /*
  * Per-lcore structure holding current status of C0.2 sleeps.
  */
-static struct power_wait_status {
+struct power_wait_status {
 	rte_spinlock_t lock;
 	volatile void *monitor_addr; /**< NULL if not currently sleeping */
-} __rte_cache_aligned wait_status[RTE_MAX_LCORE];
+};
+
+RTE_LCORE_VAR_HANDLE(struct power_wait_status, wait_status);
+
+RTE_LCORE_VAR_INIT(wait_status);
 
 /*
  * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
@@ -170,7 +175,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 	if (pmc->fn == NULL)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_PTR(lcore_id, wait_status);
 
 	/* update sleep address */
 	rte_spinlock_lock(&s->lock);
@@ -262,7 +267,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 	if (lcore_id >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_PTR(lcore_id, wait_status);
 
 	/*
 	 * There is a race condition between sleep, wakeup and locking, but we
@@ -301,8 +306,8 @@ int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
 {
-	const unsigned int lcore_id = rte_lcore_id();
-	struct power_wait_status *s = &wait_status[lcore_id];
+	struct power_wait_status *s = RTE_LCORE_VAR_PTR(wait_status);
+
 	uint32_t i, rc;
 
 	/* check if supported */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC v4 5/6] service: keep per-lcore state in lcore variable
  2024-02-25 15:03             ` [RFC v4 5/6] service: " Mattias Rönnblom
@ 2024-02-25 16:28               ` Mattias Rönnblom
  0 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-25 16:28 UTC (permalink / raw)
  To: Mattias Rönnblom, dev; +Cc: Morten Brørup, Stephen Hemminger

On 2024-02-25 16:03, Mattias Rönnblom wrote:
> Replace static array of cache-aligned structs with an lcore variable,
> to slightly benefit code simplicity and performance.
> 
> RFC v4:
>   * Remove strange-looking lcore value lookup potentially containing
>     invalid lcore id. (Morten Brørup)
>   * Replace misplaced tab with space. (Morten Brørup)
> 
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> ---
>   lib/eal/common/rte_service.c | 120 ++++++++++++++++++++---------------
>   1 file changed, 69 insertions(+), 51 deletions(-)
> 
> diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
> index d959c91459..7fbae704ed 100644
> --- a/lib/eal/common/rte_service.c
> +++ b/lib/eal/common/rte_service.c
> @@ -11,6 +11,7 @@
>   
>   #include <eal_trace_internal.h>
>   #include <rte_lcore.h>
> +#include <rte_lcore_var.h>
>   #include <rte_branch_prediction.h>
>   #include <rte_common.h>
>   #include <rte_cycles.h>
> @@ -75,7 +76,7 @@ struct core_state {
>   
>   static uint32_t rte_service_count;
>   static struct rte_service_spec_impl *rte_services;
> -static struct core_state *lcore_states;
> +static RTE_LCORE_VAR_HANDLE(struct core_state, lcore_states);
>   static uint32_t rte_service_library_initialized;
>   
>   int32_t
> @@ -101,11 +102,12 @@ rte_service_init(void)
>   		goto fail_mem;
>   	}
>   
> -	lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
> -			sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
> -	if (!lcore_states) {
> -		EAL_LOG(ERR, "error allocating core states array");
> -		goto fail_mem;
> +	if (lcore_states == NULL)
> +		RTE_LCORE_VAR_ALLOC(lcore_states);
> +	else {
> +		struct core_state *cs;
> +		RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
> +			memset(cs, 0, sizeof(struct core_state));
>   	}
>   
>   	int i;
> @@ -122,7 +124,6 @@ rte_service_init(void)
>   	return 0;
>   fail_mem:
>   	rte_free(rte_services);
> -	rte_free(lcore_states);
>   	return -ENOMEM;
>   }
>   
> @@ -136,7 +137,6 @@ rte_service_finalize(void)
>   	rte_eal_mp_wait_lcore();
>   
>   	rte_free(rte_services);
> -	rte_free(lcore_states);
>   
>   	rte_service_library_initialized = 0;
>   }
> @@ -286,7 +286,6 @@ rte_service_component_register(const struct rte_service_spec *spec,
>   int32_t
>   rte_service_component_unregister(uint32_t id)
>   {
> -	uint32_t i;
>   	struct rte_service_spec_impl *s;
>   	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
>   
> @@ -294,9 +293,10 @@ rte_service_component_unregister(uint32_t id)
>   
>   	s->internal_flags &= ~(SERVICE_F_REGISTERED);
>   
> +	struct core_state *cs;
>   	/* clear the run-bit in all cores */
> -	for (i = 0; i < RTE_MAX_LCORE; i++)
> -		lcore_states[i].service_mask &= ~(UINT64_C(1) << id);
> +	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
> +		cs->service_mask &= ~(UINT64_C(1) << id);
>   
>   	memset(&rte_services[id], 0, sizeof(struct rte_service_spec_impl));
>   
> @@ -454,7 +454,10 @@ rte_service_may_be_active(uint32_t id)
>   		return -EINVAL;
>   
>   	for (i = 0; i < lcore_count; i++) {
> -		if (lcore_states[ids[i]].service_active_on_lcore[id])
> +		struct core_state *cs =
> +			RTE_LCORE_VAR_LCORE_PTR(ids[i], lcore_states);
> +
> +		if (cs->service_active_on_lcore[id])
>   			return 1;
>   	}
>   
> @@ -464,7 +467,7 @@ rte_service_may_be_active(uint32_t id)
>   int32_t
>   rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)
>   {
> -	struct core_state *cs = &lcore_states[rte_lcore_id()];
> +	struct core_state *cs =	RTE_LCORE_VAR_PTR(lcore_states);
>   	struct rte_service_spec_impl *s;
>   
>   	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
> @@ -486,8 +489,7 @@ service_runner_func(void *arg)
>   {
>   	RTE_SET_USED(arg);
>   	uint8_t i;
> -	const int lcore = rte_lcore_id();
> -	struct core_state *cs = &lcore_states[lcore];
> +	struct core_state *cs = RTE_LCORE_VAR_PTR(lcore_states);
>   
>   	rte_atomic_store_explicit(&cs->thread_active, 1, rte_memory_order_seq_cst);
>   
> @@ -533,13 +535,17 @@ service_runner_func(void *arg)
>   int32_t
>   rte_service_lcore_may_be_active(uint32_t lcore)
>   {
> -	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
> +	struct core_state *cs;
> +
> +	if (lcore >= RTE_MAX_LCORE || !cs->is_service_core)

This doesn't work, since 'cs' is not yet initialized. I'll fix it in v5.
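
A corrected ordering could look like the sketch below (the actual v5
change may differ):

	struct core_state *cs;

	if (lcore >= RTE_MAX_LCORE)
		return -EINVAL;

	cs = RTE_LCORE_VAR_LCORE_PTR(lcore, lcore_states);

	if (!cs->is_service_core)
		return -EINVAL;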

<snip>

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [RFC v4 1/6] eal: add static per-lcore memory allocation facility
  2024-02-25 15:03             ` [RFC v4 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-02-27  9:58               ` Morten Brørup
  2024-02-27 13:44                 ` Mattias Rönnblom
  2024-02-28 10:09               ` [RFC v5 0/6] Lcore variables Mattias Rönnblom
  1 sibling, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-02-27  9:58 UTC (permalink / raw)
  To: Mattias Rönnblom, dev; +Cc: hofors, Stephen Hemminger

> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> Sent: Sunday, 25 February 2024 16.03

[...]

> +static void *
> +lcore_var_alloc(size_t size, size_t align)
> +{
> +	void *handle;
> +	void *value;
> +
> +	offset = RTE_ALIGN_CEIL(offset, align);
> +
> +	if (offset + size > RTE_MAX_LCORE_VAR) {

This would be the usual comparison:
if (lcore_buffer == NULL) {

> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> +					     LCORE_BUFFER_SIZE);
> +		RTE_VERIFY(lcore_buffer != NULL);
> +
> +		offset = 0;
> +	}

[...]

> +/**
> + * Define a lcore variable handle.
> + *
> + * This macro defines a variable which is used as a handle to access
> + * the various per-lcore id instances of a per-lcore id variable.
> + *
> + * The aim with this macro is to make clear at the point of
> + * declaration that this is an lcore handler, rather than a regular
> + * pointer.
> + *
> + * Add @b static as a prefix in case the lcore variable are only to be
> + * accessed from a particular translation unit.
> + */
> +#define RTE_LCORE_VAR_HANDLE(type, name)	\
> +	RTE_LCORE_VAR_HANDLE_TYPE(type) name
> +

The parameter is "name" here, and "handle" in other macros.
Just mentioning to make sure you thought about it.

> +/**
> + * Get pointer to lcore variable instance with the specified lcore id.
> + */
> +#define RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)			\
> +	((typeof(handle))__rte_lcore_var_lcore_ptr(lcore_id, handle))
> +
> +/**
> + * Get value of a lcore variable instance of the specified lcore id.
> + */
> +#define RTE_LCORE_VAR_LCORE_GET(lcore_id, handle)	\
> +	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)))
> +
> +/**
> + * Set the value of a lcore variable instance of the specified lcore id.
> + */
> +#define RTE_LCORE_VAR_LCORE_SET(lcore_id, handle, value)		\
> +	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)) = (value))

I still think RTE_LCORE_VAR[_LCORE]_PTR() suffice, and RTE_LCORE_VAR[_LCORE]_GET/SET are superfluous.
But I don't insist on their removal. :-)

With or without suggested changes...

For the series,
Acked-by: Morten Brørup <mb@smartsharesystems.com>


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC v4 1/6] eal: add static per-lcore memory allocation facility
  2024-02-27  9:58               ` Morten Brørup
@ 2024-02-27 13:44                 ` Mattias Rönnblom
  2024-02-27 15:05                   ` Morten Brørup
  0 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-27 13:44 UTC (permalink / raw)
  To: Morten Brørup, Mattias Rönnblom, dev; +Cc: Stephen Hemminger

On 2024-02-27 10:58, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
>> Sent: Sunday, 25 February 2024 16.03
> 
> [...]
> 
>> +static void *
>> +lcore_var_alloc(size_t size, size_t align)
>> +{
>> +	void *handle;
>> +	void *value;
>> +
>> +	offset = RTE_ALIGN_CEIL(offset, align);
>> +
>> +	if (offset + size > RTE_MAX_LCORE_VAR) {
> 
> This would be the usual comparison:
> if (lcore_buffer == NULL) {
> 
>> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
>> +					     LCORE_BUFFER_SIZE);
>> +		RTE_VERIFY(lcore_buffer != NULL);
>> +
>> +		offset = 0;
>> +	}
> 
> [...]
> 
>> +/**
>> + * Define a lcore variable handle.
>> + *
>> + * This macro defines a variable which is used as a handle to access
>> + * the various per-lcore id instances of a per-lcore id variable.
>> + *
>> + * The aim with this macro is to make clear at the point of
>> + * declaration that this is an lcore handler, rather than a regular
>> + * pointer.
>> + *
>> + * Add @b static as a prefix in case the lcore variable are only to be
>> + * accessed from a particular translation unit.
>> + */
>> +#define RTE_LCORE_VAR_HANDLE(type, name)	\
>> +	RTE_LCORE_VAR_HANDLE_TYPE(type) name
>> +
> 
> The parameter is "name" here, and "handle" in other macros.
> Just mentioning to make sure you thought about it.
> 
>> +/**
>> + * Get pointer to lcore variable instance with the specified lcore id.
>> + */
>> +#define RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)			\
>> +	((typeof(handle))__rte_lcore_var_lcore_ptr(lcore_id, handle))
>> +
>> +/**
>> + * Get value of a lcore variable instance of the specified lcore id.
>> + */
>> +#define RTE_LCORE_VAR_LCORE_GET(lcore_id, handle)	\
>> +	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)))
>> +
>> +/**
>> + * Set the value of a lcore variable instance of the specified lcore id.
>> + */
>> +#define RTE_LCORE_VAR_LCORE_SET(lcore_id, handle, value)		\
>> +	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)) = (value))
> 
> I still think RTE_LCORE_VAR[_LCORE]_PTR() suffice, and RTE_LCORE_VAR[_LCORE]_GET/SET are superfluous.
> But I don't insist on their removal. :-)
> 

I'll remove them. One can always add them later. Nothing I've seen in 
the DPDK code base so far has called for their use.

Should the RTE_LCORE_VAR_PTR() be renamed RTE_LCORE_VAR_VALUE() (and 
still return a pointer, obviously)? "PTR" seems a little superfluous 
(Hungarian). "RTE_LCORE_VAR()" would be short, but not very descriptive.

> With or without suggested changes...
> 
> For the series,
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> 

Thanks for all help.

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [RFC v4 1/6] eal: add static per-lcore memory allocation facility
  2024-02-27 13:44                 ` Mattias Rönnblom
@ 2024-02-27 15:05                   ` Morten Brørup
  2024-02-27 16:27                     ` Mattias Rönnblom
  0 siblings, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-02-27 15:05 UTC (permalink / raw)
  To: Mattias Rönnblom, Mattias Rönnblom, dev; +Cc: Stephen Hemminger

> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> Sent: Tuesday, 27 February 2024 14.44
> 
> On 2024-02-27 10:58, Morten Brørup wrote:
> >> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> >> Sent: Sunday, 25 February 2024 16.03
> >
> > [...]
> >
> >> +static void *
> >> +lcore_var_alloc(size_t size, size_t align)
> >> +{
> >> +	void *handle;
> >> +	void *value;
> >> +
> >> +	offset = RTE_ALIGN_CEIL(offset, align);
> >> +
> >> +	if (offset + size > RTE_MAX_LCORE_VAR) {
> >
> > This would be the usual comparison:
> > if (lcore_buffer == NULL) {
> >
> >> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> >> +					     LCORE_BUFFER_SIZE);
> >> +		RTE_VERIFY(lcore_buffer != NULL);
> >> +
> >> +		offset = 0;
> >> +	}
> >
> > [...]
> >
> >> +/**
> >> + * Define a lcore variable handle.
> >> + *
> >> + * This macro defines a variable which is used as a handle to access
> >> + * the various per-lcore id instances of a per-lcore id variable.
> >> + *
> >> + * The aim with this macro is to make clear at the point of
> >> + * declaration that this is an lcore handler, rather than a regular
> >> + * pointer.
> >> + *
> >> + * Add @b static as a prefix in case the lcore variable are only to
> be
> >> + * accessed from a particular translation unit.
> >> + */
> >> +#define RTE_LCORE_VAR_HANDLE(type, name)	\
> >> +	RTE_LCORE_VAR_HANDLE_TYPE(type) name
> >> +
> >
> > The parameter is "name" here, and "handle" in other macros.
> > Just mentioning to make sure you thought about it.
> >
> >> +/**
> >> + * Get pointer to lcore variable instance with the specified lcore
> id.
> >> + */
> >> +#define RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)			\
> >> +	((typeof(handle))__rte_lcore_var_lcore_ptr(lcore_id, handle))
> >> +
> >> +/**
> >> + * Get value of a lcore variable instance of the specified lcore id.
> >> + */
> >> +#define RTE_LCORE_VAR_LCORE_GET(lcore_id, handle)	\
> >> +	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)))
> >> +
> >> +/**
> >> + * Set the value of a lcore variable instance of the specified lcore
> id.
> >> + */
> >> +#define RTE_LCORE_VAR_LCORE_SET(lcore_id, handle, value)		\
> >> +	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)) = (value))
> >
> > I still think RTE_LCORE_VAR[_LCORE]_PTR() suffice, and
> RTE_LCORE_VAR[_LCORE]_GET/SET are superfluous.
> > But I don't insist on their removal. :-)
> >
> 
> I'll remove them. One can always add them later. Nothing I've seen in
> the DPDK code base so far has been called for their use.
> 
> Should the RTE_LCORE_VAR_PTR() be renamed RTE_LCORE_VAR_VALUE() (and
> still return a pointer, obviously)? "PTR" seems a little superfluous
> (Hungarian). "RTE_LCORE_VAR()" would be short, but not very descriptive.

Good question...

I would try to align this name and the name of the associated foreach macro, currently RTE_LCORE_VAR_FOREACH_VALUE(var, handle).

It seems confusing to have a macro named _VALUE() returning a pointer.
(Which is why I also dislike the foreach macro's current name and "var" parameter name.)

If it is supposed to be frequently used, a shorter name is preferable.
Which leans towards RTE_LCORE_VAR().

And then RTE_FOREACH_LCORE_VAR(iterator, handle) or RTE_LCORE_VAR_FOREACH(iterator, handle).

But then it is not obvious from the name that they operate on pointers.
We don't use Hungarian style in DPDK, so perhaps that is acceptable.


Your conclusion that GET/SET are not generally required inspired me for another idea...
Maybe returning a pointer is not the right thing to do!

I wonder if there are any obstacles to generally dereferencing the lcore variable pointer, like this:

#define RTE_LCORE_VAR_LCORE(lcore_id, handle) \
	(*(typeof(handle))__rte_lcore_var_lcore_ptr(lcore_id, handle))

It would work for both get and set:
RTE_LCORE_VAR(foo) = RTE_LCORE_VAR(bar);

And also for functions being passed the address of the variable.
E.g. memset(&RTE_LCORE_VAR(foo), ...) would expand to:
memset(&(*(typeof(foo))__rte_lcore_var_lcore_ptr(rte_lcore_id(), foo)), ...);


One more thought, not related to the above discussion:

The TLS per-lcore variables are built with "per_lcore_" prefix added to the names, like this:
#define RTE_DEFINE_PER_LCORE(type, name) \
	__thread __typeof__(type) per_lcore_##name

Should the lcore variables have something similar, i.e.:
#define RTE_LCORE_VAR_HANDLE(type, name) \
	RTE_LCORE_VAR_HANDLE_TYPE(type) lcore_var_##name


> 
> > With or without suggested changes...
> >
> > For the series,
> > Acked-by: Morten Brørup <mb@smartsharesystems.com>
> >
> 
> Thanks for all help.

Thank you for the detailed consideration of my feedback.


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC v4 1/6] eal: add static per-lcore memory allocation facility
  2024-02-27 15:05                   ` Morten Brørup
@ 2024-02-27 16:27                     ` Mattias Rönnblom
  2024-02-27 16:51                       ` Morten Brørup
  0 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-27 16:27 UTC (permalink / raw)
  To: Morten Brørup, Mattias Rönnblom, dev; +Cc: Stephen Hemminger

On 2024-02-27 16:05, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
>> Sent: Tuesday, 27 February 2024 14.44
>>
>> On 2024-02-27 10:58, Morten Brørup wrote:
>>>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
>>>> Sent: Sunday, 25 February 2024 16.03
>>>
>>> [...]
>>>
>>>> +static void *
>>>> +lcore_var_alloc(size_t size, size_t align)
>>>> +{
>>>> +	void *handle;
>>>> +	void *value;
>>>> +
>>>> +	offset = RTE_ALIGN_CEIL(offset, align);
>>>> +
>>>> +	if (offset + size > RTE_MAX_LCORE_VAR) {
>>>
>>> This would be the usual comparison:
>>> if (lcore_buffer == NULL) {
>>>
>>>> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
>>>> +					     LCORE_BUFFER_SIZE);
>>>> +		RTE_VERIFY(lcore_buffer != NULL);
>>>> +
>>>> +		offset = 0;
>>>> +	}
>>>
>>> [...]
>>>
>>>> +/**
>>>> + * Define a lcore variable handle.
>>>> + *
>>>> + * This macro defines a variable which is used as a handle to access
>>>> + * the various per-lcore id instances of a per-lcore id variable.
>>>> + *
>>>> + * The aim with this macro is to make clear at the point of
>>>> + * declaration that this is an lcore handler, rather than a regular
>>>> + * pointer.
>>>> + *
>>>> + * Add @b static as a prefix in case the lcore variable are only to
>> be
>>>> + * accessed from a particular translation unit.
>>>> + */
>>>> +#define RTE_LCORE_VAR_HANDLE(type, name)	\
>>>> +	RTE_LCORE_VAR_HANDLE_TYPE(type) name
>>>> +
>>>
>>> The parameter is "name" here, and "handle" in other macros.
>>> Just mentioning to make sure you thought about it.
>>>
>>>> +/**
>>>> + * Get pointer to lcore variable instance with the specified lcore
>> id.
>>>> + */
>>>> +#define RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)			\
>>>> +	((typeof(handle))__rte_lcore_var_lcore_ptr(lcore_id, handle))
>>>> +
>>>> +/**
>>>> + * Get value of a lcore variable instance of the specified lcore id.
>>>> + */
>>>> +#define RTE_LCORE_VAR_LCORE_GET(lcore_id, handle)	\
>>>> +	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)))
>>>> +
>>>> +/**
>>>> + * Set the value of a lcore variable instance of the specified lcore
>> id.
>>>> + */
>>>> +#define RTE_LCORE_VAR_LCORE_SET(lcore_id, handle, value)		\
>>>> +	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)) = (value))
>>>
>>> I still think RTE_LCORE_VAR[_LCORE]_PTR() suffice, and
>> RTE_LCORE_VAR[_LCORE]_GET/SET are superfluous.
>>> But I don't insist on their removal. :-)
>>>
>>
>> I'll remove them. One can always add them later. Nothing I've seen in
>> the DPDK code base so far has called for their use.
>>
>> Should the RTE_LCORE_VAR_PTR() be renamed RTE_LCORE_VAR_VALUE() (and
>> still return a pointer, obviously)? "PTR" seems a little superfluous
>> (Hungarian). "RTE_LCORE_VAR()" would be short, but not very descriptive.
> 
> Good question...
> 
> I would try to align this name and the name of the associated foreach macro, currently RTE_LCORE_VAR_FOREACH_VALUE(var, handle).
> 
> It seems confusing to have a macro named _VALUE() returning a pointer.
> (Which is why I also dislike the foreach macro's current name and "var" parameter name.)
> 

Not sure I agree. In C, you often ask for a value and get a pointer to 
that value. I'll leave it VALUE() for now.

> If it is supposed to be frequently used, a shorter name is preferable.
> Which leans towards RTE_LCORE_VAR().
> 
> And then RTE_FOREACH_LCORE_VAR(iterator, handle) or RTE_LCORE_VAR_FOREACH(iterator, handle).
> 

RTE_LCORE_VAR_FOREACH was the original name, which was changed because 
it was confusingly close to RTE_LCORE_FOREACH(), but had different 
semantics in regard to which lcore ids are iterated over (EAL threads 
only, versus all lcore ids).

> But then it is not obvious from the name that they operate on pointers.
> We don't use Hungarian style in DPDK, so perhaps that is acceptable.
> 
> 
> Your conclusion that GET/SET are not generally required inspired me for another idea...
> Maybe returning a pointer is not the right thing to do!
> 
> I wonder if there are any obstacles to generally dereferencing the lcore variable pointer, like this:
> 
> #define RTE_LCORE_VAR_LCORE(lcore_id, handle) \
> 	(*(typeof(handle))__rte_lcore_var_lcore_ptr(lcore_id, handle))
> 
> It would work for both get and set:
> RTE_LCORE_VAR(foo) = RTE_LCORE_VAR(bar);
> 
> And also for functions being passed the address of the variable.
> E.g. memset(&RTE_LCORE_VAR(foo), ...) would expand to:
> memset(&(*(typeof(foo))__rte_lcore_var_lcore_ptr(rte_lcore_id(), foo)), ...);
> 
> 

The value is usually accessed by means of a pointer, so no need to 
return *pointer.

> One more thought, not related to the above discussion:
> 
> The TLS per-lcore variables are built with "per_lcore_" prefix added to the names, like this:
> #define RTE_DEFINE_PER_LCORE(type, name) \
> 	__thread __typeof__(type) per_lcore_##name
> 
> Should the lcore variables have something similar, i.e.:
> #define RTE_LCORE_VAR_HANDLE(type, name) \
> 	RTE_LCORE_VAR_HANDLE_TYPE(type) lcore_var_##name
> 

I started out with a prefix, but I removed it, since you may want to 
access (copy, assign) the handler pointer directly, and thus need to 
know its real name. Also, I didn't see why you need a prefix.

For example, consider a section of code where you want to use one of two 
variables depending on condition.

RTE_LCORE_VAR_HANDLE(int, actual);

if (something)
     actual = some_handle;
else
     actual = some_other_handle;

int *value = RTE_LCORE_VAR_VALUE(actual);

The above doesn't work if some_handle is actually named 
rte_lcore_var_some_handle or something like that.

If you want to add a prefix (for which there shouldn't be a need), you 
would need a macro RTE_LCORE_VAR_NAME() as well, so the user can derive 
the actual name (including the prefix).

> 
>>
>>> With or without suggested changes...
>>>
>>> For the series,
>>> Acked-by: Morten Brørup <mb@smartsharesystems.com>
>>>
>>
>> Thanks for all help.
> 
> Thank you for the detailed consideration of my feedback.
> 

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [RFC v4 1/6] eal: add static per-lcore memory allocation facility
  2024-02-27 16:27                     ` Mattias Rönnblom
@ 2024-02-27 16:51                       ` Morten Brørup
  0 siblings, 0 replies; 185+ messages in thread
From: Morten Brørup @ 2024-02-27 16:51 UTC (permalink / raw)
  To: Mattias Rönnblom, Mattias Rönnblom, dev; +Cc: Stephen Hemminger

> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> Sent: Tuesday, 27 February 2024 17.28
> 
> On 2024-02-27 16:05, Morten Brørup wrote:
> >> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> >> Sent: Tuesday, 27 February 2024 14.44
> >>
> >> On 2024-02-27 10:58, Morten Brørup wrote:
> >>>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> >>>> Sent: Sunday, 25 February 2024 16.03
> >>>
> >>> [...]
> >>>
> >>>> +static void *
> >>>> +lcore_var_alloc(size_t size, size_t align)
> >>>> +{
> >>>> +	void *handle;
> >>>> +	void *value;
> >>>> +
> >>>> +	offset = RTE_ALIGN_CEIL(offset, align);
> >>>> +
> >>>> +	if (offset + size > RTE_MAX_LCORE_VAR) {
> >>>
> >>> This would be the usual comparison:
> >>> if (lcore_buffer == NULL) {
> >>>
> >>>> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> >>>> +					     LCORE_BUFFER_SIZE);
> >>>> +		RTE_VERIFY(lcore_buffer != NULL);
> >>>> +
> >>>> +		offset = 0;
> >>>> +	}
> >>>
> >>> [...]
> >>>
> >>>> +/**
> >>>> + * Define a lcore variable handle.
> >>>> + *
> >>>> + * This macro defines a variable which is used as a handle to
> access
> >>>> + * the various per-lcore id instances of a per-lcore id variable.
> >>>> + *
> >>>> + * The aim with this macro is to make clear at the point of
> >>>> + * declaration that this is an lcore handler, rather than a
> regular
> >>>> + * pointer.
> >>>> + *
> >>>> + * Add @b static as a prefix in case the lcore variable are only
> to
> >> be
> >>>> + * accessed from a particular translation unit.
> >>>> + */
> >>>> +#define RTE_LCORE_VAR_HANDLE(type, name)	\
> >>>> +	RTE_LCORE_VAR_HANDLE_TYPE(type) name
> >>>> +
> >>>
> >>> The parameter is "name" here, and "handle" in other macros.
> >>> Just mentioning to make sure you thought about it.
> >>>
> >>>> +/**
> >>>> + * Get pointer to lcore variable instance with the specified lcore
> >> id.
> >>>> + */
> >>>> +#define RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)
> 	\
> >>>> +	((typeof(handle))__rte_lcore_var_lcore_ptr(lcore_id,
> handle))
> >>>> +
> >>>> +/**
> >>>> + * Get value of a lcore variable instance of the specified lcore
> id.
> >>>> + */
> >>>> +#define RTE_LCORE_VAR_LCORE_GET(lcore_id, handle)	\
> >>>> +	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)))
> >>>> +
> >>>> +/**
> >>>> + * Set the value of a lcore variable instance of the specified
> lcore
> >> id.
> >>>> + */
> >>>> +#define RTE_LCORE_VAR_LCORE_SET(lcore_id, handle, value)
> 	\
> >>>> +	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, handle)) = (value))
> >>>
> >>> I still think RTE_LCORE_VAR[_LCORE]_PTR() suffice, and
> >> RTE_LCORE_VAR[_LCORE]_GET/SET are superfluous.
> >>> But I don't insist on their removal. :-)
> >>>
> >>
> >> I'll remove them. One can always add them later. Nothing I've seen in
> >> the DPDK code base so far has called for their use.
> >>
> >> Should the RTE_LCORE_VAR_PTR() be renamed RTE_LCORE_VAR_VALUE() (and
> >> still return a pointer, obviously)? "PTR" seems a little superfluous
> >> (Hungarian). "RTE_LCORE_VAR()" would be short, but not very
> descriptive.
> >
> > Good question...
> >
> > I would try to align this name and the name of the associated foreach
> macro, currently RTE_LCORE_VAR_FOREACH_VALUE(var, handle).
> >
> > It seems confusing to have a macro named _VALUE() returning a pointer.
> > (Which is why I also dislike the foreach macro's current name and
> "var" parameter name.)
> >
> 
> Not sure I agree. In C, you often ask for a value and get a pointer to
> that value. I'll leave it VALUE() for now.

Yes, fopen() is an example of this.
But such functions don't have VALUE in their names.
(I'm not so worried about the "var" parameter name being confusing.)

You can leave it VALUE for now, just keep an open mind for changing it. :-)

> 
> > If it is supposed to be frequently used, a shorter name is preferable.
> > Which leans towards RTE_LCORE_VAR().
> >
> > And then RTE_FOREACH_LCORE_VAR(iterator, handle) or
> RTE_LCORE_VAR_FOREACH(iterator, handle).
> >
> 
> RTE_LCORE_VAR_FOREACH was the original name, which was changed because
> it was confusingly close to RTE_LCORE_FOREACH(), but had different
> semantics in regard to which lcore ids are iterated over (EAL threads
> only, versus all lcore ids).

I know I was going in circles here.
Perhaps when we get used to the lcore variables, the similar name might not be confusing anymore. I suppose this happened to me during the review discussions.
I don't have a solid answer, so I'm throwing the ball around to see how it bounces.

> 
> > But then it is not obvious from the name that they operate on
> pointers.
> > We don't use Hungarian style in DPDK, so perhaps that is acceptable.
> >
> >
> > Your conclusion that GET/SET are not generally required inspired me
> for another idea...
> > Maybe returning a pointer is not the right thing to do!
> >
> > I wonder if there are any obstacles to generally dereferencing the
> lcore variable pointer, like this:
> >
> > #define RTE_LCORE_VAR_LCORE(lcore_id, handle) \
> > 	(*(typeof(handle))__rte_lcore_var_lcore_ptr(lcore_id, handle))
> >
> > It would work for both get and set:
> > RTE_LCORE_VAR(foo) = RTE_LCORE_VAR(bar);
> >
> > And also for functions being passed the address of the variable.
> > E.g. memset(&RTE_LCORE_VAR(foo), ...) would expand to:
> > memset(&(*(typeof(foo))__rte_lcore_var_lcore_ptr(rte_lcore_id(),
> foo)), ...);
> >
> >
> 
> The value is usually accessed by means of a pointer, so no need to
> return *pointer.

OK. I suppose you have a pretty good overview of the relevant use cases by now.

> 
> > One more thought, not related to the above discussion:
> >
> > The TLS per-lcore variables are built with "per_lcore_" prefix added
> to the names, like this:
> > #define RTE_DEFINE_PER_LCORE(type, name) \
> > 	__thread __typeof__(type) per_lcore_##name
> >
> > Should the lcore variables have something similar, i.e.:
> > #define RTE_LCORE_VAR_HANDLE(type, name) \
> > 	RTE_LCORE_VAR_HANDLE_TYPE(type) lcore_var_##name
> >
> 
> I started out with a prefix, but I removed it, since you may want to
> access (copy, assign) the handler pointer directly, and thus need to
> know its real name. Also, I didn't see why you need a prefix.
> 
> For example, consider a section of code where you want to use one of two
> variables depending on condition.
> 
> RTE_LCORE_VAR_HANDLE(int, actual);
> 
> if (something)
>      actual = some_handle;
> else
>      actual = some_other_handle;
> 
> int *value = RTE_LCORE_VAR_VALUE(actual);
> 
> The above doesn't work if some_handle is actually named
> rte_lcore_var_some_handle or something like that.
> 
> If you want to add a prefix (for which there shouldn't be a need), you
> would need a macro RTE_LCORE_VAR_NAME() as well, so the user can derive
> the actual name (including the prefix).

Thanks for the detailed reply.
Let's not add a prefix.

> 
> >
> >>
> >>> With or without suggested changes...
> >>>
> >>> For the series,
> >>> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> >>>
> >>
> >> Thanks for all help.
> >
> > Thank you for the detailed consideration of my feedback.
> >

^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v5 0/6] Lcore variables
  2024-02-25 15:03             ` [RFC v4 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-02-27  9:58               ` Morten Brørup
@ 2024-02-28 10:09               ` Mattias Rönnblom
  2024-02-28 10:09                 ` [RFC v5 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
                                   ` (5 more replies)
  1 sibling, 6 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-28 10:09 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

This RFC presents a new API <rte_lcore_var.h> for static per-lcore id
data allocation.

Please refer to the <rte_lcore_var.h> API documentation for both a
rationale for this new API, and a comparison to the alternatives
available.

The adoption of this API would affect many different DPDK modules, but
the author updated only a few, mostly to serve as examples in this
RFC, and to iron out some, but surely not all, wrinkles in the API.

The question on how to best allocate static per-lcore memory has been
up several times on the dev mailing list, for example in the thread on
"random: use per lcore state" RFC by Stephen Hemminger.

Lcore variables are surely not the answer to all your per-lcore-data
needs, since they only allow for more-or-less static allocation. In the
author's opinion, they do however provide a reasonably simple, clean,
and seemingly very performant solution to a real problem.

One thing that is still unclear to the author is how this API relates
to a potential future per-lcore dynamic allocator (e.g., a per-lcore
heap).

Contrary to what the version.map edit suggests, this RFC is not meant
for a proposal for DPDK 24.03.

Mattias Rönnblom (6):
  eal: add static per-lcore memory allocation facility
  eal: add lcore variable test suite
  random: keep PRNG state in lcore variable
  power: keep per-lcore state in lcore variable
  service: keep per-lcore state in lcore variable
  eal: keep per-lcore power intrinsics state in lcore variable

 app/test/meson.build                  |   1 +
 app/test/test_lcore_var.c             | 432 ++++++++++++++++++++++++++
 config/rte_config.h                   |   1 +
 doc/api/doxy-api-index.md             |   1 +
 lib/eal/common/eal_common_lcore_var.c |  68 ++++
 lib/eal/common/meson.build            |   1 +
 lib/eal/common/rte_random.c           |  30 +-
 lib/eal/common/rte_service.c          | 118 ++++---
 lib/eal/include/meson.build           |   1 +
 lib/eal/include/rte_lcore_var.h       | 368 ++++++++++++++++++++++
 lib/eal/version.map                   |   4 +
 lib/eal/x86/rte_power_intrinsics.c    |  17 +-
 lib/power/rte_power_pmd_mgmt.c        |  36 +--
 13 files changed, 990 insertions(+), 88 deletions(-)
 create mode 100644 app/test/test_lcore_var.c
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v5 1/6] eal: add static per-lcore memory allocation facility
  2024-02-28 10:09               ` [RFC v5 0/6] Lcore variables Mattias Rönnblom
@ 2024-02-28 10:09                 ` Mattias Rönnblom
  2024-03-19 12:52                   ` Konstantin Ananyev
  2024-05-06  8:27                   ` [RFC v6 0/6] Lcore variables Mattias Rönnblom
  2024-02-28 10:09                 ` [RFC v5 2/6] eal: add lcore variable test suite Mattias Rönnblom
                                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-28 10:09 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Introduce DPDK per-lcore id variables, or lcore variables for short.

An lcore variable has one value for every current and future lcore
id-equipped thread.

The primary <rte_lcore_var.h> use case is for statically allocating
small chunks of often-used data, which is related logically, but where
there are performance benefits to reap from having updates being local
to an lcore.

Lcore variables are similar to thread-local storage (TLS, e.g., C11
_Thread_local), but decouple the values' lifetime from that of the
threads.

Lcore variables are also similar, in terms of functionality, to the
FreeBSD kernel's DPCPU_*() family of macros and the associated
build-time machinery. DPCPU uses linker scripts, which effectively
prevents the reuse of its otherwise seemingly viable approach.

The currently-prevailing way to solve the same problem as lcore
variables is to keep a module's per-lcore data as an
RTE_MAX_LCORE-sized array of cache-aligned, RTE_CACHE_GUARDed
structs. The benefit of lcore variables over this approach is that
data related to the same lcore is now close (spatially, in memory),
rather than data used by the same module, which in turn avoids
excessive use of padding and polluting caches with unused data.
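
As an illustration of the two approaches (a sketch only; the struct
and its fields are hypothetical, and the two variants are
alternatives, not meant to coexist):

	/* the currently-prevailing pattern */
	static struct mod_state {
		uint64_t counter;
		RTE_CACHE_GUARD;
	} __rte_cache_aligned mod_states[RTE_MAX_LCORE];

	/* the same state kept in an lcore variable; no padding or
	 * cache alignment needed to avoid false sharing
	 */
	struct mod_state {
		uint64_t counter;
	};
	static RTE_LCORE_VAR_HANDLE(struct mod_state, mod_states);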

RFC v5:
 * In Doxygen, consistently use @<cmd> (and not \<cmd>).
 * The RTE_LCORE_VAR_GET() and SET() convenience access macros
   covered an uncommon use case, where the lcore value is of a
   primitive type, rather than a struct, and are thus eliminated
   from the API. (Morten Brørup)
 * In the wake of the GET()/SET() removal, rename RTE_LCORE_VAR_PTR()
   to RTE_LCORE_VAR_VALUE().
 * The underscores are removed from __rte_lcore_var_lcore_ptr() to
   signal that this function is a part of the public API.
 * Macro arguments are documented.

RFC v4:
 * Replace large static array with libc heap-allocated memory. One
   implication of this change is that there no longer exists a fixed upper
   bound for the total amount of memory used by lcore variables.
   RTE_MAX_LCORE_VAR has changed meaning, and now represent the
   maximum size of any individual lcore variable value.
 * Fix issues in example. (Morten Brørup)
 * Improve access macro type checking. (Morten Brørup)
 * Refer to the lcore variable handle as "handle" and not "name" in
   various macros.
 * Document lack of thread safety in rte_lcore_var_alloc().
 * Provide API-level assurance the lcore variable handle is
   always non-NULL, to allow applications to use NULL to mean
   "not yet allocated".
 * Note zero-sized allocations are not allowed.
 * Give API-level guarantee the lcore variable values are zeroed.

RFC v3:
 * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
 * Update example to reflect FOREACH macro name change (in RFC v2).

RFC v2:
 * Use alignof to derive alignment requirements. (Morten Brørup)
 * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
   *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
 * Allow user-specified alignment, but limit max to cache line size.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 config/rte_config.h                   |   1 +
 doc/api/doxy-api-index.md             |   1 +
 lib/eal/common/eal_common_lcore_var.c |  68 +++++
 lib/eal/common/meson.build            |   1 +
 lib/eal/include/meson.build           |   1 +
 lib/eal/include/rte_lcore_var.h       | 368 ++++++++++++++++++++++++++
 lib/eal/version.map                   |   4 +
 7 files changed, 444 insertions(+)
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

diff --git a/config/rte_config.h b/config/rte_config.h
index d743a5c3d3..0dac33d3b9 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -41,6 +41,7 @@
 /* EAL defines */
 #define RTE_CACHE_GUARD_LINES 1
 #define RTE_MAX_HEAPS 32
+#define RTE_MAX_LCORE_VAR 1048576
 #define RTE_MAX_MEMSEG_LISTS 128
 #define RTE_MAX_MEMSEG_PER_LIST 8192
 #define RTE_MAX_MEM_MB_PER_LIST 32768
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 8c1eb8fafa..a3b8391570 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -99,6 +99,7 @@ The public API headers are grouped by topics:
   [interrupts](@ref rte_interrupts.h),
   [launch](@ref rte_launch.h),
   [lcore](@ref rte_lcore.h),
+  [lcore-variable](@ref rte_lcore_var.h),
   [per-lcore](@ref rte_per_lcore.h),
   [service cores](@ref rte_service.h),
   [keepalive](@ref rte_keepalive.h),
diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
new file mode 100644
index 0000000000..5c353ebd46
--- /dev/null
+++ b/lib/eal/common/eal_common_lcore_var.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_log.h>
+
+#include <rte_lcore_var.h>
+
+#include "eal_private.h"
+
+#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
+
+static void *lcore_buffer;
+static size_t offset = RTE_MAX_LCORE_VAR;
+
+static void *
+lcore_var_alloc(size_t size, size_t align)
+{
+	void *handle;
+	void *value;
+
+	offset = RTE_ALIGN_CEIL(offset, align);
+
+	if (offset + size > RTE_MAX_LCORE_VAR) {
+		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
+					     LCORE_BUFFER_SIZE);
+		RTE_VERIFY(lcore_buffer != NULL);
+
+		offset = 0;
+	}
+
+	handle = RTE_PTR_ADD(lcore_buffer, offset);
+
+	offset += size;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
+		memset(value, 0, size);
+
+	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
+		"%"PRIuPTR"-byte alignment", size, align);
+
+	return handle;
+}
+
+void *
+rte_lcore_var_alloc(size_t size, size_t align)
+{
+	/* Having the per-lcore buffer size aligned on the cache line
+	 * size, as well as having the base pointer cache-line aligned,
+	 * assures that aligned offsets also translate to aligned
+	 * pointers across all values.
+	 */
+	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
+	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
+	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
+
+	/* '0' means asking for worst-case alignment requirements */
+	if (align == 0)
+		align = alignof(max_align_t);
+
+	RTE_ASSERT(rte_is_power_of_2(align));
+
+	return lcore_var_alloc(size, align);
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 22a626ba6f..d41403680b 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -18,6 +18,7 @@ sources += files(
         'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
+        'eal_common_lcore_var.c',
         'eal_common_mcfg.c',
         'eal_common_memalloc.c',
         'eal_common_memory.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index e94b056d46..9449253e23 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -27,6 +27,7 @@ headers += files(
         'rte_keepalive.h',
         'rte_launch.h',
         'rte_lcore.h',
+        'rte_lcore_var.h',
         'rte_lock_annotations.h',
         'rte_malloc.h',
         'rte_mcslock.h',
diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
new file mode 100644
index 0000000000..1db479253d
--- /dev/null
+++ b/lib/eal/include/rte_lcore_var.h
@@ -0,0 +1,368 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#ifndef _RTE_LCORE_VAR_H_
+#define _RTE_LCORE_VAR_H_
+
+/**
+ * @file
+ *
+ * RTE Per-lcore id variables
+ *
+ * This API provides a mechanism to create and access per-lcore id
+ * variables in a space- and cycle-efficient manner.
+ *
+ * A per-lcore id variable (or lcore variable for short) has one value
+ * for each EAL thread and registered non-EAL thread. In other words,
+ * there's one copy of its value for each and every current and future
+ * lcore id-equipped thread, with the total number of copies amounting
+ * to @c RTE_MAX_LCORE.
+ *
+ * In order to access the values of an lcore variable, a handle is
+ * used. The type of the handle is a pointer to the value's type
+ * (e.g., for a @c uint32_t lcore variable, the handle is a
+ * <code>uint32_t *</code>). A handle may be passed between modules and
+ * threads just like any pointer, but its value is not the address of
+ * any particular object, but rather just an opaque identifier, stored
+ * in a typed pointer (to inform the access macros of the value type).
+ *
+ * @b Creation
+ *
+ * An lcore variable is created in two steps:
+ *  1. Define an lcore variable handle by using @ref RTE_LCORE_VAR_HANDLE.
+ *  2. Allocate lcore variable storage and initialize the handle with
+ *     a unique identifier by @ref RTE_LCORE_VAR_ALLOC or
+ *     @ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the
+ *     time of module initialization, but may be done at any time.
+ *
+ * An lcore variable is not tied to the owning thread's lifetime. It's
+ * available for use by any thread immediately after having been
+ * allocated, and continues to be available throughout the lifetime of
+ * the EAL.
+ *
+ * Lcore variables cannot and need not be freed.
+ *
+ * @b Access
+ *
+ * The value of any lcore variable for any lcore id may be accessed
+ * from any thread (including unregistered threads), but should
+ * generally only be *frequently* read from or written to by the owner.
+ *
+ * Values of the same lcore variable but owned by two different lcore
+ * ids *may* be frequently read or written by the owners without the
+ * risk of false sharing.
+ *
+ * An appropriate synchronization mechanism (e.g., atomics) should
+ * be employed to assure there are no data races between the owning
+ * thread and any non-owner threads accessing the same lcore variable
+ * instance.
+ *
+ * The value of the lcore variable for a particular lcore id is
+ * accessed using @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * A common pattern is for an EAL thread or a registered non-EAL
+ * thread to access its own lcore variable value, for which a
+ * short-hand exists in the form of @ref RTE_LCORE_VAR_VALUE.
+ *
+ * Although the handle (as defined by @ref RTE_LCORE_VAR_HANDLE) is a
+ * pointer with the same type as the value, it may not be directly
+ * dereferenced and must be treated as an opaque identifier.
+ *
+ * Lcore variable handles and value pointers may be freely passed
+ * between different threads.
+ *
+ * @b Storage
+ *
+ * An lcore variable's values may be of a primitive type like @c int,
+ * but would more typically be a @c struct. An application may choose
+ * to define an lcore variable handle, and then never actually
+ * allocate it.
+ *
+ * The lcore variable handle introduces a per-variable (not
+ * per-value/per-lcore id) overhead of @c sizeof(void *) bytes, so
+ * there are some memory footprint gains to be made by organizing all
+ * per-lcore id data for a particular module as one lcore variable
+ * (e.g., as a struct).
+ *
+ * The size of an lcore variable's value must be less than the DPDK
+ * build-time constant @c RTE_MAX_LCORE_VAR.
+ *
+ * The lcore variable values are stored in a series of lcore buffers,
+ * are allocated from the libc heap. Heap allocation failures are
+ * treated as fatal.
+ *
+ * Lcore variables should generally *not* be @ref __rte_cache_aligned
+ * and need *not* include a @ref RTE_CACHE_GUARD field, since these
+ * constructs are designed to avoid false sharing. In the case of an
+ * lcore variable instance, all nearby data structures should almost
+ * always be written to by a single thread (the lcore variable
+ * owner). Adding padding will increase the effective memory working
+ * set size, potentially reducing performance.
+ *
+ * @b Example
+ *
+ * Below is an example of the use of an lcore variable:
+ *
+ * @code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ * };
+ *
+ * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
+ *
+ * long foo_get_a_plus_b(void)
+ * {
+ *         struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(lcore_states);
+ *
+ *         return state->a + state->b;
+ * }
+ *
+ * RTE_INIT(rte_foo_init)
+ * {
+ *         RTE_LCORE_VAR_ALLOC(lcore_states);
+ *
+ *         struct foo_lcore_state *state;
+ *         RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
+ *                 (initialize 'state')
+ *         }
+ *
+ *         (other initialization)
+ * }
+ * @endcode
+ *
+ *
+ * @b Alternatives
+ *
+ * Lcore variables are designed to replace a pattern exemplified below:
+ * @code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ *         RTE_CACHE_GUARD;
+ * } __rte_cache_aligned;
+ *
+ * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
+ * @endcode
+ *
+ * This scheme is simple and effective, but has one drawback: the data
+ * is organized so that objects related to all lcores for a particular
+ * module are kept close in memory. At a bare minimum, this forces the
+ * use of cache-line alignment to avoid false sharing. With CPU
+ * hardware prefetching and memory loads resulting from speculative
+ * execution (functions which seemingly are getting more eager faster
+ * than they are getting more intelligent), one or more "guard" cache
+ * lines may be required to separate one lcore's data from another's.
+ *
+ * Lcore variables have the upside of working with, not against, the
+ * CPU's assumptions; for example, next-line prefetchers may well
+ * work the way their designers intended (i.e., to the benefit, not
+ * detriment, of system performance).
+ *
+ * Another alternative to @ref rte_lcore_var.h is the @ref
+ * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
+ * e.g., GCC __thread or C11 _Thread_local). The main differences
+ * between the various forms of TLS (e.g., @ref
+ * RTE_DEFINE_PER_LCORE or _Thread_local) and the use of lcore
+ * variables are:
+ *
+ *   * The existence and non-existence of a thread-local variable
+ *     instance follow that of the particular thread. The data cannot
+ *     be accessed before the thread has been created, nor after it
+ *     has exited. One effect of this is that thread-local variables
+ *     must be initialized in a "lazy" manner (e.g., at the point of
+ *     thread creation). Lcore variables may be accessed immediately
+ *     after having been allocated (which is usually prior to any
+ *     thread beyond the main thread running).
+ *   * A thread-local variable is duplicated across all threads in the
+ *     process, including unregistered non-EAL threads (i.e.,
+ *     "regular" threads). For DPDK applications heavily relying on
+ *     multi-threading (in conjunction with DPDK's "one thread per core"
+ *     pattern), either by having many concurrent threads or
+ *     creating/destroying threads at a high rate, an excessive use of
+ *     thread-local variables may cause inefficiencies (e.g.,
+ *     increased thread creation overhead due to thread-local storage
+ *     initialization or increased total RAM footprint). Lcore
+ *     variables *only* exist for threads with an lcore id, and thus
+ *     not for such "regular" threads.
+ *   * Whether data in thread-local storage may be shared between
+ *     threads (i.e., whether a pointer to a thread-local variable can
+ *     be passed to, and successfully dereferenced by, a non-owning
+ *     thread) depends on the details of the TLS implementation. With
+ *     GCC __thread and GCC _Thread_local, such data sharing is
+ *     supported. In the C11 standard, the result of accessing another
+ *     thread's _Thread_local object is implementation-defined. Lcore
+ *     variable instances may be accessed reliably by any thread.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stddef.h>
+#include <stdalign.h>
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_lcore.h>
+
+/**
+ * Given the lcore variable type, produces the type of the lcore
+ * variable handle.
+ */
+#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
+	type *
+
+/**
+ * Define a lcore variable handle.
+ *
+ * This macro defines a variable which is used as a handle to access
+ * the various per-lcore id instances of a per-lcore id variable.
+ *
+ * The aim with this macro is to make clear at the point of
+ * declaration that this is an lcore variable handle, rather than a regular
+ * pointer.
+ *
+ * Add @b static as a prefix in case the lcore variable is only to be
+ * accessed from a particular translation unit.
+ */
+#define RTE_LCORE_VAR_HANDLE(type, name)	\
+	RTE_LCORE_VAR_HANDLE_TYPE(type) name
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, align)	\
+	handle = rte_lcore_var_alloc(size, align)
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle,
+ * with values aligned for any type of object.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE(handle, size)	\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, 0)
+
+/**
+ * Allocate space for an lcore variable of the size and alignment requirements
+ * suggested by the handle pointer type, and initialize its handle.
+ */
+#define RTE_LCORE_VAR_ALLOC(handle)					\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, sizeof(*(handle)),	\
+				       alignof(typeof(*(handle))))
+
+/**
+ * Allocate an explicitly-sized, explicitly-aligned lcore variable by
+ * means of a @ref RTE_INIT constructor.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
+	}
+
+/**
+ * Allocate an explicitly-sized lcore variable by means of a @ref
+ * RTE_INIT constructor.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
+	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
+
+/**
+ * Allocate an lcore variable by means of a @ref RTE_INIT constructor.
+ */
+#define RTE_LCORE_VAR_INIT(name)					\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC(name);				\
+	}
+
+/**
+ * Get void pointer to lcore variable instance with the specified
+ * lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
+ */
+static inline void *
+rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
+{
+	return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
+}
+
+/**
+ * Get pointer to lcore variable instance with the specified lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle)			\
+	((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
+
+/**
+ * Get pointer to lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_VALUE(handle) \
+	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
+
+/**
+ * Iterate over each lcore id's value for an lcore variable.
+ *
+ * @param value
+ *   A pointer successively set to point to the lcore variable value
+ *   corresponding to every lcore id (up to @c RTE_MAX_LCORE).
+ * @param handle
+ *   The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle)			\
+	for (unsigned int lcore_id =					\
+		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
+	     lcore_id < RTE_MAX_LCORE;					\
+	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))
+
+/**
+ * Allocate space in the per-lcore id buffers for an lcore variable.
+ *
+ * The pointer returned is only an opaque identifier of the variable. To
+ * get an actual pointer to a particular instance of the variable use
+ * @ref RTE_LCORE_VAR_VALUE or @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * The lcore variable values' memory is set to zero.
+ *
+ * The allocation is always successful, barring a fatal exhaustion of
+ * the per-lcore id buffer space.
+ *
+ * rte_lcore_var_alloc() is not multi-thread safe.
+ *
+ * @param size
+ *   The size (in bytes) of the variable's per-lcore id value. Must be > 0.
+ * @param align
+ *   If 0, the values will be suitably aligned for any kind of type
+ *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
+ *   on a multiple of *align*, which must be a power of 2 and equal or
+ *   less than @c RTE_CACHE_LINE_SIZE.
+ * @return
+ *   The id of the variable, stored in a void pointer value. The value
+ *   is always non-NULL.
+ */
+__rte_experimental
+void *
+rte_lcore_var_alloc(size_t size, size_t align);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LCORE_VAR_H_ */
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 5e0cd47c82..e90b86115a 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -393,6 +393,9 @@ EXPERIMENTAL {
 	# added in 23.07
 	rte_memzone_max_get;
 	rte_memzone_max_set;
+
+	# added in 24.03
+	rte_lcore_var_alloc;
 };
 
 INTERNAL {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v5 2/6] eal: add lcore variable test suite
  2024-02-28 10:09               ` [RFC v5 0/6] Lcore variables Mattias Rönnblom
  2024-02-28 10:09                 ` [RFC v5 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-02-28 10:09                 ` Mattias Rönnblom
  2024-02-28 10:09                 ` [RFC v5 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
                                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-28 10:09 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Add test suite to exercise the <rte_lcore_var.h> API.

RFC v5:
 * Adapt tests to reflect the removal of the GET() and SET() macros.

RFC v4:
 * Check all lcore id's values for all variables in the many variables
   test case.
 * Introduce test case for max-sized lcore variables.

RFC v2:
 * Improve alignment-related test coverage.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 app/test/meson.build      |   1 +
 app/test/test_lcore_var.c | 433 ++++++++++++++++++++++++++
 2 files changed, 434 insertions(+)
 create mode 100644 app/test/test_lcore_var.c

diff --git a/app/test/meson.build b/app/test/meson.build
index 7d909039ae..846affa98c 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -103,6 +103,7 @@ source_file_deps = {
     'test_ipsec_sad.c': ['ipsec'],
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
+    'test_lcore_var.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var.c b/app/test/test_lcore_var.c
new file mode 100644
index 0000000000..e07d13460f
--- /dev/null
+++ b/app/test/test_lcore_var.c
@@ -0,0 +1,433 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_launch.h>
+#include <rte_lcore_var.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+#define MIN_LCORES 2
+
+RTE_LCORE_VAR_HANDLE(int, test_int);
+RTE_LCORE_VAR_HANDLE(char, test_char);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized);
+RTE_LCORE_VAR_HANDLE(short, test_short);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized_aligned);
+
+struct int_checker_state {
+	int old_value;
+	int new_value;
+	bool success;
+};
+
+static void
+rand_blk(void *blk, size_t size)
+{
+	size_t i;
+
+	for (i = 0; i < size; i++)
+		((unsigned char *)blk)[i] = (unsigned char)rte_rand();
+}
+
+static bool
+is_ptr_aligned(const void *ptr, size_t align)
+{
+	return ptr != NULL ? (uintptr_t)ptr % align == 0 : false;
+}
+
+static int
+check_int(void *arg)
+{
+	struct int_checker_state *state = arg;
+
+	int *ptr = RTE_LCORE_VAR_VALUE(test_int);
+
+	bool naturally_aligned = is_ptr_aligned(ptr, sizeof(int));
+
+	bool equal = *(RTE_LCORE_VAR_VALUE(test_int)) == state->old_value;
+
+	state->success = equal && naturally_aligned;
+
+	*ptr = state->new_value;
+
+	return 0;
+}
+
+RTE_LCORE_VAR_INIT(test_int);
+RTE_LCORE_VAR_INIT(test_char);
+RTE_LCORE_VAR_INIT_SIZE(test_long_sized, 32);
+RTE_LCORE_VAR_INIT(test_short);
+RTE_LCORE_VAR_INIT_SIZE_ALIGN(test_long_sized_aligned, sizeof(long),
+			      RTE_CACHE_LINE_SIZE);
+
+static int
+test_int_lvar(void)
+{
+	unsigned int lcore_id;
+
+	struct int_checker_state states[RTE_MAX_LCORE] = {};
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+
+		state->old_value = (int)rte_rand();
+		state->new_value = (int)rte_rand();
+
+		*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int) =
+			state->old_value;
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_int, &states[lcore_id], lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+		int value;
+
+		TEST_ASSERT(state->success, "Unexpected value "
+			    "encountered on lcore %d", lcore_id);
+
+		value = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int);
+		TEST_ASSERT_EQUAL(state->new_value, value,
+				  "Lcore %d failed to update int", lcore_id);
+	}
+
+	/* take the opportunity to test the foreach macro */
+	int *v;
+	lcore_id = 0;
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_int) {
+		TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
+				  "Unexpected value on lcore %d during "
+				  "iteration", lcore_id);
+		lcore_id++;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_sized_alignment(void)
+{
+	long *v;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized) {
+		TEST_ASSERT(is_ptr_aligned(v, alignof(long)),
+			    "Type-derived alignment failed");
+	}
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized_aligned) {
+		TEST_ASSERT(is_ptr_aligned(v, RTE_CACHE_LINE_SIZE),
+			    "Explicit alignment failed");
+	}
+
+	return TEST_SUCCESS;
+}
+
+/* private, larger, struct */
+#define TEST_STRUCT_DATA_SIZE 1234
+
+struct test_struct {
+	uint8_t data[TEST_STRUCT_DATA_SIZE];
+};
+
+static RTE_LCORE_VAR_HANDLE(char, before_struct);
+static RTE_LCORE_VAR_HANDLE(struct test_struct, test_struct);
+static RTE_LCORE_VAR_HANDLE(char, after_struct);
+
+struct struct_checker_state {
+	struct test_struct old_value;
+	struct test_struct new_value;
+	bool success;
+};
+
+static int check_struct(void *arg)
+{
+	struct struct_checker_state *state = arg;
+
+	struct test_struct *lcore_struct = RTE_LCORE_VAR_VALUE(test_struct);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_struct, alignof(struct test_struct));
+
+	bool equal = memcmp(lcore_struct->data, state->old_value.data,
+			    TEST_STRUCT_DATA_SIZE) == 0;
+
+	state->success = equal && properly_aligned;
+
+	memcpy(lcore_struct->data, state->new_value.data,
+	       TEST_STRUCT_DATA_SIZE);
+
+	return 0;
+}
+
+static int
+test_struct_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_struct);
+	RTE_LCORE_VAR_ALLOC(test_struct);
+	RTE_LCORE_VAR_ALLOC(after_struct);
+
+	struct struct_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+
+		rand_blk(state->old_value.data, TEST_STRUCT_DATA_SIZE);
+		rand_blk(state->new_value.data, TEST_STRUCT_DATA_SIZE);
+
+		memcpy(RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct)->data,
+		       state->old_value.data, TEST_STRUCT_DATA_SIZE);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_struct, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+		struct test_struct *lstruct =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = memcmp(lstruct->data, state->new_value.data,
+				    TEST_STRUCT_DATA_SIZE) == 0;
+
+		TEST_ASSERT(equal, "Lcore %d failed to update struct",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_struct);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_struct);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "struct was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "struct was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define TEST_ARRAY_SIZE 99
+
+typedef uint16_t test_array_t[TEST_ARRAY_SIZE];
+
+static void test_array_init_rand(test_array_t a)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		a[i] = (uint16_t)rte_rand();
+}
+
+static bool test_array_equal(test_array_t a, test_array_t b)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++) {
+		if (a[i] != b[i])
+			return false;
+	}
+	return true;
+}
+
+static void test_array_copy(test_array_t dst, const test_array_t src)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		dst[i] = src[i];
+}
+
+static RTE_LCORE_VAR_HANDLE(char, before_array);
+static RTE_LCORE_VAR_HANDLE(test_array_t, test_array);
+static RTE_LCORE_VAR_HANDLE(char, after_array);
+
+struct array_checker_state {
+	test_array_t old_value;
+	test_array_t new_value;
+	bool success;
+};
+
+static int check_array(void *arg)
+{
+	struct array_checker_state *state = arg;
+
+	test_array_t *lcore_array = RTE_LCORE_VAR_VALUE(test_array);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_array, alignof(test_array_t));
+
+	bool equal = test_array_equal(*lcore_array, state->old_value);
+
+	state->success = equal && properly_aligned;
+
+	test_array_copy(*lcore_array, state->new_value);
+
+	return 0;
+}
+
+static int
+test_array_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_array);
+	RTE_LCORE_VAR_ALLOC(test_array);
+	RTE_LCORE_VAR_ALLOC(after_array);
+
+	struct array_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+
+		test_array_init_rand(state->new_value);
+		test_array_init_rand(state->old_value);
+
+		test_array_copy(*RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+							   test_array),
+				state->old_value);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_array, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+		test_array_t *larray =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_array);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = test_array_equal(*larray, state->new_value);
+
+		TEST_ASSERT(equal, "Lcore %d failed to update array",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_array);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_array);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "array was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "array was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define MANY_LVARS (2 * RTE_MAX_LCORE_VAR / sizeof(uint32_t))
+
+static int
+test_many_lvars(void)
+{
+	uint32_t **handlers = malloc(sizeof(uint32_t *) * MANY_LVARS);
+	unsigned int i;
+
+	TEST_ASSERT(handlers != NULL, "Unable to allocate memory");
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		RTE_LCORE_VAR_ALLOC(handlers[i]);
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t *v =
+				RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handlers[i]);
+			*v = (uint32_t)(i * lcore_id);
+		}
+	}
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t v = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+								handlers[i]);
+			TEST_ASSERT_EQUAL((uint32_t)(i * lcore_id), v,
+					  "Unexpected lcore variable value on "
+					  "lcore %d", lcore_id);
+		}
+	}
+
+	free(handlers);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_large_lvar(void)
+{
+	RTE_LCORE_VAR_HANDLE(unsigned char, large);
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC_SIZE(large, RTE_MAX_LCORE_VAR);
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+
+		memset(ptr, (unsigned char)lcore_id, RTE_MAX_LCORE_VAR);
+	}
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+		size_t i;
+
+		for (i = 0; i < RTE_MAX_LCORE_VAR; i++)
+			TEST_ASSERT_EQUAL(ptr[i], (unsigned char)lcore_id,
+					  "Large lcore variable value is "
+					  "corrupted on lcore %d.",
+					  lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_int_lvar),
+		TEST_CASE(test_sized_alignment),
+		TEST_CASE(test_struct_lvar),
+		TEST_CASE(test_array_lvar),
+		TEST_CASE(test_many_lvars),
+		TEST_CASE(test_large_lvar),
+		TEST_CASES_END()
+	},
+};
+
+static int test_lcore_var(void)
+{
+	if (rte_lcore_count() < MIN_LCORES) {
+		printf("Not enough cores for lcore_var_autotest; expecting at "
+		       "least %d.\n", MIN_LCORES);
+		return TEST_SKIPPED;
+	}
+
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
+REGISTER_FAST_TEST(lcore_var_autotest, true, false, test_lcore_var);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v5 3/6] random: keep PRNG state in lcore variable
  2024-02-28 10:09               ` [RFC v5 0/6] Lcore variables Mattias Rönnblom
  2024-02-28 10:09                 ` [RFC v5 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-02-28 10:09                 ` [RFC v5 2/6] eal: add lcore variable test suite Mattias Rönnblom
@ 2024-02-28 10:09                 ` Mattias Rönnblom
  2024-02-28 10:09                 ` [RFC v5 4/6] power: keep per-lcore " Mattias Rönnblom
                                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-28 10:09 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Replace keeping PRNG state in a RTE_MAX_LCORE-sized static array of
cache-aligned and RTE_CACHE_GUARDed struct instances with keeping the
same state in a more cache-friendly lcore variable.
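
In essence, the access pattern changes as follows (a condensed sketch
only; the complete change is in the diff below):

	/* before: a padded, cache-aligned instance per lcore id */
	static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];

	struct rte_rand_state *state = &rand_states[rte_lcore_id()];

	/* after: an lcore variable, allocated once, at init time */
	RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);

	struct rte_rand_state *state = RTE_LCORE_VAR_VALUE(rand_state);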

RFC v3:
 * Remove cache alignment on unregistered threads' rte_rand_state.
   (Morten Brørup)

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/common/rte_random.c | 30 ++++++++++++++++++------------
 1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index 7709b8f2c6..b265660283 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -11,6 +11,7 @@
 #include <rte_branch_prediction.h>
 #include <rte_cycles.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_random.h>
 
 struct rte_rand_state {
@@ -19,14 +20,12 @@ struct rte_rand_state {
 	uint64_t z3;
 	uint64_t z4;
 	uint64_t z5;
-	RTE_CACHE_GUARD;
-} __rte_cache_aligned;
+};
 
-/* One instance each for every lcore id-equipped thread, and one
- * additional instance to be shared by all others threads (i.e., all
- * unregistered non-EAL threads).
- */
-static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];
+RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);
+
+/* instance to be shared by all unregistered non-EAL threads */
+static struct rte_rand_state unregistered_rand_state;
 
 static uint32_t
 __rte_rand_lcg32(uint32_t *seed)
@@ -85,8 +84,14 @@ rte_srand(uint64_t seed)
 	unsigned int lcore_id;
 
 	/* add lcore_id to seed to avoid having the same sequence */
-	for (lcore_id = 0; lcore_id < RTE_DIM(rand_states); lcore_id++)
-		__rte_srand_lfsr258(seed + lcore_id, &rand_states[lcore_id]);
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		struct rte_rand_state *lcore_state =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, rand_state);
+
+		__rte_srand_lfsr258(seed + lcore_id, lcore_state);
+	}
+
+	__rte_srand_lfsr258(seed + lcore_id, &unregistered_rand_state);
 }
 
 static __rte_always_inline uint64_t
@@ -124,11 +129,10 @@ struct rte_rand_state *__rte_rand_get_state(void)
 
 	idx = rte_lcore_id();
 
-	/* last instance reserved for unregistered non-EAL threads */
 	if (unlikely(idx == LCORE_ID_ANY))
-		idx = RTE_MAX_LCORE;
+		return &unregistered_rand_state;
 
-	return &rand_states[idx];
+	return RTE_LCORE_VAR_VALUE(rand_state);
 }
 
 uint64_t
@@ -228,6 +232,8 @@ RTE_INIT(rte_rand_init)
 {
 	uint64_t seed;
 
+	RTE_LCORE_VAR_ALLOC(rand_state);
+
 	seed = __rte_random_initial_seed();
 
 	rte_srand(seed);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v5 4/6] power: keep per-lcore state in lcore variable
  2024-02-28 10:09               ` [RFC v5 0/6] Lcore variables Mattias Rönnblom
                                   ` (2 preceding siblings ...)
  2024-02-28 10:09                 ` [RFC v5 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
@ 2024-02-28 10:09                 ` Mattias Rönnblom
  2024-02-28 10:09                 ` [RFC v5 5/6] service: " Mattias Rönnblom
  2024-02-28 10:09                 ` [RFC v5 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-28 10:09 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Replace the static array of cache-aligned structs with an lcore
variable, to the slight benefit of both code simplicity and
performance.

RFC v3:
 * Replace for loop with FOREACH macro.
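
For reference, the FOREACH macro used below makes the tailq
initialization roughly equivalent to the following open-coded loop (a
sketch; 'lcore_cfgs' is the lcore variable handle from the patch):

	struct pmd_core_cfg *lcore_cfg;
	unsigned int lcore_id;

	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
		lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
		TAILQ_INIT(&lcore_cfg->head);
	}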

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/power/rte_power_pmd_mgmt.c | 36 ++++++++++++++++------------------
 1 file changed, 17 insertions(+), 19 deletions(-)

diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index 591fc69f36..595c8091e6 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -5,6 +5,7 @@
 #include <stdlib.h>
 
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_cycles.h>
 #include <rte_cpuflags.h>
 #include <rte_malloc.h>
@@ -68,8 +69,8 @@ struct pmd_core_cfg {
 	/**< Number of queues ready to enter power optimized state */
 	uint64_t sleep_target;
 	/**< Prevent a queue from triggering sleep multiple times */
-} __rte_cache_aligned;
-static struct pmd_core_cfg lcore_cfgs[RTE_MAX_LCORE];
+};
+static RTE_LCORE_VAR_HANDLE(struct pmd_core_cfg, lcore_cfgs);
 
 static inline bool
 queue_equal(const union queue *l, const union queue *r)
@@ -252,12 +253,11 @@ clb_multiwait(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	/* early exit */
 	if (likely(!empty))
@@ -317,13 +317,12 @@ clb_pause(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 	uint32_t pause_duration = rte_power_pmd_mgmt_get_pause_duration();
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	if (likely(!empty))
 		/* early exit */
@@ -358,9 +357,8 @@ clb_scale_freq(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	const bool empty = nb_rx == 0;
-	struct pmd_core_cfg *lcore_conf = &lcore_cfgs[lcore];
+	struct pmd_core_cfg *lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 	struct queue_list_entry *queue_conf = arg;
 
 	if (likely(!empty)) {
@@ -518,7 +516,7 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 		goto end;
 	}
 
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -619,7 +617,7 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 	}
 
 	/* no need to check queue id as wrong queue id would not be enabled */
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -769,21 +767,21 @@ rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore)
 }
 
 RTE_INIT(rte_power_ethdev_pmgmt_init) {
-	size_t i;
-	int j;
+	struct pmd_core_cfg *lcore_cfg;
+	int i;
+
+	RTE_LCORE_VAR_ALLOC(lcore_cfgs);
 
 	/* initialize all tailqs */
-	for (i = 0; i < RTE_DIM(lcore_cfgs); i++) {
-		struct pmd_core_cfg *cfg = &lcore_cfgs[i];
-		TAILQ_INIT(&cfg->head);
-	}
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_cfg, lcore_cfgs)
+		TAILQ_INIT(&lcore_cfg->head);
 
 	/* initialize config defaults */
 	emptypoll_max = 512;
 	pause_duration = 1;
 	/* scaling defaults out of range to ensure not used unless set by user or app */
-	for (j = 0; j < RTE_MAX_LCORE; j++) {
-		scale_freq_min[j] = 0;
-		scale_freq_max[j] = UINT32_MAX;
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		scale_freq_min[i] = 0;
+		scale_freq_max[i] = UINT32_MAX;
 	}
 }
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v5 5/6] service: keep per-lcore state in lcore variable
  2024-02-28 10:09               ` [RFC v5 0/6] Lcore variables Mattias Rönnblom
                                   ` (3 preceding siblings ...)
  2024-02-28 10:09                 ` [RFC v5 4/6] power: keep per-lcore " Mattias Rönnblom
@ 2024-02-28 10:09                 ` Mattias Rönnblom
  2024-02-28 10:09                 ` [RFC v5 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-28 10:09 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Replace the static array of cache-aligned structs with an lcore
variable, to the slight benefit of both code simplicity and
performance.

RFC v5:
 * Fix lcore value pointer bug introduced by RFC v4.

RFC v4:
 * Remove strange-looking lcore value lookup potentially containing
   invalid lcore id. (Morten Brørup)
 * Replace misplaced tab with space. (Morten Brørup)
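
Regarding the lookup change mentioned above: the pattern now used is
to compute the value pointer up front, and only dereference it after
the lcore id has been validated (a sketch; per the <rte_lcore_var.h>
documentation, the lcore id passed to RTE_LCORE_VAR_LCORE_VALUE() need
not be valid, but the resulting pointer must then not be dereferenced):

	struct core_state *cs =
		RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);

	if (lcore >= RTE_MAX_LCORE)
		return -EINVAL;

	/* 'cs' may be dereferenced from this point on */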

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/common/rte_service.c | 118 ++++++++++++++++++++---------------
 1 file changed, 67 insertions(+), 51 deletions(-)

diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index d959c91459..5429ddce41 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -11,6 +11,7 @@
 
 #include <eal_trace_internal.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
@@ -75,7 +76,7 @@ struct core_state {
 
 static uint32_t rte_service_count;
 static struct rte_service_spec_impl *rte_services;
-static struct core_state *lcore_states;
+static RTE_LCORE_VAR_HANDLE(struct core_state, lcore_states);
 static uint32_t rte_service_library_initialized;
 
 int32_t
@@ -101,11 +102,12 @@ rte_service_init(void)
 		goto fail_mem;
 	}
 
-	lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
-			sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
-	if (!lcore_states) {
-		EAL_LOG(ERR, "error allocating core states array");
-		goto fail_mem;
+	if (lcore_states == NULL)
+		RTE_LCORE_VAR_ALLOC(lcore_states);
+	else {
+		struct core_state *cs;
+		RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+			memset(cs, 0, sizeof(struct core_state));
 	}
 
 	int i;
@@ -122,7 +124,6 @@ rte_service_init(void)
 	return 0;
 fail_mem:
 	rte_free(rte_services);
-	rte_free(lcore_states);
 	return -ENOMEM;
 }
 
@@ -136,7 +137,6 @@ rte_service_finalize(void)
 	rte_eal_mp_wait_lcore();
 
 	rte_free(rte_services);
-	rte_free(lcore_states);
 
 	rte_service_library_initialized = 0;
 }
@@ -286,7 +286,6 @@ rte_service_component_register(const struct rte_service_spec *spec,
 int32_t
 rte_service_component_unregister(uint32_t id)
 {
-	uint32_t i;
 	struct rte_service_spec_impl *s;
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
 
@@ -294,9 +293,10 @@ rte_service_component_unregister(uint32_t id)
 
 	s->internal_flags &= ~(SERVICE_F_REGISTERED);
 
+	struct core_state *cs;
 	/* clear the run-bit in all cores */
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		lcore_states[i].service_mask &= ~(UINT64_C(1) << id);
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		cs->service_mask &= ~(UINT64_C(1) << id);
 
 	memset(&rte_services[id], 0, sizeof(struct rte_service_spec_impl));
 
@@ -454,7 +454,10 @@ rte_service_may_be_active(uint32_t id)
 		return -EINVAL;
 
 	for (i = 0; i < lcore_count; i++) {
-		if (lcore_states[ids[i]].service_active_on_lcore[id])
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(ids[i], lcore_states);
+
+		if (cs->service_active_on_lcore[id])
 			return 1;
 	}
 
@@ -464,7 +467,7 @@ rte_service_may_be_active(uint32_t id)
 int32_t
 rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)
 {
-	struct core_state *cs = &lcore_states[rte_lcore_id()];
+	struct core_state *cs =	RTE_LCORE_VAR_VALUE(lcore_states);
 	struct rte_service_spec_impl *s;
 
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
@@ -486,8 +489,7 @@ service_runner_func(void *arg)
 {
 	RTE_SET_USED(arg);
 	uint8_t i;
-	const int lcore = rte_lcore_id();
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_VALUE(lcore_states);
 
 	rte_atomic_store_explicit(&cs->thread_active, 1, rte_memory_order_seq_cst);
 
@@ -533,13 +535,15 @@ service_runner_func(void *arg)
 int32_t
 rte_service_lcore_may_be_active(uint32_t lcore)
 {
-	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+	if (lcore >= RTE_MAX_LCORE || !cs->is_service_core)
 		return -EINVAL;
 
 	/* Load thread_active using ACQUIRE to avoid instructions dependent on
 	 * the result being re-ordered before this load completes.
 	 */
-	return rte_atomic_load_explicit(&lcore_states[lcore].thread_active,
+	return rte_atomic_load_explicit(&cs->thread_active,
 			       rte_memory_order_acquire);
 }
 
@@ -547,9 +551,11 @@ int32_t
 rte_service_lcore_count(void)
 {
 	int32_t count = 0;
-	uint32_t i;
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		count += lcore_states[i].is_service_core;
+
+	struct core_state *cs;
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		count += cs->is_service_core;
+
 	return count;
 }
 
@@ -566,7 +572,8 @@ rte_service_lcore_list(uint32_t array[], uint32_t n)
 	uint32_t i;
 	uint32_t idx = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		struct core_state *cs = &lcore_states[i];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
 		if (cs->is_service_core) {
 			array[idx] = i;
 			idx++;
@@ -582,7 +589,7 @@ rte_service_lcore_count_services(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -634,30 +641,31 @@ rte_service_start_with_defaults(void)
 static int32_t
 service_update(uint32_t sid, uint32_t lcore, uint32_t *set, uint32_t *enabled)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	/* validate ID, or return error value */
 	if (!service_valid(sid) || lcore >= RTE_MAX_LCORE ||
-			!lcore_states[lcore].is_service_core)
+			!cs->is_service_core)
 		return -EINVAL;
 
 	uint64_t sid_mask = UINT64_C(1) << sid;
 	if (set) {
-		uint64_t lcore_mapped = lcore_states[lcore].service_mask &
-			sid_mask;
+		uint64_t lcore_mapped = cs->service_mask & sid_mask;
 
 		if (*set && !lcore_mapped) {
-			lcore_states[lcore].service_mask |= sid_mask;
+			cs->service_mask |= sid_mask;
 			rte_atomic_fetch_add_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 		if (!*set && lcore_mapped) {
-			lcore_states[lcore].service_mask &= ~(sid_mask);
+			cs->service_mask &= ~(sid_mask);
 			rte_atomic_fetch_sub_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 	}
 
 	if (enabled)
-		*enabled = !!(lcore_states[lcore].service_mask & (sid_mask));
+		*enabled = !!(cs->service_mask & (sid_mask));
 
 	return 0;
 }
@@ -685,13 +693,14 @@ set_lcore_state(uint32_t lcore, int32_t state)
 {
 	/* mark core state in hugepage backed config */
 	struct rte_config *cfg = rte_eal_get_configuration();
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	cfg->lcore_role[lcore] = state;
 
 	/* mark state in process local lcore_config */
 	lcore_config[lcore].core_role = state;
 
 	/* update per-lcore optimized state tracking */
-	lcore_states[lcore].is_service_core = (state == ROLE_SERVICE);
+	cs->is_service_core = (state == ROLE_SERVICE);
 
 	rte_eal_trace_service_lcore_state_change(lcore, state);
 }
@@ -702,14 +711,16 @@ rte_service_lcore_reset_all(void)
 	/* loop over cores, reset all to mask 0 */
 	uint32_t i;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		if (lcore_states[i].is_service_core) {
-			lcore_states[i].service_mask = 0;
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
+		if (cs->is_service_core) {
+			cs->service_mask = 0;
 			set_lcore_state(i, ROLE_RTE);
 			/* runstate act as guard variable Use
 			 * store-release memory order here to synchronize
 			 * with load-acquire in runstate read functions.
 			 */
-			rte_atomic_store_explicit(&lcore_states[i].runstate,
+			rte_atomic_store_explicit(&cs->runstate,
 				RUNSTATE_STOPPED, rte_memory_order_release);
 		}
 	}
@@ -725,17 +736,19 @@ rte_service_lcore_add(uint32_t lcore)
 {
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
-	if (lcore_states[lcore].is_service_core)
+
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+	if (cs->is_service_core)
 		return -EALREADY;
 
 	set_lcore_state(lcore, ROLE_SERVICE);
 
 	/* ensure that after adding a core the mask and state are defaults */
-	lcore_states[lcore].service_mask = 0;
+	cs->service_mask = 0;
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	return rte_eal_wait_lcore(lcore);
@@ -747,7 +760,7 @@ rte_service_lcore_del(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -771,7 +784,7 @@ rte_service_lcore_start(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -801,6 +814,8 @@ rte_service_lcore_start(uint32_t lcore)
 int32_t
 rte_service_lcore_stop(uint32_t lcore)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
@@ -808,12 +823,11 @@ rte_service_lcore_stop(uint32_t lcore)
 	 * memory order here to synchronize with store-release
 	 * in runstate update functions.
 	 */
-	if (rte_atomic_load_explicit(&lcore_states[lcore].runstate, rte_memory_order_acquire) ==
+	if (rte_atomic_load_explicit(&cs->runstate, rte_memory_order_acquire) ==
 			RUNSTATE_STOPPED)
 		return -EALREADY;
 
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
 	uint64_t service_mask = cs->service_mask;
 
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
@@ -834,7 +848,7 @@ rte_service_lcore_stop(uint32_t lcore)
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	rte_eal_trace_service_lcore_stop(lcore);
@@ -845,7 +859,7 @@ rte_service_lcore_stop(uint32_t lcore)
 static uint64_t
 lcore_attr_get_loops(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->loops, rte_memory_order_relaxed);
 }
@@ -853,7 +867,7 @@ lcore_attr_get_loops(unsigned int lcore)
 static uint64_t
 lcore_attr_get_cycles(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->cycles, rte_memory_order_relaxed);
 }
@@ -861,7 +875,7 @@ lcore_attr_get_cycles(unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].calls,
 		rte_memory_order_relaxed);
@@ -870,7 +884,7 @@ lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_cycles(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].cycles,
 		rte_memory_order_relaxed);
@@ -886,7 +900,10 @@ attr_get(uint32_t id, lcore_attr_get_fun lcore_attr_get)
 	uint64_t sum = 0;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		if (lcore_states[lcore].is_service_core)
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+		if (cs->is_service_core)
 			sum += lcore_attr_get(id, lcore);
 	}
 
@@ -930,12 +947,11 @@ int32_t
 rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 			   uint64_t *attr_value)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE || !attr_value)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -960,7 +976,8 @@ rte_service_attr_reset_all(uint32_t id)
 		return -EINVAL;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		struct core_state *cs = &lcore_states[lcore];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 		cs->service_stats[id] = (struct service_stats) {};
 	}
@@ -971,12 +988,11 @@ rte_service_attr_reset_all(uint32_t id)
 int32_t
 rte_service_lcore_attr_reset_all(uint32_t lcore)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -1011,7 +1027,7 @@ static void
 service_dump_calls_per_lcore(FILE *f, uint32_t lcore)
 {
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	fprintf(f, "%02d\t", lcore);
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v5 6/6] eal: keep per-lcore power intrinsics state in lcore variable
  2024-02-28 10:09               ` [RFC v5 0/6] Lcore variables Mattias Rönnblom
                                   ` (4 preceding siblings ...)
  2024-02-28 10:09                 ` [RFC v5 5/6] service: " Mattias Rönnblom
@ 2024-02-28 10:09                 ` Mattias Rönnblom
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-02-28 10:09 UTC (permalink / raw)
  To: dev; +Cc: hofors, Morten Brørup, Stephen Hemminger, Mattias Rönnblom

Keep per-lcore power intrinsics state in an lcore variable to reduce
cache working set size and avoid any CPU next-line-prefetching causing
false sharing.
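
Unlike in the preceding patches, allocation is done here by means of
the RTE_LCORE_VAR_INIT() convenience macro, which expands to roughly
the following (a sketch of the mechanism, not the verbatim expansion):

	RTE_LCORE_VAR_HANDLE(struct power_wait_status, wait_status);

	/* RTE_LCORE_VAR_INIT(wait_status) is approximately: */
	RTE_INIT(rte_lcore_var_init_wait_status)
	{
		RTE_LCORE_VAR_ALLOC(wait_status);
	}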

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/x86/rte_power_intrinsics.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index 532a2e646b..23d1761f0a 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -4,6 +4,7 @@
 
 #include <rte_common.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_rtm.h>
 #include <rte_spinlock.h>
 
@@ -12,10 +13,14 @@
 /*
  * Per-lcore structure holding current status of C0.2 sleeps.
  */
-static struct power_wait_status {
+struct power_wait_status {
 	rte_spinlock_t lock;
 	volatile void *monitor_addr; /**< NULL if not currently sleeping */
-} __rte_cache_aligned wait_status[RTE_MAX_LCORE];
+};
+
+RTE_LCORE_VAR_HANDLE(struct power_wait_status, wait_status);
+
+RTE_LCORE_VAR_INIT(wait_status);
 
 /*
  * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
@@ -170,7 +175,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 	if (pmc->fn == NULL)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/* update sleep address */
 	rte_spinlock_lock(&s->lock);
@@ -262,7 +267,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 	if (lcore_id >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/*
 	 * There is a race condition between sleep, wakeup and locking, but we
@@ -301,8 +306,8 @@ int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
 {
-	const unsigned int lcore_id = rte_lcore_id();
-	struct power_wait_status *s = &wait_status[lcore_id];
+	struct power_wait_status *s = RTE_LCORE_VAR_VALUE(wait_status);
+
 	uint32_t i, rc;
 
 	/* check if supported */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [RFC v5 1/6] eal: add static per-lcore memory allocation facility
  2024-02-28 10:09                 ` [RFC v5 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-03-19 12:52                   ` Konstantin Ananyev
  2024-03-20 10:24                     ` Mattias Rönnblom
  2024-05-06  8:27                   ` [RFC v6 0/6] Lcore variables Mattias Rönnblom
  1 sibling, 1 reply; 185+ messages in thread
From: Konstantin Ananyev @ 2024-03-19 12:52 UTC (permalink / raw)
  To: Mattias Rönnblom, dev; +Cc: hofors, Morten Brørup, Stephen Hemminger


Hi Mattias,
> Introduce DPDK per-lcore id variables, or lcore variables for short.
> 
> An lcore variable has one value for every current and future lcore
> id-equipped thread.
> 
> The primary <rte_lcore_var.h> use case is for statically allocating
> small chunks of often-used data, which is related logically, but where
> there are performance benefits to reap from having updates being local
> to an lcore.
> 
> Lcore variables are similar to thread-local storage (TLS, e.g., C11
> _Thread_local), but decouple the values' lifetime from that of the
> threads.
> 
> Lcore variables are also similar in terms of functionality provided by
> FreeBSD kernel's DPCPU_*() family of macros and the associated
> build-time machinery. DPCPU uses linker scripts, which effectively
> prevents the reuse of its, otherwise seemingly viable, approach.
> 
> The currently-prevailing way to solve the same problem as lcore
> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> lcore variables over this approach is that data related to the same
> lcore now is close (spatially, in memory), rather than data used by
> the same module, which in turn avoids excessive use of padding,
> polluting caches with unused data.

Thanks for the RFC, very interesting one.
A few comments/questions below.

 
> RFC v5:
>  * In Doxygen, consistently use @<cmd> (and not \<cmd>).
>  * The RTE_LCORE_VAR_GET() and SET() convenience access macros
>    covered an uncommon use case, where the lcore value is of a
>    primitive type, rather than a struct, and are thus eliminated
>    from the API. (Morten Brørup)
>  * In the wake of the GET()/SET() removal, rename RTE_LCORE_VAR_PTR()
>    to RTE_LCORE_VAR_VALUE().
>  * The underscores are removed from __rte_lcore_var_lcore_ptr() to
>    signal that this function is a part of the public API.
>  * Macro arguments are documented.
> 
> RFC v4:
>  * Replace large static array with libc heap-allocated memory. One
>    implication of this change is there no longer exists a fixed upper
>    bound for the total amount of memory used by lcore variables.
>    RTE_MAX_LCORE_VAR has changed meaning, and now represents the
>    maximum size of any individual lcore variable value.
>  * Fix issues in example. (Morten Brørup)
>  * Improve access macro type checking. (Morten Brørup)
>  * Refer to the lcore variable handle as "handle" and not "name" in
>    various macros.
>  * Document lack of thread safety in rte_lcore_var_alloc().
>  * Provide API-level assurance the lcore variable handle is
>    always non-NULL, to allow applications to use NULL to mean
>    "not yet allocated".
>  * Note zero-sized allocations are not allowed.
>  * Give API-level guarantee the lcore variable values are zeroed.
> 
> RFC v3:
>  * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
>  * Update example to reflect FOREACH macro name change (in RFC v2).
> 
> RFC v2:
>  * Use alignof to derive alignment requirements. (Morten Brørup)
>  * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
>    *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
>  * Allow user-specified alignment, but limit max to cache line size.
> 
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> ---
>  config/rte_config.h                   |   1 +
>  doc/api/doxy-api-index.md             |   1 +
>  lib/eal/common/eal_common_lcore_var.c |  68 +++++
>  lib/eal/common/meson.build            |   1 +
>  lib/eal/include/meson.build           |   1 +
>  lib/eal/include/rte_lcore_var.h       | 368 ++++++++++++++++++++++++++
>  lib/eal/version.map                   |   4 +
>  7 files changed, 444 insertions(+)
>  create mode 100644 lib/eal/common/eal_common_lcore_var.c
>  create mode 100644 lib/eal/include/rte_lcore_var.h
> 
> diff --git a/config/rte_config.h b/config/rte_config.h
> index d743a5c3d3..0dac33d3b9 100644
> --- a/config/rte_config.h
> +++ b/config/rte_config.h
> @@ -41,6 +41,7 @@
>  /* EAL defines */
>  #define RTE_CACHE_GUARD_LINES 1
>  #define RTE_MAX_HEAPS 32
> +#define RTE_MAX_LCORE_VAR 1048576
>  #define RTE_MAX_MEMSEG_LISTS 128
>  #define RTE_MAX_MEMSEG_PER_LIST 8192
>  #define RTE_MAX_MEM_MB_PER_LIST 32768
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
> index 8c1eb8fafa..a3b8391570 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -99,6 +99,7 @@ The public API headers are grouped by topics:
>    [interrupts](@ref rte_interrupts.h),
>    [launch](@ref rte_launch.h),
>    [lcore](@ref rte_lcore.h),
> +  [lcore-variable](@ref rte_lcore_var.h),
>    [per-lcore](@ref rte_per_lcore.h),
>    [service cores](@ref rte_service.h),
>    [keepalive](@ref rte_keepalive.h),
> diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
> new file mode 100644
> index 0000000000..5c353ebd46
> --- /dev/null
> +++ b/lib/eal/common/eal_common_lcore_var.c
> @@ -0,0 +1,68 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2024 Ericsson AB
> + */
> +
> +#include <inttypes.h>
> +
> +#include <rte_common.h>
> +#include <rte_debug.h>
> +#include <rte_log.h>
> +
> +#include <rte_lcore_var.h>
> +
> +#include "eal_private.h"
> +
> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
> +
> +static void *lcore_buffer;
> +static size_t offset = RTE_MAX_LCORE_VAR;
> +
> +static void *
> +lcore_var_alloc(size_t size, size_t align)
> +{
> +	void *handle;
> +	void *value;
> +
> +	offset = RTE_ALIGN_CEIL(offset, align);
> +
> +	if (offset + size > RTE_MAX_LCORE_VAR) {
> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> +					     LCORE_BUFFER_SIZE);

Hmm... do I get it right: if offset is <= RTE_MAX_LCORE_VAR, and offset + size > RTE_MAX_LCORE_VAR,
we simply overwrite lcore_buffer with a newly allocated buffer of the same size?
I understand that you expect this to just never happen (the total size of all lcore vars never exceeding 1MB), but still,
I think we need to handle it in some better way than just ignoring the possibility...
Maybe RTE_VERIFY() at least?

As a more generic question: do we need to support LCORE_VAR for dlopen()s that could happen after rte_eal_init()
is called and the LCORE threads were created?
Because, if not, then we could probably make this construction much more flexible:
one buffer per LCORE, allocation on demand, etc.

> +		RTE_VERIFY(lcore_buffer != NULL);
> +
> +		offset = 0;
> +	}
> +
> +	handle = RTE_PTR_ADD(lcore_buffer, offset);
> +
> +	offset += size;
> +
> +	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
> +		memset(value, 0, size);
> +
> +	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
> +		"%"PRIuPTR"-byte alignment", size, align);
> +
> +	return handle;
> +}
> +
> +void *
> +rte_lcore_var_alloc(size_t size, size_t align)
> +{
> +	/* Having the per-lcore buffer size aligned on cache lines,
> +	 * as well as having the base pointer aligned on cache line
> +	 * size, assures that aligned offsets also translate to aligned
> +	 * pointers across all values.
> +	 */
> +	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
> +	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
> +	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
> +
> +	/* '0' means asking for worst-case alignment requirements */
> +	if (align == 0)
> +		align = alignof(max_align_t);
> +
> +	RTE_ASSERT(rte_is_power_of_2(align));
> +
> +	return lcore_var_alloc(size, align);
> +}

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC v5 1/6] eal: add static per-lcore memory allocation facility
  2024-03-19 12:52                   ` Konstantin Ananyev
@ 2024-03-20 10:24                     ` Mattias Rönnblom
  2024-03-20 14:18                       ` Konstantin Ananyev
  0 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-03-20 10:24 UTC (permalink / raw)
  To: Konstantin Ananyev, Mattias Rönnblom, dev
  Cc: Morten Brørup, Stephen Hemminger

On 2024-03-19 13:52, Konstantin Ananyev wrote:
> 
> Hi Mattias,
>> Introduce DPDK per-lcore id variables, or lcore variables for short.
>>
>> An lcore variable has one value for every current and future lcore
>> id-equipped thread.
>>
>> The primary <rte_lcore_var.h> use case is for statically allocating
>> small chunks of often-used data, which is related logically, but where
>> there are performance benefits to reap from having updates being local
>> to an lcore.
>>
>> Lcore variables are similar to thread-local storage (TLS, e.g., C11
>> _Thread_local), but decouple the values' lifetime from that of the
>> threads.
>>
>> Lcore variables are also similar in terms of functionality to the
>> FreeBSD kernel's DPCPU_*() family of macros and the associated
>> build-time machinery. DPCPU uses linker scripts, which effectively
>> prevents the reuse of its, otherwise seemingly viable, approach.
>>
>> The currently-prevailing way to solve the same problem as lcore
>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
>> lcore variables over this approach is that data related to the same
>> lcore now is close (spatially, in memory), rather than data used by
>> the same module, which in turn avoids excessive use of padding,
>> polluting caches with unused data.
> 
> Thanks for the RFC, very interesting one.
> A few comments/questions below.
> 
>   
>> RFC v5:
>>   * In Doxygen, consistently use @<cmd> (and not \<cmd>).
>>   * The RTE_LCORE_VAR_GET() and SET() convenience access macros
>>     covered an uncommon use case, where the lcore value is of a
>>     primitive type, rather than a struct, and are thus eliminated
>>     from the API. (Morten Brørup)
>>   * In the wake of the GET()/SET() removal, rename RTE_LCORE_VAR_PTR()
>>     to RTE_LCORE_VAR_VALUE().
>>   * The underscores are removed from __rte_lcore_var_lcore_ptr() to
>>     signal that this function is a part of the public API.
>>   * Macro arguments are documented.
>>
>> RFC v4:
>>   * Replace large static array with libc heap-allocated memory. One
>>     implication of this change is there no longer exists a fixed upper
>>     bound for the total amount of memory used by lcore variables.
>>     RTE_MAX_LCORE_VAR has changed meaning, and now represents the
>>     maximum size of any individual lcore variable value.
>>   * Fix issues in example. (Morten Brørup)
>>   * Improve access macro type checking. (Morten Brørup)
>>   * Refer to the lcore variable handle as "handle" and not "name" in
>>     various macros.
>>   * Document lack of thread safety in rte_lcore_var_alloc().
>>   * Provide API-level assurance the lcore variable handle is
>>     always non-NULL, to allow applications to use NULL to mean
>>     "not yet allocated".
>>   * Note zero-sized allocations are not allowed.
>>   * Give API-level guarantee the lcore variable values are zeroed.
>>
>> RFC v3:
>>   * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
>>   * Update example to reflect FOREACH macro name change (in RFC v2).
>>
>> RFC v2:
>>   * Use alignof to derive alignment requirements. (Morten Brørup)
>>   * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
>>     *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
>>   * Allow user-specified alignment, but limit max to cache line size.
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> Acked-by: Morten Brørup <mb@smartsharesystems.com>
>> ---
>>   config/rte_config.h                   |   1 +
>>   doc/api/doxy-api-index.md             |   1 +
>>   lib/eal/common/eal_common_lcore_var.c |  68 +++++
>>   lib/eal/common/meson.build            |   1 +
>>   lib/eal/include/meson.build           |   1 +
>>   lib/eal/include/rte_lcore_var.h       | 368 ++++++++++++++++++++++++++
>>   lib/eal/version.map                   |   4 +
>>   7 files changed, 444 insertions(+)
>>   create mode 100644 lib/eal/common/eal_common_lcore_var.c
>>   create mode 100644 lib/eal/include/rte_lcore_var.h
>>
>> diff --git a/config/rte_config.h b/config/rte_config.h
>> index d743a5c3d3..0dac33d3b9 100644
>> --- a/config/rte_config.h
>> +++ b/config/rte_config.h
>> @@ -41,6 +41,7 @@
>>   /* EAL defines */
>>   #define RTE_CACHE_GUARD_LINES 1
>>   #define RTE_MAX_HEAPS 32
>> +#define RTE_MAX_LCORE_VAR 1048576
>>   #define RTE_MAX_MEMSEG_LISTS 128
>>   #define RTE_MAX_MEMSEG_PER_LIST 8192
>>   #define RTE_MAX_MEM_MB_PER_LIST 32768
>> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
>> index 8c1eb8fafa..a3b8391570 100644
>> --- a/doc/api/doxy-api-index.md
>> +++ b/doc/api/doxy-api-index.md
>> @@ -99,6 +99,7 @@ The public API headers are grouped by topics:
>>     [interrupts](@ref rte_interrupts.h),
>>     [launch](@ref rte_launch.h),
>>     [lcore](@ref rte_lcore.h),
>> +  [lcore-variable](@ref rte_lcore_var.h),
>>     [per-lcore](@ref rte_per_lcore.h),
>>     [service cores](@ref rte_service.h),
>>     [keepalive](@ref rte_keepalive.h),
>> diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
>> new file mode 100644
>> index 0000000000..5c353ebd46
>> --- /dev/null
>> +++ b/lib/eal/common/eal_common_lcore_var.c
>> @@ -0,0 +1,68 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2024 Ericsson AB
>> + */
>> +
>> +#include <inttypes.h>
>> +
>> +#include <rte_common.h>
>> +#include <rte_debug.h>
>> +#include <rte_log.h>
>> +
>> +#include <rte_lcore_var.h>
>> +
>> +#include "eal_private.h"
>> +
>> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
>> +
>> +static void *lcore_buffer;
>> +static size_t offset = RTE_MAX_LCORE_VAR;
>> +
>> +static void *
>> +lcore_var_alloc(size_t size, size_t align)
>> +{
>> +	void *handle;
>> +	void *value;
>> +
>> +	offset = RTE_ALIGN_CEIL(offset, align);
>> +
>> +	if (offset + size > RTE_MAX_LCORE_VAR) {
>> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
>> +					     LCORE_BUFFER_SIZE);
> 
> Hmm... do I get it right: if offset is <= RTE_MAX_LCORE_VAR, and offset + size > RTE_MAX_LCORE_VAR,
> we simply overwrite lcore_buffer with a newly allocated buffer of the same size?

No, it's just the pointer that is overwritten. The old buffer will 
remain in memory.
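
To make the wrap case concrete, a worked example with illustrative
numbers (not taken from the patch): with RTE_MAX_LCORE_VAR = 1048576,
offset = 1048500 and a new request of size = 200, offset + size =
1048700 > 1048576, so a fresh LCORE_BUFFER_SIZE buffer is allocated
and offset resets to 0. The final 76 bytes of each lcore's slot in
the old buffer simply go unused, but the old buffer stays reachable
through previously returned handles, so earlier allocations are
unaffected.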

> I understand that you expect this to just never happen (the total size of all lcore vars never exceeding 1MB), but still,
> I think we need to handle it in some better way than just ignoring the possibility...
> Maybe RTE_VERIFY() at least?
> 

In this revision of the patch set, RTE_MAX_LCORE_VAR does not represent 
an upper bound for the sum of all lcore variables' size, but rather only 
the maximum size of a single lcore variable.

Variable alignment and size constraints are RTE_ASSERT()ed at the point 
of allocation. One could argue they should be RTE_VERIFY()-ed instead, 
since there aren't any performance constraints.

> As a more generic question: do we need to support LCORE_VAR for dlopen()s that could happen after rte_eal_init()
> is called and the LCORE threads were created?

Yes, allocations after rte_eal_init() (caused by dlopen() or otherwise) 
must be allowed imo, and are allowed. Otherwise applications sitting on 
top of DPDK can't use this facility.
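
As a sketch of what a late allocation can look like (the module name
and state struct below are made up for illustration; note that
rte_lcore_var_alloc() is not multi-thread safe, so the module must
serialize its own initialization):

#include <rte_lcore_var.h>

struct plugin_lcore_state {
	uint64_t pkts;
};

static RTE_LCORE_VAR_HANDLE(struct plugin_lcore_state, plugin_states);

void
plugin_init(void) /* e.g., called from a dlopen()ed module's init hook */
{
	/* an allocated handle is guaranteed non-NULL, so NULL can
	 * safely mean "not yet allocated" */
	if (plugin_states == NULL)
		RTE_LCORE_VAR_ALLOC(plugin_states);
}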

> Because, if not, then we could probably make this construction much more flexible:
> one buffer per LCORE, allocation on demand, etc.
> 

On-demand allocations are already supported, but one can't do free(). 
That's why I've called what this module provides "static allocation", 
while it may be more appropriately described as "dynamic allocation 
without deallocation".

"True" dynamic memory allocation of per-lcore memory would be very 
useful, but is an entirely different beast in terms of complexity and 
(if to be usable in the packet processing fast path) performance 
requirements.

"True" dynamic memory allocation would also result in something less 
compact (at least if you use the usual pattern with a per-object heap 
header).

>> +		RTE_VERIFY(lcore_buffer != NULL);
>> +
>> +		offset = 0;
>> +	}
>> +
>> +	handle = RTE_PTR_ADD(lcore_buffer, offset);
>> +
>> +	offset += size;
>> +
>> +	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
>> +		memset(value, 0, size);
>> +
>> +	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
>> +		"%"PRIuPTR"-byte alignment", size, align);
>> +
>> +	return handle;
>> +}
>> +
>> +void *
>> +rte_lcore_var_alloc(size_t size, size_t align)
>> +{
>> +	/* Having the per-lcore buffer size aligned on cache lines,
>> +	 * as well as having the base pointer aligned on cache line
>> +	 * size, assures that aligned offsets also translate to aligned
>> +	 * pointers across all values.
>> +	 */
>> +	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
>> +	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
>> +	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
>> +
>> +	/* '0' means asking for worst-case alignment requirements */
>> +	if (align == 0)
>> +		align = alignof(max_align_t);
>> +
>> +	RTE_ASSERT(rte_is_power_of_2(align));
>> +
>> +	return lcore_var_alloc(size, align);
>> +}

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [RFC v5 1/6] eal: add static per-lcore memory allocation facility
  2024-03-20 10:24                     ` Mattias Rönnblom
@ 2024-03-20 14:18                       ` Konstantin Ananyev
  0 siblings, 0 replies; 185+ messages in thread
From: Konstantin Ananyev @ 2024-03-20 14:18 UTC (permalink / raw)
  To: Mattias Rönnblom, Mattias Rönnblom, dev
  Cc: Morten Brørup, Stephen Hemminger



> >> Introduce DPDK per-lcore id variables, or lcore variables for short.
> >>
> >> An lcore variable has one value for every current and future lcore
> >> id-equipped thread.
> >>
> >> The primary <rte_lcore_var.h> use case is for statically allocating
> >> small chunks of often-used data, which is related logically, but where
> >> there are performance benefits to reap from having updates being local
> >> to an lcore.
> >>
> >> Lcore variables are similar to thread-local storage (TLS, e.g., C11
> >> _Thread_local), but decouple the values' lifetime from that of the
> >> threads.
> >>
> >> Lcore variables are also similar in terms of functionality to the
> >> FreeBSD kernel's DPCPU_*() family of macros and the associated
> >> build-time machinery. DPCPU uses linker scripts, which effectively
> >> prevents the reuse of its, otherwise seemingly viable, approach.
> >>
> >> The currently-prevailing way to solve the same problem as lcore
> >> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
> >> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> >> lcore variables over this approach is that data related to the same
> >> lcore now is close (spatially, in memory), rather than data used by
> >> the same module, which in turn avoids excessive use of padding,
> >> polluting caches with unused data.
> >
> > Thanks for the RFC, very interesting one.
> > Few comments/questions below.
> >
> >
> >> RFC v5:
> >>   * In Doxygen, consistently use @<cmd> (and not \<cmd>).
> >>   * The RTE_LCORE_VAR_GET() and SET() convenience access macros
> >>     covered an uncommon use case, where the lcore value is of a
> >>     primitive type, rather than a struct, and are thus eliminated
> >>     from the API. (Morten Brørup)
> >>   * In the wake of the GET()/SET() removal, rename RTE_LCORE_VAR_PTR()
> >>     to RTE_LCORE_VAR_VALUE().
> >>   * The underscores are removed from __rte_lcore_var_lcore_ptr() to
> >>     signal that this function is a part of the public API.
> >>   * Macro arguments are documented.
> >>
> >> RFC v4:
> >>   * Replace large static array with libc heap-allocated memory. One
> >>     implication of this change is there no longer exists a fixed upper
> >>     bound for the total amount of memory used by lcore variables.
> >>     RTE_MAX_LCORE_VAR has changed meaning, and now represents the
> >>     maximum size of any individual lcore variable value.
> >>   * Fix issues in example. (Morten Brørup)
> >>   * Improve access macro type checking. (Morten Brørup)
> >>   * Refer to the lcore variable handle as "handle" and not "name" in
> >>     various macros.
> >>   * Document lack of thread safety in rte_lcore_var_alloc().
> >>   * Provide API-level assurance the lcore variable handle is
> >>     always non-NULL, to allow applications to use NULL to mean
> >>     "not yet allocated".
> >>   * Note zero-sized allocations are not allowed.
> >>   * Give API-level guarantee the lcore variable values are zeroed.
> >>
> >> RFC v3:
> >>   * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
> >>   * Update example to reflect FOREACH macro name change (in RFC v2).
> >>
> >> RFC v2:
> >>   * Use alignof to derive alignment requirements. (Morten Brørup)
> >>   * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
> >>     *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
> >>   * Allow user-specified alignment, but limit max to cache line size.
> >>
> >> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> >> ---
> >>   config/rte_config.h                   |   1 +
> >>   doc/api/doxy-api-index.md             |   1 +
> >>   lib/eal/common/eal_common_lcore_var.c |  68 +++++
> >>   lib/eal/common/meson.build            |   1 +
> >>   lib/eal/include/meson.build           |   1 +
> >>   lib/eal/include/rte_lcore_var.h       | 368 ++++++++++++++++++++++++++
> >>   lib/eal/version.map                   |   4 +
> >>   7 files changed, 444 insertions(+)
> >>   create mode 100644 lib/eal/common/eal_common_lcore_var.c
> >>   create mode 100644 lib/eal/include/rte_lcore_var.h
> >>
> >> diff --git a/config/rte_config.h b/config/rte_config.h
> >> index d743a5c3d3..0dac33d3b9 100644
> >> --- a/config/rte_config.h
> >> +++ b/config/rte_config.h
> >> @@ -41,6 +41,7 @@
> >>   /* EAL defines */
> >>   #define RTE_CACHE_GUARD_LINES 1
> >>   #define RTE_MAX_HEAPS 32
> >> +#define RTE_MAX_LCORE_VAR 1048576
> >>   #define RTE_MAX_MEMSEG_LISTS 128
> >>   #define RTE_MAX_MEMSEG_PER_LIST 8192
> >>   #define RTE_MAX_MEM_MB_PER_LIST 32768
> >> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
> >> index 8c1eb8fafa..a3b8391570 100644
> >> --- a/doc/api/doxy-api-index.md
> >> +++ b/doc/api/doxy-api-index.md
> >> @@ -99,6 +99,7 @@ The public API headers are grouped by topics:
> >>     [interrupts](@ref rte_interrupts.h),
> >>     [launch](@ref rte_launch.h),
> >>     [lcore](@ref rte_lcore.h),
> >> +  [lcore-variable](@ref rte_lcore_var.h),
> >>     [per-lcore](@ref rte_per_lcore.h),
> >>     [service cores](@ref rte_service.h),
> >>     [keepalive](@ref rte_keepalive.h),
> >> diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
> >> new file mode 100644
> >> index 0000000000..5c353ebd46
> >> --- /dev/null
> >> +++ b/lib/eal/common/eal_common_lcore_var.c
> >> @@ -0,0 +1,68 @@
> >> +/* SPDX-License-Identifier: BSD-3-Clause
> >> + * Copyright(c) 2024 Ericsson AB
> >> + */
> >> +
> >> +#include <inttypes.h>
> >> +
> >> +#include <rte_common.h>
> >> +#include <rte_debug.h>
> >> +#include <rte_log.h>
> >> +
> >> +#include <rte_lcore_var.h>
> >> +
> >> +#include "eal_private.h"
> >> +
> >> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
> >> +
> >> +static void *lcore_buffer;
> >> +static size_t offset = RTE_MAX_LCORE_VAR;
> >> +
> >> +static void *
> >> +lcore_var_alloc(size_t size, size_t align)
> >> +{
> >> +	void *handle;
> >> +	void *value;
> >> +
> >> +	offset = RTE_ALIGN_CEIL(offset, align);
> >> +
> >> +	if (offset + size > RTE_MAX_LCORE_VAR) {
> >> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> >> +					     LCORE_BUFFER_SIZE);
> >
> > Hmm... do I get it right: if offset is <= RTE_MAX_LCORE_VAR, and offset + size > RTE_MAX_LCORE_VAR,
> > we simply overwrite lcore_buffer with a newly allocated buffer of the same size?
> 
> No, it's just the pointer that is overwritten. The old buffer will
> remain in memory.

Ah ok, I missed that you changed the handle-to-pointer conversion in the new version too.
Now the handle is not just an offset, but an actual pointer to the lcore 0 value, so all we have
to do is add the lcore_id offset.
Makes sense, thanks for clarifying.
LGTM then.
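
For readers following along: the conversion in question is the
rte_lcore_var_lcore_ptr() helper in the patch, which computes the
per-lcore address as handle + lcore_id * RTE_MAX_LCORE_VAR. A
simplified restatement (the real helper uses RTE_PTR_ADD):

static inline void *
lcore_value(void *handle, unsigned int lcore_id)
{
	/* the handle doubles as the address of the lcore 0 value */
	return (char *)handle + (size_t)lcore_id * RTE_MAX_LCORE_VAR;
}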
 

> 
> > I understand that you expect this to just never happen (the total size of all lcore vars never exceeding 1MB), but still,
> > I think we need to handle it in some better way than just ignoring the possibility...
> > Maybe RTE_VERIFY() at least?
> >
> 
> In this revision of the patch set, RTE_MAX_LCORE_VAR does not represent
> an upper bound for the sum of all lcore variables' size, but rather only
> the maximum size of a single lcore variable.
> 
> Variable alignment and size constraints are RTE_ASSERT()ed at the point
> of allocation. One could argue they should be RTE_VERIFY()-ed instead,
> since there aren't any performance constraints.
> 
> > As a more generic question: do we need to support LCORE_VAR for dlopen()s that could happen after rte_eal_init()
> > is called and the LCORE threads were created?
> 
> Yes, allocations after rte_eal_init() (caused by dlopen() or otherwise)
> must be allowed imo, and are allowed. Otherwise applications sitting on
> top of DPDK can't use this facility.
> 
> > Because, if not, then we could probably make this construction much more flexible:
> > one buffer per LCORE, allocation on demand, etc.
> >
> 
> On-demand allocations are already supported, but one can't do free().
> That's why I've called what this module provides "static allocation",
> while it may be more appropriately described as "dynamic allocation
> without deallocation".
> 
> "True" dynamic memory allocation of per-lcore memory would be very
> useful, but is an entirely different beast in terms of complexity and
> (if to be usable in the packet processing fast path) performance
> requirements.
> 
> "True" dynamic memory allocation would also result in something less
> compact (at least if you use the usual pattern with a per-object heap
> header).
> 
> >> +		RTE_VERIFY(lcore_buffer != NULL);
> >> +
> >> +		offset = 0;
> >> +	}
> >> +
> >> +	handle = RTE_PTR_ADD(lcore_buffer, offset);
> >> +
> >> +	offset += size;
> >> +
> >> +	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
> >> +		memset(value, 0, size);
> >> +
> >> +	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
> >> +		"%"PRIuPTR"-byte alignment", size, align);
> >> +
> >> +	return handle;
> >> +}
> >> +
> >> +void *
> >> +rte_lcore_var_alloc(size_t size, size_t align)
> >> +{
> >> +	/* Having the per-lcore buffer size aligned on cache lines,
> >> +	 * as well as having the base pointer aligned on cache line
> >> +	 * size, assures that aligned offsets also translate to aligned
> >> +	 * pointers across all values.
> >> +	 */
> >> +	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
> >> +	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
> >> +	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
> >> +
> >> +	/* '0' means asking for worst-case alignment requirements */
> >> +	if (align == 0)
> >> +		align = alignof(max_align_t);
> >> +
> >> +	RTE_ASSERT(rte_is_power_of_2(align));
> >> +
> >> +	return lcore_var_alloc(size, align);
> >> +}

^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v6 0/6] Lcore variables
  2024-02-28 10:09                 ` [RFC v5 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-03-19 12:52                   ` Konstantin Ananyev
@ 2024-05-06  8:27                   ` Mattias Rönnblom
  2024-05-06  8:27                     ` [RFC v6 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
                                       ` (6 more replies)
  1 sibling, 7 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-05-06  8:27 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, Mattias Rönnblom

This RFC presents a new API <rte_lcore_var.h> for static per-lcore id
data allocation.

Please refer to the <rte_lcore_var.h> API documentation for both a
rationale for this new API, and a comparison to the alternatives
available.

The adoption of this API would affect many different DPDK modules, but
the author updated only a few, mostly to serve as examples in this
RFC, and to iron out some, but surely not all, wrinkles in the API.

The question on how to best allocate static per-lcore memory has been
up several times on the dev mailing list, for example in the thread on
"random: use per lcore state" RFC by Stephen Hemminger.

Lcore variables are surely not the answer to all your per-lcore-data
needs, since they only allow for more-or-less static allocation. In the
author's opinion, they do however provide a reasonably simple,
clean, and seemingly very performant solution to a real problem.

One thing that is unclear to the author is how this API relates to a
potential future per-lcore dynamic allocator (e.g., a per-lcore heap).

Mattias Rönnblom (6):
  eal: add static per-lcore memory allocation facility
  eal: add lcore variable test suite
  random: keep PRNG state in lcore variable
  power: keep per-lcore state in lcore variable
  service: keep per-lcore state in lcore variable
  eal: keep per-lcore power intrinsics state in lcore variable

 app/test/meson.build                  |   1 +
 app/test/test_lcore_var.c             | 432 ++++++++++++++++++++++++++
 config/rte_config.h                   |   1 +
 doc/api/doxy-api-index.md             |   1 +
 lib/eal/common/eal_common_lcore_var.c |  69 ++++
 lib/eal/common/meson.build            |   1 +
 lib/eal/common/rte_random.c           |  28 +-
 lib/eal/common/rte_service.c          | 115 +++----
 lib/eal/include/meson.build           |   1 +
 lib/eal/include/rte_lcore_var.h       | 384 +++++++++++++++++++++++
 lib/eal/version.map                   |   3 +
 lib/eal/x86/rte_power_intrinsics.c    |  17 +-
 lib/power/rte_power_pmd_mgmt.c        |  34 +-
 13 files changed, 1000 insertions(+), 87 deletions(-)
 create mode 100644 app/test/test_lcore_var.c
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v6 1/6] eal: add static per-lcore memory allocation facility
  2024-05-06  8:27                   ` [RFC v6 0/6] Lcore variables Mattias Rönnblom
@ 2024-05-06  8:27                     ` Mattias Rönnblom
  2024-09-10  7:03                       ` [PATCH 0/6] Lcore variables Mattias Rönnblom
  2024-05-06  8:27                     ` [RFC v6 2/6] eal: add lcore variable test suite Mattias Rönnblom
                                       ` (5 subsequent siblings)
  6 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-05-06  8:27 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, Mattias Rönnblom

Introduce DPDK per-lcore id variables, or lcore variables for short.

An lcore variable has one value for every current and future lcore
id-equipped thread.

The primary <rte_lcore_var.h> use case is for statically allocating
small chunks of often-used data, which is related logically, but where
there are performance benefits to reap from having updates being local
to an lcore.

Lcore variables are similar to thread-local storage (TLS, e.g., C11
_Thread_local), but decouple the values' lifetime from that of the
threads.

Lcore variables are also similar in terms of functionality to the
FreeBSD kernel's DPCPU_*() family of macros and the associated
build-time machinery. DPCPU uses linker scripts, which effectively
prevents the reuse of its, otherwise seemingly viable, approach.

The currently-prevailing way to solve the same problem as lcore
variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
lcore variables over this approach is that data related to the same
lcore now is close (spatially, in memory), rather than data used by
the same module, which in turn avoids excessive use of padding,
polluting caches with unused data.

RFC v6:
 * Include <stdlib.h> to get aligned_alloc().
 * Tweak documentation (grammar).
 * Provide API-level guarantees that lcore variable values take on an
   initial value of zero.
 * Fix misplaced __rte_cache_aligned in the API doc example.

RFC v5:
 * In Doxygen, consistently use @<cmd> (and not \<cmd>).
 * The RTE_LCORE_VAR_GET() and SET() convenience access macros
   covered an uncommon use case, where the lcore value is of a
   primitive type, rather than a struct, and are thus eliminated
   from the API. (Morten Brørup)
 * In the wake of the GET()/SET() removal, rename RTE_LCORE_VAR_PTR()
   to RTE_LCORE_VAR_VALUE().
 * The underscores are removed from __rte_lcore_var_lcore_ptr() to
   signal that this function is a part of the public API.
 * Macro arguments are documented.

RFC v4:
 * Replace large static array with libc heap-allocated memory. One
   implication of this change is there no longer exists a fixed upper
   bound for the total amount of memory used by lcore variables.
   RTE_MAX_LCORE_VAR has changed meaning, and now represents the
   maximum size of any individual lcore variable value.
 * Fix issues in example. (Morten Brørup)
 * Improve access macro type checking. (Morten Brørup)
 * Refer to the lcore variable handle as "handle" and not "name" in
   various macros.
 * Document lack of thread safety in rte_lcore_var_alloc().
 * Provide API-level assurance the lcore variable handle is
   always non-NULL, to allow applications to use NULL to mean
   "not yet allocated".
 * Note zero-sized allocations are not allowed.
 * Give API-level guarantee the lcore variable values are zeroed.

RFC v3:
 * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
 * Update example to reflect FOREACH macro name change (in RFC v2).

RFC v2:
 * Use alignof to derive alignment requirements. (Morten Brørup)
 * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
   *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
 * Allow user-specified alignment, but limit max to cache line size.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 config/rte_config.h                   |   1 +
 doc/api/doxy-api-index.md             |   1 +
 lib/eal/common/eal_common_lcore_var.c |  69 +++++
 lib/eal/common/meson.build            |   1 +
 lib/eal/include/meson.build           |   1 +
 lib/eal/include/rte_lcore_var.h       | 384 ++++++++++++++++++++++++++
 lib/eal/version.map                   |   3 +
 7 files changed, 460 insertions(+)
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

diff --git a/config/rte_config.h b/config/rte_config.h
index dd7bb0d35b..311692e498 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -41,6 +41,7 @@
 /* EAL defines */
 #define RTE_CACHE_GUARD_LINES 1
 #define RTE_MAX_HEAPS 32
+#define RTE_MAX_LCORE_VAR 1048576
 #define RTE_MAX_MEMSEG_LISTS 128
 #define RTE_MAX_MEMSEG_PER_LIST 8192
 #define RTE_MAX_MEM_MB_PER_LIST 32768
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 8c1eb8fafa..a3b8391570 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -99,6 +99,7 @@ The public API headers are grouped by topics:
   [interrupts](@ref rte_interrupts.h),
   [launch](@ref rte_launch.h),
   [lcore](@ref rte_lcore.h),
+  [lcore-variable](@ref rte_lcore_var.h),
   [per-lcore](@ref rte_per_lcore.h),
   [service cores](@ref rte_service.h),
   [keepalive](@ref rte_keepalive.h),
diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
new file mode 100644
index 0000000000..74ad8272ec
--- /dev/null
+++ b/lib/eal/common/eal_common_lcore_var.c
@@ -0,0 +1,69 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdlib.h>
+
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_log.h>
+
+#include <rte_lcore_var.h>
+
+#include "eal_private.h"
+
+#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
+
+static void *lcore_buffer;
+static size_t offset = RTE_MAX_LCORE_VAR;
+
+static void *
+lcore_var_alloc(size_t size, size_t align)
+{
+	void *handle;
+	void *value;
+
+	offset = RTE_ALIGN_CEIL(offset, align);
+
+	if (offset + size > RTE_MAX_LCORE_VAR) {
+		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
+					     LCORE_BUFFER_SIZE);
+		RTE_VERIFY(lcore_buffer != NULL);
+
+		offset = 0;
+	}
+
+	handle = RTE_PTR_ADD(lcore_buffer, offset);
+
+	offset += size;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
+		memset(value, 0, size);
+
+	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
+		"%"PRIuPTR"-byte alignment", size, align);
+
+	return handle;
+}
+
+void *
+rte_lcore_var_alloc(size_t size, size_t align)
+{
+	/* Having the per-lcore buffer size aligned on cache lines,
+	 * as well as having the base pointer aligned on cache line
+	 * size, assures that aligned offsets also translate to aligned
+	 * pointers across all values.
+	 */
+	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
+	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
+	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
+
+	/* '0' means asking for worst-case alignment requirements */
+	if (align == 0)
+		align = alignof(max_align_t);
+
+	RTE_ASSERT(rte_is_power_of_2(align));
+
+	return lcore_var_alloc(size, align);
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 22a626ba6f..d41403680b 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -18,6 +18,7 @@ sources += files(
         'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
+        'eal_common_lcore_var.c',
         'eal_common_mcfg.c',
         'eal_common_memalloc.c',
         'eal_common_memory.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index e94b056d46..9449253e23 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -27,6 +27,7 @@ headers += files(
         'rte_keepalive.h',
         'rte_launch.h',
         'rte_lcore.h',
+        'rte_lcore_var.h',
         'rte_lock_annotations.h',
         'rte_malloc.h',
         'rte_mcslock.h',
diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
new file mode 100644
index 0000000000..cfbcac41dd
--- /dev/null
+++ b/lib/eal/include/rte_lcore_var.h
@@ -0,0 +1,384 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#ifndef _RTE_LCORE_VAR_H_
+#define _RTE_LCORE_VAR_H_
+
+/**
+ * @file
+ *
+ * RTE Per-lcore id variables
+ *
+ * This API provides a mechanism to create and access per-lcore id
+ * variables in a space- and cycle-efficient manner.
+ *
+ * A per-lcore id variable (or lcore variable for short) has one value
+ * for each EAL thread and registered non-EAL thread. There is one
+ * copy for each current and future lcore id-equipped thread, with the
+ * total number of copies amounting to @c RTE_MAX_LCORE. The value of
+ * an lcore variable for a particular lcore id is independent from
+ * other values (for other lcore ids) within the same lcore variable.
+ *
+ * In order to access the values of an lcore variable, a handle is
+ * used. The type of the handle is a pointer to the value's type
+ * (e.g., for a @c uint32_t lcore variable, the handle is a
+ * <code>uint32_t *</code>). The handle type is used to inform the
+ * access macros of the type of the values. A handle may be passed
+ * between modules and threads just like any pointer, but its value
+ * must be treated as an opaque identifier. An allocated handle
+ * never has the value NULL.
+ *
+ * @b Creation
+ *
+ * An lcore variable is created in two steps:
+ *  1. Define an lcore variable handle by using @ref RTE_LCORE_VAR_HANDLE.
+ *  2. Allocate lcore variable storage and initialize the handle with
+ *     a unique identifier by @ref RTE_LCORE_VAR_ALLOC or
+ *     @ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the time of
+ *     module initialization, but may be done at any time.
+ *
+ * An lcore variable is not tied to the owning thread's lifetime. It's
+ * available for use by any thread immediately after having been
+ * allocated, and continues to be available throughout the lifetime of
+ * the EAL.
+ *
+ * Lcore variables cannot and need not be freed.
+ *
+ * @b Access
+ *
+ * The value of any lcore variable for any lcore id may be accessed
+ * from any thread (including unregistered threads), but it should
+ * only be *frequently* read from or written to by the owner.
+ *
+ * Values of the same lcore variable but owned by two different lcore
+ * ids may be frequently read or written by the owners without risking
+ * false sharing.
+ *
+ * An appropriate synchronization mechanism (e.g., atomic loads and
+ * stores) should be employed to assure there are no data races between
+ * the owning thread and any non-owner threads accessing the same
+ * lcore variable instance.
+ *
+ * The value of the lcore variable for a particular lcore id is
+ * accessed using @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * A common pattern is for an EAL thread or a registered non-EAL
+ * thread to access its own lcore variable value. For this purpose, a
+ * short-hand exists in the form of @ref RTE_LCORE_VAR_VALUE.
+ *
+ * Although the handle (as defined by @ref RTE_LCORE_VAR_HANDLE) is a
+ * pointer with the same type as the value, it may not be directly
+ * dereferenced and must be treated as an opaque identifier.
+ *
+ * Lcore variable handles and value pointers may be freely passed
+ * between different threads.
+ *
+ * @b Storage
+ *
+ * An lcore variable's values may be of a primitive type like @c int,
+ * but would more typically be a @c struct.
+ *
+ * The lcore variable handle introduces a per-variable (not
+ * per-value/per-lcore id) overhead of @c sizeof(void *) bytes, so
+ * there are some memory footprint gains to be made by organizing all
+ * per-lcore id data for a particular module as one lcore variable
+ * (e.g., as a struct).
+ *
+ * An application may choose to define an lcore variable handle, which
+ * it then never allocates.
+ *
+ * The size of an lcore variable's value must be less than the DPDK
+ * build-time constant @c RTE_MAX_LCORE_VAR.
+ *
+ * The lcore variable values are stored in a series of lcore buffers, which
+ * are allocated from the libc heap. Heap allocation failures are
+ * treated as fatal.
+ *
+ * Lcore variables should generally *not* be @ref __rte_cache_aligned
+ * and need *not* include a @ref RTE_CACHE_GUARD field, since the use
+ * of these constructs is designed to avoid false sharing. In the
+ * case of an lcore variable instance, the thread most recently
+ * accessing nearby data structures should almost always be the lcore
+ * variable's owner. Adding padding will increase the effective memory
+ * working set size, potentially reducing performance.
+ *
+ * Lcore variable values take on an initial value of zero.
+ *
+ * @b Example
+ *
+ * Below is an example of the use of an lcore variable:
+ *
+ * @code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ * };
+ *
+ * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
+ *
+ * long foo_get_a_plus_b(void)
+ * {
+ *         struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(lcore_states);
+ *
+ *         return state->a + state->b;
+ * }
+ *
+ * RTE_INIT(rte_foo_init)
+ * {
+ *         RTE_LCORE_VAR_ALLOC(lcore_states);
+ *
+ *         struct foo_lcore_state *state;
+ *         RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
+ *                 (initialize 'state')
+ *         }
+ *
+ *         (other initialization)
+ * }
+ * @endcode
+ *
+ *
+ * @b Alternatives
+ *
+ * Lcore variables are designed to replace a pattern exemplified below:
+ * @code{.c}
+ * struct __rte_cache_aligned foo_lcore_state {
+ *         int a;
+ *         long b;
+ *         RTE_CACHE_GUARD;
+ * };
+ *
+ * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
+ * @endcode
+ *
+ * This scheme is simple and effective, but has one drawback: the data
+ * is organized so that objects related to all lcores for a particular
+ * module are kept close in memory. At a bare minimum, this forces the
+ * use of cache-line alignment to avoid false sharing. With CPU
+ * hardware prefetching and memory loads resulting from speculative
+ * execution (features which seemingly are getting more eager faster
+ * than they are getting more intelligent), one or more "guard" cache
+ * lines may be required to separate one lcore's data from another's.
+ *
+ * Lcore variables have the upside of working with, not against, the
+ * CPU's assumptions, and, for example, next-line prefetchers may well
+ * work the way their designers intended (i.e., to the benefit, not
+ * the detriment, of system performance).
+ *
+ * Another alternative to @ref rte_lcore_var.h is the @ref
+ * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
+ * e.g., GCC __thread or C11 _Thread_local). The main differences
+ * between using the various forms of TLS (e.g., @ref
+ * RTE_DEFINE_PER_LCORE or _Thread_local) and using lcore
+ * variables are:
+ *
+ *   * The existence and non-existence of a thread-local variable
+ *     instance follow that of the particular thread. The data cannot be
+ *     accessed before the thread has been created, nor after it has
+ *     exited. As a result, thread-local variables must be initialized
+ *     in a "lazy" manner (e.g., at the point of thread creation). Lcore
+ *     variables may be accessed immediately after having been
+ *     allocated (which may be prior to any thread beyond the main
+ *     thread running).
+ *   * A thread-local variable is duplicated across all threads in the
+ *     process, including unregistered non-EAL threads (i.e.,
+ *     "regular" threads). For DPDK applications heavily relying on
+ *     multi-threading (in conjunction with DPDK's "one thread per core"
+ *     pattern), either by having many concurrent threads or
+ *     creating/destroying threads at a high rate, an excessive use of
+ *     thread-local variables may cause inefficiencies (e.g.,
+ *     increased thread creation overhead due to thread-local storage
+ *     initialization or increased total RAM footprint usage). Lcore
+ *     variables *only* exist for threads with an lcore id.
+ *   * Whether data in thread-local storage may be shared between threads
+ *     (i.e., whether a pointer to a thread-local variable can be passed
+ *     to and successfully dereferenced by a non-owning thread) depends
+ *     on the details of the TLS implementation. With GCC __thread and
+ *     GCC _Thread_local, such data sharing is supported. In the C11
+ *     standard, the result of accessing another thread's
+ *     _Thread_local object is implementation-defined. Lcore variable
+ *     instances may be accessed reliably by any thread.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stddef.h>
+#include <stdalign.h>
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_lcore.h>
+
+/**
+ * Given the lcore variable type, produces the type of the lcore
+ * variable handle.
+ */
+#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
+	type *
+
+/**
+ * Define a lcore variable handle.
+ *
+ * This macro defines a variable which is used as a handle to access
+ * the various per-lcore id instances of a per-lcore id variable.
+ *
+ * The aim with this macro is to make clear at the point of
+ * declaration that this is an lcore variable handle, rather than a regular
+ * pointer.
+ *
+ * Add @b static as a prefix in case the lcore variable is only to be
+ * accessed from a particular translation unit.
+ */
+#define RTE_LCORE_VAR_HANDLE(type, name)	\
+	RTE_LCORE_VAR_HANDLE_TYPE(type) name
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, align)	\
+	handle = rte_lcore_var_alloc(size, align)
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle,
+ * with values aligned for any type of object.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE(handle, size)	\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, 0)
+
+/**
+ * Allocate space for an lcore variable of the size and alignment requirements
+ * suggested by the handle pointer type, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC(handle)					\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, sizeof(*(handle)),	\
+				       alignof(typeof(*(handle))))
+
+/**
+ * Allocate an explicitly-sized, explicitly-aligned lcore variable by
+ * means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
+	}
+
+/**
+ * Allocate an explicitly-sized lcore variable by means of a @ref
+ * RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
+	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
+
+/**
+ * Allocate an lcore variable by means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT(name)					\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC(name);				\
+	}
+
+/**
+ * Get void pointer to lcore variable instance with the specified
+ * lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
+ */
+static inline void *
+rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
+{
+	return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
+}
+
+/**
+ * Get pointer to lcore variable instance with the specified lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle)			\
+	((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
+
+/**
+ * Get pointer to lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_VALUE(handle) \
+	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
+
+/**
+ * Iterate over each lcore id's value for a lcore variable.
+ *
+ * @param value
+ *   A pointer successively set to point to the lcore variable value
+ *   corresponding to every lcore id (up to @c RTE_MAX_LCORE).
+ * @param handle
+ *   The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle)			\
+	for (unsigned int lcore_id =					\
+		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
+	     lcore_id < RTE_MAX_LCORE;					\
+	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))
+
+/**
+ * Allocate space in the per-lcore id buffers for a lcore variable.
+ *
+ * The pointer returned is only an opaque identifier of the variable. To
+ * get an actual pointer to a particular instance of the variable use
+ * @ref RTE_LCORE_VAR_VALUE or @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * The lcore variable values' memory is set to zero.
+ *
+ * The allocation is always successful, barring a fatal exhaustion of
+ * the per-lcore id buffer space.
+ *
+ * rte_lcore_var_alloc() is not multi-thread safe.
+ *
+ * @param size
+ *   The size (in bytes) of the variable's per-lcore id value. Must be > 0.
+ * @param align
+ *   If 0, the values will be suitably aligned for any kind of type
+ *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
+ *   on a multiple of *align*, which must be a power of 2 and equal
+ *   to or less than @c RTE_CACHE_LINE_SIZE.
+ * @return
+ *   The id of the variable, stored in a void pointer value. The value
+ *   is always non-NULL.
+ */
+__rte_experimental
+void *
+rte_lcore_var_alloc(size_t size, size_t align);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LCORE_VAR_H_ */
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 3df50c3fbb..7702642785 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -396,6 +396,9 @@ EXPERIMENTAL {
 
 	# added in 24.03
 	rte_vfio_get_device_info; # WINDOWS_NO_EXPORT
+
+	rte_lcore_var_alloc;
+	rte_lcore_var;
 };
 
 INTERNAL {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread
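
As an aside, the alignment argument in rte_lcore_var_alloc()'s comment
can be spelled out: if the buffer base is cache line-aligned, the
per-lcore stride (RTE_MAX_LCORE_VAR) is a multiple of the cache line
size, and the requested alignment divides the cache line size, then an
aligned offset yields an aligned pointer for every lcore id. A small
self-contained check (illustrative only; CL and STRIDE stand in for
RTE_CACHE_LINE_SIZE and RTE_MAX_LCORE_VAR):

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define CL 64			/* stands in for RTE_CACHE_LINE_SIZE */
#define STRIDE (16384 * CL)	/* 1048576; a multiple of CL */

static void
check_alignment_invariant(uintptr_t base, size_t offset, size_t align)
{
	/* preconditions: CL-aligned base, align divides CL, and an
	 * align-aligned offset (as produced by RTE_ALIGN_CEIL) */
	assert(base % CL == 0 && CL % align == 0 && offset % align == 0);

	/* then every lcore's copy of the value is align-aligned */
	for (unsigned int lcore_id = 0; lcore_id < 128; lcore_id++)
		assert((base + (uintptr_t)lcore_id * STRIDE + offset)
		       % align == 0);
}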

* [RFC v6 2/6] eal: add lcore variable test suite
  2024-05-06  8:27                   ` [RFC v6 0/6] Lcore variables Mattias Rönnblom
  2024-05-06  8:27                     ` [RFC v6 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-05-06  8:27                     ` Mattias Rönnblom
  2024-05-06  8:27                     ` [RFC v6 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
                                       ` (4 subsequent siblings)
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-05-06  8:27 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, Mattias Rönnblom

Add a test suite to exercise the <rte_lcore_var.h> API.

RFC v5:
 * Adapt tests to reflect the removal of the GET() and SET() macros.

RFC v4:
 * Check all lcore id's values for all variables in the many variables
   test case.
 * Introduce test case for max-sized lcore variables.

RFC v2:
 * Improve alignment-related test coverage.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 app/test/meson.build      |   1 +
 app/test/test_lcore_var.c | 432 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 433 insertions(+)
 create mode 100644 app/test/test_lcore_var.c

diff --git a/app/test/meson.build b/app/test/meson.build
index 7d909039ae..846affa98c 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -103,6 +103,7 @@ source_file_deps = {
     'test_ipsec_sad.c': ['ipsec'],
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
+    'test_lcore_var.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var.c b/app/test/test_lcore_var.c
new file mode 100644
index 0000000000..e07d13460f
--- /dev/null
+++ b/app/test/test_lcore_var.c
@@ -0,0 +1,432 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_launch.h>
+#include <rte_lcore_var.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+#define MIN_LCORES 2
+
+RTE_LCORE_VAR_HANDLE(int, test_int);
+RTE_LCORE_VAR_HANDLE(char, test_char);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized);
+RTE_LCORE_VAR_HANDLE(short, test_short);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized_aligned);
+
+struct int_checker_state {
+	int old_value;
+	int new_value;
+	bool success;
+};
+
+static void
+rand_blk(void *blk, size_t size)
+{
+	size_t i;
+
+	for (i = 0; i < size; i++)
+		((unsigned char *)blk)[i] = (unsigned char)rte_rand();
+}
+
+static bool
+is_ptr_aligned(const void *ptr, size_t align)
+{
+	return ptr != NULL ? (uintptr_t)ptr % align == 0 : false;
+}
+
+static int
+check_int(void *arg)
+{
+	struct int_checker_state *state = arg;
+
+	int *ptr = RTE_LCORE_VAR_VALUE(test_int);
+
+	bool naturally_aligned = is_ptr_aligned(ptr, sizeof(int));
+
+	bool equal = *(RTE_LCORE_VAR_VALUE(test_int)) == state->old_value;
+
+	state->success = equal && naturally_aligned;
+
+	*ptr = state->new_value;
+
+	return 0;
+}
+
+RTE_LCORE_VAR_INIT(test_int);
+RTE_LCORE_VAR_INIT(test_char);
+RTE_LCORE_VAR_INIT_SIZE(test_long_sized, 32);
+RTE_LCORE_VAR_INIT(test_short);
+RTE_LCORE_VAR_INIT_SIZE_ALIGN(test_long_sized_aligned, sizeof(long),
+			      RTE_CACHE_LINE_SIZE);
+
+static int
+test_int_lvar(void)
+{
+	unsigned int lcore_id;
+
+	struct int_checker_state states[RTE_MAX_LCORE] = {};
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+
+		state->old_value = (int)rte_rand();
+		state->new_value = (int)rte_rand();
+
+		*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int) =
+			state->old_value;
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_int, &states[lcore_id], lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+		int value;
+
+		TEST_ASSERT(state->success, "Unexpected value "
+			    "encountered on lcore %d", lcore_id);
+
+		value = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int);
+		TEST_ASSERT_EQUAL(state->new_value, value,
+				  "Lcore %d failed to update int", lcore_id);
+	}
+
+	/* take the opportunity to test the foreach macro */
+	int *v;
+	lcore_id = 0;
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_int) {
+		TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
+				  "Unexpected value on lcore %d during "
+				  "iteration", lcore_id);
+		lcore_id++;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_sized_alignment(void)
+{
+	long *v;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized) {
+		TEST_ASSERT(is_ptr_aligned(v, alignof(long)),
+			    "Type-derived alignment failed");
+	}
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized_aligned) {
+		TEST_ASSERT(is_ptr_aligned(v, RTE_CACHE_LINE_SIZE),
+			    "Explicit alignment failed");
+	}
+
+	return TEST_SUCCESS;
+}
+
+/* private, larger, struct */
+#define TEST_STRUCT_DATA_SIZE 1234
+
+struct test_struct {
+	uint8_t data[TEST_STRUCT_DATA_SIZE];
+};
+
+static RTE_LCORE_VAR_HANDLE(char, before_struct);
+static RTE_LCORE_VAR_HANDLE(struct test_struct, test_struct);
+static RTE_LCORE_VAR_HANDLE(char, after_struct);
+
+struct struct_checker_state {
+	struct test_struct old_value;
+	struct test_struct new_value;
+	bool success;
+};
+
+static int check_struct(void *arg)
+{
+	struct struct_checker_state *state = arg;
+
+	struct test_struct *lcore_struct = RTE_LCORE_VAR_VALUE(test_struct);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_struct, alignof(struct test_struct));
+
+	bool equal = memcmp(lcore_struct->data, state->old_value.data,
+			    TEST_STRUCT_DATA_SIZE) == 0;
+
+	state->success = equal && properly_aligned;
+
+	memcpy(lcore_struct->data, state->new_value.data,
+	       TEST_STRUCT_DATA_SIZE);
+
+	return 0;
+}
+
+static int
+test_struct_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_struct);
+	RTE_LCORE_VAR_ALLOC(test_struct);
+	RTE_LCORE_VAR_ALLOC(after_struct);
+
+	struct struct_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+
+		rand_blk(state->old_value.data, TEST_STRUCT_DATA_SIZE);
+		rand_blk(state->new_value.data, TEST_STRUCT_DATA_SIZE);
+
+		memcpy(RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct)->data,
+		       state->old_value.data, TEST_STRUCT_DATA_SIZE);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_struct, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+		struct test_struct *lstruct =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = memcmp(lstruct->data, state->new_value.data,
+				    TEST_STRUCT_DATA_SIZE) == 0;
+
+		TEST_ASSERT(equal, "Lcore %d failed to update struct",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_struct);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_struct);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "struct was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "struct was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define TEST_ARRAY_SIZE 99
+
+typedef uint16_t test_array_t[TEST_ARRAY_SIZE];
+
+static void test_array_init_rand(test_array_t a)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		a[i] = (uint16_t)rte_rand();
+}
+
+static bool test_array_equal(test_array_t a, test_array_t b)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++) {
+		if (a[i] != b[i])
+			return false;
+	}
+	return true;
+}
+
+static void test_array_copy(test_array_t dst, const test_array_t src)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		dst[i] = src[i];
+}
+
+static RTE_LCORE_VAR_HANDLE(char, before_array);
+static RTE_LCORE_VAR_HANDLE(test_array_t, test_array);
+static RTE_LCORE_VAR_HANDLE(char, after_array);
+
+struct array_checker_state {
+	test_array_t old_value;
+	test_array_t new_value;
+	bool success;
+};
+
+static int check_array(void *arg)
+{
+	struct array_checker_state *state = arg;
+
+	test_array_t *lcore_array = RTE_LCORE_VAR_VALUE(test_array);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_array, alignof(test_array_t));
+
+	bool equal = test_array_equal(*lcore_array, state->old_value);
+
+	state->success = equal && properly_aligned;
+
+	test_array_copy(*lcore_array, state->new_value);
+
+	return 0;
+}
+
+static int
+test_array_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_array);
+	RTE_LCORE_VAR_ALLOC(test_array);
+	RTE_LCORE_VAR_ALLOC(after_array);
+
+	struct array_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+
+		test_array_init_rand(state->new_value);
+		test_array_init_rand(state->old_value);
+
+		test_array_copy(*RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+							   test_array),
+				state->old_value);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_array, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+		test_array_t *larray =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_array);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = test_array_equal(*larray, state->new_value);
+
+		TEST_ASSERT(equal, "Lcore %d failed to update array",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_array);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_array);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "array was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "array was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
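+/* enough lcore variables to spill into at least a second lcore buffer */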
+#define MANY_LVARS (2 * RTE_MAX_LCORE_VAR / sizeof(uint32_t))
+
+static int
+test_many_lvars(void)
+{
+	uint32_t **handlers = malloc(sizeof(uint32_t *) * MANY_LVARS);
+	unsigned int i;
+
+	TEST_ASSERT(handlers != NULL, "Unable to allocate memory");
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		RTE_LCORE_VAR_ALLOC(handlers[i]);
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t *v =
+				RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handlers[i]);
+			*v = (uint32_t)(i * lcore_id);
+		}
+	}
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t v = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+								handlers[i]);
+			TEST_ASSERT_EQUAL((uint32_t)(i * lcore_id), v,
+					  "Unexpected lcore variable value on "
+					  "lcore %d", lcore_id);
+		}
+	}
+
+	free(handlers);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_large_lvar(void)
+{
+	RTE_LCORE_VAR_HANDLE(unsigned char, large);
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC_SIZE(large, RTE_MAX_LCORE_VAR);
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+
+		memset(ptr, (unsigned char)lcore_id, RTE_MAX_LCORE_VAR);
+	}
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+		size_t i;
+
+		for (i = 0; i < RTE_MAX_LCORE_VAR; i++)
+			TEST_ASSERT_EQUAL(ptr[i], (unsigned char)lcore_id,
+					  "Large lcore variable value is "
+					  "corrupted on lcore %d.",
+					  lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_int_lvar),
+		TEST_CASE(test_sized_alignment),
+		TEST_CASE(test_struct_lvar),
+		TEST_CASE(test_array_lvar),
+		TEST_CASE(test_many_lvars),
+		TEST_CASE(test_large_lvar),
+		TEST_CASES_END()
+	},
+};
+
+static int test_lcore_var(void)
+{
+	if (rte_lcore_count() < MIN_LCORES) {
+		printf("Not enough cores for lcore_var_autotest; expecting at "
+		       "least %d.\n", MIN_LCORES);
+		return TEST_SKIPPED;
+	}
+
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
+REGISTER_FAST_TEST(lcore_var_autotest, true, false, test_lcore_var);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v6 3/6] random: keep PRNG state in lcore variable
  2024-05-06  8:27                   ` [RFC v6 0/6] Lcore variables Mattias Rönnblom
  2024-05-06  8:27                     ` [RFC v6 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-05-06  8:27                     ` [RFC v6 2/6] eal: add lcore variable test suite Mattias Rönnblom
@ 2024-05-06  8:27                     ` Mattias Rönnblom
  2024-05-06  8:27                     ` [RFC v6 4/6] power: keep per-lcore " Mattias Rönnblom
                                       ` (3 subsequent siblings)
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-05-06  8:27 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, Mattias Rönnblom

Replace keeping PRNG state in a RTE_MAX_LCORE-sized static array of
cache-aligned and RTE_CACHE_GUARDed struct instances with keeping the
same state in a more cache-friendly lcore variable.
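
For readers starting with this patch: the handle-to-value lookup that
replaces the array indexing boils down to pointer arithmetic. A
sketch of the helper introduced in the series' first patch:

    static inline void *
    rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
    {
            /* Each lcore id owns one RTE_MAX_LCORE_VAR-sized slice
             * of an lcore buffer, so all of a given lcore's values
             * end up packed together, regardless of which module
             * allocated them.
             */
            return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
    }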

RFC v3:
 * Remove cache alignment on unregistered threads' rte_rand_state.
   (Morten Brørup)

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/common/rte_random.c | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index 90e91b3c4f..a8d00308dd 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -11,6 +11,7 @@
 #include <rte_branch_prediction.h>
 #include <rte_cycles.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_random.h>
 
 struct __rte_cache_aligned rte_rand_state {
@@ -19,14 +20,12 @@ struct __rte_cache_aligned rte_rand_state {
 	uint64_t z3;
 	uint64_t z4;
 	uint64_t z5;
-	RTE_CACHE_GUARD;
 };
 
-/* One instance each for every lcore id-equipped thread, and one
- * additional instance to be shared by all others threads (i.e., all
- * unregistered non-EAL threads).
- */
-static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];
+RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);
+
+/* instance to be shared by all unregistered non-EAL threads */
+static struct rte_rand_state unregistered_rand_state;
 
 static uint32_t
 __rte_rand_lcg32(uint32_t *seed)
@@ -85,8 +84,14 @@ rte_srand(uint64_t seed)
 	unsigned int lcore_id;
 
 	/* add lcore_id to seed to avoid having the same sequence */
-	for (lcore_id = 0; lcore_id < RTE_DIM(rand_states); lcore_id++)
-		__rte_srand_lfsr258(seed + lcore_id, &rand_states[lcore_id]);
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		struct rte_rand_state *lcore_state =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, rand_state);
+
+		__rte_srand_lfsr258(seed + lcore_id, lcore_state);
+	}
+
+	__rte_srand_lfsr258(seed + lcore_id, &unregistered_rand_state);
 }
 
 static __rte_always_inline uint64_t
@@ -124,11 +129,10 @@ struct rte_rand_state *__rte_rand_get_state(void)
 
 	idx = rte_lcore_id();
 
-	/* last instance reserved for unregistered non-EAL threads */
 	if (unlikely(idx == LCORE_ID_ANY))
-		idx = RTE_MAX_LCORE;
+		return &unregistered_rand_state;
 
-	return &rand_states[idx];
+	return RTE_LCORE_VAR_VALUE(rand_state);
 }
 
 uint64_t
@@ -228,6 +232,8 @@ RTE_INIT(rte_rand_init)
 {
 	uint64_t seed;
 
+	RTE_LCORE_VAR_ALLOC(rand_state);
+
 	seed = __rte_random_initial_seed();
 
 	rte_srand(seed);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v6 4/6] power: keep per-lcore state in lcore variable
  2024-05-06  8:27                   ` [RFC v6 0/6] Lcore variables Mattias Rönnblom
                                       ` (2 preceding siblings ...)
  2024-05-06  8:27                     ` [RFC v6 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
@ 2024-05-06  8:27                     ` Mattias Rönnblom
  2024-05-06  8:27                     ` [RFC v6 5/6] service: " Mattias Rönnblom
                                       ` (2 subsequent siblings)
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-05-06  8:27 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, Mattias Rönnblom

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

RFC v3:
 * Replace for loop with FOREACH macro.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/power/rte_power_pmd_mgmt.c | 34 ++++++++++++++++------------------
 1 file changed, 16 insertions(+), 18 deletions(-)

diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index b1c18a5f56..a5139dd4f7 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -5,6 +5,7 @@
 #include <stdlib.h>
 
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_cycles.h>
 #include <rte_cpuflags.h>
 #include <rte_malloc.h>
@@ -69,7 +70,7 @@ struct __rte_cache_aligned pmd_core_cfg {
 	uint64_t sleep_target;
 	/**< Prevent a queue from triggering sleep multiple times */
 };
-static struct pmd_core_cfg lcore_cfgs[RTE_MAX_LCORE];
+static RTE_LCORE_VAR_HANDLE(struct pmd_core_cfg, lcore_cfgs);
 
 static inline bool
 queue_equal(const union queue *l, const union queue *r)
@@ -252,12 +253,11 @@ clb_multiwait(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	/* early exit */
 	if (likely(!empty))
@@ -317,13 +317,12 @@ clb_pause(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 	uint32_t pause_duration = rte_power_pmd_mgmt_get_pause_duration();
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	if (likely(!empty))
 		/* early exit */
@@ -358,9 +357,8 @@ clb_scale_freq(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	const bool empty = nb_rx == 0;
-	struct pmd_core_cfg *lcore_conf = &lcore_cfgs[lcore];
+	struct pmd_core_cfg *lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 	struct queue_list_entry *queue_conf = arg;
 
 	if (likely(!empty)) {
@@ -518,7 +516,7 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 		goto end;
 	}
 
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -619,7 +617,7 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 	}
 
 	/* no need to check queue id as wrong queue id would not be enabled */
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -769,21 +767,21 @@ rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore)
 }
 
 RTE_INIT(rte_power_ethdev_pmgmt_init) {
-	size_t i;
-	int j;
+	struct pmd_core_cfg *lcore_cfg;
+	int i;
+
+	RTE_LCORE_VAR_ALLOC(lcore_cfgs);
 
 	/* initialize all tailqs */
-	for (i = 0; i < RTE_DIM(lcore_cfgs); i++) {
-		struct pmd_core_cfg *cfg = &lcore_cfgs[i];
-		TAILQ_INIT(&cfg->head);
-	}
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_cfg, lcore_cfgs)
+		TAILQ_INIT(&lcore_cfg->head);
 
 	/* initialize config defaults */
 	emptypoll_max = 512;
 	pause_duration = 1;
 	/* scaling defaults out of range to ensure not used unless set by user or app */
-	for (j = 0; j < RTE_MAX_LCORE; j++) {
-		scale_freq_min[j] = 0;
-		scale_freq_max[j] = UINT32_MAX;
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		scale_freq_min[i] = 0;
+		scale_freq_max[i] = UINT32_MAX;
 	}
 }
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v6 5/6] service: keep per-lcore state in lcore variable
  2024-05-06  8:27                   ` [RFC v6 0/6] Lcore variables Mattias Rönnblom
                                       ` (3 preceding siblings ...)
  2024-05-06  8:27                     ` [RFC v6 4/6] power: keep per-lcore " Mattias Rönnblom
@ 2024-05-06  8:27                     ` Mattias Rönnblom
  2024-05-06  8:27                     ` [RFC v6 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  2024-09-02 14:42                     ` [RFC v6 0/6] Lcore variables Morten Brørup
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-05-06  8:27 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, Mattias Rönnblom

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

RFC v6:
 * Remove a now-redundant lcore variable value memset().

RFC v5:
 * Fix lcore value pointer bug introduced by RFC v4.

RFC v4:
 * Remove strange-looking lcore value lookup potentially containing
   invalid lcore id. (Morten Brørup)
 * Replace misplaced tab with space. (Morten Brørup)

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/common/rte_service.c | 115 +++++++++++++++++++----------------
 1 file changed, 63 insertions(+), 52 deletions(-)

diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index 56379930b6..03379f1588 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -11,6 +11,7 @@
 
 #include <eal_trace_internal.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
@@ -75,7 +76,7 @@ struct __rte_cache_aligned core_state {
 
 static uint32_t rte_service_count;
 static struct rte_service_spec_impl *rte_services;
-static struct core_state *lcore_states;
+static RTE_LCORE_VAR_HANDLE(struct core_state, lcore_states);
 static uint32_t rte_service_library_initialized;
 
 int32_t
@@ -101,12 +102,8 @@ rte_service_init(void)
 		goto fail_mem;
 	}
 
-	lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
-			sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
-	if (!lcore_states) {
-		EAL_LOG(ERR, "error allocating core states array");
-		goto fail_mem;
-	}
+	if (lcore_states == NULL)
+		RTE_LCORE_VAR_ALLOC(lcore_states);
 
 	int i;
 	struct rte_config *cfg = rte_eal_get_configuration();
@@ -122,7 +119,6 @@ rte_service_init(void)
 	return 0;
 fail_mem:
 	rte_free(rte_services);
-	rte_free(lcore_states);
 	return -ENOMEM;
 }
 
@@ -136,7 +132,6 @@ rte_service_finalize(void)
 	rte_eal_mp_wait_lcore();
 
 	rte_free(rte_services);
-	rte_free(lcore_states);
 
 	rte_service_library_initialized = 0;
 }
@@ -286,7 +281,6 @@ rte_service_component_register(const struct rte_service_spec *spec,
 int32_t
 rte_service_component_unregister(uint32_t id)
 {
-	uint32_t i;
 	struct rte_service_spec_impl *s;
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
 
@@ -294,9 +288,10 @@ rte_service_component_unregister(uint32_t id)
 
 	s->internal_flags &= ~(SERVICE_F_REGISTERED);
 
+	struct core_state *cs;
 	/* clear the run-bit in all cores */
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		lcore_states[i].service_mask &= ~(UINT64_C(1) << id);
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		cs->service_mask &= ~(UINT64_C(1) << id);
 
 	memset(&rte_services[id], 0, sizeof(struct rte_service_spec_impl));
 
@@ -454,7 +449,10 @@ rte_service_may_be_active(uint32_t id)
 		return -EINVAL;
 
 	for (i = 0; i < lcore_count; i++) {
-		if (lcore_states[ids[i]].service_active_on_lcore[id])
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(ids[i], lcore_states);
+
+		if (cs->service_active_on_lcore[id])
 			return 1;
 	}
 
@@ -464,7 +462,7 @@ rte_service_may_be_active(uint32_t id)
 int32_t
 rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)
 {
-	struct core_state *cs = &lcore_states[rte_lcore_id()];
+	struct core_state *cs = RTE_LCORE_VAR_VALUE(lcore_states);
 	struct rte_service_spec_impl *s;
 
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
@@ -486,8 +484,7 @@ service_runner_func(void *arg)
 {
 	RTE_SET_USED(arg);
 	uint8_t i;
-	const int lcore = rte_lcore_id();
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_VALUE(lcore_states);
 
 	rte_atomic_store_explicit(&cs->thread_active, 1, rte_memory_order_seq_cst);
 
@@ -533,13 +530,15 @@ service_runner_func(void *arg)
 int32_t
 rte_service_lcore_may_be_active(uint32_t lcore)
 {
-	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+	if (lcore >= RTE_MAX_LCORE || !cs->is_service_core)
 		return -EINVAL;
 
 	/* Load thread_active using ACQUIRE to avoid instructions dependent on
 	 * the result being re-ordered before this load completes.
 	 */
-	return rte_atomic_load_explicit(&lcore_states[lcore].thread_active,
+	return rte_atomic_load_explicit(&cs->thread_active,
 			       rte_memory_order_acquire);
 }
 
@@ -547,9 +546,11 @@ int32_t
 rte_service_lcore_count(void)
 {
 	int32_t count = 0;
-	uint32_t i;
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		count += lcore_states[i].is_service_core;
+
+	struct core_state *cs;
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		count += cs->is_service_core;
+
 	return count;
 }
 
@@ -566,7 +567,8 @@ rte_service_lcore_list(uint32_t array[], uint32_t n)
 	uint32_t i;
 	uint32_t idx = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		struct core_state *cs = &lcore_states[i];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
 		if (cs->is_service_core) {
 			array[idx] = i;
 			idx++;
@@ -582,7 +584,7 @@ rte_service_lcore_count_services(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -634,30 +636,31 @@ rte_service_start_with_defaults(void)
 static int32_t
 service_update(uint32_t sid, uint32_t lcore, uint32_t *set, uint32_t *enabled)
 {
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	/* validate ID, or return error value */
 	if (!service_valid(sid) || lcore >= RTE_MAX_LCORE ||
-			!lcore_states[lcore].is_service_core)
+			!cs->is_service_core)
 		return -EINVAL;
 
 	uint64_t sid_mask = UINT64_C(1) << sid;
 	if (set) {
-		uint64_t lcore_mapped = lcore_states[lcore].service_mask &
-			sid_mask;
+		uint64_t lcore_mapped = cs->service_mask & sid_mask;
 
 		if (*set && !lcore_mapped) {
-			lcore_states[lcore].service_mask |= sid_mask;
+			cs->service_mask |= sid_mask;
 			rte_atomic_fetch_add_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 		if (!*set && lcore_mapped) {
-			lcore_states[lcore].service_mask &= ~(sid_mask);
+			cs->service_mask &= ~(sid_mask);
 			rte_atomic_fetch_sub_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 	}
 
 	if (enabled)
-		*enabled = !!(lcore_states[lcore].service_mask & (sid_mask));
+		*enabled = !!(cs->service_mask & (sid_mask));
 
 	return 0;
 }
@@ -685,13 +688,14 @@ set_lcore_state(uint32_t lcore, int32_t state)
 {
 	/* mark core state in hugepage backed config */
 	struct rte_config *cfg = rte_eal_get_configuration();
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	cfg->lcore_role[lcore] = state;
 
 	/* mark state in process local lcore_config */
 	lcore_config[lcore].core_role = state;
 
 	/* update per-lcore optimized state tracking */
-	lcore_states[lcore].is_service_core = (state == ROLE_SERVICE);
+	cs->is_service_core = (state == ROLE_SERVICE);
 
 	rte_eal_trace_service_lcore_state_change(lcore, state);
 }
@@ -702,14 +706,16 @@ rte_service_lcore_reset_all(void)
 	/* loop over cores, reset all to mask 0 */
 	uint32_t i;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		if (lcore_states[i].is_service_core) {
-			lcore_states[i].service_mask = 0;
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
+		if (cs->is_service_core) {
+			cs->service_mask = 0;
 			set_lcore_state(i, ROLE_RTE);
 			/* runstate act as guard variable Use
 			 * store-release memory order here to synchronize
 			 * with load-acquire in runstate read functions.
 			 */
-			rte_atomic_store_explicit(&lcore_states[i].runstate,
+			rte_atomic_store_explicit(&cs->runstate,
 				RUNSTATE_STOPPED, rte_memory_order_release);
 		}
 	}
@@ -725,17 +731,19 @@ rte_service_lcore_add(uint32_t lcore)
 {
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
-	if (lcore_states[lcore].is_service_core)
+
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+	if (cs->is_service_core)
 		return -EALREADY;
 
 	set_lcore_state(lcore, ROLE_SERVICE);
 
 	/* ensure that after adding a core the mask and state are defaults */
-	lcore_states[lcore].service_mask = 0;
+	cs->service_mask = 0;
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	return rte_eal_wait_lcore(lcore);
@@ -747,7 +755,7 @@ rte_service_lcore_del(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -771,7 +779,7 @@ rte_service_lcore_start(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -801,6 +809,8 @@ rte_service_lcore_start(uint32_t lcore)
 int32_t
 rte_service_lcore_stop(uint32_t lcore)
 {
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
@@ -808,12 +818,11 @@ rte_service_lcore_stop(uint32_t lcore)
 	 * memory order here to synchronize with store-release
 	 * in runstate update functions.
 	 */
-	if (rte_atomic_load_explicit(&lcore_states[lcore].runstate, rte_memory_order_acquire) ==
+	if (rte_atomic_load_explicit(&cs->runstate, rte_memory_order_acquire) ==
 			RUNSTATE_STOPPED)
 		return -EALREADY;
 
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
 	uint64_t service_mask = cs->service_mask;
 
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
@@ -834,7 +843,7 @@ rte_service_lcore_stop(uint32_t lcore)
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	rte_eal_trace_service_lcore_stop(lcore);
@@ -845,7 +854,7 @@ rte_service_lcore_stop(uint32_t lcore)
 static uint64_t
 lcore_attr_get_loops(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->loops, rte_memory_order_relaxed);
 }
@@ -853,7 +862,7 @@ lcore_attr_get_loops(unsigned int lcore)
 static uint64_t
 lcore_attr_get_cycles(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->cycles, rte_memory_order_relaxed);
 }
@@ -861,7 +870,7 @@ lcore_attr_get_cycles(unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].calls,
 		rte_memory_order_relaxed);
@@ -870,7 +879,7 @@ lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_cycles(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].cycles,
 		rte_memory_order_relaxed);
@@ -886,7 +895,10 @@ attr_get(uint32_t id, lcore_attr_get_fun lcore_attr_get)
 	uint64_t sum = 0;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		if (lcore_states[lcore].is_service_core)
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+		if (cs->is_service_core)
 			sum += lcore_attr_get(id, lcore);
 	}
 
@@ -930,12 +942,11 @@ int32_t
 rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 			   uint64_t *attr_value)
 {
-	struct core_state *cs;
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE || !attr_value)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -960,7 +971,8 @@ rte_service_attr_reset_all(uint32_t id)
 		return -EINVAL;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		struct core_state *cs = &lcore_states[lcore];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 		cs->service_stats[id] = (struct service_stats) {};
 	}
@@ -971,12 +983,11 @@ rte_service_attr_reset_all(uint32_t id)
 int32_t
 rte_service_lcore_attr_reset_all(uint32_t lcore)
 {
-	struct core_state *cs;
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -1011,7 +1022,7 @@ static void
 service_dump_calls_per_lcore(FILE *f, uint32_t lcore)
 {
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	fprintf(f, "%02d\t", lcore);
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [RFC v6 6/6] eal: keep per-lcore power intrinsics state in lcore variable
  2024-05-06  8:27                   ` [RFC v6 0/6] Lcore variables Mattias Rönnblom
                                       ` (4 preceding siblings ...)
  2024-05-06  8:27                     ` [RFC v6 5/6] service: " Mattias Rönnblom
@ 2024-05-06  8:27                     ` Mattias Rönnblom
  2024-09-02 14:42                     ` [RFC v6 0/6] Lcore variables Morten Brørup
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-05-06  8:27 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, Mattias Rönnblom

Keep per-lcore power intrinsics state in an lcore variable to reduce
cache working set size and avoid false sharing caused by CPU
next-line prefetching.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/x86/rte_power_intrinsics.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index 6d9b64240c..f4ba2c8ecb 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -6,6 +6,7 @@
 
 #include <rte_common.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_rtm.h>
 #include <rte_spinlock.h>
 
@@ -14,10 +15,14 @@
 /*
  * Per-lcore structure holding current status of C0.2 sleeps.
  */
-static alignas(RTE_CACHE_LINE_SIZE) struct power_wait_status {
+struct power_wait_status {
 	rte_spinlock_t lock;
 	volatile void *monitor_addr; /**< NULL if not currently sleeping */
-} wait_status[RTE_MAX_LCORE];
+};
+
+RTE_LCORE_VAR_HANDLE(struct power_wait_status, wait_status);
+
+RTE_LCORE_VAR_INIT(wait_status);
 
 /*
  * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
@@ -172,7 +177,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 	if (pmc->fn == NULL)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/* update sleep address */
 	rte_spinlock_lock(&s->lock);
@@ -264,7 +269,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 	if (lcore_id >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/*
 	 * There is a race condition between sleep, wakeup and locking, but we
@@ -303,8 +308,8 @@ int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
 {
-	const unsigned int lcore_id = rte_lcore_id();
-	struct power_wait_status *s = &wait_status[lcore_id];
+	struct power_wait_status *s = RTE_LCORE_VAR_VALUE(wait_status);
+
 	uint32_t i, rc;
 
 	/* check if supported */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [RFC v6 0/6] Lcore variables
  2024-05-06  8:27                   ` [RFC v6 0/6] Lcore variables Mattias Rönnblom
                                       ` (5 preceding siblings ...)
  2024-05-06  8:27                     ` [RFC v6 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
@ 2024-09-02 14:42                     ` Morten Brørup
  2024-09-10  6:41                       ` Mattias Rönnblom
  6 siblings, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-09-02 14:42 UTC (permalink / raw)
  To: Mattias Rönnblom, dev; +Cc: hofors, Stephen Hemminger, Konstantin Ananyev

> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> Sent: Monday, 6 May 2024 10.27
> 
> This RFC presents a new API <rte_lcore_var.h> for static per-lcore id
> data allocation.
> 
> Please refer to the <rte_lcore_var.h> API documentation for both a
> rationale for this new API, and a comparison to the alternatives
> available.
> 
> The adoption of this API would affect many different DPDK modules, but
> the author updated only a few, mostly to serve as examples in this
> RFC, and to iron out some, but surely not all, wrinkles in the API.
> 
> The question on how to best allocate static per-lcore memory has been
> up several times on the dev mailing list, for example in the thread on
> "random: use per lcore state" RFC by Stephen Hemminger.
> 
> Lcore variables are surely not the answer to all your per-lcore-data
> needs, since they only allow for more-or-less static allocation. In
> the author's opinion, they do however provide a reasonably simple,
> clean, and seemingly very performant solution to a real problem.

This RFC is an improvement of the design pattern of allocating a RTE_MAX_LCORE sized array of structs per library, which typically introduces a lot of padding, and thus wastes L1 data cache.
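
To put numbers on the padding (a sketch with a hypothetical 8-byte
per-lcore payload, assuming 64-byte cache lines, the default
RTE_CACHE_GUARD_LINES of 1, and RTE_MAX_LCORE of 128):

    /* hypothetical module state, following the old pattern */
    struct __rte_cache_aligned mod_state {
            uint64_t counter; /* 8 bytes of payload... */
            RTE_CACHE_GUARD;  /* ...grown to 128 bytes total by the
                               * alignment and the guard line */
    };
    static struct mod_state states[RTE_MAX_LCORE]; /* 16 KiB */

That is 16 KiB resident for 1 KiB of payload; kept in an lcore
variable, the same payload occupies just the 1 KiB.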

I would like to see it as a patch getting into DPDK 24.11.

> 
> One thing that is unclear to the author is how this API relates to a
> potential future per-lcore dynamic allocator (e.g., a per-lcore heap).

Perfection is the enemy of progress.
Let's consider this a 1:1 upgrade of an existing design pattern, and not worry about how to broaden its scope in the future.

-Morten


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC v6 0/6] Lcore variables
  2024-09-02 14:42                     ` [RFC v6 0/6] Lcore variables Morten Brørup
@ 2024-09-10  6:41                       ` Mattias Rönnblom
  2024-09-10 15:41                         ` Stephen Hemminger
  0 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-10  6:41 UTC (permalink / raw)
  To: Morten Brørup, Mattias Rönnblom, dev
  Cc: Stephen Hemminger, Konstantin Ananyev

On 2024-09-02 16:42, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
>> Sent: Monday, 6 May 2024 10.27
>>
>> This RFC presents a new API <rte_lcore_var.h> for static per-lcore id
>> data allocation.
>>
>> Please refer to the <rte_lcore_var.h> API documentation for both a
>> rationale for this new API, and a comparison to the alternatives
>> available.
>>
>> The adoption of this API would affect many different DPDK modules, but
>> the author updated only a few, mostly to serve as examples in this
>> RFC, and to iron out some, but surely not all, wrinkles in the API.
>>
>> The question on how to best allocate static per-lcore memory has been
>> up several times on the dev mailing list, for example in the thread on
>> "random: use per lcore state" RFC by Stephen Hemminger.
>>
>> Lcore variables are surely not the answer to all your per-lcore-data
>> needs, since they only allow for more-or-less static allocation. In
>> the author's opinion, they do however provide a reasonably simple,
>> clean, and seemingly very performant solution to a real problem.
> 
> This RFC is an improvement of the design pattern of allocating a RTE_MAX_LCORE sized array of structs per library, which typically introduces a lot of padding, and thus wastes L1 data cache.
> 
> I would like to see it as a patch getting into DPDK 24.11.
> 

I would be happy to develop and maintain this DPDK module.

I will submit this as a v1 PATCH.

>>
>> One thing that is unclear to the author is how this API relates to a
>> potential future per-lcore dynamic allocator (e.g., a per-lcore heap).
> 
> Perfection is the enemy of progress.
> Let's consider this a 1:1 upgrade of an existing design pattern, and not worry about how to broaden its scope in the future.
> 
> -Morten
> 

^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH 0/6] Lcore variables
  2024-05-06  8:27                     ` [RFC v6 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-09-10  7:03                       ` Mattias Rönnblom
  2024-09-10  7:03                         ` [PATCH 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
                                           ` (5 more replies)
  0 siblings, 6 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-10  7:03 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Mattias Rönnblom

This patch set introduces a new API <rte_lcore_var.h> for static
per-lcore id data allocation.

Please refer to the <rte_lcore_var.h> API documentation for both a
rationale for this new API, and a comparison to the alternatives
available.

The adoption of this API would affect many different DPDK modules, but
the author updated only a few, mostly to serve as examples in this
patch set, and to iron out some, but surely not all, wrinkles in the API.

The question on how to best allocate static per-lcore memory has been
up several times on the dev mailing list, for example in the thread on
"random: use per lcore state" RFC by Stephen Hemminger.

Lcore variables are surely not the answer to all your per-lcore-data
needs, since they only allow for more-or-less static allocation. In
the author's opinion, they do however provide a reasonably simple,
clean, and seemingly very performant solution to a real problem.

Mattias Rönnblom (6):
  eal: add static per-lcore memory allocation facility
  eal: add lcore variable test suite
  random: keep PRNG state in lcore variable
  power: keep per-lcore state in lcore variable
  service: keep per-lcore state in lcore variable
  eal: keep per-lcore power intrinsics state in lcore variable

 MAINTAINERS                            |   6 +
 app/test/meson.build                   |   1 +
 app/test/test_lcore_var.c              | 432 +++++++++++++++++++++++++
 config/rte_config.h                    |   1 +
 doc/api/doxy-api-index.md              |   1 +
 doc/guides/rel_notes/release_24_11.rst |  14 +
 lib/eal/common/eal_common_lcore_var.c  |  69 ++++
 lib/eal/common/meson.build             |   1 +
 lib/eal/common/rte_random.c            |  28 +-
 lib/eal/common/rte_service.c           | 115 ++++---
 lib/eal/include/meson.build            |   1 +
 lib/eal/include/rte_lcore_var.h        | 384 ++++++++++++++++++++++
 lib/eal/version.map                    |   3 +
 lib/eal/x86/rte_power_intrinsics.c     |  17 +-
 lib/power/rte_power_pmd_mgmt.c         |  34 +-
 15 files changed, 1020 insertions(+), 87 deletions(-)
 create mode 100644 app/test/test_lcore_var.c
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH 1/6] eal: add static per-lcore memory allocation facility
  2024-09-10  7:03                       ` [PATCH 0/6] Lcore variables Mattias Rönnblom
@ 2024-09-10  7:03                         ` Mattias Rönnblom
  2024-09-10  9:32                           ` Morten Brørup
                                             ` (2 more replies)
  2024-09-10  7:03                         ` [PATCH 2/6] eal: add lcore variable test suite Mattias Rönnblom
                                           ` (4 subsequent siblings)
  5 siblings, 3 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-10  7:03 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Mattias Rönnblom

Introduce DPDK per-lcore id variables, or lcore variables for short.

An lcore variable has one value for every current and future lcore
id-equipped thread.

The primary <rte_lcore_var.h> use case is for statically allocating
small, frequently-accessed data structures, for which one instance
should exist for each lcore.

Lcore variables are similar to thread-local storage (TLS, e.g., C11
_Thread_local), but decouple the values' lifetime from that of the
threads.

Lcore variables are also similar in terms of functionality to the
FreeBSD kernel's DPCPU_*() family of macros and the associated
build-time machinery. DPCPU uses linker scripts, which effectively
prevents the reuse of its otherwise seemingly viable approach.

The currently-prevailing way to solve the same problem as lcore
variables is to keep a module's per-lcore data as an
RTE_MAX_LCORE-sized array of cache-aligned, RTE_CACHE_GUARDed
structs. The benefit of lcore variables over this approach is that
data related to the same lcore is kept spatially close in memory,
rather than data used by the same module, which in turn avoids
excessive use of padding and the polluting of caches with unused
data.
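
As a sketch of the difference (a hypothetical module "foo"; both
variants yield a pointer to the calling lcore's state):

    /* prevailing pattern: index into a padded, cache-aligned array */
    static struct foo_lcore_state foo_states[RTE_MAX_LCORE];
    struct foo_lcore_state *state = &foo_states[rte_lcore_id()];

    /* lcore variable: unpadded per-lcore values behind a handle */
    static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, foo_state);
    struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(foo_state);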

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

PATCH:
 * Update MAINTAINERS and release notes.
 * Stop covering included files in extern "C" {}.

RFC v6:
 * Include <stdlib.h> to get aligned_alloc().
 * Tweak documentation (grammar).
 * Provide API-level guarantees that lcore variable values take on an
   initial value of zero.
 * Fix misplaced __rte_cache_aligned in the API doc example.

RFC v5:
 * In Doxygen, consistently use @<cmd> (and not \<cmd>).
 * The RTE_LCORE_VAR_GET() and SET() convenience access macros
   covered an uncommon use case, where the lcore value is of a
   primitive type, rather than a struct, and are thus eliminated
   from the API. (Morten Brørup)
 * In the wake of the GET()/SET() removal, rename RTE_LCORE_VAR_PTR()
   to RTE_LCORE_VAR_VALUE().
 * The underscores are removed from __rte_lcore_var_lcore_ptr() to
   signal that this function is a part of the public API.
 * Macro arguments are documented.

RFC v4:
 * Replace large static array with libc heap-allocated memory. One
   implication of this change is that there no longer exists a fixed
   upper bound for the total amount of memory used by lcore variables.
   RTE_MAX_LCORE_VAR has changed meaning, and now represent the
   maximum size of any individual lcore variable value.
 * Fix issues in example. (Morten Brørup)
 * Improve access macro type checking. (Morten Brørup)
 * Refer to the lcore variable handle as "handle" and not "name" in
   various macros.
 * Document lack of thread safety in rte_lcore_var_alloc().
 * Provide API-level assurance the lcore variable handle is
   always non-NULL, to allow applications to use NULL to mean
   "not yet allocated".
 * Note zero-sized allocations are not allowed.
 * Give API-level guarantee the lcore variable values are zeroed.

RFC v3:
 * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
 * Update example to reflect FOREACH macro name change (in RFC v2).

RFC v2:
 * Use alignof to derive alignment requirements. (Morten Brørup)
 * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
   *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
 * Allow user-specified alignment, but limit max to cache line size.
---
 MAINTAINERS                            |   6 +
 config/rte_config.h                    |   1 +
 doc/api/doxy-api-index.md              |   1 +
 doc/guides/rel_notes/release_24_11.rst |  14 +
 lib/eal/common/eal_common_lcore_var.c  |  69 +++++
 lib/eal/common/meson.build             |   1 +
 lib/eal/include/meson.build            |   1 +
 lib/eal/include/rte_lcore_var.h        | 384 +++++++++++++++++++++++++
 lib/eal/version.map                    |   3 +
 9 files changed, 480 insertions(+)
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

diff --git a/MAINTAINERS b/MAINTAINERS
index c5a703b5c0..362d9a3f28 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -282,6 +282,12 @@ F: lib/eal/include/rte_random.h
 F: lib/eal/common/rte_random.c
 F: app/test/test_rand_perf.c
 
+Lcore Variables
+M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
+F: lib/eal/include/rte_lcore_var.h
+F: lib/eal/common/eal_common_lcore_var.c
+F: app/test/test_lcore_var.c
+
 ARM v7
 M: Wathsala Vithanage <wathsala.vithanage@arm.com>
 F: config/arm/
diff --git a/config/rte_config.h b/config/rte_config.h
index dd7bb0d35b..311692e498 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -41,6 +41,7 @@
 /* EAL defines */
 #define RTE_CACHE_GUARD_LINES 1
 #define RTE_MAX_HEAPS 32
+#define RTE_MAX_LCORE_VAR 1048576
 #define RTE_MAX_MEMSEG_LISTS 128
 #define RTE_MAX_MEMSEG_PER_LIST 8192
 #define RTE_MAX_MEM_MB_PER_LIST 32768
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index f9f0300126..07d7cbc66c 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -99,6 +99,7 @@ The public API headers are grouped by topics:
   [interrupts](@ref rte_interrupts.h),
   [launch](@ref rte_launch.h),
   [lcore](@ref rte_lcore.h),
+  [lcore-variable](@ref rte_lcore_var.h),
   [per-lcore](@ref rte_per_lcore.h),
   [service cores](@ref rte_service.h),
   [keepalive](@ref rte_keepalive.h),
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 0ff70d9057..adb8eb404d 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -55,6 +55,20 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added EAL per-lcore static memory allocation facility.**
+
+    Added EAL API <rte_lcore_var.h> for statically allocating small,
+    frequently-accessed data structures, for which one instance should
+    exist for each lcore.
+
+    With lcore variables, data is organized spatially on a per-lcore
+    basis, rather than per library or PMD, avoiding the need for cache
+    aligning (or RTE_CACHE_GUARDing) data structures, which in turn
+    reduces CPU cache internal fragmentation, improving performance.
+
+    Lcore variables are similar to thread-local storage (TLS, e.g.,
+    C11 _Thread_local), but decouple the values' lifetime from that
+    of the threads.
 
 Removed Items
 -------------
diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
new file mode 100644
index 0000000000..74ad8272ec
--- /dev/null
+++ b/lib/eal/common/eal_common_lcore_var.c
@@ -0,0 +1,69 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdlib.h>
+
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_log.h>
+
+#include <rte_lcore_var.h>
+
+#include "eal_private.h"
+
+#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
+
+static void *lcore_buffer;
+static size_t offset = RTE_MAX_LCORE_VAR;
+
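+/* A simple bump allocator over a series of heap-allocated lcore
+ * buffers, each LCORE_BUFFER_SIZE bytes large. 'offset' starts out
+ * of bounds, forcing a buffer allocation upon the first variable
+ * allocation. Buffers are never freed, since lcore variables cannot
+ * be freed.
+ */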
+static void *
+lcore_var_alloc(size_t size, size_t align)
+{
+	void *handle;
+	void *value;
+
+	offset = RTE_ALIGN_CEIL(offset, align);
+
+	if (offset + size > RTE_MAX_LCORE_VAR) {
+		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
+					     LCORE_BUFFER_SIZE);
+		RTE_VERIFY(lcore_buffer != NULL);
+
+		offset = 0;
+	}
+
+	handle = RTE_PTR_ADD(lcore_buffer, offset);
+
+	offset += size;
+
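+	/* zero the new variable's value for every lcore id */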
+	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
+		memset(value, 0, size);
+
+	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
+		"%"PRIuPTR"-byte alignment", size, align);
+
+	return handle;
+}
+
+void *
+rte_lcore_var_alloc(size_t size, size_t align)
+{
+	/* Having the per-lcore buffer size aligned on cache lines,
+	 * as well as having the base pointer aligned on the cache
+	 * line size, assures that aligned offsets also translate to
+	 * aligned pointers across all values.
+	 */
+	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
+	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
+	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
+
+	/* '0' means asking for worst-case alignment requirements */
+	if (align == 0)
+		align = alignof(max_align_t);
+
+	RTE_ASSERT(rte_is_power_of_2(align));
+
+	return lcore_var_alloc(size, align);
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 22a626ba6f..d41403680b 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -18,6 +18,7 @@ sources += files(
         'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
+        'eal_common_lcore_var.c',
         'eal_common_mcfg.c',
         'eal_common_memalloc.c',
         'eal_common_memory.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index e94b056d46..9449253e23 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -27,6 +27,7 @@ headers += files(
         'rte_keepalive.h',
         'rte_launch.h',
         'rte_lcore.h',
+        'rte_lcore_var.h',
         'rte_lock_annotations.h',
         'rte_malloc.h',
         'rte_mcslock.h',
diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
new file mode 100644
index 0000000000..7d3178c424
--- /dev/null
+++ b/lib/eal/include/rte_lcore_var.h
@@ -0,0 +1,384 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#ifndef _RTE_LCORE_VAR_H_
+#define _RTE_LCORE_VAR_H_
+
+/**
+ * @file
+ *
+ * RTE Per-lcore id variables
+ *
+ * This API provides a mechanism to create and access per-lcore id
+ * variables in a space- and cycle-efficient manner.
+ *
+ * A per-lcore id variable (or lcore variable for short) has one value
+ * for each EAL thread and registered non-EAL thread. There is one
+ * copy for each current and future lcore id-equipped thread, with the
+ * total number of copies amounting to @c RTE_MAX_LCORE. The value of
+ * an lcore variable for a particular lcore id is independent from
+ * other values (for other lcore ids) within the same lcore variable.
+ *
+ * In order to access the values of an lcore variable, a handle is
+ * used. The type of the handle is a pointer to the value's type
+ * (e.g., for a @c uint32_t lcore variable, the handle is a
+ * <code>uint32_t *</code>). The handle type is used to inform the
+ * access macros of the type of the values. A handle may be passed
+ * between modules and threads just like any pointer, but its value
+ * must be treated as an opaque identifier. An allocated handle
+ * never has the value NULL.
+ *
+ * @b Creation
+ *
+ * An lcore variable is created in two steps:
+ *  1. Define a lcore variable handle by using @ref RTE_LCORE_VAR_HANDLE.
+ *  2. Allocate lcore variable storage and initialize the handle with
+ *     a unique identifier by @ref RTE_LCORE_VAR_ALLOC or
+ *     @ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the time of
+ *     module initialization, but may be done at any time.
+ *
+ * An lcore variable is not tied to the owning thread's lifetime. It's
+ * available for use by any thread immediately after having been
+ * allocated, and continues to be available throughout the lifetime of
+ * the EAL.
+ *
+ * Lcore variables cannot and need not be freed.
+ *
+ * @b Access
+ *
+ * The value of any lcore variable for any lcore id may be accessed
+ * from any thread (including unregistered threads), but it should
+ * only be *frequently* read from or written to by the owner.
+ *
+ * Values of the same lcore variable but owned by two different lcore
+ * ids may be frequently read or written by the owners without risking
+ * false sharing.
+ *
+ * An appropriate synchronization mechanism (e.g., atomic loads and
+ * stores) should be employed to assure there are no data races between
+ * the owning thread and any non-owner threads accessing the same
+ * lcore variable instance.
+ *
+ * The value of the lcore variable for a particular lcore id is
+ * accessed using @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * A common pattern is for an EAL thread or a registered non-EAL
+ * thread to access its own lcore variable value. For this purpose, a
+ * short-hand exists in the form of @ref RTE_LCORE_VAR_VALUE.
+ *
+ * Although the handle (as defined by @ref RTE_LCORE_VAR_HANDLE) is a
+ * pointer with the same type as the value, it may not be directly
+ * dereferenced and must be treated as an opaque identifier.
+ *
+ * Lcore variable handles and value pointers may be freely passed
+ * between different threads.
+ *
+ * @b Storage
+ *
+ * An lcore variable's values may be of a primitive type like @c int,
+ * but would more typically be a @c struct.
+ *
+ * The lcore variable handle introduces a per-variable (not
+ * per-value/per-lcore id) overhead of @c sizeof(void *) bytes, so
+ * there are some memory footprint gains to be made by organizing all
+ * per-lcore id data for a particular module as one lcore variable
+ * (e.g., as a struct).
+ *
+ * An application may choose to define an lcore variable handle, which
+ * it then goes on to never allocate.
+ *
+ * The size of a lcore variable's value must be less than the DPDK
+ * build-time constant @c RTE_MAX_LCORE_VAR.
+ *
+ * The lcore variable values are stored in a series of lcore buffers,
+ * which are allocated from the libc heap. Heap allocation failures
+ * are treated as fatal.
+ *
+ * Lcore variables should generally *not* be @ref __rte_cache_aligned
+ * and need *not* include a @ref RTE_CACHE_GUARD field, since these
+ * constructs are designed to avoid false sharing. In the case of an
+ * lcore variable instance, the thread most recently accessing nearby
+ * data structures should almost always be the lcore variable's
+ * owner. Adding padding will increase the effective memory working
+ * set size, potentially reducing performance.
+ *
+ * Lcore variable values take on an initial value of zero.
+ *
+ * @b Example
+ *
+ * Below is an example of the use of an lcore variable:
+ *
+ * @code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ * };
+ *
+ * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
+ *
+ * long foo_get_a_plus_b(void)
+ * {
+ *         struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(lcore_states);
+ *
+ *         return state->a + state->b;
+ * }
+ *
+ * RTE_INIT(rte_foo_init)
+ * {
+ *         RTE_LCORE_VAR_ALLOC(lcore_states);
+ *
+ *         struct foo_lcore_state *state;
+ *         RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
+ *                 (initialize 'state')
+ *         }
+ *
+ *         (other initialization)
+ * }
+ * @endcode
+ *
+ *
+ * @b Alternatives
+ *
+ * Lcore variables are designed to replace a pattern exemplified below:
+ * @code{.c}
+ * struct __rte_cache_aligned foo_lcore_state {
+ *         int a;
+ *         long b;
+ *         RTE_CACHE_GUARD;
+ * };
+ *
+ * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
+ * @endcode
+ *
+ * This scheme is simple and effective, but has one drawback: the data
+ * is organized so that objects related to all lcores for a particular
+ * module are kept close in memory. At a bare minimum, this forces the
+ * use of cache-line alignment to avoid false sharing. With CPU
+ * hardware prefetching and memory loads resulting from speculative
+ * execution (mechanisms which seemingly are getting more eager faster
+ * than they are getting more intelligent), one or more "guard" cache
+ * lines may be required to separate one lcore's data from another's.
+ *
+ * Lcore variables have the upside of working with, not against, the
+ * CPU's assumptions; for example, next-line prefetchers may well
+ * work the way their designers intended (i.e., to the benefit, not
+ * detriment, of system performance).
+ *
+ * Another alternative to @ref rte_lcore_var.h is the @ref
+ * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
+ * e.g., GCC __thread or C11 _Thread_local). The main differences
+ * between the various forms of TLS (e.g., @ref
+ * RTE_DEFINE_PER_LCORE or _Thread_local) and the use of lcore
+ * variables are:
+ *
+ *   * The existence and non-existence of a thread-local variable
+ *     instance follow that of the particular thread. The data cannot
+ *     be accessed before the thread has been created, nor after it
+ *     has exited. As a result, thread-local variables must be
+ *     initialized in a "lazy" manner (e.g., at the point of thread
+ *     creation). Lcore variables may be accessed immediately after
+ *     having been allocated (which may be prior to any thread other
+ *     than the main thread being up and running).
+ *   * A thread-local variable is duplicated across all threads in the
+ *     process, including unregistered non-EAL threads (i.e.,
+ *     "regular" threads). For DPDK applications heavily relying on
+ *     multi-threading (in conjunction to DPDK's "one thread per core"
+ *     pattern), either by having many concurrent threads or
+ *     creating/destroying threads at a high rate, an excessive use of
+ *     thread-local variables may cause inefficiencies (e.g.,
+ *     increased thread creation overhead due to thread-local storage
+ *     initialization or increased total RAM footprint usage). Lcore
+ *     variables *only* exist for threads with an lcore id.
+ *   * Whether data in thread-local storage may be shared between
+ *     threads (i.e., whether a pointer to a thread-local variable may
+ *     be passed to, and successfully dereferenced by, a non-owning
+ *     thread) depends on the details of the TLS implementation. With
+ *     GCC __thread and GCC _Thread_local, such data sharing is
+ *     supported. In the C11 standard, the result of accessing another
+ *     thread's _Thread_local object is implementation-defined. Lcore
+ *     variable instances may be accessed reliably by any thread.
+ */
+
+#include <stddef.h>
+#include <stdalign.h>
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_lcore.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Given the lcore variable type, produces the type of the lcore
+ * variable handle.
+ */
+#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
+	type *
+
+/**
+ * Define an lcore variable handle.
+ *
+ * This macro defines a variable which is used as a handle to access
+ * the various instances of a per-lcore id variable.
+ *
+ * The aim with this macro is to make clear at the point of
+ * declaration that this is an lcore variable handle, rather than a
+ * regular pointer.
+ *
+ * Add @b static as a prefix in case the lcore variable is only to be
+ * accessed from a particular translation unit.
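+ *
+ * Example (illustrative; the variable name is hypothetical):
+ * @code{.c}
+ * static RTE_LCORE_VAR_HANDLE(uint64_t, lcore_counter);
+ * @endcode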
+ */
+#define RTE_LCORE_VAR_HANDLE(type, name)	\
+	RTE_LCORE_VAR_HANDLE_TYPE(type) name
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, align)	\
+	handle = rte_lcore_var_alloc(size, align)
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle,
+ * with values aligned for any type of object.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE(handle, size)	\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, 0)
+
+/**
+ * Allocate space for an lcore variable of the size and alignment requirements
+ * suggested by the handle pointer type, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
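+ *
+ * A typical (illustrative) use, with a hypothetical @c
+ * foo_lcore_state struct, from a module's initialization code:
+ * @code{.c}
+ * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
+ *
+ * RTE_INIT(foo_init)
+ * {
+ *         RTE_LCORE_VAR_ALLOC(lcore_states);
+ * }
+ * @endcode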
+ */
+#define RTE_LCORE_VAR_ALLOC(handle)					\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, sizeof(*(handle)),	\
+				       alignof(typeof(*(handle))))
+
+/**
+ * Allocate an explicitly-sized, explicitly-aligned lcore variable by
+ * means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
+	}
+
+/**
+ * Allocate an explicitly-sized lcore variable by means of a @ref
+ * RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
+	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
+
+/**
+ * Allocate an lcore variable by means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
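+ *
+ * Example (illustrative), equivalent to calling @ref
+ * RTE_LCORE_VAR_ALLOC from an @ref RTE_INIT constructor:
+ * @code{.c}
+ * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
+ * RTE_LCORE_VAR_INIT(lcore_states);
+ * @endcode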
+ */
+#define RTE_LCORE_VAR_INIT(name)					\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC(name);				\
+	}
+
+/**
+ * Get void pointer to lcore variable instance with the specified
+ * lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
+ */
+static inline void *
+rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
+{
+	return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
+}
+
+/**
+ * Get pointer to lcore variable instance with the specified lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
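+ *
+ * Example (illustrative; assumes an allocated handle @c lcore_states
+ * and an in-scope @c lcore_id):
+ * @code{.c}
+ * struct foo_lcore_state *s =
+ *         RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_states);
+ * @endcode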
+ */
+#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle)			\
+	((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
+
+/**
+ * Get pointer to lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
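+ *
+ * Example (illustrative; assumes an allocated handle @c lcore_states):
+ * @code{.c}
+ * struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(lcore_states);
+ *
+ * state->a++;
+ * @endcode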
+ */
+#define RTE_LCORE_VAR_VALUE(handle) \
+	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
+
+/**
+ * Iterate over each lcore id's value for an lcore variable.
+ *
+ * @param value
+ *   A pointer successively set to point to the lcore variable value
+ *   corresponding to every lcore id (up to @c RTE_MAX_LCORE).
+ * @param handle
+ *   The lcore variable handle.
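+ *
+ * Example (illustrative) of aggregating over all lcore ids, assuming
+ * a @c uint64_t lcore variable with handle @c lcore_counter
+ * (synchronization left out for brevity):
+ * @code{.c}
+ * uint64_t total = 0;
+ * uint64_t *counter;
+ *
+ * RTE_LCORE_VAR_FOREACH_VALUE(counter, lcore_counter)
+ *         total += *counter;
+ * @endcode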
+ */
+#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle)			\
+	for (unsigned int lcore_id =					\
+		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
+	     lcore_id < RTE_MAX_LCORE;					\
+	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))
+
+/**
+ * Allocate space in the per-lcore id buffers for an lcore variable.
+ *
+ * The pointer returned is only an opaque identifier of the variable. To
+ * get an actual pointer to a particular instance of the variable use
+ * @ref RTE_LCORE_VAR_VALUE or @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * The lcore variable values' memory is set to zero.
+ *
+ * The allocation is always successful, barring a fatal exhaustion of
+ * the per-lcore id buffer space.
+ *
+ * rte_lcore_var_alloc() is not multi-thread safe.
+ *
+ * @param size
+ *   The size (in bytes) of the variable's per-lcore id value. Must be > 0.
+ * @param align
+ *   If 0, the values will be suitably aligned for any kind of type
+ *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
+ *   on a multiple of *align*, which must be a power of 2 and equal or
+ *   less than @c RTE_CACHE_LINE_SIZE.
+ * @return
+ *   The id of the variable, stored in a void pointer value. The value
+ *   is always non-NULL.
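+ *
+ * For illustration, @ref RTE_LCORE_VAR_ALLOC effectively expands to:
+ * @code{.c}
+ * handle = rte_lcore_var_alloc(sizeof(*handle), alignof(typeof(*handle)));
+ * @endcode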
+ */
+__rte_experimental
+void *
+rte_lcore_var_alloc(size_t size, size_t align);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LCORE_VAR_H_ */
diff --git a/lib/eal/version.map b/lib/eal/version.map
index e3ff412683..5f5a3522c0 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -396,6 +396,9 @@ EXPERIMENTAL {
 
 	# added in 24.03
 	rte_vfio_get_device_info; # WINDOWS_NO_EXPORT
+
+	rte_lcore_var_alloc;
+	rte_lcore_var;
 };
 
 INTERNAL {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH 2/6] eal: add lcore variable test suite
  2024-09-10  7:03                       ` [PATCH 0/6] Lcore variables Mattias Rönnblom
  2024-09-10  7:03                         ` [PATCH 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-09-10  7:03                         ` Mattias Rönnblom
  2024-09-10  7:03                         ` [PATCH 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
                                           ` (3 subsequent siblings)
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-10  7:03 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Mattias Rönnblom

Add test suite to exercise the <rte_lcore_var.h> API.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

RFC v5:
 * Adapt tests to reflect the removal of the GET() and SET() macros.

RFC v4:
 * Check all lcore id's values for all variables in the many variables
   test case.
 * Introduce test case for max-sized lcore variables.

RFC v2:
 * Improve alignment-related test coverage.
---
 app/test/meson.build      |   1 +
 app/test/test_lcore_var.c | 432 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 433 insertions(+)
 create mode 100644 app/test/test_lcore_var.c

diff --git a/app/test/meson.build b/app/test/meson.build
index e29258e6ec..48279522f0 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -103,6 +103,7 @@ source_file_deps = {
     'test_ipsec_sad.c': ['ipsec'],
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
+    'test_lcore_var.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var.c b/app/test/test_lcore_var.c
new file mode 100644
index 0000000000..e07d13460f
--- /dev/null
+++ b/app/test/test_lcore_var.c
@@ -0,0 +1,432 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_launch.h>
+#include <rte_lcore_var.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+#define MIN_LCORES 2
+
+RTE_LCORE_VAR_HANDLE(int, test_int);
+RTE_LCORE_VAR_HANDLE(char, test_char);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized);
+RTE_LCORE_VAR_HANDLE(short, test_short);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized_aligned);
+
+struct int_checker_state {
+	int old_value;
+	int new_value;
+	bool success;
+};
+
+static void
+rand_blk(void *blk, size_t size)
+{
+	size_t i;
+
+	for (i = 0; i < size; i++)
+		((unsigned char *)blk)[i] = (unsigned char)rte_rand();
+}
+
+static bool
+is_ptr_aligned(const void *ptr, size_t align)
+{
+	return ptr != NULL ? (uintptr_t)ptr % align == 0 : false;
+}
+
+static int
+check_int(void *arg)
+{
+	struct int_checker_state *state = arg;
+
+	int *ptr = RTE_LCORE_VAR_VALUE(test_int);
+
+	bool naturally_aligned = is_ptr_aligned(ptr, sizeof(int));
+
+	bool equal = *(RTE_LCORE_VAR_VALUE(test_int)) == state->old_value;
+
+	state->success = equal && naturally_aligned;
+
+	*ptr = state->new_value;
+
+	return 0;
+}
+
+RTE_LCORE_VAR_INIT(test_int);
+RTE_LCORE_VAR_INIT(test_char);
+RTE_LCORE_VAR_INIT_SIZE(test_long_sized, 32);
+RTE_LCORE_VAR_INIT(test_short);
+RTE_LCORE_VAR_INIT_SIZE_ALIGN(test_long_sized_aligned, sizeof(long),
+			      RTE_CACHE_LINE_SIZE);
+
+static int
+test_int_lvar(void)
+{
+	unsigned int lcore_id;
+
+	struct int_checker_state states[RTE_MAX_LCORE] = {};
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+
+		state->old_value = (int)rte_rand();
+		state->new_value = (int)rte_rand();
+
+		*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int) =
+			state->old_value;
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_int, &states[lcore_id], lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+		int value;
+
+		TEST_ASSERT(state->success, "Unexpected value "
+			    "encountered on lcore %d", lcore_id);
+
+		value = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int);
+		TEST_ASSERT_EQUAL(state->new_value, value,
+				  "Lcore %d failed to update int", lcore_id);
+	}
+
+	/* take the opportunity to test the foreach macro */
+	int *v;
+	lcore_id = 0;
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_int) {
+		TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
+				  "Unexpected value on lcore %d during "
+				  "iteration", lcore_id);
+		lcore_id++;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_sized_alignment(void)
+{
+	long *v;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized) {
+		TEST_ASSERT(is_ptr_aligned(v, alignof(long)),
+			    "Type-derived alignment failed");
+	}
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized_aligned) {
+		TEST_ASSERT(is_ptr_aligned(v, RTE_CACHE_LINE_SIZE),
+			    "Explicit alignment failed");
+	}
+
+	return TEST_SUCCESS;
+}
+
+/* private, larger, struct */
+#define TEST_STRUCT_DATA_SIZE 1234
+
+struct test_struct {
+	uint8_t data[TEST_STRUCT_DATA_SIZE];
+};
+
+static RTE_LCORE_VAR_HANDLE(char, before_struct);
+static RTE_LCORE_VAR_HANDLE(struct test_struct, test_struct);
+static RTE_LCORE_VAR_HANDLE(char, after_struct);
+
+struct struct_checker_state {
+	struct test_struct old_value;
+	struct test_struct new_value;
+	bool success;
+};
+
+static int check_struct(void *arg)
+{
+	struct struct_checker_state *state = arg;
+
+	struct test_struct *lcore_struct = RTE_LCORE_VAR_VALUE(test_struct);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_struct, alignof(struct test_struct));
+
+	bool equal = memcmp(lcore_struct->data, state->old_value.data,
+			    TEST_STRUCT_DATA_SIZE) == 0;
+
+	state->success = equal && properly_aligned;
+
+	memcpy(lcore_struct->data, state->new_value.data,
+	       TEST_STRUCT_DATA_SIZE);
+
+	return 0;
+}
+
+static int
+test_struct_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_struct);
+	RTE_LCORE_VAR_ALLOC(test_struct);
+	RTE_LCORE_VAR_ALLOC(after_struct);
+
+	struct struct_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+
+		rand_blk(state->old_value.data, TEST_STRUCT_DATA_SIZE);
+		rand_blk(state->new_value.data, TEST_STRUCT_DATA_SIZE);
+
+		memcpy(RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct)->data,
+		       state->old_value.data, TEST_STRUCT_DATA_SIZE);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_struct, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+		struct test_struct *lstruct =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = memcmp(lstruct->data, state->new_value.data,
+				    TEST_STRUCT_DATA_SIZE) == 0;
+
+		TEST_ASSERT(equal, "Lcore %d failed to update struct",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_struct);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_struct);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "struct was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "struct was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define TEST_ARRAY_SIZE 99
+
+typedef uint16_t test_array_t[TEST_ARRAY_SIZE];
+
+static void test_array_init_rand(test_array_t a)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		a[i] = (uint16_t)rte_rand();
+}
+
+static bool test_array_equal(test_array_t a, test_array_t b)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++) {
+		if (a[i] != b[i])
+			return false;
+	}
+	return true;
+}
+
+static void test_array_copy(test_array_t dst, const test_array_t src)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		dst[i] = src[i];
+}
+
+static RTE_LCORE_VAR_HANDLE(char, before_array);
+static RTE_LCORE_VAR_HANDLE(test_array_t, test_array);
+static RTE_LCORE_VAR_HANDLE(char, after_array);
+
+struct array_checker_state {
+	test_array_t old_value;
+	test_array_t new_value;
+	bool success;
+};
+
+static int check_array(void *arg)
+{
+	struct array_checker_state *state = arg;
+
+	test_array_t *lcore_array = RTE_LCORE_VAR_VALUE(test_array);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_array, alignof(test_array_t));
+
+	bool equal = test_array_equal(*lcore_array, state->old_value);
+
+	state->success = equal && properly_aligned;
+
+	test_array_copy(*lcore_array, state->new_value);
+
+	return 0;
+}
+
+static int
+test_array_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_array);
+	RTE_LCORE_VAR_ALLOC(test_array);
+	RTE_LCORE_VAR_ALLOC(after_array);
+
+	struct array_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+
+		test_array_init_rand(state->new_value);
+		test_array_init_rand(state->old_value);
+
+		test_array_copy(*RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+							   test_array),
+				state->old_value);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_array, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+		test_array_t *larray =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_array);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = test_array_equal(*larray, state->new_value);
+
+		TEST_ASSERT(equal, "Lcore %d failed to update array",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_array);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_array);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "array was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "array was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
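+/* enough per-lcore uint32_t values (2 * RTE_MAX_LCORE_VAR bytes per
+ * lcore id in total) to force the use of more than one lcore buffer
+ */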
+#define MANY_LVARS (2 * RTE_MAX_LCORE_VAR / sizeof(uint32_t))
+
+static int
+test_many_lvars(void)
+{
+	uint32_t **handlers = malloc(sizeof(uint32_t *) * MANY_LVARS);
+	unsigned int i;
+
+	TEST_ASSERT(handlers != NULL, "Unable to allocate memory");
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		RTE_LCORE_VAR_ALLOC(handlers[i]);
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t *v =
+				RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handlers[i]);
+			*v = (uint32_t)(i * lcore_id);
+		}
+	}
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t v = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+								handlers[i]);
+			TEST_ASSERT_EQUAL((uint32_t)(i * lcore_id), v,
+					  "Unexpected lcore variable value on "
+					  "lcore %d", lcore_id);
+		}
+	}
+
+	free(handlers);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_large_lvar(void)
+{
+	RTE_LCORE_VAR_HANDLE(unsigned char, large);
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC_SIZE(large, RTE_MAX_LCORE_VAR);
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+
+		memset(ptr, (unsigned char)lcore_id, RTE_MAX_LCORE_VAR);
+	}
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+		size_t i;
+
+		for (i = 0; i < RTE_MAX_LCORE_VAR; i++)
+			TEST_ASSERT_EQUAL(ptr[i], (unsigned char)lcore_id,
+					  "Large lcore variable value is "
+					  "corrupted on lcore %d.",
+					  lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_int_lvar),
+		TEST_CASE(test_sized_alignment),
+		TEST_CASE(test_struct_lvar),
+		TEST_CASE(test_array_lvar),
+		TEST_CASE(test_many_lvars),
+		TEST_CASE(test_large_lvar),
+		TEST_CASES_END()
+	},
+};
+
+static int test_lcore_var(void)
+{
+	if (rte_lcore_count() < MIN_LCORES) {
+		printf("Not enough cores for lcore_var_autotest; expecting at "
+		       "least %d.\n", MIN_LCORES);
+		return TEST_SKIPPED;
+	}
+
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
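+/* registered as a fast test; may also be run standalone with, e.g.,
+ * DPDK_TEST=lcore_var_autotest ./app/test/dpdk-test
+ */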
+REGISTER_FAST_TEST(lcore_var_autotest, true, false, test_lcore_var);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH 3/6] random: keep PRNG state in lcore variable
  2024-09-10  7:03                       ` [PATCH 0/6] Lcore variables Mattias Rönnblom
  2024-09-10  7:03                         ` [PATCH 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-09-10  7:03                         ` [PATCH 2/6] eal: add lcore variable test suite Mattias Rönnblom
@ 2024-09-10  7:03                         ` Mattias Rönnblom
  2024-09-10  7:03                         ` [PATCH 4/6] power: keep per-lcore " Mattias Rönnblom
                                           ` (2 subsequent siblings)
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-10  7:03 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Mattias Rönnblom

Replace keeping PRNG state in a RTE_MAX_LCORE-sized static array of
cache-aligned and RTE_CACHE_GUARDed struct instances with keeping the
same state in a more cache-friendly lcore variable.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

RFC v3:
 * Remove cache alignment on unregistered threads' rte_rand_state.
   (Morten Brørup)
---
 lib/eal/common/rte_random.c | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index 90e91b3c4f..a8d00308dd 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -11,6 +11,7 @@
 #include <rte_branch_prediction.h>
 #include <rte_cycles.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_random.h>
 
 struct __rte_cache_aligned rte_rand_state {
@@ -19,14 +20,12 @@ struct __rte_cache_aligned rte_rand_state {
 	uint64_t z3;
 	uint64_t z4;
 	uint64_t z5;
-	RTE_CACHE_GUARD;
 };
 
-/* One instance each for every lcore id-equipped thread, and one
- * additional instance to be shared by all others threads (i.e., all
- * unregistered non-EAL threads).
- */
-static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];
+RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);
+
+/* instance to be shared by all unregistered non-EAL threads */
+static struct rte_rand_state unregistered_rand_state;
 
 static uint32_t
 __rte_rand_lcg32(uint32_t *seed)
@@ -85,8 +84,14 @@ rte_srand(uint64_t seed)
 	unsigned int lcore_id;
 
 	/* add lcore_id to seed to avoid having the same sequence */
-	for (lcore_id = 0; lcore_id < RTE_DIM(rand_states); lcore_id++)
-		__rte_srand_lfsr258(seed + lcore_id, &rand_states[lcore_id]);
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		struct rte_rand_state *lcore_state =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, rand_state);
+
+		__rte_srand_lfsr258(seed + lcore_id, lcore_state);
+	}
+
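+	/* the post-loop lcore_id value (RTE_MAX_LCORE) gives the shared
+	 * unregistered-thread state a seed distinct from all others */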
+	__rte_srand_lfsr258(seed + lcore_id, &unregistered_rand_state);
 }
 
 static __rte_always_inline uint64_t
@@ -124,11 +129,10 @@ struct rte_rand_state *__rte_rand_get_state(void)
 
 	idx = rte_lcore_id();
 
-	/* last instance reserved for unregistered non-EAL threads */
 	if (unlikely(idx == LCORE_ID_ANY))
-		idx = RTE_MAX_LCORE;
+		return &unregistered_rand_state;
 
-	return &rand_states[idx];
+	return RTE_LCORE_VAR_VALUE(rand_state);
 }
 
 uint64_t
@@ -228,6 +232,8 @@ RTE_INIT(rte_rand_init)
 {
 	uint64_t seed;
 
+	RTE_LCORE_VAR_ALLOC(rand_state);
+
 	seed = __rte_random_initial_seed();
 
 	rte_srand(seed);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH 4/6] power: keep per-lcore state in lcore variable
  2024-09-10  7:03                       ` [PATCH 0/6] Lcore variables Mattias Rönnblom
                                           ` (2 preceding siblings ...)
  2024-09-10  7:03                         ` [PATCH 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
@ 2024-09-10  7:03                         ` Mattias Rönnblom
  2024-09-10  7:03                         ` [PATCH 5/6] service: " Mattias Rönnblom
  2024-09-10  7:03                         ` [PATCH 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-10  7:03 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Mattias Rönnblom

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

RFC v3:
 * Replace for loop with FOREACH macro.
---
 lib/power/rte_power_pmd_mgmt.c | 34 ++++++++++++++++------------------
 1 file changed, 16 insertions(+), 18 deletions(-)

diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index b1c18a5f56..a5139dd4f7 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -5,6 +5,7 @@
 #include <stdlib.h>
 
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_cycles.h>
 #include <rte_cpuflags.h>
 #include <rte_malloc.h>
@@ -69,7 +70,7 @@ struct __rte_cache_aligned pmd_core_cfg {
 	uint64_t sleep_target;
 	/**< Prevent a queue from triggering sleep multiple times */
 };
-static struct pmd_core_cfg lcore_cfgs[RTE_MAX_LCORE];
+static RTE_LCORE_VAR_HANDLE(struct pmd_core_cfg, lcore_cfgs);
 
 static inline bool
 queue_equal(const union queue *l, const union queue *r)
@@ -252,12 +253,11 @@ clb_multiwait(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	/* early exit */
 	if (likely(!empty))
@@ -317,13 +317,12 @@ clb_pause(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 	uint32_t pause_duration = rte_power_pmd_mgmt_get_pause_duration();
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	if (likely(!empty))
 		/* early exit */
@@ -358,9 +357,8 @@ clb_scale_freq(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	const bool empty = nb_rx == 0;
-	struct pmd_core_cfg *lcore_conf = &lcore_cfgs[lcore];
+	struct pmd_core_cfg *lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 	struct queue_list_entry *queue_conf = arg;
 
 	if (likely(!empty)) {
@@ -518,7 +516,7 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 		goto end;
 	}
 
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -619,7 +617,7 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 	}
 
 	/* no need to check queue id as wrong queue id would not be enabled */
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -769,21 +767,21 @@ rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore)
 }
 
 RTE_INIT(rte_power_ethdev_pmgmt_init) {
-	size_t i;
-	int j;
+	struct pmd_core_cfg *lcore_cfg;
+	int i;
+
+	RTE_LCORE_VAR_ALLOC(lcore_cfgs);
 
 	/* initialize all tailqs */
-	for (i = 0; i < RTE_DIM(lcore_cfgs); i++) {
-		struct pmd_core_cfg *cfg = &lcore_cfgs[i];
-		TAILQ_INIT(&cfg->head);
-	}
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_cfg, lcore_cfgs)
+		TAILQ_INIT(&lcore_cfg->head);
 
 	/* initialize config defaults */
 	emptypoll_max = 512;
 	pause_duration = 1;
 	/* scaling defaults out of range to ensure not used unless set by user or app */
-	for (j = 0; j < RTE_MAX_LCORE; j++) {
-		scale_freq_min[j] = 0;
-		scale_freq_max[j] = UINT32_MAX;
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		scale_freq_min[i] = 0;
+		scale_freq_max[i] = UINT32_MAX;
 	}
 }
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH 5/6] service: keep per-lcore state in lcore variable
  2024-09-10  7:03                       ` [PATCH 0/6] Lcore variables Mattias Rönnblom
                                           ` (3 preceding siblings ...)
  2024-09-10  7:03                         ` [PATCH 4/6] power: keep per-lcore " Mattias Rönnblom
@ 2024-09-10  7:03                         ` Mattias Rönnblom
  2024-09-10  7:03                         ` [PATCH 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-10  7:03 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Mattias Rönnblom

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

RFC v6:
 * Remove a now-redundant lcore variable value memset().

RFC v5:
 * Fix lcore value pointer bug introduced by RFC v4.

RFC v4:
 * Remove strange-looking lcore value lookup potentially containing
   invalid lcore id. (Morten Brørup)
 * Replace misplaced tab with space. (Morten Brørup)
---
 lib/eal/common/rte_service.c | 115 +++++++++++++++++++----------------
 1 file changed, 63 insertions(+), 52 deletions(-)

diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index 56379930b6..03379f1588 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -11,6 +11,7 @@
 
 #include <eal_trace_internal.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
@@ -75,7 +76,7 @@ struct __rte_cache_aligned core_state {
 
 static uint32_t rte_service_count;
 static struct rte_service_spec_impl *rte_services;
-static struct core_state *lcore_states;
+static RTE_LCORE_VAR_HANDLE(struct core_state, lcore_states);
 static uint32_t rte_service_library_initialized;
 
 int32_t
@@ -101,12 +102,8 @@ rte_service_init(void)
 		goto fail_mem;
 	}
 
-	lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
-			sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
-	if (!lcore_states) {
-		EAL_LOG(ERR, "error allocating core states array");
-		goto fail_mem;
-	}
+	if (lcore_states == NULL)
+		RTE_LCORE_VAR_ALLOC(lcore_states);
 
 	int i;
 	struct rte_config *cfg = rte_eal_get_configuration();
@@ -122,7 +119,6 @@ rte_service_init(void)
 	return 0;
 fail_mem:
 	rte_free(rte_services);
-	rte_free(lcore_states);
 	return -ENOMEM;
 }
 
@@ -136,7 +132,6 @@ rte_service_finalize(void)
 	rte_eal_mp_wait_lcore();
 
 	rte_free(rte_services);
-	rte_free(lcore_states);
 
 	rte_service_library_initialized = 0;
 }
@@ -286,7 +281,6 @@ rte_service_component_register(const struct rte_service_spec *spec,
 int32_t
 rte_service_component_unregister(uint32_t id)
 {
-	uint32_t i;
 	struct rte_service_spec_impl *s;
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
 
@@ -294,9 +288,10 @@ rte_service_component_unregister(uint32_t id)
 
 	s->internal_flags &= ~(SERVICE_F_REGISTERED);
 
+	struct core_state *cs;
 	/* clear the run-bit in all cores */
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		lcore_states[i].service_mask &= ~(UINT64_C(1) << id);
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		cs->service_mask &= ~(UINT64_C(1) << id);
 
 	memset(&rte_services[id], 0, sizeof(struct rte_service_spec_impl));
 
@@ -454,7 +449,10 @@ rte_service_may_be_active(uint32_t id)
 		return -EINVAL;
 
 	for (i = 0; i < lcore_count; i++) {
-		if (lcore_states[ids[i]].service_active_on_lcore[id])
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(ids[i], lcore_states);
+
+		if (cs->service_active_on_lcore[id])
 			return 1;
 	}
 
@@ -464,7 +462,7 @@ rte_service_may_be_active(uint32_t id)
 int32_t
 rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)
 {
-	struct core_state *cs = &lcore_states[rte_lcore_id()];
+	struct core_state *cs =	RTE_LCORE_VAR_VALUE(lcore_states);
 	struct rte_service_spec_impl *s;
 
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
@@ -486,8 +484,7 @@ service_runner_func(void *arg)
 {
 	RTE_SET_USED(arg);
 	uint8_t i;
-	const int lcore = rte_lcore_id();
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_VALUE(lcore_states);
 
 	rte_atomic_store_explicit(&cs->thread_active, 1, rte_memory_order_seq_cst);
 
@@ -533,13 +530,15 @@ service_runner_func(void *arg)
 int32_t
 rte_service_lcore_may_be_active(uint32_t lcore)
 {
-	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+	if (lcore >= RTE_MAX_LCORE || !cs->is_service_core)
 		return -EINVAL;
 
 	/* Load thread_active using ACQUIRE to avoid instructions dependent on
 	 * the result being re-ordered before this load completes.
 	 */
-	return rte_atomic_load_explicit(&lcore_states[lcore].thread_active,
+	return rte_atomic_load_explicit(&cs->thread_active,
 			       rte_memory_order_acquire);
 }
 
@@ -547,9 +546,11 @@ int32_t
 rte_service_lcore_count(void)
 {
 	int32_t count = 0;
-	uint32_t i;
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		count += lcore_states[i].is_service_core;
+
+	struct core_state *cs;
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		count += cs->is_service_core;
+
 	return count;
 }
 
@@ -566,7 +567,8 @@ rte_service_lcore_list(uint32_t array[], uint32_t n)
 	uint32_t i;
 	uint32_t idx = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		struct core_state *cs = &lcore_states[i];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
 		if (cs->is_service_core) {
 			array[idx] = i;
 			idx++;
@@ -582,7 +584,7 @@ rte_service_lcore_count_services(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -634,30 +636,31 @@ rte_service_start_with_defaults(void)
 static int32_t
 service_update(uint32_t sid, uint32_t lcore, uint32_t *set, uint32_t *enabled)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	/* validate ID, or return error value */
 	if (!service_valid(sid) || lcore >= RTE_MAX_LCORE ||
-			!lcore_states[lcore].is_service_core)
+			!cs->is_service_core)
 		return -EINVAL;
 
 	uint64_t sid_mask = UINT64_C(1) << sid;
 	if (set) {
-		uint64_t lcore_mapped = lcore_states[lcore].service_mask &
-			sid_mask;
+		uint64_t lcore_mapped = cs->service_mask & sid_mask;
 
 		if (*set && !lcore_mapped) {
-			lcore_states[lcore].service_mask |= sid_mask;
+			cs->service_mask |= sid_mask;
 			rte_atomic_fetch_add_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 		if (!*set && lcore_mapped) {
-			lcore_states[lcore].service_mask &= ~(sid_mask);
+			cs->service_mask &= ~(sid_mask);
 			rte_atomic_fetch_sub_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 	}
 
 	if (enabled)
-		*enabled = !!(lcore_states[lcore].service_mask & (sid_mask));
+		*enabled = !!(cs->service_mask & (sid_mask));
 
 	return 0;
 }
@@ -685,13 +688,14 @@ set_lcore_state(uint32_t lcore, int32_t state)
 {
 	/* mark core state in hugepage backed config */
 	struct rte_config *cfg = rte_eal_get_configuration();
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	cfg->lcore_role[lcore] = state;
 
 	/* mark state in process local lcore_config */
 	lcore_config[lcore].core_role = state;
 
 	/* update per-lcore optimized state tracking */
-	lcore_states[lcore].is_service_core = (state == ROLE_SERVICE);
+	cs->is_service_core = (state == ROLE_SERVICE);
 
 	rte_eal_trace_service_lcore_state_change(lcore, state);
 }
@@ -702,14 +706,16 @@ rte_service_lcore_reset_all(void)
 	/* loop over cores, reset all to mask 0 */
 	uint32_t i;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		if (lcore_states[i].is_service_core) {
-			lcore_states[i].service_mask = 0;
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
+		if (cs->is_service_core) {
+			cs->service_mask = 0;
 			set_lcore_state(i, ROLE_RTE);
 			/* runstate act as guard variable Use
 			 * store-release memory order here to synchronize
 			 * with load-acquire in runstate read functions.
 			 */
-			rte_atomic_store_explicit(&lcore_states[i].runstate,
+			rte_atomic_store_explicit(&cs->runstate,
 				RUNSTATE_STOPPED, rte_memory_order_release);
 		}
 	}
@@ -725,17 +731,19 @@ rte_service_lcore_add(uint32_t lcore)
 {
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
-	if (lcore_states[lcore].is_service_core)
+
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+	if (cs->is_service_core)
 		return -EALREADY;
 
 	set_lcore_state(lcore, ROLE_SERVICE);
 
 	/* ensure that after adding a core the mask and state are defaults */
-	lcore_states[lcore].service_mask = 0;
+	cs->service_mask = 0;
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	return rte_eal_wait_lcore(lcore);
@@ -747,7 +755,7 @@ rte_service_lcore_del(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -771,7 +779,7 @@ rte_service_lcore_start(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -801,6 +809,8 @@ rte_service_lcore_start(uint32_t lcore)
 int32_t
 rte_service_lcore_stop(uint32_t lcore)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
@@ -808,12 +818,11 @@ rte_service_lcore_stop(uint32_t lcore)
 	 * memory order here to synchronize with store-release
 	 * in runstate update functions.
 	 */
-	if (rte_atomic_load_explicit(&lcore_states[lcore].runstate, rte_memory_order_acquire) ==
+	if (rte_atomic_load_explicit(&cs->runstate, rte_memory_order_acquire) ==
 			RUNSTATE_STOPPED)
 		return -EALREADY;
 
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
 	uint64_t service_mask = cs->service_mask;
 
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
@@ -834,7 +843,7 @@ rte_service_lcore_stop(uint32_t lcore)
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	rte_eal_trace_service_lcore_stop(lcore);
@@ -845,7 +854,7 @@ rte_service_lcore_stop(uint32_t lcore)
 static uint64_t
 lcore_attr_get_loops(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->loops, rte_memory_order_relaxed);
 }
@@ -853,7 +862,7 @@ lcore_attr_get_loops(unsigned int lcore)
 static uint64_t
 lcore_attr_get_cycles(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->cycles, rte_memory_order_relaxed);
 }
@@ -861,7 +870,7 @@ lcore_attr_get_cycles(unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].calls,
 		rte_memory_order_relaxed);
@@ -870,7 +879,7 @@ lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_cycles(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].cycles,
 		rte_memory_order_relaxed);
@@ -886,7 +895,10 @@ attr_get(uint32_t id, lcore_attr_get_fun lcore_attr_get)
 	uint64_t sum = 0;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		if (lcore_states[lcore].is_service_core)
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+		if (cs->is_service_core)
 			sum += lcore_attr_get(id, lcore);
 	}
 
@@ -930,12 +942,11 @@ int32_t
 rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 			   uint64_t *attr_value)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE || !attr_value)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -960,7 +971,8 @@ rte_service_attr_reset_all(uint32_t id)
 		return -EINVAL;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		struct core_state *cs = &lcore_states[lcore];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 		cs->service_stats[id] = (struct service_stats) {};
 	}
@@ -971,12 +983,11 @@ rte_service_attr_reset_all(uint32_t id)
 int32_t
 rte_service_lcore_attr_reset_all(uint32_t lcore)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -1011,7 +1022,7 @@ static void
 service_dump_calls_per_lcore(FILE *f, uint32_t lcore)
 {
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	fprintf(f, "%02d\t", lcore);
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH 6/6] eal: keep per-lcore power intrinsics state in lcore variable
  2024-09-10  7:03                       ` [PATCH 0/6] Lcore variables Mattias Rönnblom
                                           ` (4 preceding siblings ...)
  2024-09-10  7:03                         ` [PATCH 5/6] service: " Mattias Rönnblom
@ 2024-09-10  7:03                         ` Mattias Rönnblom
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-10  7:03 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Mattias Rönnblom

Keep per-lcore power intrinsics state in a lcore variable to reduce
cache working set size and avoid any CPU next-line-prefetching causing
false sharing.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/x86/rte_power_intrinsics.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index 6d9b64240c..f4ba2c8ecb 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -6,6 +6,7 @@
 
 #include <rte_common.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_rtm.h>
 #include <rte_spinlock.h>
 
@@ -14,10 +15,14 @@
 /*
  * Per-lcore structure holding current status of C0.2 sleeps.
  */
-static alignas(RTE_CACHE_LINE_SIZE) struct power_wait_status {
+struct power_wait_status {
 	rte_spinlock_t lock;
 	volatile void *monitor_addr; /**< NULL if not currently sleeping */
-} wait_status[RTE_MAX_LCORE];
+};
+
+RTE_LCORE_VAR_HANDLE(struct power_wait_status, wait_status);
+
+RTE_LCORE_VAR_INIT(wait_status);
 
 /*
  * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
@@ -172,7 +177,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 	if (pmc->fn == NULL)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/* update sleep address */
 	rte_spinlock_lock(&s->lock);
@@ -264,7 +269,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 	if (lcore_id >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/*
 	 * There is a race condition between sleep, wakeup and locking, but we
@@ -303,8 +308,8 @@ int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
 {
-	const unsigned int lcore_id = rte_lcore_id();
-	struct power_wait_status *s = &wait_status[lcore_id];
+	struct power_wait_status *s = RTE_LCORE_VAR_VALUE(wait_status);
+
 	uint32_t i, rc;
 
 	/* check if supported */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH 1/6] eal: add static per-lcore memory allocation facility
  2024-09-10  7:03                         ` [PATCH 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-09-10  9:32                           ` Morten Brørup
  2024-09-10 10:44                             ` Mattias Rönnblom
  2024-09-11 10:32                           ` Morten Brørup
  2024-09-11 17:04                           ` [PATCH v2 0/6] Lcore variables Mattias Rönnblom
  2 siblings, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-09-10  9:32 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: hofors, Stephen Hemminger, Konstantin Ananyev, David Marchand

> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> Sent: Tuesday, 10 September 2024 09.04
> 
> Introduce DPDK per-lcore id variables, or lcore variables for short.

Throughout the descriptions and comments,
please replace "lcore id" with "lcore" (e.g. "per-lcore variables"),
when referring to the lcore, and not the index of the lcore.
(Your intention might be to highlight that it only covers threads with an lcore id,
but if that wasn't the case, you would refer to them as "threads" not "lcores".)
Except, of course, when referring to an actual lcore id, e.g. lcore_id function parameters.

Paraphrasing:
Consider the type of what you are referring to;
use "lcore" if its type is "thread", and
use "lcore id" if its type is "int".

I might be wrong here, but please think hard about it.

> 
> An lcore variable has one value for every current and future lcore
> id-equipped thread.
> 
> The primary <rte_lcore_var.h> use case is for statically allocating
> small, frequently-accessed data structures, for which one instance
> should exist for each lcore.
> 
> Lcore variables are similar to thread-local storage (TLS, e.g., C11
> _Thread_local), but decoupling the values' life time with that of the
> threads.
> 
> Lcore variables are also similar in terms of functionality provided by
> FreeBSD kernel's DPCPU_*() family of macros and the associated
> build-time machinery. DPCPU uses linker scripts, which effectively
> prevents the reuse of its, otherwise seemingly viable, approach.
> 
> The currently-prevailing way to solve the same problem as lcore
> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> lcore variables over this approach is that data related to the same
> lcore now is close (spatially, in memory), rather than data used by
> the same module, which in turn avoid excessive use of padding,
> polluting caches with unused data.
> 
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> 
> --

> +++ b/doc/api/doxy-api-index.md
> @@ -99,6 +99,7 @@ The public API headers are grouped by topics:
>    [interrupts](@ref rte_interrupts.h),
>    [launch](@ref rte_launch.h),
>    [lcore](@ref rte_lcore.h),
> +  [lcore-varible](@ref rte_lcore_var.h),

Typo: varible -> variable


> +++ b/doc/guides/rel_notes/release_24_11.rst
> @@ -55,6 +55,20 @@ New Features
>       Also, make sure to start the actual text at the margin.
>       =======================================================
> 
> +* **Added EAL per-lcore static memory allocation facility.**
> +
> +    Added EAL API <rte_lcore_var.h> for statically allocating small,
> +    frequently-accessed data structures, for which one instance should
> +    exist for each lcore.
> +
> +    With lcore variables, data is organized spatially on a per-lcore
> +    basis, rather than per library or PMD, avoiding the need for cache
> +    aligning (or RTE_CACHE_GUARDing) data structures, which in turn
> +    reduces CPU cache internal fragmentation, improving performance.
> +
> +    Lcore variables are similar to thread-local storage (TLS, e.g.,
> +    C11 _Thread_local), but decoupling the values' life time from that
> +    of the threads.

When referring to TLS, you might want to clarify that lcore variables are not instantiated for unregistered threads.


> +static void *lcore_buffer;
> +static size_t offset = RTE_MAX_LCORE_VAR;
> +
> +static void *
> +lcore_var_alloc(size_t size, size_t align)
> +{
> +	void *handle;
> +	void *value;
> +
> +	offset = RTE_ALIGN_CEIL(offset, align);
> +
> +	if (offset + size > RTE_MAX_LCORE_VAR) {
> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> +					     LCORE_BUFFER_SIZE);
> +		RTE_VERIFY(lcore_buffer != NULL);
> +
> +		offset = 0;
> +	}

To determine if the lcore_buffer memory should be allocated, why not just check if lcore_buffer == NULL?
Then offset wouldn't need an initial value of RTE_MAX_LCORE_VAR.

> +
> +	handle = RTE_PTR_ADD(lcore_buffer, offset);
> +
> +	offset += size;
> +
> +	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
> +		memset(value, 0, size);
> +
> +	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
> +		"%"PRIuPTR"-byte alignment", size, align);
> +
> +	return handle;
> +}


> +/**
> + * @file
> + *
> + * RTE Per-lcore id variables

Suggest mentioning the short form too, e.g.:
"RTE Per-lcore id variables (RTE Lcore variables)"

> + *
> + * This API provides a mechanism to create and access per-lcore id
> + * variables in a space- and cycle-efficient manner.
> + *
> + * A per-lcore id variable (or lcore variable for short) has one value
> + * for each EAL thread and registered non-EAL thread.

And service thread.

> + * There is one
> + * copy for each current and future lcore id-equipped thread, with the

"one copy" -> "one instance"

> + * total number of copies amounting to @c RTE_MAX_LCORE. The value of

"copies" -> "instances"

> + * an lcore variable for a particular lcore id is independent from
> + * other values (for other lcore ids) within the same lcore variable.
> + *
> + * In order to access the values of an lcore variable, a handle is
> + * used. The type of the handle is a pointer to the value's type
> + * (e.g., for @c uint32_t lcore variable, the handle is a
> + * <code>uint32_t *</code>. The handler type is used to inform the

Typo: "handler" -> "handle", I think :-/
Found this typo multiple times; search-replace.

> + * access macros the type of the values. A handle may be passed
> + * between modules and threads just like any pointer, but its value
> + * must be treated as a an opaque identifier. An allocated handle
> + * never has the value NULL.
> + *
> + * @b Creation
> + *
> + * An lcore variable is created in two steps:
> + *  1. Define a lcore variable handle by using @ref RTE_LCORE_VAR_HANDLE.
> + *  2. Allocate lcore variable storage and initialize the handle with
> + *     a unique identifier by @ref RTE_LCORE_VAR_ALLOC or
> + *     @ref RTE_LCORE_VAR_INIT. Allocation generally occurs the time of
> + *     module initialization, but may be done at any time.
> + *
> + * An lcore variable is not tied to the owning thread's lifetime. It's
> + * available for use by any thread immediately after having been
> + * allocated, and continues to be available throughout the lifetime of
> + * the EAL.
> + *
> + * Lcore variables cannot and need not be freed.
> + *
> + * @b Access
> + *
> + * The value of any lcore variable for any lcore id may be accessed
> + * from any thread (including unregistered threads), but it should
> + * only be *frequently* read from or written to by the owner.
> + *
> + * Values of the same lcore variable but owned by to different lcore

Typo: to -> two

> + * ids may be frequently read or written by the owners without risking
> + * false sharing.
> + *
> + * An appropriate synchronization mechanism (e.g., atomic loads and
> + * stores) should employed to assure there are no data races between
> + * the owning thread and any non-owner threads accessing the same
> + * lcore variable instance.
> + *
> + * The value of the lcore variable for a particular lcore id is
> + * accessed using @ref RTE_LCORE_VAR_LCORE_VALUE.
> + *
> + * A common pattern is for an EAL thread or a registered non-EAL
> + * thread to access its own lcore variable value. For this purpose, a
> + * short-hand exists in the form of @ref RTE_LCORE_VAR_VALUE.
> + *
> + * Although the handle (as defined by @ref RTE_LCORE_VAR_HANDLE) is a
> + * pointer with the same type as the value, it may not be directly
> + * dereferenced and must be treated as an opaque identifier.
> + *
> + * Lcore variable handles and value pointers may be freely passed
> + * between different threads.
> + *
> + * @b Storage
> + *
> + * An lcore variable's values may by of a primitive type like @c int,

Two typos: "values may by" -> "value may be"

> + * but would more typically be a @c struct.
> + *
> + * The lcore variable handle introduces a per-variable (not
> + * per-value/per-lcore id) overhead of @c sizeof(void *) bytes, so
> + * there are some memory footprint gains to be made by organizing all
> + * per-lcore id data for a particular module as one lcore variable
> + * (e.g., as a struct).
> + *
> + * An application may choose to define an lcore variable handle, which
> + * it then it goes on to never allocate.
> + *
> + * The size of a lcore variable's value must be less than the DPDK
> + * build-time constant @c RTE_MAX_LCORE_VAR.
> + *
> + * The lcore variable are stored in a series of lcore buffers, which
> + * are allocated from the libc heap. Heap allocation failures are
> + * treated as fatal.
> + *
> + * Lcore variables should generally *not* be @ref __rte_cache_aligned
> + * and need *not* include a @ref RTE_CACHE_GUARD field, since the use
> + * of these constructs are designed to avoid false sharing. In the
> + * case of an lcore variable instance, the thread most recently
> + * accessing nearby data structures should almost-always the lcore

Missing word: should almost-always *be* the lcore variables' owner.


> + * variables' owner. Adding padding will increase the effective memory
> + * working set size, potentially reducing performance.
> + *
> + * Lcore variable values take on an initial value of zero.
> + *
> + * @b Example
> + *
> + * Below is an example of the use of an lcore variable:
> + *
> + * @code{.c}
> + * struct foo_lcore_state {
> + *         int a;
> + *         long b;
> + * };
> + *
> + * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
> + *
> + * long foo_get_a_plus_b(void)
> + * {
> + *         struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(lcore_states);
> + *
> + *         return state->a + state->b;
> + * }
> + *
> + * RTE_INIT(rte_foo_init)
> + * {
> + *         RTE_LCORE_VAR_ALLOC(lcore_states);
> + *
> + *         struct foo_lcore_state *state;
> + *         RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
> + *                 (initialize 'state')

Consider: (initialize 'state') -> /* initialize 'state' */

> + *         }
> + *
> + *         (other initialization)

Consider: (other initialization) -> /* other initialization */

> + * }
> + * @endcode
> + *
> + *
> + * @b Alternatives
> + *
> + * Lcore variables are designed to replace a pattern exemplified below:
> + * @code{.c}
> + * struct __rte_cache_aligned foo_lcore_state {
> + *         int a;
> + *         long b;
> + *         RTE_CACHE_GUARD;
> + * };
> + *
> + * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
> + * @endcode
> + *
> + * This scheme is simple and effective, but has one drawback: the data
> + * is organized so that objects related to all lcores for a particular
> + * module is kept close in memory. At a bare minimum, this forces the
> + * use of cache-line alignment to avoid false sharing. With CPU

Consider adding: use of *padding to* cache-line alignment
My point here is:
This sentence should somehow include the word "padding".
This paragraph is not only about cache line alignment; it is primarily about padding.

> + * hardware prefetching and memory loads resulting from speculative
> + * execution (functions which seemingly are getting more eager faster
> + * than they are getting more intelligent), one or more "guard" cache
> + * lines may be required to separate one lcore's data from another's.
> + *
> + * Lcore variables has the upside of working with, not against, the

Typo: has -> have

> + * CPU's assumptions and for example next-line prefetchers may well
> + * work the way its designers intended (i.e., to the benefit, not
> + * detriment, of system performance).
> + *
> + * Another alternative to @ref rte_lcore_var.h is the @ref
> + * rte_per_lcore.h API, which make use of thread-local storage (TLS,

Typo: make -> makes

> + * e.g., GCC __thread or C11 _Thread_local). The main differences
> + * between by using the various forms of TLS (e.g., @ref
> + * RTE_DEFINE_PER_LCORE or _Thread_local) and the use of lcore
> + * variables are:
> + *
> + *   * The existence and non-existence of a thread-local variable
> + *     instance follow that of particular thread's. The data cannot be

Typo: "thread's" -> "threads", I think. :-/

> + *     accessed before the thread has been created, nor after it has
> + *     exited. As a result, thread-local variables must initialized in

Missing word: must *be* initialized

> + *     a "lazy" manner (e.g., at the point of thread creation). Lcore
> + *     variables may be accessed immediately after having been
> + *     allocated (which may be prior any thread beyond the main
> + *     thread is running).
> + *   * A thread-local variable is duplicated across all threads in the
> + *     process, including unregistered non-EAL threads (i.e.,
> + *     "regular" threads). For DPDK applications heavily relying on
> + *     multi-threading (in conjunction to DPDK's "one thread per core"
> + *     pattern), either by having many concurrent threads or
> + *     creating/destroying threads at a high rate, an excessive use of
> + *     thread-local variables may cause inefficiencies (e.g.,
> + *     increased thread creation overhead due to thread-local storage
> + *     initialization or increased total RAM footprint usage). Lcore
> + *     variables *only* exist for threads with an lcore id.
> + *   * If data in thread-local storage may be shared between threads
> + *     (i.e., can a pointer to a thread-local variable be passed to
> + *     and successfully dereferenced by non-owning thread) depends on
> + *     the details of the TLS implementation. With GCC __thread and
> + *     GCC _Thread_local, such data sharing is supported. In the C11
> + *     standard, the result of accessing another thread's
> + *     _Thread_local object is implementation-defined. Lcore variable
> + *     instances may be accessed reliably by any thread.
> + */
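
As an aside for readers weighing the two APIs: a minimal sketch of
the declarations being contrasted above (the type 'struct foo_state'
and both variable names are made up for illustration):

  /* TLS: one instance per thread (any thread), set up as threads
   * are created and gone when the owning thread exits. */
  static _Thread_local struct foo_state tls_state;

  /* Lcore variable: RTE_MAX_LCORE instances, usable by any thread
   * immediately after allocation (e.g., RTE_LCORE_VAR_ALLOC),
   * regardless of which threads exist at that point. */
  static RTE_LCORE_VAR_HANDLE(struct foo_state, lcore_state);
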
> +
> +#include <stddef.h>
> +#include <stdalign.h>
> +
> +#include <rte_common.h>
> +#include <rte_config.h>
> +#include <rte_lcore.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/**
> + * Given the lcore variable type, produces the type of the lcore
> + * variable handle.
> + */
> +#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
> +	type *
> +
> +/**
> + * Define a lcore variable handle.

Typo: "a lcore" -> "an lcore"
Found this typo multiple times; search-replace "a lcore".

> + *
> + * This macro defines a variable which is used as a handle to access
> + * the various per-lcore id instances of a per-lcore id variable.

Suggest:
"the various per-lcore id instances of a per-lcore id variable" ->
"the various instances of a per-lcore id variable"

> + *
> + * The aim with this macro is to make clear at the point of
> + * declaration that this is an lcore handler, rather than a regular
> + * pointer.
> + *
> + * Add @b static as a prefix in case the lcore variable are only to be

Typo: are -> is

> + * accessed from a particular translation unit.
> + */
> +#define RTE_LCORE_VAR_HANDLE(type, name)	\
> +	RTE_LCORE_VAR_HANDLE_TYPE(type) name
> +
> +/**
> + * Allocate space for an lcore variable, and initialize its handle.
> + *
> + * The values of the lcore variable are initialized to zero.

Consider adding: "the lcore variable *instances* are initialized"
Found this typo multiple times; search-replace.

> + */
> +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, align)	\
> +	handle = rte_lcore_var_alloc(size, align)
> +
> +/**
> + * Allocate space for an lcore variable, and initialize its handle,
> + * with values aligned for any type of object.
> + *
> + * The values of the lcore variable are initialized to zero.
> + */
> +#define RTE_LCORE_VAR_ALLOC_SIZE(handle, size)	\
> +	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, 0)
> +
> +/**
> + * Allocate space for an lcore variable of the size and alignment
> requirements
> + * suggested by the handler pointer type, and initialize its handle.
> + *
> + * The values of the lcore variable are initialized to zero.
> + */
> +#define RTE_LCORE_VAR_ALLOC(handle)					\
> +	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, sizeof(*(handle)),	\
> +				       alignof(typeof(*(handle))))
> +
> +/**
> + * Allocate an explicitly-sized, explicitly-aligned lcore variable by
> + * means of a @ref RTE_INIT constructor.
> + *
> + * The values of the lcore variable are initialized to zero.
> + */
> +#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
> +	RTE_INIT(rte_lcore_var_init_ ## name)				\
> +	{								\
> +		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
> +	}
> +
> +/**
> + * Allocate an explicitly-sized lcore variable by means of a @ref
> + * RTE_INIT constructor.
> + *
> + * The values of the lcore variable are initialized to zero.
> + */
> +#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
> +	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
> +
> +/**
> + * Allocate an lcore variable by means of a @ref RTE_INIT constructor.
> + *
> + * The values of the lcore variable are initialized to zero.
> + */
> +#define RTE_LCORE_VAR_INIT(name)					\
> +	RTE_INIT(rte_lcore_var_init_ ## name)				\
> +	{								\
> +		RTE_LCORE_VAR_ALLOC(name);				\
> +	}
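
To illustrate (a sketch; 'struct foo_state' and the handle name are
made up), this macro lets the RTE_INIT boilerplate from the example
earlier in the header be reduced to:

  static RTE_LCORE_VAR_HANDLE(struct foo_state, foo_states);
  RTE_LCORE_VAR_INIT(foo_states);

which allocates the variable from a constructor and leaves all its
values zeroed.
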
> +
> +/**
> + * Get void pointer to lcore variable instance with the specified
> + * lcore id.
> + *
> + * @param lcore_id
> + *   The lcore id specifying which of the @c RTE_MAX_LCORE value
> + *   instances should be accessed. The lcore id need not be valid
> + *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
> + *   is also not valid (and thus should not be dereferenced).
> + * @param handle
> + *   The lcore variable handle.
> + */
> +static inline void *
> +rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
> +{
> +	return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
> +}
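
To spell out the pointer arithmetic: values are laid out with a fixed
stride of RTE_MAX_LCORE_VAR bytes, so assuming (for illustration
only) RTE_MAX_LCORE_VAR is 1024, the value for lcore id 3 lives at
handle + 3072. The handle itself doubles as the address of lcore id
0's value.
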
> +
> +/**
> + * Get pointer to lcore variable instance with the specified lcore id.
> + *
> + * @param lcore_id
> + *   The lcore id specifying which of the @c RTE_MAX_LCORE value
> + *   instances should be accessed. The lcore id need not be valid
> + *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
> + *   is also not valid (and thus should not be dereferenced).
> + * @param handle
> + *   The lcore variable handle.
> + */
> +#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle)			\
> +	((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
> +
> +/**
> + * Get pointer to lcore variable instance of the current thread.
> + *
> + * May only be used by EAL threads and registered non-EAL threads.
> + */
> +#define RTE_LCORE_VAR_VALUE(handle) \
> +	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
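
For instance, with a hypothetical 'unsigned int' lcore variable named
'counter_var' (allocated elsewhere), a thread bumping its own counter
would do:

  unsigned int *counter = RTE_LCORE_VAR_VALUE(counter_var);

  (*counter)++;
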
> +
> +/**
> + * Iterate over each lcore id's value for a lcore variable.
> + *
> + * @param value
> + *   A pointer set successivly set to point to lcore variable value

"set successivly set" -> "successivly set"

Thinking out loud, ignore at your preference:
During the RFC discussions, the term used for referring to an lcore variable was discussed;
we considered "pointer", but settled for "value".
Perhaps "instance" would be usable in comments like like the one describing this function...
"A pointer set successivly set to point to lcore variable value" ->
"A pointer set successivly set to point to lcore variable instance".
I don't know.


> + *   corresponding to every lcore id (up to @c RTE_MAX_LCORE).
> + * @param handle
> + *   The lcore variable handle.
> + */
> +#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle)			\
> +	for (unsigned int lcore_id =					\
> +		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
> +	     lcore_id < RTE_MAX_LCORE;					\
> +	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))
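
A minimal usage sketch (the statistics struct, field and function are
made up for illustration; as noted earlier in the header, reads of
other lcores' values need appropriate synchronization if the owners
update them concurrently):

  struct my_stats {
          uint64_t pkt_count;
  };

  static RTE_LCORE_VAR_HANDLE(struct my_stats, stats_vars);
  RTE_LCORE_VAR_INIT(stats_vars);

  uint64_t
  my_stats_total(void)
  {
          uint64_t total = 0;
          struct my_stats *stats;

          /* visits the value for each of the RTE_MAX_LCORE lcore
           * ids, whether or not that id is currently in use */
          RTE_LCORE_VAR_FOREACH_VALUE(stats, stats_vars)
                  total += stats->pkt_count;

          return total;
  }
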
> +
> +/**
> + * Allocate space in the per-lcore id buffers for a lcore variable.
> + *
> + * The pointer returned is only an opaque identifer of the variable. To
> + * get an actual pointer to a particular instance of the variable use
> + * @ref RTE_LCORE_VAR_VALUE or @ref RTE_LCORE_VAR_LCORE_VALUE.
> + *
> + * The lcore variable values' memory is set to zero.
> + *
> + * The allocation is always successful, barring a fatal exhaustion of
> + * the per-lcore id buffer space.
> + *
> + * rte_lcore_var_alloc() is not multi-thread safe.
> + *
> + * @param size
> + *   The size (in bytes) of the variable's per-lcore id value. Must be > 0.
> + * @param align
> + *   If 0, the values will be suitably aligned for any kind of type
> + *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
> + *   on a multiple of *align*, which must be a power of 2 and equal or
> + *   less than @c RTE_CACHE_LINE_SIZE.
> + * @return
> + *   The id of the variable, stored in a void pointer value. The value

"id" -> "handle"

> + *   is always non-NULL.
> + */
> +__rte_experimental
> +void *
> +rte_lcore_var_alloc(size_t size, size_t align);
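
For illustration, a direct call (normally wrapped by the macros
above; 'struct foo_state' is a made-up type):

  struct foo_state *handle =
          rte_lcore_var_alloc(sizeof(struct foo_state),
                              alignof(struct foo_state));

which is essentially what RTE_LCORE_VAR_ALLOC(handle) expands to.
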
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_LCORE_VAR_H_ */
> diff --git a/lib/eal/version.map b/lib/eal/version.map
> index e3ff412683..5f5a3522c0 100644
> --- a/lib/eal/version.map
> +++ b/lib/eal/version.map
> @@ -396,6 +396,9 @@ EXPERIMENTAL {
> 
>  	# added in 24.03
>  	rte_vfio_get_device_info; # WINDOWS_NO_EXPORT
> +
> +	rte_lcore_var_alloc;
> +	rte_lcore_var;

No such function: rte_lcore_var

>  };
> 
>  INTERNAL {
> --
> 2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH 1/6] eal: add static per-lcore memory allocation facility
  2024-09-10  9:32                           ` Morten Brørup
@ 2024-09-10 10:44                             ` Mattias Rönnblom
  2024-09-10 13:07                               ` Morten Brørup
  2024-09-10 15:55                               ` Stephen Hemminger
  0 siblings, 2 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-10 10:44 UTC (permalink / raw)
  To: Morten Brørup, Mattias Rönnblom, dev
  Cc: Stephen Hemminger, Konstantin Ananyev, David Marchand

On 2024-09-10 11:32, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
>> Sent: Tuesday, 10 September 2024 09.04
>>
>> Introduce DPDK per-lcore id variables, or lcore variables for short.
> 
> Throughout the descriptions and comments,
> please replace "lcore id" with "lcore" (e.g. "per-lcore variables"),
> when referring to the lcore, and not the index of the lcore.
> (Your intention might be to highlight that it only covers threads with an lcore id,
> but if that wasn't the case, you would refer to them as "threads" not "lcores".)
> Except, of course, when referring to an actual lcore id, e.g. lcore_id function parameters.

"lcore" is just another word for "EAL thread." The lcore variables exist 
in one instance for every thread with an lcore id, thus also for 
registered non-EAL threads (i.e., threads which are not lcores).

I've tried to summarize the (very confusing) terminology of DPDK's 
threading model here:
https://ericsson.github.io/dataplanebook/threading/threading.html#eal-threads

So, in my world, "per-lcore id variables" is pretty accurate. You could 
say "variables with per-lcore id values" if you want to make it even 
more clear, what's going on.

> 
> Paraphrasing:
> Consider the type of what you are referring to;
> use "lcore" if its type is "thread", and
> use "lcore id" if its type is "int".
> 
> I might be wrong here, but please think hard about it.
> 
>>
>> An lcore variable has one value for every current and future lcore
>> id-equipped thread.
>>
>> The primary <rte_lcore_var.h> use case is for statically allocating
>> small, frequently-accessed data structures, for which one instance
>> should exist for each lcore.
>>
>> Lcore variables are similar to thread-local storage (TLS, e.g., C11
>> _Thread_local), but decoupling the values' life time with that of the
>> threads.
>>
>> Lcore variables are also similar in terms of functionality provided by
>> FreeBSD kernel's DPCPU_*() family of macros and the associated
>> build-time machinery. DPCPU uses linker scripts, which effectively
>> prevents the reuse of its, otherwise seemingly viable, approach.
>>
>> The currently-prevailing way to solve the same problem as lcore
>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
>> lcore variables over this approach is that data related to the same
>> lcore now is close (spatially, in memory), rather than data used by
>> the same module, which in turn avoid excessive use of padding,
>> polluting caches with unused data.
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> Acked-by: Morten Brørup <mb@smartsharesystems.com>
>>
>> --
> 
>> +++ b/doc/api/doxy-api-index.md
>> @@ -99,6 +99,7 @@ The public API headers are grouped by topics:
>>     [interrupts](@ref rte_interrupts.h),
>>     [launch](@ref rte_launch.h),
>>     [lcore](@ref rte_lcore.h),
>> +  [lcore-varible](@ref rte_lcore_var.h),
> 
> Typo: varible -> variable
> 
> 

I'll change it to "lcore variables" (no dash, plural).

>> +++ b/doc/guides/rel_notes/release_24_11.rst
>> @@ -55,6 +55,20 @@ New Features
>>        Also, make sure to start the actual text at the margin.
>>        =======================================================
>>
>> +* **Added EAL per-lcore static memory allocation facility.**
>> +
>> +    Added EAL API <rte_lcore_var.h> for statically allocating small,
>> +    frequently-accessed data structures, for which one instance should
>> +    exist for each lcore.
>> +
>> +    With lcore variables, data is organized spatially on a per-lcore
>> +    basis, rather than per library or PMD, avoiding the need for cache
>> +    aligning (or RTE_CACHE_GUARDing) data structures, which in turn
>> +    reduces CPU cache internal fragmentation, improving performance.
>> +
>> +    Lcore variables are similar to thread-local storage (TLS, e.g.,
>> +    C11 _Thread_local), but decoupling the values' life time from that
>> +    of the threads.
> 
> When referring to TLS, you might want to clarify that lcore variables are not instantiated for unregistered threads.
> 

Isn't that clear from the first paragraph? Although it should say "per 
lcore id", rather than "per lcore."

> 
>> +static void *lcore_buffer;
>> +static size_t offset = RTE_MAX_LCORE_VAR;
>> +
>> +static void *
>> +lcore_var_alloc(size_t size, size_t align)
>> +{
>> +	void *handle;
>> +	void *value;
>> +
>> +	offset = RTE_ALIGN_CEIL(offset, align);
>> +
>> +	if (offset + size > RTE_MAX_LCORE_VAR) {
>> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
>> +					     LCORE_BUFFER_SIZE);
>> +		RTE_VERIFY(lcore_buffer != NULL);
>> +
>> +		offset = 0;
>> +	}
> 
> To determine if the lcore_buffer memory should be allocated, why not just check if lcore_buffer == NULL?

Because it may be the case that lcore_buffer is non-NULL but the
remaining space is too small to service the allocation.
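
In other words, the initial offset of RTE_MAX_LCORE_VAR makes the
"out of space" branch double as the first-allocation branch. As an
illustration (assuming RTE_MAX_LCORE_VAR is 1024): after allocations
totalling 1000 bytes, a request for 100 bytes finds offset + size =
1100 > 1024, so a fresh buffer is allocated and offset restarts at 0;
the tail of the old buffer is simply left unused.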

> Then offset wouldn't need an initial value of RTE_MAX_LCORE_VAR.
> 
>> +
>> +	handle = RTE_PTR_ADD(lcore_buffer, offset);
>> +
>> +	offset += size;
>> +
>> +	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
>> +		memset(value, 0, size);
>> +
>> +	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
>> +		"%"PRIuPTR"-byte alignment", size, align);
>> +
>> +	return handle;
>> +}
> 
> 
>> +/**
>> + * @file
>> + *
>> + * RTE Per-lcore id variables
> 
> Suggest mentioning the short form too, e.g.:
> "RTE Per-lcore id variables (RTE Lcore variables)"

What about just "RTE Lcore variables"?

Exactly what they are is thoroughly described in the text that follows.

> 
>> + *
>> + * This API provides a mechanism to create and access per-lcore id
>> + * variables in a space- and cycle-efficient manner.
>> + *
>> + * A per-lcore id variable (or lcore variable for short) has one value
>> + * for each EAL thread and registered non-EAL thread.
> 
> And service thread.

Service threads are EAL threads, or, at a bare minimum, must have an
lcore id, and thus must be registered.

> 
>> + * There is one
>> + * copy for each current and future lcore id-equipped thread, with the
> 
> "one copy" -> "one instance"
> 

Fixed.

>> + * total number of copies amounting to @c RTE_MAX_LCORE. The value of
> 
> "copies" -> "instances"
> 

OK, I'll rephrase that sentence.

>> + * an lcore variable for a particular lcore id is independent from
>> + * other values (for other lcore ids) within the same lcore variable.
>> + *
>> + * In order to access the values of an lcore variable, a handle is
>> + * used. The type of the handle is a pointer to the value's type
>> + * (e.g., for @c uint32_t lcore variable, the handle is a
>> + * <code>uint32_t *</code>. The handler type is used to inform the
> 
> Typo: "handler" -> "handle", I think :-/
> Found this typo multiple times; search-replace.

Fixed.

> 
>> + * access macros the type of the values. A handle may be passed
>> + * between modules and threads just like any pointer, but its value
>> + * must be treated as a an opaque identifier. An allocated handle
>> + * never has the value NULL.
>> + *
>> + * @b Creation
>> + *
>> + * An lcore variable is created in two steps:
>> + *  1. Define a lcore variable handle by using @ref RTE_LCORE_VAR_HANDLE.
>> + *  2. Allocate lcore variable storage and initialize the handle with
>> + *     a unique identifier by @ref RTE_LCORE_VAR_ALLOC or
>> + *     @ref RTE_LCORE_VAR_INIT. Allocation generally occurs the time of
>> + *     module initialization, but may be done at any time.
>> + *
>> + * An lcore variable is not tied to the owning thread's lifetime. It's
>> + * available for use by any thread immediately after having been
>> + * allocated, and continues to be available throughout the lifetime of
>> + * the EAL.
>> + *
>> + * Lcore variables cannot and need not be freed.
>> + *
>> + * @b Access
>> + *
>> + * The value of any lcore variable for any lcore id may be accessed
>> + * from any thread (including unregistered threads), but it should
>> + * only be *frequently* read from or written to by the owner.
>> + *
>> + * Values of the same lcore variable but owned by to different lcore
> 
> Typo: to -> two
> 

Fixed.

>> + * ids may be frequently read or written by the owners without risking
>> + * false sharing.
>> + *
>> + * An appropriate synchronization mechanism (e.g., atomic loads and
>> + * stores) should employed to assure there are no data races between
>> + * the owning thread and any non-owner threads accessing the same
>> + * lcore variable instance.
>> + *
>> + * The value of the lcore variable for a particular lcore id is
>> + * accessed using @ref RTE_LCORE_VAR_LCORE_VALUE.
>> + *
>> + * A common pattern is for an EAL thread or a registered non-EAL
>> + * thread to access its own lcore variable value. For this purpose, a
>> + * short-hand exists in the form of @ref RTE_LCORE_VAR_VALUE.
>> + *
>> + * Although the handle (as defined by @ref RTE_LCORE_VAR_HANDLE) is a
>> + * pointer with the same type as the value, it may not be directly
>> + * dereferenced and must be treated as an opaque identifier.
>> + *
>> + * Lcore variable handles and value pointers may be freely passed
>> + * between different threads.
>> + *
>> + * @b Storage
>> + *
>> + * An lcore variable's values may by of a primitive type like @c int,
> 
> Two typos: "values may by" -> "value may be"
> 

That's not a typo. An lcore variable takes on multiple values, one for 
each lcore id. That said, I guess you could refer to the whole thing 
(the set of values) as the "value" as well.

>> + * but would more typically be a @c struct.
>> + *
>> + * The lcore variable handle introduces a per-variable (not
>> + * per-value/per-lcore id) overhead of @c sizeof(void *) bytes, so
>> + * there are some memory footprint gains to be made by organizing all
>> + * per-lcore id data for a particular module as one lcore variable
>> + * (e.g., as a struct).
>> + *
>> + * An application may choose to define an lcore variable handle, which
>> + * it then it goes on to never allocate.
>> + *
>> + * The size of a lcore variable's value must be less than the DPDK
>> + * build-time constant @c RTE_MAX_LCORE_VAR.
>> + *
>> + * The lcore variable are stored in a series of lcore buffers, which
>> + * are allocated from the libc heap. Heap allocation failures are
>> + * treated as fatal.
>> + *
>> + * Lcore variables should generally *not* be @ref __rte_cache_aligned
>> + * and need *not* include a @ref RTE_CACHE_GUARD field, since the use
>> + * of these constructs are designed to avoid false sharing. In the
>> + * case of an lcore variable instance, the thread most recently
>> + * accessing nearby data structures should almost-always the lcore
> 
> Missing word: should almost-always *be* the lcore variables' owner.
> 

Fixed.

> 
>> + * variables' owner. Adding padding will increase the effective memory
>> + * working set size, potentially reducing performance.
>> + *
>> + * Lcore variable values take on an initial value of zero.
>> + *
>> + * @b Example
>> + *
>> + * Below is an example of the use of an lcore variable:
>> + *
>> + * @code{.c}
>> + * struct foo_lcore_state {
>> + *         int a;
>> + *         long b;
>> + * };
>> + *
>> + * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
>> + *
>> + * long foo_get_a_plus_b(void)
>> + * {
>> + *         struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(lcore_states);
>> + *
>> + *         return state->a + state->b;
>> + * }
>> + *
>> + * RTE_INIT(rte_foo_init)
>> + * {
>> + *         RTE_LCORE_VAR_ALLOC(lcore_states);
>> + *
>> + *         struct foo_lcore_state *state;
>> + *         RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
>> + *                 (initialize 'state')
> 
> Consider: (initialize 'state') -> /* initialize 'state' */
> 

I think I tried that, and it failed because the compiler didn't like 
nested comments.

>> + *         }
>> + *
>> + *         (other initialization)
> 
> Consider: (other initialization) -> /* other initialization */
> 
>> + * }
>> + * @endcode
>> + *
>> + *
>> + * @b Alternatives
>> + *
>> + * Lcore variables are designed to replace a pattern exemplified below:
>> + * @code{.c}
>> + * struct __rte_cache_aligned foo_lcore_state {
>> + *         int a;
>> + *         long b;
>> + *         RTE_CACHE_GUARD;
>> + * };
>> + *
>> + * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
>> + * @endcode
>> + *
>> + * This scheme is simple and effective, but has one drawback: the data
>> + * is organized so that objects related to all lcores for a particular
>> + * module is kept close in memory. At a bare minimum, this forces the
>> + * use of cache-line alignment to avoid false sharing. With CPU
> 
> Consider adding: use of *padding to* cache-line alignment
> My point here is:
> This sentence should somehow include the word "padding".

I'm not sure everyone thinks about __rte_cache_aligned or cache-aligned 
heap allocations as "padded."

> This paragraph is not only about cache line alignment, it is primarily about padding.
> 

"At a bare minimum, this requires sizing data structures (e.g., using 
`__rte_cache_aligned`) to an even number of cache lines to avoid false 
sharing."

How about this?

>> + * hardware prefetching and memory loads resulting from speculative
>> + * execution (functions which seemingly are getting more eager faster
>> + * than they are getting more intelligent), one or more "guard" cache
>> + * lines may be required to separate one lcore's data from another's.
>> + *
>> + * Lcore variables has the upside of working with, not against, the
> 
> Typo: has -> have
> 

Fixed.

>> + * CPU's assumptions and for example next-line prefetchers may well
>> + * work the way its designers intended (i.e., to the benefit, not
>> + * detriment, of system performance).
>> + *
>> + * Another alternative to @ref rte_lcore_var.h is the @ref
>> + * rte_per_lcore.h API, which make use of thread-local storage (TLS,
> 
> Typo: make -> makes >

Fixed.

>> + * e.g., GCC __thread or C11 _Thread_local). The main differences
>> + * between by using the various forms of TLS (e.g., @ref
>> + * RTE_DEFINE_PER_LCORE or _Thread_local) and the use of lcore
>> + * variables are:
>> + *
>> + *   * The existence and non-existence of a thread-local variable
>> + *     instance follow that of particular thread's. The data cannot be
> 
> Typo: "thread's" -> "threads", I think. :-/
> 

It's not a typo.

>> + *     accessed before the thread has been created, nor after it has
>> + *     exited. As a result, thread-local variables must initialized in
> 
> Missing word: must *be* initialized
> 

Fixed.

>> + *     a "lazy" manner (e.g., at the point of thread creation). Lcore
>> + *     variables may be accessed immediately after having been
>> + *     allocated (which may be prior any thread beyond the main
>> + *     thread is running).
>> + *   * A thread-local variable is duplicated across all threads in the
>> + *     process, including unregistered non-EAL threads (i.e.,
>> + *     "regular" threads). For DPDK applications heavily relying on
>> + *     multi-threading (in conjunction to DPDK's "one thread per core"
>> + *     pattern), either by having many concurrent threads or
>> + *     creating/destroying threads at a high rate, an excessive use of
>> + *     thread-local variables may cause inefficiencies (e.g.,
>> + *     increased thread creation overhead due to thread-local storage
>> + *     initialization or increased total RAM footprint usage). Lcore
>> + *     variables *only* exist for threads with an lcore id.
>> + *   * If data in thread-local storage may be shared between threads
>> + *     (i.e., can a pointer to a thread-local variable be passed to
>> + *     and successfully dereferenced by non-owning thread) depends on
>> + *     the details of the TLS implementation. With GCC __thread and
>> + *     GCC _Thread_local, such data sharing is supported. In the C11
>> + *     standard, the result of accessing another thread's
>> + *     _Thread_local object is implementation-defined. Lcore variable
>> + *     instances may be accessed reliably by any thread.
>> + */
>> +
>> +#include <stddef.h>
>> +#include <stdalign.h>
>> +
>> +#include <rte_common.h>
>> +#include <rte_config.h>
>> +#include <rte_lcore.h>
>> +
>> +#ifdef __cplusplus
>> +extern "C" {
>> +#endif
>> +
>> +/**
>> + * Given the lcore variable type, produces the type of the lcore
>> + * variable handle.
>> + */
>> +#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
>> +	type *
>> +
>> +/**
>> + * Define a lcore variable handle.
> 
> Typo: "a lcore" -> "an lcore"
> Found this typo multiple times; search-replace "a lcore".
> 

Yes, fixed.

>> + *
>> + * This macro defines a variable which is used as a handle to access
>> + * the various per-lcore id instances of a per-lcore id variable.
> 
> Suggest:
> "the various per-lcore id instances of a per-lcore id variable" ->
> "the various instances of a per-lcore id variable" >

Sounds good.

>> + *
>> + * The aim with this macro is to make clear at the point of
>> + * declaration that this is an lcore handler, rather than a regular
>> + * pointer.
>> + *
>> + * Add @b static as a prefix in case the lcore variable are only to be
> 
> Typo: are -> is
> 

Fixed.

>> + * accessed from a particular translation unit.
>> + */
>> +#define RTE_LCORE_VAR_HANDLE(type, name)	\
>> +	RTE_LCORE_VAR_HANDLE_TYPE(type) name
>> +
>> +/**
>> + * Allocate space for an lcore variable, and initialize its handle.
>> + *
>> + * The values of the lcore variable are initialized to zero.
> 
> Consider adding: "the lcore variable *instances* are initialized"
> Found this typo multiple times; search-replace.
> 

It's not a typo. "Values" is just short for "instances of the value", 
just like "instances" is. Using instances everywhere may confuse the 
reader that an instance both a name and a value, which is not the case. 
I don't know, maybe I should be using "values" everywhere instead of 
"instances".

I agree there's some lack of consistency here and potential room for 
improvement, but I'm not sure exactly what improvement looks like.

>> + */
>> +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, align)	\
>> +	handle = rte_lcore_var_alloc(size, align)
>> +
>> +/**
>> + * Allocate space for an lcore variable, and initialize its handle,
>> + * with values aligned for any type of object.
>> + *
>> + * The values of the lcore variable are initialized to zero.
>> + */
>> +#define RTE_LCORE_VAR_ALLOC_SIZE(handle, size)	\
>> +	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, 0)
>> +
>> +/**
>> + * Allocate space for an lcore variable of the size and alignment
>> requirements
>> + * suggested by the handler pointer type, and initialize its handle.
>> + *
>> + * The values of the lcore variable are initialized to zero.
>> + */
>> +#define RTE_LCORE_VAR_ALLOC(handle)					\
>> +	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, sizeof(*(handle)),	\
>> +				       alignof(typeof(*(handle))))
>> +
>> +/**
>> + * Allocate an explicitly-sized, explicitly-aligned lcore variable by
>> + * means of a @ref RTE_INIT constructor.
>> + *
>> + * The values of the lcore variable are initialized to zero.
>> + */
>> +#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
>> +	RTE_INIT(rte_lcore_var_init_ ## name)				\
>> +	{								\
>> +		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
>> +	}
>> +
>> +/**
>> + * Allocate an explicitly-sized lcore variable by means of a @ref
>> + * RTE_INIT constructor.
>> + *
>> + * The values of the lcore variable are initialized to zero.
>> + */
>> +#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
>> +	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
>> +
>> +/**
>> + * Allocate an lcore variable by means of a @ref RTE_INIT constructor.
>> + *
>> + * The values of the lcore variable are initialized to zero.
>> + */
>> +#define RTE_LCORE_VAR_INIT(name)					\
>> +	RTE_INIT(rte_lcore_var_init_ ## name)				\
>> +	{								\
>> +		RTE_LCORE_VAR_ALLOC(name);				\
>> +	}
>> +
>> +/**
>> + * Get void pointer to lcore variable instance with the specified
>> + * lcore id.
>> + *
>> + * @param lcore_id
>> + *   The lcore id specifying which of the @c RTE_MAX_LCORE value
>> + *   instances should be accessed. The lcore id need not be valid
>> + *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
>> + *   is also not valid (and thus should not be dereferenced).
>> + * @param handle
>> + *   The lcore variable handle.
>> + */
>> +static inline void *
>> +rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
>> +{
>> +	return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
>> +}
>> +
>> +/**
>> + * Get pointer to lcore variable instance with the specified lcore id.
>> + *
>> + * @param lcore_id
>> + *   The lcore id specifying which of the @c RTE_MAX_LCORE value
>> + *   instances should be accessed. The lcore id need not be valid
>> + *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
>> + *   is also not valid (and thus should not be dereferenced).
>> + * @param handle
>> + *   The lcore variable handle.
>> + */
>> +#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle)			\
>> +	((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
>> +
>> +/**
>> + * Get pointer to lcore variable instance of the current thread.
>> + *
>> + * May only be used by EAL threads and registered non-EAL threads.
>> + */
>> +#define RTE_LCORE_VAR_VALUE(handle) \
>> +	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
>> +
>> +/**
>> + * Iterate over each lcore id's value for a lcore variable.
>> + *
>> + * @param value
>> + *   A pointer set successivly set to point to lcore variable value
> 
> "set successivly set" -> "successivly set"
> 
> Thinking out loud, ignore at your preference:
> During the RFC discussions, the term used for referring to an lcore variable was discussed;
> we considered "pointer", but settled for "value".
> Perhaps "instance" would be usable in comments like like the one describing this function...
> "A pointer set successivly set to point to lcore variable value" ->
> "A pointer set successivly set to point to lcore variable instance".
> I don't know.
> 

I also don't know.

> 
>> + *   corresponding to every lcore id (up to @c RTE_MAX_LCORE).
>> + * @param handle
>> + *   The lcore variable handle.
>> + */
>> +#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle)			\
>> +	for (unsigned int lcore_id =					\
>> +		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
>> +	     lcore_id < RTE_MAX_LCORE;					\
>> +	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))
>> +
>> +/**
>> + * Allocate space in the per-lcore id buffers for a lcore variable.
>> + *
>> + * The pointer returned is only an opaque identifer of the variable. To
>> + * get an actual pointer to a particular instance of the variable use
>> + * @ref RTE_LCORE_VAR_VALUE or @ref RTE_LCORE_VAR_LCORE_VALUE.
>> + *
>> + * The lcore variable values' memory is set to zero.
>> + *
>> + * The allocation is always successful, barring a fatal exhaustion of
>> + * the per-lcore id buffer space.
>> + *
>> + * rte_lcore_var_alloc() is not multi-thread safe.
>> + *
>> + * @param size
>> + *   The size (in bytes) of the variable's per-lcore id value. Must be > 0.
>> + * @param align
>> + *   If 0, the values will be suitably aligned for any kind of type
>> + *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
>> + *   on a multiple of *align*, which must be a power of 2 and equal or
>> + *   less than @c RTE_CACHE_LINE_SIZE.
>> + * @return
>> + *   The id of the variable, stored in a void pointer value. The value
> 
> "id" -> "handle"
> 

Fixed.

>> + *   is always non-NULL.
>> + */
>> +__rte_experimental
>> +void *
>> +rte_lcore_var_alloc(size_t size, size_t align);
>> +
>> +#ifdef __cplusplus
>> +}
>> +#endif
>> +
>> +#endif /* _RTE_LCORE_VAR_H_ */
>> diff --git a/lib/eal/version.map b/lib/eal/version.map
>> index e3ff412683..5f5a3522c0 100644
>> --- a/lib/eal/version.map
>> +++ b/lib/eal/version.map
>> @@ -396,6 +396,9 @@ EXPERIMENTAL {
>>
>>   	# added in 24.03
>>   	rte_vfio_get_device_info; # WINDOWS_NO_EXPORT
>> +
>> +	rte_lcore_var_alloc;
>> +	rte_lcore_var;
> 
> No such function: rte_lcore_var

Indeed. That variable is gone. Fixed.

Thanks a lot for your review, Morten.

> 
>>   };
>>
>>   INTERNAL {
>> --
>> 2.34.1
> 

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH 1/6] eal: add static per-lcore memory allocation facility
  2024-09-10 10:44                             ` Mattias Rönnblom
@ 2024-09-10 13:07                               ` Morten Brørup
  2024-09-10 15:55                               ` Stephen Hemminger
  1 sibling, 0 replies; 185+ messages in thread
From: Morten Brørup @ 2024-09-10 13:07 UTC (permalink / raw)
  To: Mattias Rönnblom, Mattias Rönnblom, dev
  Cc: Stephen Hemminger, Konstantin Ananyev, David Marchand

> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> Sent: Tuesday, 10 September 2024 12.45
> 
> On 2024-09-10 11:32, Morten Brørup wrote:
> >> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> >> Sent: Tuesday, 10 September 2024 09.04
> >>
> >> Introduce DPDK per-lcore id variables, or lcore variables for short.
> >
> > Throughout the descriptions and comments,
> > please replace "lcore id" with "lcore" (e.g. "per-lcore variables"),
> > when referring to the lcore, and not the index of the lcore.
> > (Your intention might be to highlight that it only covers threads with
> an lcore id,
> > but if that wasn't the case, you would refer to them as "threads" not
> "lcores".)
> > Except, of course, when referring to an actual lcore id, e.g. lcore_id
> function parameters.
> 
> "lcore" is just another word for "EAL thread." The lcore variables exist
> in one instance for every thread with an lcore id, thus also for
> registered non-EAL threads (i.e., threads which are not lcores).
> 
> I've tried to summarize the (very confusing) terminology of DPDK's
> threading model here:
> https://ericsson.github.io/dataplanebook/threading/threading.html#eal-
> threads
> 
> So, in my world, "per-lcore id variables" is pretty accurate. You could
> say "variables with per-lcore id values" if you want to make it even
> more clear, what's going on.

With your reference terminology in mind, "per-lcore id variables" is OK with me.

<rant>
DPDK's lcore terminology has drifted quite far away from its original 1:1 meaning, but I'm not going to try to clean it up.
It also seems the meaning of "socket" is drifting.

And the DPDK project's API/ABI compatibility ambitions seem to favor bolting new features onto the pile, rather than replacing APIs that have grown misleading with new APIs serving new requirements.
</rant>

> 
> >
> > Paraphrasing:
> > Consider the type of what you are referring to;
> > use "lcore" if its type is "thread", and
> > use "lcore id" if its type is "int".
> >
> > I might be wrong here, but please think hard about it.
> >
> >>
> >> An lcore variable has one value for every current and future lcore
> >> id-equipped thread.
> >>
> >> The primary <rte_lcore_var.h> use case is for statically allocating
> >> small, frequently-accessed data structures, for which one instance
> >> should exist for each lcore.
> >>
> >> Lcore variables are similar to thread-local storage (TLS, e.g., C11
> >> _Thread_local), but decoupling the values' life time with that of the
> >> threads.
> >>
> >> Lcore variables are also similar in terms of functionality provided
> by
> >> FreeBSD kernel's DPCPU_*() family of macros and the associated
> >> build-time machinery. DPCPU uses linker scripts, which effectively
> >> prevents the reuse of its, otherwise seemingly viable, approach.
> >>
> >> The currently-prevailing way to solve the same problem as lcore
> >> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
> >> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> >> lcore variables over this approach is that data related to the same
> >> lcore now is close (spatially, in memory), rather than data used by
> >> the same module, which in turn avoid excessive use of padding,
> >> polluting caches with unused data.
> >>
> >> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> >>
> >> --
> >
> >> +++ b/doc/api/doxy-api-index.md
> >> @@ -99,6 +99,7 @@ The public API headers are grouped by topics:
> >>     [interrupts](@ref rte_interrupts.h),
> >>     [launch](@ref rte_launch.h),
> >>     [lcore](@ref rte_lcore.h),
> >> +  [lcore-varible](@ref rte_lcore_var.h),
> >
> > Typo: varible -> variable
> >
> >
> 
> I'll change it to "lcore variables" (no dash, plural).

+1

> 
> >> +++ b/doc/guides/rel_notes/release_24_11.rst
> >> @@ -55,6 +55,20 @@ New Features
> >>        Also, make sure to start the actual text at the margin.
> >>        =======================================================
> >>
> >> +* **Added EAL per-lcore static memory allocation facility.**
> >> +
> >> +    Added EAL API <rte_lcore_var.h> for statically allocating small,
> >> +    frequently-accessed data structures, for which one instance
> should
> >> +    exist for each lcore.
> >> +
> >> +    With lcore variables, data is organized spatially on a per-lcore
> >> +    basis, rather than per library or PMD, avoiding the need for
> cache
> >> +    aligning (or RTE_CACHE_GUARDing) data structures, which in turn
> >> +    reduces CPU cache internal fragmentation, improving performance.
> >> +
> >> +    Lcore variables are similar to thread-local storage (TLS, e.g.,
> >> +    C11 _Thread_local), but decoupling the values' life time from
> that
> >> +    of the threads.
> >
> > When referring to TLS, you might want to clarify that lcore variables
> are not instantiated for unregistered threads.
> >
> 
> Isn't that clear from the first paragraph? Although it should say "per
> lcore id", rather than "per lcore."

Yes, almost.
But in this paragraph, when you mention that they are similar to TLS, someone might not catch that it still applies (that they are only instantiated for lcores and not all threads). So clarify one extra time, just to ensure that everyone gets it.

> 
> >
> >> +static void *lcore_buffer;
> >> +static size_t offset = RTE_MAX_LCORE_VAR;
> >> +
> >> +static void *
> >> +lcore_var_alloc(size_t size, size_t align)
> >> +{
> >> +	void *handle;
> >> +	void *value;
> >> +
> >> +	offset = RTE_ALIGN_CEIL(offset, align);
> >> +
> >> +	if (offset + size > RTE_MAX_LCORE_VAR) {
> >> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> >> +					     LCORE_BUFFER_SIZE);
> >> +		RTE_VERIFY(lcore_buffer != NULL);
> >> +
> >> +		offset = 0;
> >> +	}
> >
> > To determine if the lcore_buffer memory should be allocated, why not
> just check if lcore_buffer == NULL?
> 
> Because it may be the case that lcore_buffer is non-NULL but the
> remaining space is too small to service the allocation.

There's no error handling of that case. You simply forget about the rest of the allocated memory, and behave as for the initial allocation/initialization.

> 
> > Then offset wouldn't need an initial value of RTE_MAX_LCORE_VAR.
> >
> >> +
> >> +	handle = RTE_PTR_ADD(lcore_buffer, offset);
> >> +
> >> +	offset += size;
> >> +
> >> +	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
> >> +		memset(value, 0, size);
> >> +
> >> +	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with
> a "
> >> +		"%"PRIuPTR"-byte alignment", size, align);
> >> +
> >> +	return handle;
> >> +}
> >
> >
> >> +/**
> >> + * @file
> >> + *
> >> + * RTE Per-lcore id variables
> >
> > Suggest mentioning the short form too, e.g.:
> > "RTE Per-lcore id variables (RTE Lcore variables)"
> 
> What about just "RTE Lcore variables"?

+1

> 
> Exactly what they are is thoroughly described in the text that follows.
> 
> >
> >> + *
> >> + * This API provides a mechanism to create and access per-lcore id
> >> + * variables in a space- and cycle-efficient manner.
> >> + *
> >> + * A per-lcore id variable (or lcore variable for short) has one
> value
> >> + * for each EAL thread and registered non-EAL thread.
> >
> > And service thread.
> 
> Service threads are EAL threads, or, at a bare minimum, must have an
> lcore id, and thus must be registered.

Service threads have an lcore id, yes, but they have rte_lcore_role_t enum value ROLE_SERVICE, which differs from that of EAL threads (ROLE_EAL). Registered non-EAL threads have yet another role, ROLE_NON_EAL.
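
For completeness, a sketch of how a plain thread ends up with an lcore id, and thus with lcore variable values of its own (the worker function and the 'state_vars' handle are made up; assume the latter was allocated elsewhere):

  static void *
  worker_fn(void *arg)
  {
          /* claims a free lcore id, with role ROLE_NON_EAL */
          if (rte_thread_register() != 0)
                  return NULL; /* all RTE_MAX_LCORE ids taken */

          struct worker_state *state = RTE_LCORE_VAR_VALUE(state_vars);

          /* ... use 'state' ... */

          rte_thread_unregister();

          return NULL;
  }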

> 
> >
> >> + * There is one
> >> + * copy for each current and future lcore id-equipped thread, with
> the
> >
> > "one copy" -> "one instance"
> >
> 
> Fixed.
> 
> >> + * total number of copies amounting to @c RTE_MAX_LCORE. The value
> of
> >
> > "copies" -> "instances"
> >
> 
> OK, I'll rephrase that sentence.
> 
> >> + * an lcore variable for a particular lcore id is independent from
> >> + * other values (for other lcore ids) within the same lcore
> variable.
> >> + *
> >> + * In order to access the values of an lcore variable, a handle is
> >> + * used. The type of the handle is a pointer to the value's type
> >> + * (e.g., for @c uint32_t lcore variable, the handle is a
> >> + * <code>uint32_t *</code>. The handler type is used to inform the
> >
> > Typo: "handler" -> "handle", I think :-/
> > Found this typo multiple times; search-replace.
> 
> Fixed.
> 
> >
> >> + * access macros the type of the values. A handle may be passed
> >> + * between modules and threads just like any pointer, but its value
> >> + * must be treated as a an opaque identifier. An allocated handle
> >> + * never has the value NULL.
> >> + *
> >> + * @b Creation
> >> + *
> >> + * An lcore variable is created in two steps:
> >> + *  1. Define a lcore variable handle by using @ref
> RTE_LCORE_VAR_HANDLE.
> >> + *  2. Allocate lcore variable storage and initialize the handle
> with
> >> + *     a unique identifier by @ref RTE_LCORE_VAR_ALLOC or
> >> + *     @ref RTE_LCORE_VAR_INIT. Allocation generally occurs the time
> of
> >> + *     module initialization, but may be done at any time.
> >> + *
> >> + * An lcore variable is not tied to the owning thread's lifetime.
> It's
> >> + * available for use by any thread immediately after having been
> >> + * allocated, and continues to be available throughout the lifetime
> of
> >> + * the EAL.
> >> + *
> >> + * Lcore variables cannot and need not be freed.
> >> + *
> >> + * @b Access
> >> + *
> >> + * The value of any lcore variable for any lcore id may be accessed
> >> + * from any thread (including unregistered threads), but it should
> >> + * only be *frequently* read from or written to by the owner.
> >> + *
> >> + * Values of the same lcore variable but owned by to different lcore
> >
> > Typo: to -> two
> >
> 
> Fixed.
> 
> >> + * ids may be frequently read or written by the owners without
> risking
> >> + * false sharing.
> >> + *
> >> + * An appropriate synchronization mechanism (e.g., atomic loads and
> >> + * stores) should employed to assure there are no data races between
> >> + * the owning thread and any non-owner threads accessing the same
> >> + * lcore variable instance.
> >> + *
> >> + * The value of the lcore variable for a particular lcore id is
> >> + * accessed using @ref RTE_LCORE_VAR_LCORE_VALUE.
> >> + *
> >> + * A common pattern is for an EAL thread or a registered non-EAL
> >> + * thread to access its own lcore variable value. For this purpose,
> a
> >> + * short-hand exists in the form of @ref RTE_LCORE_VAR_VALUE.
> >> + *
> >> + * Although the handle (as defined by @ref RTE_LCORE_VAR_HANDLE) is
> a
> >> + * pointer with the same type as the value, it may not be directly
> >> + * dereferenced and must be treated as an opaque identifier.
> >> + *
> >> + * Lcore variable handles and value pointers may be freely passed
> >> + * between different threads.
> >> + *
> >> + * @b Storage
> >> + *
> >> + * An lcore variable's values may by of a primitive type like @c
> int,
> >
> > Two typos: "values may by" -> "value may be"
> >
> 
> That's not a typo. An lcore variable takes on multiple values, one for
> each lcore id. That said, I guess you could refer to the whole thing
> (the set of values) as the "value" as well.

OK. Reading it the way you explain, I get it. No typo.

> 
> >> + * but would more typically be a @c struct.
> >> + *
> >> + * The lcore variable handle introduces a per-variable (not
> >> + * per-value/per-lcore id) overhead of @c sizeof(void *) bytes, so
> >> + * there are some memory footprint gains to be made by organizing
> all
> >> + * per-lcore id data for a particular module as one lcore variable
> >> + * (e.g., as a struct).
> >> + *
> >> + * An application may choose to define an lcore variable handle,
> which
> >> + * it then it goes on to never allocate.
> >> + *
> >> + * The size of a lcore variable's value must be less than the DPDK
> >> + * build-time constant @c RTE_MAX_LCORE_VAR.
> >> + *
> >> + * The lcore variable are stored in a series of lcore buffers, which
> >> + * are allocated from the libc heap. Heap allocation failures are
> >> + * treated as fatal.
> >> + *
> >> + * Lcore variables should generally *not* be @ref
> __rte_cache_aligned
> >> + * and need *not* include a @ref RTE_CACHE_GUARD field, since the
> use
> >> + * of these constructs are designed to avoid false sharing. In the
> >> + * case of an lcore variable instance, the thread most recently
> >> + * accessing nearby data structures should almost-always the lcore
> >
> > Missing word: should almost-always *be* the lcore variables' owner.
> >
> 
> Fixed.
> 
> >
> >> + * variables' owner. Adding padding will increase the effective
> memory
> >> + * working set size, potentially reducing performance.
> >> + *
> >> + * Lcore variable values take on an initial value of zero.
> >> + *
> >> + * @b Example
> >> + *
> >> + * Below is an example of the use of an lcore variable:
> >> + *
> >> + * @code{.c}
> >> + * struct foo_lcore_state {
> >> + *         int a;
> >> + *         long b;
> >> + * };
> >> + *
> >> + * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state,
> lcore_states);
> >> + *
> >> + * long foo_get_a_plus_b(void)
> >> + * {
> >> + *         struct foo_lcore_state *state =
> RTE_LCORE_VAR_VALUE(lcore_states);
> >> + *
> >> + *         return state->a + state->b;
> >> + * }
> >> + *
> >> + * RTE_INIT(rte_foo_init)
> >> + * {
> >> + *         RTE_LCORE_VAR_ALLOC(lcore_states);
> >> + *
> >> + *         struct foo_lcore_state *state;
> >> + *         RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
> >> + *                 (initialize 'state')
> >
> > Consider: (initialize 'state') -> /* initialize 'state' */
> >
> 
> I think I tried that, and it failed because the compiler didn't like
> nested comments.

OK, no objections. Just leave it as is.

> 
> >> + *         }
> >> + *
> >> + *         (other initialization)
> >
> > Consider: (other initialization) -> /* other initialization */
> >
> >> + * }
> >> + * @endcode
> >> + *
> >> + *
> >> + * @b Alternatives
> >> + *
> >> + * Lcore variables are designed to replace a pattern exemplified
> below:
> >> + * @code{.c}
> >> + * struct __rte_cache_aligned foo_lcore_state {
> >> + *         int a;
> >> + *         long b;
> >> + *         RTE_CACHE_GUARD;
> >> + * };
> >> + *
> >> + * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
> >> + * @endcode
> >> + *
> >> + * This scheme is simple and effective, but has one drawback: the
> data
> >> + * is organized so that objects related to all lcores for a
> particular
> >> + * module is kept close in memory. At a bare minimum, this forces
> the
> >> + * use of cache-line alignment to avoid false sharing. With CPU
> >
> > Consider adding: use of *padding to* cache-line alignment
> > My point here is:
> > This sentence should somehow include the word "padding".
> 
> I'm not sure everyone thinks about __rte_cache_aligned or cache-aligned
> heap allocations as "padded."
> 
> > This paragraph is not only about cache line alignment, it is primarily
> about padding.
> >
> 
> "At a bare minimum, this requires sizing data structures (e.g., using
> `__rte_cache_aligned`) to an even number of cache lines to avoid false
> sharing."
> 
> How about this?

OK. Sizing might imply padding, so it serves the point I was targeting.
But "even number" -> "whole number". The number might be odd. :-)

> 
> >> + * hardware prefetching and memory loads resulting from speculative
> >> + * execution (functions which seemingly are getting more eager
> faster
> >> + * than they are getting more intelligent), one or more "guard"
> cache
> >> + * lines may be required to separate one lcore's data from
> another's.
> >> + *
> >> + * Lcore variables has the upside of working with, not against, the
> >
> > Typo: has -> have
> >
> 
> Fixed.
> 
> >> + * CPU's assumptions and for example next-line prefetchers may well
> >> + * work the way its designers intended (i.e., to the benefit, not
> >> + * detriment, of system performance).
> >> + *
> >> + * Another alternative to @ref rte_lcore_var.h is the @ref
> >> + * rte_per_lcore.h API, which make use of thread-local storage (TLS,
> >
> > Typo: make -> makes >
> 
> Fixed.
> 
> >> + * e.g., GCC __thread or C11 _Thread_local). The main differences
> >> + * between by using the various forms of TLS (e.g., @ref
> >> + * RTE_DEFINE_PER_LCORE or _Thread_local) and the use of lcore
> >> + * variables are:
> >> + *
> >> + *   * The existence and non-existence of a thread-local variable
> >> + *     instance follow that of particular thread's. The data cannot
> be
> >
> > Typo: "thread's" -> "threads", I think. :-/
> >
> 
> It's not a typo.

OK.

> 
> >> + *     accessed before the thread has been created, nor after it has
> >> + *     exited. As a result, thread-local variables must initialized
> in
> >
> > Missing word: must *be* initialized
> >
> 
> Fixed.
> 
> >> + *     a "lazy" manner (e.g., at the point of thread creation).
> Lcore
> >> + *     variables may be accessed immediately after having been
> >> + *     allocated (which may be prior any thread beyond the main
> >> + *     thread is running).
> >> + *   * A thread-local variable is duplicated across all threads in
> the
> >> + *     process, including unregistered non-EAL threads (i.e.,
> >> + *     "regular" threads). For DPDK applications heavily relying on
> >> + *     multi-threading (in conjunction to DPDK's "one thread per
> core"
> >> + *     pattern), either by having many concurrent threads or
> >> + *     creating/destroying threads at a high rate, an excessive use
> of
> >> + *     thread-local variables may cause inefficiencies (e.g.,
> >> + *     increased thread creation overhead due to thread-local
> storage
> >> + *     initialization or increased total RAM footprint usage). Lcore
> >> + *     variables *only* exist for threads with an lcore id.
> >> + *   * If data in thread-local storage may be shared between threads
> >> + *     (i.e., can a pointer to a thread-local variable be passed to
> >> + *     and successfully dereferenced by non-owning thread) depends
> on
> >> + *     the details of the TLS implementation. With GCC __thread and
> >> + *     GCC _Thread_local, such data sharing is supported. In the C11
> >> + *     standard, the result of accessing another thread's
> >> + *     _Thread_local object is implementation-defined. Lcore
> variable
> >> + *     instances may be accessed reliably by any thread.
> >> + */
> >> +
> >> +#include <stddef.h>
> >> +#include <stdalign.h>
> >> +
> >> +#include <rte_common.h>
> >> +#include <rte_config.h>
> >> +#include <rte_lcore.h>
> >> +
> >> +#ifdef __cplusplus
> >> +extern "C" {
> >> +#endif
> >> +
> >> +/**
> >> + * Given the lcore variable type, produces the type of the lcore
> >> + * variable handle.
> >> + */
> >> +#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
> >> +	type *
> >> +
> >> +/**
> >> + * Define a lcore variable handle.
> >
> > Typo: "a lcore" -> "an lcore"
> > Found this typo multiple times; search-replace "a lcore".
> >
> 
> Yes, fixed.
> 
> >> + *
> >> + * This macro defines a variable which is used as a handle to access
> >> + * the various per-lcore id instances of a per-lcore id variable.
> >
> > Suggest:
> > "the various per-lcore id instances of a per-lcore id variable" ->
> > "the various instances of a per-lcore id variable" >
> 
> Sounds good.
> 
> >> + *
> >> + * The aim with this macro is to make clear at the point of
> >> + * declaration that this is an lcore handler, rather than a regular
> >> + * pointer.
> >> + *
> >> + * Add @b static as a prefix in case the lcore variable are only to
> be
> >
> > Typo: are -> is
> >
> 
> Fixed.
> 
> >> + * accessed from a particular translation unit.
> >> + */
> >> +#define RTE_LCORE_VAR_HANDLE(type, name)	\
> >> +	RTE_LCORE_VAR_HANDLE_TYPE(type) name
> >> +
> >> +/**
> >> + * Allocate space for an lcore variable, and initialize its handle.
> >> + *
> >> + * The values of the lcore variable are initialized to zero.
> >
> > Consider adding: "the lcore variable *instances* are initialized"
> > Found this typo multiple times; search-replace.
> >
> 
> It's not a typo. "Values" is just short for "instances of the value",
> just like "instances" is. Using instances everywhere may confuse the
> reader that an instance both a name and a value, which is not the case.
> I don't know, maybe I should be using "values" everywhere instead of
> "instances".
> 
> I agree there's some lack of consistency here and potential room for
> improvement, but I'm not sure exactly what improvement looks like.

Yes, perhaps using "values" (instead of "instances of the value") everywhere,
and avoiding "instances", might be better.

If you repeat/paraphrase your above explanation in the documentation and/or source code, it should cover it.

> 
> >> + */
> >> +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, align)	\
> >> +	handle = rte_lcore_var_alloc(size, align)
> >> +
> >> +/**
> >> + * Allocate space for an lcore variable, and initialize its handle,
> >> + * with values aligned for any type of object.
> >> + *
> >> + * The values of the lcore variable are initialized to zero.
> >> + */
> >> +#define RTE_LCORE_VAR_ALLOC_SIZE(handle, size)	\
> >> +	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, 0)
> >> +
> >> +/**
> >> + * Allocate space for an lcore variable of the size and alignment
> >> requirements
> >> + * suggested by the handler pointer type, and initialize its handle.
> >> + *
> >> + * The values of the lcore variable are initialized to zero.
> >> + */
> >> +#define RTE_LCORE_VAR_ALLOC(handle)					\
> >> +	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, sizeof(*(handle)),	\
> >> +				       alignof(typeof(*(handle))))
> >> +
> >> +/**
> >> + * Allocate an explicitly-sized, explicitly-aligned lcore variable
> by
> >> + * means of a @ref RTE_INIT constructor.
> >> + *
> >> + * The values of the lcore variable are initialized to zero.
> >> + */
> >> +#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
> >> +	RTE_INIT(rte_lcore_var_init_ ## name)				\
> >> +	{								\
> >> +		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
> >> +	}
> >> +
> >> +/**
> >> + * Allocate an explicitly-sized lcore variable by means of a @ref
> >> + * RTE_INIT constructor.
> >> + *
> >> + * The values of the lcore variable are initialized to zero.
> >> + */
> >> +#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
> >> +	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
> >> +
> >> +/**
> >> + * Allocate an lcore variable by means of a @ref RTE_INIT
> constructor.
> >> + *
> >> + * The values of the lcore variable are initialized to zero.
> >> + */
> >> +#define RTE_LCORE_VAR_INIT(name)					\
> >> +	RTE_INIT(rte_lcore_var_init_ ## name)				\
> >> +	{								\
> >> +		RTE_LCORE_VAR_ALLOC(name);				\
> >> +	}
> >> +
> >> +/**
> >> + * Get void pointer to lcore variable instance with the specified
> >> + * lcore id.
> >> + *
> >> + * @param lcore_id
> >> + *   The lcore id specifying which of the @c RTE_MAX_LCORE value
> >> + *   instances should be accessed. The lcore id need not be valid
> >> + *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the
> pointer
> >> + *   is also not valid (and thus should not be dereferenced).
> >> + * @param handle
> >> + *   The lcore variable handle.
> >> + */
> >> +static inline void *
> >> +rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
> >> +{
> >> +	return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
> >> +}
> >> +
> >> +/**
> >> + * Get pointer to lcore variable instance with the specified lcore
> id.
> >> + *
> >> + * @param lcore_id
> >> + *   The lcore id specifying which of the @c RTE_MAX_LCORE value
> >> + *   instances should be accessed. The lcore id need not be valid
> >> + *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the
> pointer
> >> + *   is also not valid (and thus should not be dereferenced).
> >> + * @param handle
> >> + *   The lcore variable handle.
> >> + */
> >> +#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle)
> 	\
> >> +	((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
> >> +
> >> +/**
> >> + * Get pointer to lcore variable instance of the current thread.
> >> + *
> >> + * May only be used by EAL threads and registered non-EAL threads.
> >> + */
> >> +#define RTE_LCORE_VAR_VALUE(handle) \
> >> +	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
> >> +
> >> +/**
> >> + * Iterate over each lcore id's value for a lcore variable.
> >> + *
> >> + * @param value
> >> + *   A pointer set successivly set to point to lcore variable value
> >
> > "set successivly set" -> "successivly set"

Don't forget.

> >
> > Thinking out loud, ignore at your preference:
> > During the RFC discussions, the term used for referring to an lcore
> variable was discussed;
> > we considered "pointer", but settled for "value".
> > Perhaps "instance" would be usable in comments like like the one
> describing this function...
> > "A pointer set successivly set to point to lcore variable value" ->
> > "A pointer set successivly set to point to lcore variable instance".
> > I don't know.
> >
> 
> I also don't know.

Referring to the terminology above, if you go for "value" rather than "instance" (or "instance of the value"), stick with "value" here too.

> 
> >
> >> + *   corresponding to every lcore id (up to @c RTE_MAX_LCORE).
> >> + * @param handle
> >> + *   The lcore variable handle.
> >> + */
> >> +#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle)			\
> >> +	for (unsigned int lcore_id =					\
> >> +		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0);
> \
> >> +	     lcore_id < RTE_MAX_LCORE;					\
> >> +	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
> handle))
> >> +
> >> +/**
> >> + * Allocate space in the per-lcore id buffers for a lcore variable.
> >> + *
> >> + * The pointer returned is only an opaque identifier of the variable.
> To
> >> + * get an actual pointer to a particular instance of the variable
> use
> >> + * @ref RTE_LCORE_VAR_VALUE or @ref RTE_LCORE_VAR_LCORE_VALUE.
> >> + *
> >> + * The lcore variable values' memory is set to zero.
> >> + *
> >> + * The allocation is always successful, barring a fatal exhaustion
> of
> >> + * the per-lcore id buffer space.
> >> + *
> >> + * rte_lcore_var_alloc() is not multi-thread safe.
> >> + *
> >> + * @param size
> >> + *   The size (in bytes) of the variable's per-lcore id value. Must
> be > 0.
> >> + * @param align
> >> + *   If 0, the values will be suitably aligned for any kind of type
> >> + *   (i.e., alignof(max_align_t)). Otherwise, the values will be
> aligned
> >> + *   on a multiple of *align*, which must be a power of 2 and equal
> or
> >> + *   less than @c RTE_CACHE_LINE_SIZE.
> >> + * @return
> >> + *   The id of the variable, stored in a void pointer value. The
> value
> >
> > "id" -> "handle"
> >
> 
> Fixed.
> 
> >> + *   is always non-NULL.
> >> + */
> >> +__rte_experimental
> >> +void *
> >> +rte_lcore_var_alloc(size_t size, size_t align);
> >> +
> >> +#ifdef __cplusplus
> >> +}
> >> +#endif
> >> +
> >> +#endif /* _RTE_LCORE_VAR_H_ */
> >> diff --git a/lib/eal/version.map b/lib/eal/version.map
> >> index e3ff412683..5f5a3522c0 100644
> >> --- a/lib/eal/version.map
> >> +++ b/lib/eal/version.map
> >> @@ -396,6 +396,9 @@ EXPERIMENTAL {
> >>
> >>   	# added in 24.03
> >>   	rte_vfio_get_device_info; # WINDOWS_NO_EXPORT
> >> +
> >> +	rte_lcore_var_alloc;
> >> +	rte_lcore_var;
> >
> > No such function: rte_lcore_var
> 
> Indeed. That variable is gone. Fixed.
> 
> Thanks a lot for your review, Morten.

Thanks a lot for your contribution, Mattias. :-)

> 
> >
> >>   };
> >>
> >>   INTERNAL {
> >> --
> >> 2.34.1
> >

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [RFC v6 0/6] Lcore variables
  2024-09-10  6:41                       ` Mattias Rönnblom
@ 2024-09-10 15:41                         ` Stephen Hemminger
  0 siblings, 0 replies; 185+ messages in thread
From: Stephen Hemminger @ 2024-09-10 15:41 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: Morten Brørup, Mattias Rönnblom, dev, Konstantin Ananyev

On Tue, 10 Sep 2024 08:41:19 +0200
Mattias Rönnblom <hofors@lysator.liu.se> wrote:

> On 2024-09-02 16:42, Morten Brørup wrote:

On a related note, the latest GCC supports annotating the address space
of variables. The Linux kernel uses this for RCU.

It would be good if DPDK could do this for:
	- per lcore data
	- data in huge pages
	- data protected by rcu

With these annotations, various checkers and compilers can warn about
places where such data is passed incorrectly (with a cast to override).
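
To make this concrete, below is a minimal sketch of what such an
annotation could look like. The __rte_lcore_local name and the address
space number are hypothetical, modelled on the kernel's Sparse-checked
__rcu/__percpu macros; neither is an existing DPDK or GCC API.

#ifdef __CHECKER__ /* defined when running under the Sparse checker */
#define __rte_lcore_local __attribute__((noderef, address_space(1)))
#else
#define __rte_lcore_local
#endif

/* A checker would then warn wherever an annotated pointer is
 * dereferenced directly or passed as a plain pointer, unless the
 * annotation is explicitly cast away.
 */
static int __rte_lcore_local *lcore_counter;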



^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH 1/6] eal: add static per-lcore memory allocation facility
  2024-09-10 10:44                             ` Mattias Rönnblom
  2024-09-10 13:07                               ` Morten Brørup
@ 2024-09-10 15:55                               ` Stephen Hemminger
  1 sibling, 0 replies; 185+ messages in thread
From: Stephen Hemminger @ 2024-09-10 15:55 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: Morten Brørup, Mattias Rönnblom, dev,
	Konstantin Ananyev, David Marchand, Nandini Persad

On Tue, 10 Sep 2024 12:44:49 +0200
Mattias Rönnblom <hofors@lysator.liu.se> wrote:

> "lcore" is just another word for "EAL thread." The lcore variables exist 
> in one instance for every thread with an lcore id, thus also for 
> registered non-EAL threads (i.e., threads which are not lcores).
> 
> I've tried to summarize the (very confusing) terminology of DPDK's 
> threading model here:
> https://ericsson.github.io/dataplanebook/threading/threading.html#eal-threads
> 
> So, in my world, "per-lcore id variables" is pretty accurate. You could 
> say "variables with per-lcore id values" if you want to make it even 
> more clear, what's going on.

This is good and should be in DPDK documentation along with references
to other Intel/Arm documentation.

I don't see a glossary section in current documentation.
The issue goes deeper: there is no clear introduction in the current DPDK documentation.

My suggestion would be something similar to Fd.io VPP and other projects:

	About DPDK
	- Introduction
	- Glossary
	- Supported platforms
	- Release notes
	- FAQ

	Getting started
	- Getting started on Linux
	...
	- Sample Applications

	Developer documentation
	- Programmer’s Guide
	- HowTo Guides
	- DPDK Tools User Guides
	- Testpmd Application User Guide
	- Drivers
	    - Network Interface
	    - Baseband
		...

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH 1/6] eal: add static per-lcore memory allocation facility
  2024-09-10  7:03                         ` [PATCH 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-09-10  9:32                           ` Morten Brørup
@ 2024-09-11 10:32                           ` Morten Brørup
  2024-09-11 15:05                             ` Mattias Rönnblom
  2024-09-11 17:04                           ` [PATCH v2 0/6] Lcore variables Mattias Rönnblom
  2 siblings, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-09-11 10:32 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: hofors, Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Tyler Retzlaff

> +static void *lcore_buffer;
[...]
> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> +					     LCORE_BUFFER_SIZE);

Since lcore_buffer is never freed again, it is easy to support Windows:

#ifdef RTE_EXEC_ENV_WINDOWS
#include <malloc.h>
#endif

#ifndef RTE_EXEC_ENV_WINDOWS
lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
		LCORE_BUFFER_SIZE);
#else
/* Never freed again, so don't worry about _aligned_free(). */
lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
		RTE_CACHE_LINE_SIZE);
#endif

Ref:
https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/aligned-malloc?view=msvc-170

NB: Note the opposite parameter order.


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH 1/6] eal: add static per-lcore memory allocation facility
  2024-09-11 10:32                           ` Morten Brørup
@ 2024-09-11 15:05                             ` Mattias Rönnblom
  2024-09-11 15:07                               ` Morten Brørup
  0 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-11 15:05 UTC (permalink / raw)
  To: Morten Brørup, Mattias Rönnblom, dev
  Cc: Stephen Hemminger, Konstantin Ananyev, David Marchand, Tyler Retzlaff

On 2024-09-11 12:32, Morten Brørup wrote:
>> +static void *lcore_buffer;
> [...]
>> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
>> +					     LCORE_BUFFER_SIZE);
> 
> Since lcore_buffer is never freed again, it is easy to support Windows:
> 
> #ifdef RTE_EXEC_ENV_WINDOWS
> #include <malloc.h>
> #endif
> 
> #ifndef RTE_EXEC_ENV_WINDOWS
> lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> 		LCORE_BUFFER_SIZE);
> #else
> /* Never freed again, so don't worry about _aligned_free(). */

What is the reason for this comment? It seems like it addresses the 
Windows code path in particular.

> lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
> 		RTE_CACHE_LINE_SIZE);
> #endif
> 
> Ref:
> https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/aligned-malloc?view=msvc-170
> 
> NB: Note the opposite parameter order.
> 

Thanks. I will add something like this.

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH 1/6] eal: add static per-lcore memory allocation facility
  2024-09-11 15:05                             ` Mattias Rönnblom
@ 2024-09-11 15:07                               ` Morten Brørup
  0 siblings, 0 replies; 185+ messages in thread
From: Morten Brørup @ 2024-09-11 15:07 UTC (permalink / raw)
  To: Mattias Rönnblom, Mattias Rönnblom, dev
  Cc: Stephen Hemminger, Konstantin Ananyev, David Marchand, Tyler Retzlaff

> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> Sent: Wednesday, 11 September 2024 17.05
> 
> On 2024-09-11 12:32, Morten Brørup wrote:
> >> +static void *lcore_buffer;
> > [...]
> >> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> >> +					     LCORE_BUFFER_SIZE);
> >
> > Since lcore_buffer is never freed again, it is easy to support
> Windows:
> >
> > #ifdef RTE_EXEC_ENV_WINDOWS
> > #include <malloc.h>
> > #endif
> >
> > #ifndef RTE_EXEC_ENV_WINDOWS
> > lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> > 		LCORE_BUFFER_SIZE);
> > #else
> > /* Never freed again, so don't worry about _aligned_free(). */
> 
> What is the reason for this comment? It seems like it addresses the
> Windows code path in particular.

It is Windows specific.
Memory allocated with _aligned_malloc() cannot be freed with free(); it needs to be freed with _aligned_free().
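
A minimal sketch of the pairing, for illustration only (not part of the
patch):

#include <malloc.h>

void *p = _aligned_malloc(1024, 64); /* note: size first, then alignment */
/* ... use p ... */
_aligned_free(p); /* free(p) here would be undefined behavior */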

> 
> > lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
> > 		RTE_CACHE_LINE_SIZE);
> > #endif
> >
> > Ref:
> > https://learn.microsoft.com/en-us/cpp/c-runtime-
> library/reference/aligned-malloc?view=msvc-170
> >
> > NB: Note the opposite parameter order.
> >
> 
> Thanks. I will add something like this.

^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v2 0/6] Lcore variables
  2024-09-10  7:03                         ` [PATCH 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-09-10  9:32                           ` Morten Brørup
  2024-09-11 10:32                           ` Morten Brørup
@ 2024-09-11 17:04                           ` Mattias Rönnblom
  2024-09-11 17:04                             ` [PATCH v2 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
                                               ` (5 more replies)
  2 siblings, 6 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-11 17:04 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Mattias Rönnblom

This patch set introduces a new API <rte_lcore_var.h> for static
per-lcore id data allocation.

Please refer to the <rte_lcore_var.h> API documentation for both a
rationale for this new API, and a comparison to the alternatives
available.

The adoption of this API would affect many different DPDK modules, but
the author updated only a few, mostly to serve as examples in this
patch set, and to iron out some, but surely not all, wrinkles in the API.

The question on how to best allocate static per-lcore memory has been
up several times on the dev mailing list, for example in the thread on
"random: use per lcore state" RFC by Stephen Hemminger.

Lcore variables are surely not the answer to all your per-lcore-data
needs, since they only allow for more-or-less static allocation. In the
author's opinion, they do however provide a reasonably simple, clean,
and seemingly very performant solution to a real problem.

Mattias Rönnblom (6):
  eal: add static per-lcore memory allocation facility
  eal: add lcore variable test suite
  random: keep PRNG state in lcore variable
  power: keep per-lcore state in lcore variable
  service: keep per-lcore state in lcore variable
  eal: keep per-lcore power intrinsics state in lcore variable

 MAINTAINERS                            |   6 +
 app/test/meson.build                   |   1 +
 app/test/test_lcore_var.c              | 432 +++++++++++++++++++++++++
 config/rte_config.h                    |   1 +
 doc/api/doxy-api-index.md              |   1 +
 doc/guides/rel_notes/release_24_11.rst |  14 +
 lib/eal/common/eal_common_lcore_var.c  |  78 +++++
 lib/eal/common/meson.build             |   1 +
 lib/eal/common/rte_random.c            |  28 +-
 lib/eal/common/rte_service.c           | 115 ++++---
 lib/eal/include/meson.build            |   1 +
 lib/eal/include/rte_lcore_var.h        | 385 ++++++++++++++++++++++
 lib/eal/version.map                    |   2 +
 lib/eal/x86/rte_power_intrinsics.c     |  17 +-
 lib/power/rte_power_pmd_mgmt.c         |  34 +-
 15 files changed, 1029 insertions(+), 87 deletions(-)
 create mode 100644 app/test/test_lcore_var.c
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v2 1/6] eal: add static per-lcore memory allocation facility
  2024-09-11 17:04                           ` [PATCH v2 0/6] Lcore variables Mattias Rönnblom
@ 2024-09-11 17:04                             ` Mattias Rönnblom
  2024-09-12  2:33                               ` fengchengwen
                                                 ` (2 more replies)
  2024-09-11 17:04                             ` [PATCH v2 2/6] eal: add lcore variable test suite Mattias Rönnblom
                                               ` (4 subsequent siblings)
  5 siblings, 3 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-11 17:04 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Mattias Rönnblom

Introduce DPDK per-lcore id variables, or lcore variables for short.

An lcore variable has one value for every current and future lcore
id-equipped thread.

The primary <rte_lcore_var.h> use case is for statically allocating
small, frequently-accessed data structures, for which one instance
should exist for each lcore.

Lcore variables are similar to thread-local storage (TLS, e.g., C11
_Thread_local), but with the values' lifetime decoupled from that of
the threads.

Lcore variables are also similar in functionality to the FreeBSD
kernel's DPCPU_*() family of macros and the associated build-time
machinery. DPCPU uses linker scripts, which effectively prevents the
reuse of its otherwise seemingly viable approach.

The currently-prevailing way to solve the same problem as lcore
variables is to keep a module's per-lcore data as an RTE_MAX_LCORE-sized
array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
lcore variables over this approach is that data related to the same
lcore is now kept close in memory, rather than data used by the same
module, which in turn avoids excessive use of padding and the resulting
pollution of caches with unused data.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

PATCH v2:
 * Add Windows support. (Morten Brørup)
 * Fix lcore variables API index reference. (Morten Brørup)
 * Various improvements of the API documentation. (Morten Brørup)
 * Elimination of unused symbol in version.map. (Morten Brørup)

PATCH:
 * Update MAINTAINERS and release notes.
 * Stop covering included files in extern "C" {}.

RFC v6:
 * Include <stdlib.h> to get aligned_alloc().
 * Tweak documentation (grammar).
 * Provide API-level guarantees that lcore variable values take on an
   initial value of zero.
 * Fix misplaced __rte_cache_aligned in the API doc example.

RFC v5:
 * In Doxygen, consistently use @<cmd> (and not \<cmd>).
 * The RTE_LCORE_VAR_GET() and SET() convenience access macros
   covered an uncommon use case, where the lcore value is of a
   primitive type, rather than a struct, and are thus eliminated
   from the API. (Morten Brørup)
 * In the wake of the GET()/SET() removal, rename RTE_LCORE_VAR_PTR()
   to RTE_LCORE_VAR_VALUE().
 * The underscores are removed from __rte_lcore_var_lcore_ptr() to
   signal that this function is a part of the public API.
 * Macro arguments are documented.

RFC v4:
 * Replace large static array with libc heap-allocated memory. One
   implication of this change is that there no longer exists a fixed
   upper bound for the total amount of memory used by lcore variables.
   RTE_MAX_LCORE_VAR has changed meaning, and now represents the
   maximum size of any individual lcore variable value.
 * Fix issues in example. (Morten Brørup)
 * Improve access macro type checking. (Morten Brørup)
 * Refer to the lcore variable handle as "handle" and not "name" in
   various macros.
 * Document lack of thread safety in rte_lcore_var_alloc().
 * Provide API-level assurance the lcore variable handle is
   always non-NULL, to allow applications to use NULL to mean
   "not yet allocated".
 * Note zero-sized allocations are not allowed.
 * Give API-level guarantee the lcore variable values are zeroed.

RFC v3:
 * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
 * Update example to reflect FOREACH macro name change (in RFC v2).

RFC v2:
 * Use alignof to derive alignment requirements. (Morten Brørup)
 * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
   *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
 * Allow user-specified alignment, but limit max to cache line size.
---
 MAINTAINERS                            |   6 +
 config/rte_config.h                    |   1 +
 doc/api/doxy-api-index.md              |   1 +
 doc/guides/rel_notes/release_24_11.rst |  14 +
 lib/eal/common/eal_common_lcore_var.c  |  78 +++++
 lib/eal/common/meson.build             |   1 +
 lib/eal/include/meson.build            |   1 +
 lib/eal/include/rte_lcore_var.h        | 385 +++++++++++++++++++++++++
 lib/eal/version.map                    |   2 +
 9 files changed, 489 insertions(+)
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

diff --git a/MAINTAINERS b/MAINTAINERS
index c5a703b5c0..362d9a3f28 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -282,6 +282,12 @@ F: lib/eal/include/rte_random.h
 F: lib/eal/common/rte_random.c
 F: app/test/test_rand_perf.c
 
+Lcore Variables
+M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
+F: lib/eal/include/rte_lcore_var.h
+F: lib/eal/common/eal_common_lcore_var.c
+F: app/test/test_lcore_var.c
+
 ARM v7
 M: Wathsala Vithanage <wathsala.vithanage@arm.com>
 F: config/arm/
diff --git a/config/rte_config.h b/config/rte_config.h
index dd7bb0d35b..311692e498 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -41,6 +41,7 @@
 /* EAL defines */
 #define RTE_CACHE_GUARD_LINES 1
 #define RTE_MAX_HEAPS 32
+#define RTE_MAX_LCORE_VAR 1048576
 #define RTE_MAX_MEMSEG_LISTS 128
 #define RTE_MAX_MEMSEG_PER_LIST 8192
 #define RTE_MAX_MEM_MB_PER_LIST 32768
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index f9f0300126..ed577f14ee 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -99,6 +99,7 @@ The public API headers are grouped by topics:
   [interrupts](@ref rte_interrupts.h),
   [launch](@ref rte_launch.h),
   [lcore](@ref rte_lcore.h),
+  [lcore variables](@ref rte_lcore_var.h),
   [per-lcore](@ref rte_per_lcore.h),
   [service cores](@ref rte_service.h),
   [keepalive](@ref rte_keepalive.h),
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 0ff70d9057..a3884f7491 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -55,6 +55,20 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added EAL per-lcore static memory allocation facility.**
+
+    Added EAL API <rte_lcore_var.h> for statically allocating small,
+    frequently-accessed data structures, for which one instance should
+    exist for each EAL thread and registered non-EAL thread.
+
+    With lcore variables, data is organized spatially on a per-lcore id
+    basis, rather than per library or PMD, avoiding the need for cache
+    aligning (or RTE_CACHE_GUARDing) data structures, which in turn
+    reduces CPU cache internal fragmentation, improving performance.
+
+    Lcore variables are similar to thread-local storage (TLS, e.g.,
+    C11 _Thread_local), but with the values' lifetime decoupled from
+    that of the threads.
 
 Removed Items
 -------------
diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
new file mode 100644
index 0000000000..309822039b
--- /dev/null
+++ b/lib/eal/common/eal_common_lcore_var.c
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdlib.h>
+
+#ifdef RTE_EXEC_ENV_WINDOWS
+#include <malloc.h>
+#endif
+
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_log.h>
+
+#include <rte_lcore_var.h>
+
+#include "eal_private.h"
+
+#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
+
+static void *lcore_buffer;
+static size_t offset = RTE_MAX_LCORE_VAR;
+
+static void *
+lcore_var_alloc(size_t size, size_t align)
+{
+	void *handle;
+	void *value;
+
+	offset = RTE_ALIGN_CEIL(offset, align);
+
+	if (offset + size > RTE_MAX_LCORE_VAR) {
+#ifdef RTE_EXEC_ENV_WINDOWS
+		lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
+					       RTE_CACHE_LINE_SIZE);
+#else
+		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
+					     LCORE_BUFFER_SIZE);
+#endif
+		RTE_VERIFY(lcore_buffer != NULL);
+
+		offset = 0;
+	}
+
+	handle = RTE_PTR_ADD(lcore_buffer, offset);
+
+	offset += size;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
+		memset(value, 0, size);
+
+	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
+		"%"PRIuPTR"-byte alignment", size, align);
+
+	return handle;
+}
+
+void *
+rte_lcore_var_alloc(size_t size, size_t align)
+{
+	/* Having the per-lcore buffer size aligned on cache lines, as
+	 * well as having the base pointer aligned on cache line size,
+	 * assures that aligned offsets also translate to aligned
+	 * pointers across all values.
+	 */
+	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
+	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
+	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
+
+	/* '0' means asking for worst-case alignment requirements */
+	if (align == 0)
+		align = alignof(max_align_t);
+
+	RTE_ASSERT(rte_is_power_of_2(align));
+
+	return lcore_var_alloc(size, align);
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 22a626ba6f..d41403680b 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -18,6 +18,7 @@ sources += files(
         'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
+        'eal_common_lcore_var.c',
         'eal_common_mcfg.c',
         'eal_common_memalloc.c',
         'eal_common_memory.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index e94b056d46..9449253e23 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -27,6 +27,7 @@ headers += files(
         'rte_keepalive.h',
         'rte_launch.h',
         'rte_lcore.h',
+        'rte_lcore_var.h',
         'rte_lock_annotations.h',
         'rte_malloc.h',
         'rte_mcslock.h',
diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
new file mode 100644
index 0000000000..ec3ab714a8
--- /dev/null
+++ b/lib/eal/include/rte_lcore_var.h
@@ -0,0 +1,385 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#ifndef _RTE_LCORE_VAR_H_
+#define _RTE_LCORE_VAR_H_
+
+/**
+ * @file
+ *
+ * RTE Lcore variables
+ *
+ * This API provides a mechanism to create and access per-lcore id
+ * variables in a space- and cycle-efficient manner.
+ *
+ * A per-lcore id variable (or lcore variable for short) has one value
+ * for each EAL thread and registered non-EAL thread. There is one
+ * instance for each current and future lcore id-equipped thread, with
+ * a total of RTE_MAX_LCORE instances. The value of an lcore variable
+ * for a particular lcore id is independent from other values (for
+ * other lcore ids) within the same lcore variable.
+ *
+ * In order to access the values of an lcore variable, a handle is
+ * used. The type of the handle is a pointer to the value's type
+ * (e.g., for a @c uint32_t lcore variable, the handle is a
+ * <code>uint32_t *</code>). The handle type is used to inform the
+ * access macros of the type of the values. A handle may be passed
+ * between modules and threads just like any pointer, but its value
+ * must be treated as an opaque identifier. An allocated handle
+ * never has the value NULL.
+ *
+ * @b Creation
+ *
+ * An lcore variable is created in two steps:
+ *  1. Define an lcore variable handle by using @ref RTE_LCORE_VAR_HANDLE.
+ *  2. Allocate lcore variable storage and initialize the handle with
+ *     a unique identifier by @ref RTE_LCORE_VAR_ALLOC or
+ *     @ref RTE_LCORE_VAR_INIT. Allocation generally occurs the time of
+ *     module initialization, but may be done at any time.
+ *
+ * An lcore variable is not tied to the owning thread's lifetime. It's
+ * available for use by any thread immediately after having been
+ * allocated, and continues to be available throughout the lifetime of
+ * the EAL.
+ *
+ * Lcore variables cannot and need not be freed.
+ *
+ * @b Access
+ *
+ * The value of any lcore variable for any lcore id may be accessed
+ * from any thread (including unregistered threads), but it should
+ * only be *frequently* read from or written to by the owner.
+ *
+ * Values of the same lcore variable but owned by two different lcore
+ * ids may be frequently read or written by the owners without risking
+ * false sharing.
+ *
+ * An appropriate synchronization mechanism (e.g., atomic loads and
+ * stores) should be employed to assure there are no data races between
+ * the owning thread and any non-owner threads accessing the same
+ * lcore variable instance.
+ *
+ * The value of the lcore variable for a particular lcore id is
+ * accessed using @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * A common pattern is for an EAL thread or a registered non-EAL
+ * thread to access its own lcore variable value. For this purpose, a
+ * short-hand exists in the form of @ref RTE_LCORE_VAR_VALUE.
+ *
+ * Although the handle (as defined by @ref RTE_LCORE_VAR_HANDLE) is a
+ * pointer with the same type as the value, it may not be directly
+ * dereferenced and must be treated as an opaque identifier.
+ *
+ * Lcore variable handles and value pointers may be freely passed
+ * between different threads.
+ *
+ * @b Storage
+ *
+ * An lcore variable's values may be of a primitive type like @c int,
+ * but would more typically be a @c struct.
+ *
+ * The lcore variable handle introduces a per-variable (not
+ * per-value/per-lcore id) overhead of @c sizeof(void *) bytes, so
+ * there are some memory footprint gains to be made by organizing all
+ * per-lcore id data for a particular module as one lcore variable
+ * (e.g., as a struct).
+ *
+ * An application may choose to define an lcore variable handle, which
+ * it then goes on to never allocate.
+ *
+ * The size of an lcore variable's value must be less than the DPDK
+ * build-time constant @c RTE_MAX_LCORE_VAR.
+ *
+ * The lcore variable values are stored in a series of lcore buffers,
+ * which are allocated from the libc heap. Heap allocation failures are
+ * treated as fatal.
+ *
+ * Lcore variables should generally *not* be @ref __rte_cache_aligned
+ * and need *not* include a @ref RTE_CACHE_GUARD field, since these
+ * constructs are designed to avoid false sharing. In the case of an
+ * lcore variable instance, the thread most recently accessing nearby
+ * data structures should almost always be the lcore variable's owner.
+ * Adding padding will increase the effective memory
+ * working set size, potentially reducing performance.
+ *
+ * Lcore variable values take on an initial value of zero.
+ *
+ * @b Example
+ *
+ * Below is an example of the use of an lcore variable:
+ *
+ * @code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ * };
+ *
+ * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
+ *
+ * long foo_get_a_plus_b(void)
+ * {
+ *         struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(lcore_states);
+ *
+ *         return state->a + state->b;
+ * }
+ *
+ * RTE_INIT(rte_foo_init)
+ * {
+ *         RTE_LCORE_VAR_ALLOC(lcore_states);
+ *
+ *         struct foo_lcore_state *state;
+ *         RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
+ *                 (initialize 'state')
+ *         }
+ *
+ *         (other initialization)
+ * }
+ * @endcode
+ *
+ *
+ * @b Alternatives
+ *
+ * Lcore variables are designed to replace a pattern exemplified below:
+ * @code{.c}
+ * struct __rte_cache_aligned foo_lcore_state {
+ *         int a;
+ *         long b;
+ *         RTE_CACHE_GUARD;
+ * };
+ *
+ * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
+ * @endcode
+ *
+ * This scheme is simple and effective, but has one drawback: the data
+ * is organized so that objects related to all lcores for a particular
+ * module are kept close in memory. At a bare minimum, this requires
+ * sizing data structures (e.g., using `__rte_cache_aligned`) to an
+ * even number of cache lines to avoid false sharing. With CPU
+ * hardware prefetching and memory loads resulting from speculative
+ * execution (functions which seemingly are getting more eager faster
+ * than they are getting more intelligent), one or more "guard" cache
+ * lines may be required to separate one lcore's data from another's.
+ *
+ * Lcore variables have the upside of working with, not against, the
+ * CPU's assumptions, and, for example, next-line prefetchers may well
+ * work the way their designers intended (i.e., to the benefit, not
+ * detriment, of system performance).
+ *
+ * Another alternative to @ref rte_lcore_var.h is the @ref
+ * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
+ * e.g., GCC __thread or C11 _Thread_local). The main differences
+ * between using the various forms of TLS (e.g., @ref
+ * RTE_DEFINE_PER_LCORE or _Thread_local) and the use of lcore
+ * variables are:
+ *
+ *   * The existence and non-existence of a thread-local variable
+ *     instance follow that of the owning thread. The data cannot be
+ *     accessed before the thread has been created, nor after it has
+ *     exited. As a result, thread-local variables must be initialized in
+ *     a "lazy" manner (e.g., at the point of thread creation). Lcore
+ *     variables may be accessed immediately after having been
+ *     allocated (which may be prior to any thread beyond the main
+ *     thread running).
+ *   * A thread-local variable is duplicated across all threads in the
+ *     process, including unregistered non-EAL threads (i.e.,
+ *     "regular" threads). For DPDK applications heavily relying on
+ *     multi-threading (in conjunction with DPDK's "one thread per core"
+ *     pattern), either by having many concurrent threads or
+ *     creating/destroying threads at a high rate, an excessive use of
+ *     thread-local variables may cause inefficiencies (e.g.,
+ *     increased thread creation overhead due to thread-local storage
+ *     initialization or increased total RAM footprint). Lcore
+ *     variables *only* exist for threads with an lcore id.
+ *   * Whether data in thread-local storage may be shared between threads
+ *     (i.e., whether a pointer to a thread-local variable can be passed to
+ *     and successfully dereferenced by a non-owning thread) depends on
+ *     the details of the TLS implementation. With GCC __thread and
+ *     GCC _Thread_local, such data sharing is supported. In the C11
+ *     standard, the result of accessing another thread's
+ *     _Thread_local object is implementation-defined. Lcore variable
+ *     instances may be accessed reliably by any thread.
+ */
+
+#include <stddef.h>
+#include <stdalign.h>
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_lcore.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Given the lcore variable type, produces the type of the lcore
+ * variable handle.
+ */
+#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
+	type *
+
+/**
+ * Define an lcore variable handle.
+ *
+ * This macro defines a variable which is used as a handle to access
+ * the various instances of a per-lcore id variable.
+ *
+ * The aim with this macro is to make clear at the point of
+ * declaration that this is an lcore handle, rather than a regular
+ * pointer.
+ *
+ * Add @b static as a prefix in case the lcore variable is only to be
+ * accessed from a particular translation unit.
+ */
+#define RTE_LCORE_VAR_HANDLE(type, name)	\
+	RTE_LCORE_VAR_HANDLE_TYPE(type) name
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, align)	\
+	handle = rte_lcore_var_alloc(size, align)
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle,
+ * with values aligned for any type of object.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE(handle, size)	\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, 0)
+
+/**
+ * Allocate space for an lcore variable of the size and alignment requirements
+ * suggested by the handle pointer type, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC(handle)					\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, sizeof(*(handle)),	\
+				       alignof(typeof(*(handle))))
+
+/**
+ * Allocate an explicitly-sized, explicitly-aligned lcore variable by
+ * means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
+	}
+
+/**
+ * Allocate an explicitly-sized lcore variable by means of a @ref
+ * RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
+	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
+
+/**
+ * Allocate an lcore variable by means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT(name)					\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC(name);				\
+	}
+
+/**
+ * Get void pointer to lcore variable instance with the specified
+ * lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
+ */
+static inline void *
+rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
+{
+	return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
+}
+
+/**
+ * Get pointer to lcore variable instance with the specified lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle)			\
+	((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
+
+/**
+ * Get pointer to lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_VALUE(handle) \
+	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
+
+/**
+ * Iterate over each lcore id's value for an lcore variable.
+ *
+ * @param value
+ *   A pointer successively set to point to lcore variable value
+ *   corresponding to every lcore id (up to @c RTE_MAX_LCORE).
+ * @param handle
+ *   The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle)			\
+	for (unsigned int lcore_id =					\
+		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
+	     lcore_id < RTE_MAX_LCORE;					\
+	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))
+
+/**
+ * Allocate space in the per-lcore id buffers for an lcore variable.
+ *
+ * The pointer returned is only an opaque identifier of the variable. To
+ * get an actual pointer to a particular instance of the variable use
+ * @ref RTE_LCORE_VAR_VALUE or @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * The lcore variable values' memory is set to zero.
+ *
+ * The allocation is always successful, barring a fatal exhaustion of
+ * the per-lcore id buffer space.
+ *
+ * rte_lcore_var_alloc() is not multi-thread safe.
+ *
+ * @param size
+ *   The size (in bytes) of the variable's per-lcore id value. Must be > 0.
+ * @param align
+ *   If 0, the values will be suitably aligned for any kind of type
+ *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
+ *   on a multiple of *align*, which must be a power of 2 and equal or
+ *   less than @c RTE_CACHE_LINE_SIZE.
+ * @return
+ *   The variable's handle, stored in a void pointer value. The value
+ *   is always non-NULL.
+ */
+__rte_experimental
+void *
+rte_lcore_var_alloc(size_t size, size_t align);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LCORE_VAR_H_ */
diff --git a/lib/eal/version.map b/lib/eal/version.map
index e3ff412683..0c80bf7331 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -396,6 +396,8 @@ EXPERIMENTAL {
 
 	# added in 24.03
 	rte_vfio_get_device_info; # WINDOWS_NO_EXPORT
+
+	rte_lcore_var_alloc;
 };
 
 INTERNAL {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v2 2/6] eal: add lcore variable test suite
  2024-09-11 17:04                           ` [PATCH v2 0/6] Lcore variables Mattias Rönnblom
  2024-09-11 17:04                             ` [PATCH v2 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-09-11 17:04                             ` Mattias Rönnblom
  2024-09-12  7:35                               ` Jerin Jacob
  2024-09-11 17:04                             ` [PATCH v2 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
                                               ` (3 subsequent siblings)
  5 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-11 17:04 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Mattias Rönnblom

Add test suite to exercise the <rte_lcore_var.h> API.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

RFC v5:
 * Adapt tests to reflect the removal of the GET() and SET() macros.

RFC v4:
 * Check all lcore id's values for all variables in the many variables
   test case.
 * Introduce test case for max-sized lcore variables.

RFC v2:
 * Improve alignment-related test coverage.
---
 app/test/meson.build      |   1 +
 app/test/test_lcore_var.c | 432 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 433 insertions(+)
 create mode 100644 app/test/test_lcore_var.c

diff --git a/app/test/meson.build b/app/test/meson.build
index e29258e6ec..48279522f0 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -103,6 +103,7 @@ source_file_deps = {
     'test_ipsec_sad.c': ['ipsec'],
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
+    'test_lcore_var.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var.c b/app/test/test_lcore_var.c
new file mode 100644
index 0000000000..e07d13460f
--- /dev/null
+++ b/app/test/test_lcore_var.c
@@ -0,0 +1,432 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_launch.h>
+#include <rte_lcore_var.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+#define MIN_LCORES 2
+
+RTE_LCORE_VAR_HANDLE(int, test_int);
+RTE_LCORE_VAR_HANDLE(char, test_char);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized);
+RTE_LCORE_VAR_HANDLE(short, test_short);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized_aligned);
+
+struct int_checker_state {
+	int old_value;
+	int new_value;
+	bool success;
+};
+
+static void
+rand_blk(void *blk, size_t size)
+{
+	size_t i;
+
+	for (i = 0; i < size; i++)
+		((unsigned char *)blk)[i] = (unsigned char)rte_rand();
+}
+
+static bool
+is_ptr_aligned(const void *ptr, size_t align)
+{
+	return ptr != NULL ? (uintptr_t)ptr % align == 0 : false;
+}
+
+static int
+check_int(void *arg)
+{
+	struct int_checker_state *state = arg;
+
+	int *ptr = RTE_LCORE_VAR_VALUE(test_int);
+
+	bool naturally_aligned = is_ptr_aligned(ptr, sizeof(int));
+
+	bool equal = *(RTE_LCORE_VAR_VALUE(test_int)) == state->old_value;
+
+	state->success = equal && naturally_aligned;
+
+	*ptr = state->new_value;
+
+	return 0;
+}
+
+RTE_LCORE_VAR_INIT(test_int);
+RTE_LCORE_VAR_INIT(test_char);
+RTE_LCORE_VAR_INIT_SIZE(test_long_sized, 32);
+RTE_LCORE_VAR_INIT(test_short);
+RTE_LCORE_VAR_INIT_SIZE_ALIGN(test_long_sized_aligned, sizeof(long),
+			      RTE_CACHE_LINE_SIZE);
+
+static int
+test_int_lvar(void)
+{
+	unsigned int lcore_id;
+
+	struct int_checker_state states[RTE_MAX_LCORE] = {};
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+
+		state->old_value = (int)rte_rand();
+		state->new_value = (int)rte_rand();
+
+		*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int) =
+			state->old_value;
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_int, &states[lcore_id], lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+		int value;
+
+		TEST_ASSERT(state->success, "Unexpected value "
+			    "encountered on lcore %d", lcore_id);
+
+		value = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int);
+		TEST_ASSERT_EQUAL(state->new_value, value,
+				  "Lcore %d failed to update int", lcore_id);
+	}
+
+	/* take the opportunity to test the foreach macro */
+	int *v;
+	lcore_id = 0;
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_int) {
+		TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
+				  "Unexpected value on lcore %d during "
+				  "iteration", lcore_id);
+		lcore_id++;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_sized_alignment(void)
+{
+	long *v;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized) {
+		TEST_ASSERT(is_ptr_aligned(v, alignof(long)),
+			    "Type-derived alignment failed");
+	}
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized_aligned) {
+		TEST_ASSERT(is_ptr_aligned(v, RTE_CACHE_LINE_SIZE),
+			    "Explicit alignment failed");
+	}
+
+	return TEST_SUCCESS;
+}
+
+/* private, larger, struct */
+#define TEST_STRUCT_DATA_SIZE 1234
+
+struct test_struct {
+	uint8_t data[TEST_STRUCT_DATA_SIZE];
+};
+
+static RTE_LCORE_VAR_HANDLE(char, before_struct);
+static RTE_LCORE_VAR_HANDLE(struct test_struct, test_struct);
+static RTE_LCORE_VAR_HANDLE(char, after_struct);
+
+struct struct_checker_state {
+	struct test_struct old_value;
+	struct test_struct new_value;
+	bool success;
+};
+
+static int check_struct(void *arg)
+{
+	struct struct_checker_state *state = arg;
+
+	struct test_struct *lcore_struct = RTE_LCORE_VAR_VALUE(test_struct);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_struct, alignof(struct test_struct));
+
+	bool equal = memcmp(lcore_struct->data, state->old_value.data,
+			    TEST_STRUCT_DATA_SIZE) == 0;
+
+	state->success = equal && properly_aligned;
+
+	memcpy(lcore_struct->data, state->new_value.data,
+	       TEST_STRUCT_DATA_SIZE);
+
+	return 0;
+}
+
+static int
+test_struct_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_struct);
+	RTE_LCORE_VAR_ALLOC(test_struct);
+	RTE_LCORE_VAR_ALLOC(after_struct);
+
+	struct struct_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+
+		rand_blk(state->old_value.data, TEST_STRUCT_DATA_SIZE);
+		rand_blk(state->new_value.data, TEST_STRUCT_DATA_SIZE);
+
+		memcpy(RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct)->data,
+		       state->old_value.data, TEST_STRUCT_DATA_SIZE);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_struct, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+		struct test_struct *lstruct =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = memcmp(lstruct->data, state->new_value.data,
+				    TEST_STRUCT_DATA_SIZE) == 0;
+
+		TEST_ASSERT(equal, "Lcore %d failed to update struct",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_struct);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_struct);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "struct was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "struct was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define TEST_ARRAY_SIZE 99
+
+typedef uint16_t test_array_t[TEST_ARRAY_SIZE];
+
+static void test_array_init_rand(test_array_t a)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		a[i] = (uint16_t)rte_rand();
+}
+
+static bool test_array_equal(test_array_t a, test_array_t b)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++) {
+		if (a[i] != b[i])
+			return false;
+	}
+	return true;
+}
+
+static void test_array_copy(test_array_t dst, const test_array_t src)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		dst[i] = src[i];
+}
+
+static RTE_LCORE_VAR_HANDLE(char, before_array);
+static RTE_LCORE_VAR_HANDLE(test_array_t, test_array);
+static RTE_LCORE_VAR_HANDLE(char, after_array);
+
+struct array_checker_state {
+	test_array_t old_value;
+	test_array_t new_value;
+	bool success;
+};
+
+static int check_array(void *arg)
+{
+	struct array_checker_state *state = arg;
+
+	test_array_t *lcore_array = RTE_LCORE_VAR_VALUE(test_array);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_array, alignof(test_array_t));
+
+	bool equal = test_array_equal(*lcore_array, state->old_value);
+
+	state->success = equal && properly_aligned;
+
+	test_array_copy(*lcore_array, state->new_value);
+
+	return 0;
+}
+
+static int
+test_array_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_array);
+	RTE_LCORE_VAR_ALLOC(test_array);
+	RTE_LCORE_VAR_ALLOC(after_array);
+
+	struct array_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+
+		test_array_init_rand(state->new_value);
+		test_array_init_rand(state->old_value);
+
+		test_array_copy(*RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+							   test_array),
+				state->old_value);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_array, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+		test_array_t *larray =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_array);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = test_array_equal(*larray, state->new_value);
+
+		TEST_ASSERT(equal, "Lcore %d failed to update array",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_array);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_array);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "array was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "array was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define MANY_LVARS (2 * RTE_MAX_LCORE_VAR / sizeof(uint32_t))
+
+static int
+test_many_lvars(void)
+{
+	uint32_t **handlers = malloc(sizeof(uint32_t *) * MANY_LVARS);
+	unsigned int i;
+
+	TEST_ASSERT(handlers != NULL, "Unable to allocate memory");
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		RTE_LCORE_VAR_ALLOC(handlers[i]);
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t *v =
+				RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handlers[i]);
+			*v = (uint32_t)(i * lcore_id);
+		}
+	}
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t v = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+								handlers[i]);
+			TEST_ASSERT_EQUAL((uint32_t)(i * lcore_id), v,
+					  "Unexpected lcore variable value on "
+					  "lcore %d", lcore_id);
+		}
+	}
+
+	free(handlers);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_large_lvar(void)
+{
+	RTE_LCORE_VAR_HANDLE(unsigned char, large);
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC_SIZE(large, RTE_MAX_LCORE_VAR);
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+
+		memset(ptr, (unsigned char)lcore_id, RTE_MAX_LCORE_VAR);
+	}
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+		size_t i;
+
+		for (i = 0; i < RTE_MAX_LCORE_VAR; i++)
+			TEST_ASSERT_EQUAL(ptr[i], (unsigned char)lcore_id,
+					  "Large lcore variable value is "
+					  "corrupted on lcore %d.",
+					  lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_int_lvar),
+		TEST_CASE(test_sized_alignment),
+		TEST_CASE(test_struct_lvar),
+		TEST_CASE(test_array_lvar),
+		TEST_CASE(test_many_lvars),
+		TEST_CASE(test_large_lvar),
+		TEST_CASES_END()
+	},
+};
+
+static int test_lcore_var(void)
+{
+	if (rte_lcore_count() < MIN_LCORES) {
+		printf("Not enough cores for lcore_var_autotest; expecting at "
+		       "least %d.\n", MIN_LCORES);
+		return TEST_SKIPPED;
+	}
+
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
+REGISTER_FAST_TEST(lcore_var_autotest, true, false, test_lcore_var);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v2 3/6] random: keep PRNG state in lcore variable
  2024-09-11 17:04                           ` [PATCH v2 0/6] Lcore variables Mattias Rönnblom
  2024-09-11 17:04                             ` [PATCH v2 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-09-11 17:04                             ` [PATCH v2 2/6] eal: add lcore variable test suite Mattias Rönnblom
@ 2024-09-11 17:04                             ` Mattias Rönnblom
  2024-09-11 17:04                             ` [PATCH v2 4/6] power: keep per-lcore " Mattias Rönnblom
                                               ` (2 subsequent siblings)
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-11 17:04 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Mattias Rönnblom

Replace keeping PRNG state in a RTE_MAX_LCORE-sized static array of
cache-aligned and RTE_CACHE_GUARDed struct instances with keeping the
same state in a more cache-friendly lcore variable.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

RFC v3:
 * Remove cache alignment on unregistered threads' rte_rand_state.
   (Morten Brørup)
---
 lib/eal/common/rte_random.c | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index 90e91b3c4f..a8d00308dd 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -11,6 +11,7 @@
 #include <rte_branch_prediction.h>
 #include <rte_cycles.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_random.h>
 
 struct __rte_cache_aligned rte_rand_state {
@@ -19,14 +20,12 @@ struct __rte_cache_aligned rte_rand_state {
 	uint64_t z3;
 	uint64_t z4;
 	uint64_t z5;
-	RTE_CACHE_GUARD;
 };
 
-/* One instance each for every lcore id-equipped thread, and one
- * additional instance to be shared by all others threads (i.e., all
- * unregistered non-EAL threads).
- */
-static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];
+RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);
+
+/* instance to be shared by all unregistered non-EAL threads */
+static struct rte_rand_state unregistered_rand_state;
 
 static uint32_t
 __rte_rand_lcg32(uint32_t *seed)
@@ -85,8 +84,14 @@ rte_srand(uint64_t seed)
 	unsigned int lcore_id;
 
 	/* add lcore_id to seed to avoid having the same sequence */
-	for (lcore_id = 0; lcore_id < RTE_DIM(rand_states); lcore_id++)
-		__rte_srand_lfsr258(seed + lcore_id, &rand_states[lcore_id]);
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		struct rte_rand_state *lcore_state =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, rand_state);
+
+		__rte_srand_lfsr258(seed + lcore_id, lcore_state);
+	}
+
+	__rte_srand_lfsr258(seed + lcore_id, &unregistered_rand_state);
 }
 
 static __rte_always_inline uint64_t
@@ -124,11 +129,10 @@ struct rte_rand_state *__rte_rand_get_state(void)
 
 	idx = rte_lcore_id();
 
-	/* last instance reserved for unregistered non-EAL threads */
 	if (unlikely(idx == LCORE_ID_ANY))
-		idx = RTE_MAX_LCORE;
+		return &unregistered_rand_state;
 
-	return &rand_states[idx];
+	return RTE_LCORE_VAR_VALUE(rand_state);
 }
 
 uint64_t
@@ -228,6 +232,8 @@ RTE_INIT(rte_rand_init)
 {
 	uint64_t seed;
 
+	RTE_LCORE_VAR_ALLOC(rand_state);
+
 	seed = __rte_random_initial_seed();
 
 	rte_srand(seed);
-- 
2.34.1
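
To put rough numbers on the cache-friendliness claim (assuming 64-byte
cache lines, the default RTE_CACHE_GUARD_LINES of 1, and RTE_MAX_LCORE
of 128; exact figures are target-dependent):

  before: 5 * sizeof(uint64_t) = 40 bytes of payload per entry, padded
          to 64 bytes by __rte_cache_aligned, plus one 64-byte
          RTE_CACHE_GUARD line -> 128 bytes per entry, and thus
          (RTE_MAX_LCORE + 1) * 128 = ~16.1 kB of static data for
          rand_states[], of which roughly 69% is padding

  after:  one 64-byte cache-aligned value per lcore id, placed next to
          the same lcore id's other lcore variable values, with no
          guard line needed, since adjacent memory now belongs to the
          same lcore rather than to a neighboring one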


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v2 4/6] power: keep per-lcore state in lcore variable
  2024-09-11 17:04                           ` [PATCH v2 0/6] Lcore variables Mattias Rönnblom
                                               ` (2 preceding siblings ...)
  2024-09-11 17:04                             ` [PATCH v2 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
@ 2024-09-11 17:04                             ` Mattias Rönnblom
  2024-09-11 17:04                             ` [PATCH v2 5/6] service: " Mattias Rönnblom
  2024-09-11 17:04                             ` [PATCH v2 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-11 17:04 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Mattias Rönnblom

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

RFC v3:
 * Replace for loop with FOREACH macro.
---
 lib/power/rte_power_pmd_mgmt.c | 34 ++++++++++++++++------------------
 1 file changed, 16 insertions(+), 18 deletions(-)

diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index b1c18a5f56..a5139dd4f7 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -5,6 +5,7 @@
 #include <stdlib.h>
 
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_cycles.h>
 #include <rte_cpuflags.h>
 #include <rte_malloc.h>
@@ -69,7 +70,7 @@ struct __rte_cache_aligned pmd_core_cfg {
 	uint64_t sleep_target;
 	/**< Prevent a queue from triggering sleep multiple times */
 };
-static struct pmd_core_cfg lcore_cfgs[RTE_MAX_LCORE];
+static RTE_LCORE_VAR_HANDLE(struct pmd_core_cfg, lcore_cfgs);
 
 static inline bool
 queue_equal(const union queue *l, const union queue *r)
@@ -252,12 +253,11 @@ clb_multiwait(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	/* early exit */
 	if (likely(!empty))
@@ -317,13 +317,12 @@ clb_pause(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 	uint32_t pause_duration = rte_power_pmd_mgmt_get_pause_duration();
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	if (likely(!empty))
 		/* early exit */
@@ -358,9 +357,8 @@ clb_scale_freq(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	const bool empty = nb_rx == 0;
-	struct pmd_core_cfg *lcore_conf = &lcore_cfgs[lcore];
+	struct pmd_core_cfg *lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 	struct queue_list_entry *queue_conf = arg;
 
 	if (likely(!empty)) {
@@ -518,7 +516,7 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 		goto end;
 	}
 
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -619,7 +617,7 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 	}
 
 	/* no need to check queue id as wrong queue id would not be enabled */
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -769,21 +767,21 @@ rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore)
 }
 
 RTE_INIT(rte_power_ethdev_pmgmt_init) {
-	size_t i;
-	int j;
+	struct pmd_core_cfg *lcore_cfg;
+	int i;
+
+	RTE_LCORE_VAR_ALLOC(lcore_cfgs);
 
 	/* initialize all tailqs */
-	for (i = 0; i < RTE_DIM(lcore_cfgs); i++) {
-		struct pmd_core_cfg *cfg = &lcore_cfgs[i];
-		TAILQ_INIT(&cfg->head);
-	}
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_cfg, lcore_cfgs)
+		TAILQ_INIT(&lcore_cfg->head);
 
 	/* initialize config defaults */
 	emptypoll_max = 512;
 	pause_duration = 1;
 	/* scaling defaults out of range to ensure not used unless set by user or app */
-	for (j = 0; j < RTE_MAX_LCORE; j++) {
-		scale_freq_min[j] = 0;
-		scale_freq_max[j] = UINT32_MAX;
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		scale_freq_min[i] = 0;
+		scale_freq_max[i] = UINT32_MAX;
 	}
 }
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v2 5/6] service: keep per-lcore state in lcore variable
  2024-09-11 17:04                           ` [PATCH v2 0/6] Lcore variables Mattias Rönnblom
                                               ` (3 preceding siblings ...)
  2024-09-11 17:04                             ` [PATCH v2 4/6] power: keep per-lcore " Mattias Rönnblom
@ 2024-09-11 17:04                             ` Mattias Rönnblom
  2024-09-11 17:04                             ` [PATCH v2 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-11 17:04 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Mattias Rönnblom

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

RFC v6:
 * Remove a now-redundant lcore variable value memset().

RFC v5:
 * Fix lcore value pointer bug introduced by RFC v4.

RFC v4:
 * Remove strange-looking lcore value lookup potentially containing
   invalid lcore id. (Morten Brørup)
 * Replace misplaced tab with space. (Morten Brørup)
---
 lib/eal/common/rte_service.c | 115 +++++++++++++++++++----------------
 1 file changed, 63 insertions(+), 52 deletions(-)

diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index 56379930b6..03379f1588 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -11,6 +11,7 @@
 
 #include <eal_trace_internal.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
@@ -75,7 +76,7 @@ struct __rte_cache_aligned core_state {
 
 static uint32_t rte_service_count;
 static struct rte_service_spec_impl *rte_services;
-static struct core_state *lcore_states;
+static RTE_LCORE_VAR_HANDLE(struct core_state, lcore_states);
 static uint32_t rte_service_library_initialized;
 
 int32_t
@@ -101,12 +102,8 @@ rte_service_init(void)
 		goto fail_mem;
 	}
 
-	lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
-			sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
-	if (!lcore_states) {
-		EAL_LOG(ERR, "error allocating core states array");
-		goto fail_mem;
-	}
+	if (lcore_states == NULL)
+		RTE_LCORE_VAR_ALLOC(lcore_states);
 
 	int i;
 	struct rte_config *cfg = rte_eal_get_configuration();
@@ -122,7 +119,6 @@ rte_service_init(void)
 	return 0;
 fail_mem:
 	rte_free(rte_services);
-	rte_free(lcore_states);
 	return -ENOMEM;
 }
 
@@ -136,7 +132,6 @@ rte_service_finalize(void)
 	rte_eal_mp_wait_lcore();
 
 	rte_free(rte_services);
-	rte_free(lcore_states);
 
 	rte_service_library_initialized = 0;
 }
@@ -286,7 +281,6 @@ rte_service_component_register(const struct rte_service_spec *spec,
 int32_t
 rte_service_component_unregister(uint32_t id)
 {
-	uint32_t i;
 	struct rte_service_spec_impl *s;
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
 
@@ -294,9 +288,10 @@ rte_service_component_unregister(uint32_t id)
 
 	s->internal_flags &= ~(SERVICE_F_REGISTERED);
 
+	struct core_state *cs;
 	/* clear the run-bit in all cores */
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		lcore_states[i].service_mask &= ~(UINT64_C(1) << id);
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		cs->service_mask &= ~(UINT64_C(1) << id);
 
 	memset(&rte_services[id], 0, sizeof(struct rte_service_spec_impl));
 
@@ -454,7 +449,10 @@ rte_service_may_be_active(uint32_t id)
 		return -EINVAL;
 
 	for (i = 0; i < lcore_count; i++) {
-		if (lcore_states[ids[i]].service_active_on_lcore[id])
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(ids[i], lcore_states);
+
+		if (cs->service_active_on_lcore[id])
 			return 1;
 	}
 
@@ -464,7 +462,7 @@ rte_service_may_be_active(uint32_t id)
 int32_t
 rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)
 {
-	struct core_state *cs = &lcore_states[rte_lcore_id()];
+	struct core_state *cs =	RTE_LCORE_VAR_VALUE(lcore_states);
 	struct rte_service_spec_impl *s;
 
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
@@ -486,8 +484,7 @@ service_runner_func(void *arg)
 {
 	RTE_SET_USED(arg);
 	uint8_t i;
-	const int lcore = rte_lcore_id();
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_VALUE(lcore_states);
 
 	rte_atomic_store_explicit(&cs->thread_active, 1, rte_memory_order_seq_cst);
 
@@ -533,13 +530,15 @@ service_runner_func(void *arg)
 int32_t
 rte_service_lcore_may_be_active(uint32_t lcore)
 {
-	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+	if (lcore >= RTE_MAX_LCORE || !cs->is_service_core)
 		return -EINVAL;
 
 	/* Load thread_active using ACQUIRE to avoid instructions dependent on
 	 * the result being re-ordered before this load completes.
 	 */
-	return rte_atomic_load_explicit(&lcore_states[lcore].thread_active,
+	return rte_atomic_load_explicit(&cs->thread_active,
 			       rte_memory_order_acquire);
 }
 
@@ -547,9 +546,11 @@ int32_t
 rte_service_lcore_count(void)
 {
 	int32_t count = 0;
-	uint32_t i;
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		count += lcore_states[i].is_service_core;
+
+	struct core_state *cs;
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		count += cs->is_service_core;
+
 	return count;
 }
 
@@ -566,7 +567,8 @@ rte_service_lcore_list(uint32_t array[], uint32_t n)
 	uint32_t i;
 	uint32_t idx = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		struct core_state *cs = &lcore_states[i];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
 		if (cs->is_service_core) {
 			array[idx] = i;
 			idx++;
@@ -582,7 +584,7 @@ rte_service_lcore_count_services(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -634,30 +636,31 @@ rte_service_start_with_defaults(void)
 static int32_t
 service_update(uint32_t sid, uint32_t lcore, uint32_t *set, uint32_t *enabled)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	/* validate ID, or return error value */
 	if (!service_valid(sid) || lcore >= RTE_MAX_LCORE ||
-			!lcore_states[lcore].is_service_core)
+			!cs->is_service_core)
 		return -EINVAL;
 
 	uint64_t sid_mask = UINT64_C(1) << sid;
 	if (set) {
-		uint64_t lcore_mapped = lcore_states[lcore].service_mask &
-			sid_mask;
+		uint64_t lcore_mapped = cs->service_mask & sid_mask;
 
 		if (*set && !lcore_mapped) {
-			lcore_states[lcore].service_mask |= sid_mask;
+			cs->service_mask |= sid_mask;
 			rte_atomic_fetch_add_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 		if (!*set && lcore_mapped) {
-			lcore_states[lcore].service_mask &= ~(sid_mask);
+			cs->service_mask &= ~(sid_mask);
 			rte_atomic_fetch_sub_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 	}
 
 	if (enabled)
-		*enabled = !!(lcore_states[lcore].service_mask & (sid_mask));
+		*enabled = !!(cs->service_mask & (sid_mask));
 
 	return 0;
 }
@@ -685,13 +688,14 @@ set_lcore_state(uint32_t lcore, int32_t state)
 {
 	/* mark core state in hugepage backed config */
 	struct rte_config *cfg = rte_eal_get_configuration();
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	cfg->lcore_role[lcore] = state;
 
 	/* mark state in process local lcore_config */
 	lcore_config[lcore].core_role = state;
 
 	/* update per-lcore optimized state tracking */
-	lcore_states[lcore].is_service_core = (state == ROLE_SERVICE);
+	cs->is_service_core = (state == ROLE_SERVICE);
 
 	rte_eal_trace_service_lcore_state_change(lcore, state);
 }
@@ -702,14 +706,16 @@ rte_service_lcore_reset_all(void)
 	/* loop over cores, reset all to mask 0 */
 	uint32_t i;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		if (lcore_states[i].is_service_core) {
-			lcore_states[i].service_mask = 0;
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
+		if (cs->is_service_core) {
+			cs->service_mask = 0;
 			set_lcore_state(i, ROLE_RTE);
 			/* runstate act as guard variable Use
 			 * store-release memory order here to synchronize
 			 * with load-acquire in runstate read functions.
 			 */
-			rte_atomic_store_explicit(&lcore_states[i].runstate,
+			rte_atomic_store_explicit(&cs->runstate,
 				RUNSTATE_STOPPED, rte_memory_order_release);
 		}
 	}
@@ -725,17 +731,19 @@ rte_service_lcore_add(uint32_t lcore)
 {
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
-	if (lcore_states[lcore].is_service_core)
+
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+	if (cs->is_service_core)
 		return -EALREADY;
 
 	set_lcore_state(lcore, ROLE_SERVICE);
 
 	/* ensure that after adding a core the mask and state are defaults */
-	lcore_states[lcore].service_mask = 0;
+	cs->service_mask = 0;
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	return rte_eal_wait_lcore(lcore);
@@ -747,7 +755,7 @@ rte_service_lcore_del(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -771,7 +779,7 @@ rte_service_lcore_start(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -801,6 +809,8 @@ rte_service_lcore_start(uint32_t lcore)
 int32_t
 rte_service_lcore_stop(uint32_t lcore)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
@@ -808,12 +818,11 @@ rte_service_lcore_stop(uint32_t lcore)
 	 * memory order here to synchronize with store-release
 	 * in runstate update functions.
 	 */
-	if (rte_atomic_load_explicit(&lcore_states[lcore].runstate, rte_memory_order_acquire) ==
+	if (rte_atomic_load_explicit(&cs->runstate, rte_memory_order_acquire) ==
 			RUNSTATE_STOPPED)
 		return -EALREADY;
 
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
 	uint64_t service_mask = cs->service_mask;
 
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
@@ -834,7 +843,7 @@ rte_service_lcore_stop(uint32_t lcore)
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	rte_eal_trace_service_lcore_stop(lcore);
@@ -845,7 +854,7 @@ rte_service_lcore_stop(uint32_t lcore)
 static uint64_t
 lcore_attr_get_loops(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->loops, rte_memory_order_relaxed);
 }
@@ -853,7 +862,7 @@ lcore_attr_get_loops(unsigned int lcore)
 static uint64_t
 lcore_attr_get_cycles(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->cycles, rte_memory_order_relaxed);
 }
@@ -861,7 +870,7 @@ lcore_attr_get_cycles(unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].calls,
 		rte_memory_order_relaxed);
@@ -870,7 +879,7 @@ lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_cycles(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].cycles,
 		rte_memory_order_relaxed);
@@ -886,7 +895,10 @@ attr_get(uint32_t id, lcore_attr_get_fun lcore_attr_get)
 	uint64_t sum = 0;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		if (lcore_states[lcore].is_service_core)
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+		if (cs->is_service_core)
 			sum += lcore_attr_get(id, lcore);
 	}
 
@@ -930,12 +942,11 @@ int32_t
 rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 			   uint64_t *attr_value)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE || !attr_value)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -960,7 +971,8 @@ rte_service_attr_reset_all(uint32_t id)
 		return -EINVAL;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		struct core_state *cs = &lcore_states[lcore];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 		cs->service_stats[id] = (struct service_stats) {};
 	}
@@ -971,12 +983,11 @@ rte_service_attr_reset_all(uint32_t id)
 int32_t
 rte_service_lcore_attr_reset_all(uint32_t lcore)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -1011,7 +1022,7 @@ static void
 service_dump_calls_per_lcore(FILE *f, uint32_t lcore)
 {
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	fprintf(f, "%02d\t", lcore);
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
-- 
2.34.1
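
A note on the rte_service_init() hunk above: the NULL check before
RTE_LCORE_VAR_ALLOC() leans on the API guarantee (see the RFC v4 notes
for patch 1/6) that an allocated handle is never NULL, presumably so
that a second rte_service_init() call (e.g., after
rte_service_finalize(), which no longer frees the state) doesn't
allocate again.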


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v2 6/6] eal: keep per-lcore power intrinsics state in lcore variable
  2024-09-11 17:04                           ` [PATCH v2 0/6] Lcore variables Mattias Rönnblom
                                               ` (4 preceding siblings ...)
  2024-09-11 17:04                             ` [PATCH v2 5/6] service: " Mattias Rönnblom
@ 2024-09-11 17:04                             ` Mattias Rönnblom
  5 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-11 17:04 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Mattias Rönnblom

Keep per-lcore power intrinsics state in an lcore variable to reduce
cache working set size and avoid any CPU next-line-prefetching causing
false sharing.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/x86/rte_power_intrinsics.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index 6d9b64240c..f4ba2c8ecb 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -6,6 +6,7 @@
 
 #include <rte_common.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_rtm.h>
 #include <rte_spinlock.h>
 
@@ -14,10 +15,14 @@
 /*
  * Per-lcore structure holding current status of C0.2 sleeps.
  */
-static alignas(RTE_CACHE_LINE_SIZE) struct power_wait_status {
+struct power_wait_status {
 	rte_spinlock_t lock;
 	volatile void *monitor_addr; /**< NULL if not currently sleeping */
-} wait_status[RTE_MAX_LCORE];
+};
+
+RTE_LCORE_VAR_HANDLE(struct power_wait_status, wait_status);
+
+RTE_LCORE_VAR_INIT(wait_status);
 
 /*
  * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
@@ -172,7 +177,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 	if (pmc->fn == NULL)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/* update sleep address */
 	rte_spinlock_lock(&s->lock);
@@ -264,7 +269,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 	if (lcore_id >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/*
 	 * There is a race condition between sleep, wakeup and locking, but we
@@ -303,8 +308,8 @@ int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
 {
-	const unsigned int lcore_id = rte_lcore_id();
-	struct power_wait_status *s = &wait_status[lcore_id];
+	struct power_wait_status *s = RTE_LCORE_VAR_VALUE(wait_status);
+
 	uint32_t i, rc;
 
 	/* check if supported */
-- 
2.34.1
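
Unlike the earlier patches in the series, which call
RTE_LCORE_VAR_ALLOC() from an explicit RTE_INIT() constructor, this
one uses the RTE_LCORE_VAR_INIT(wait_status) shorthand, which
presumably expands to roughly the following (the exact constructor
name is an implementation detail of <rte_lcore_var.h>):

	RTE_INIT(wait_status_lcore_var_init)
	{
		RTE_LCORE_VAR_ALLOC(wait_status);
	}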


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v2 1/6] eal: add static per-lcore memory allocation facility
  2024-09-11 17:04                             ` [PATCH v2 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-09-12  2:33                               ` fengchengwen
  2024-09-12  5:35                                 ` Mattias Rönnblom
  2024-09-12  8:44                               ` [PATCH v3 0/7] Lcore variables Mattias Rönnblom
  2024-09-12  9:10                               ` [PATCH v2 1/6] eal: add static per-lcore memory allocation facility Morten Brørup
  2 siblings, 1 reply; 185+ messages in thread
From: fengchengwen @ 2024-09-12  2:33 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand

On 2024/9/12 1:04, Mattias Rönnblom wrote:
> Introduce DPDK per-lcore id variables, or lcore variables for short.
> 
> An lcore variable has one value for every current and future lcore
> id-equipped thread.
> 
> The primary <rte_lcore_var.h> use case is for statically allocating
> small, frequently-accessed data structures, for which one instance
> should exist for each lcore.
> 
> Lcore variables are similar to thread-local storage (TLS, e.g., C11
> _Thread_local), but decoupling the values' life time from that of the
> threads.
> 
> Lcore variables are also similar in terms of functionality provided by
> FreeBSD kernel's DPCPU_*() family of macros and the associated
> build-time machinery. DPCPU uses linker scripts, which effectively
> prevents the reuse of its, otherwise seemingly viable, approach.
> 
> The currently-prevailing way to solve the same problem as lcore
> variables is to keep a module's per-lcore data as an RTE_MAX_LCORE-sized
> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> lcore variables over this approach is that data related to the same
> lcore now is close (spatially, in memory), rather than data used by
> the same module, which in turn avoids excessive use of padding,
> polluting caches with unused data.
> 
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> 
> --
> 
> PATCH v2:
>  * Add Windows support. (Morten Brørup)
>  * Fix lcore variables API index reference. (Morten Brørup)
>  * Various improvements of the API documentation. (Morten Brørup)
>  * Elimination of unused symbol in version.map. (Morten Brørup)

This history could be moved to the cover letter.

> 
> PATCH:
>  * Update MAINTAINERS and release notes.
>  * Stop covering included files in extern "C" {}.
> 
> RFC v6:
>  * Include <stdlib.h> to get aligned_alloc().
>  * Tweak documentation (grammar).
>  * Provide API-level guarantees that lcore variable values take on an
>    initial value of zero.
>  * Fix misplaced __rte_cache_aligned in the API doc example.
> 
> RFC v5:
>  * In Doxygen, consistently use @<cmd> (and not \<cmd>).
>  * The RTE_LCORE_VAR_GET() and SET() convenience access macros
>    covered an uncommon use case, where the lcore value is of a
>    primitive type, rather than a struct, and is thus eliminated
>    from the API. (Morten Brørup)
>  * In the wake of the GET()/SET() removal, rename RTE_LCORE_VAR_PTR()
>    to RTE_LCORE_VAR_VALUE().
>  * The underscores are removed from __rte_lcore_var_lcore_ptr() to
>    signal that this function is a part of the public API.
>  * Macro arguments are documented.
> 
> RFC v4:
>  * Replace large static array with libc heap-allocated memory. One
>    implication of this change is there no longer exists a fixed upper
>    bound for the total amount of memory used by lcore variables.
>    RTE_MAX_LCORE_VAR has changed meaning, and now represent the
>    maximum size of any individual lcore variable value.
>  * Fix issues in example. (Morten Brørup)
>  * Improve access macro type checking. (Morten Brørup)
>  * Refer to the lcore variable handle as "handle" and not "name" in
>    various macros.
>  * Document lack of thread safety in rte_lcore_var_alloc().
>  * Provide API-level assurance the lcore variable handle is
>    always non-NULL, to allow applications to use NULL to mean
>    "not yet allocated".
>  * Note zero-sized allocations are not allowed.
>  * Give API-level guarantee the lcore variable values are zeroed.
> 
> RFC v3:
>  * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
>  * Update example to reflect FOREACH macro name change (in RFC v2).
> 
> RFC v2:
>  * Use alignof to derive alignment requirements. (Morten Brørup)
>  * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
>    *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
>  * Allow user-specified alignment, but limit max to cache line size.
> ---
>  MAINTAINERS                            |   6 +
>  config/rte_config.h                    |   1 +
>  doc/api/doxy-api-index.md              |   1 +
>  doc/guides/rel_notes/release_24_11.rst |  14 +
>  lib/eal/common/eal_common_lcore_var.c  |  78 +++++
>  lib/eal/common/meson.build             |   1 +
>  lib/eal/include/meson.build            |   1 +
>  lib/eal/include/rte_lcore_var.h        | 385 +++++++++++++++++++++++++
>  lib/eal/version.map                    |   2 +
>  9 files changed, 489 insertions(+)
>  create mode 100644 lib/eal/common/eal_common_lcore_var.c
>  create mode 100644 lib/eal/include/rte_lcore_var.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index c5a703b5c0..362d9a3f28 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -282,6 +282,12 @@ F: lib/eal/include/rte_random.h
>  F: lib/eal/common/rte_random.c
>  F: app/test/test_rand_perf.c
>  
> +Lcore Variables
> +M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> +F: lib/eal/include/rte_lcore_var.h
> +F: lib/eal/common/eal_common_lcore_var.c
> +F: app/test/test_lcore_var.c
> +
>  ARM v7
>  M: Wathsala Vithanage <wathsala.vithanage@arm.com>
>  F: config/arm/
> diff --git a/config/rte_config.h b/config/rte_config.h
> index dd7bb0d35b..311692e498 100644
> --- a/config/rte_config.h
> +++ b/config/rte_config.h
> @@ -41,6 +41,7 @@
>  /* EAL defines */
>  #define RTE_CACHE_GUARD_LINES 1
>  #define RTE_MAX_HEAPS 32
> +#define RTE_MAX_LCORE_VAR 1048576
>  #define RTE_MAX_MEMSEG_LISTS 128
>  #define RTE_MAX_MEMSEG_PER_LIST 8192
>  #define RTE_MAX_MEM_MB_PER_LIST 32768
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
> index f9f0300126..ed577f14ee 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -99,6 +99,7 @@ The public API headers are grouped by topics:
>    [interrupts](@ref rte_interrupts.h),
>    [launch](@ref rte_launch.h),
>    [lcore](@ref rte_lcore.h),
> +  [lcore variables](@ref rte_lcore_var.h),
>    [per-lcore](@ref rte_per_lcore.h),
>    [service cores](@ref rte_service.h),
>    [keepalive](@ref rte_keepalive.h),
> diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
> index 0ff70d9057..a3884f7491 100644
> --- a/doc/guides/rel_notes/release_24_11.rst
> +++ b/doc/guides/rel_notes/release_24_11.rst
> @@ -55,6 +55,20 @@ New Features
>       Also, make sure to start the actual text at the margin.
>       =======================================================
>  
> +* **Added EAL per-lcore static memory allocation facility.**
> +
> +    Added EAL API <rte_lcore_var.h> for statically allocating small,
> +    frequently-accessed data structures, for which one instance should
> +    exist for each EAL thread and registered non-EAL thread.
> +
> +    With lcore variables, data is organized spatially on a per-lcore id
> +    basis, rather than per library or PMD, avoiding the need for cache
> +    aligning (or RTE_CACHE_GUARDing) data structures, which in turn
> +    reduces CPU cache internal fragmentation, improving performance.
> +
> +    Lcore variables are similar to thread-local storage (TLS, e.g.,
> +    C11 _Thread_local), but decoupling the values' life time from that
> +    of the threads.
>  
>  Removed Items
>  -------------
> diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
> new file mode 100644
> index 0000000000..309822039b
> --- /dev/null
> +++ b/lib/eal/common/eal_common_lcore_var.c
> @@ -0,0 +1,78 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2024 Ericsson AB
> + */
> +
> +#include <inttypes.h>
> +#include <stdlib.h>
> +
> +#ifdef RTE_EXEC_ENV_WINDOWS
> +#include <malloc.h>
> +#endif
> +
> +#include <rte_common.h>
> +#include <rte_debug.h>
> +#include <rte_log.h>
> +
> +#include <rte_lcore_var.h>
> +
> +#include "eal_private.h"
> +
> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
> +
> +static void *lcore_buffer;
> +static size_t offset = RTE_MAX_LCORE_VAR;
> +
> +static void *
> +lcore_var_alloc(size_t size, size_t align)
> +{
> +	void *handle;
> +	void *value;
> +
> +	offset = RTE_ALIGN_CEIL(offset, align);
> +
> +	if (offset + size > RTE_MAX_LCORE_VAR) {
> +#ifdef RTE_EXEC_ENV_WINDOWS
> +		lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
> +					       RTE_CACHE_LINE_SIZE);
> +#else
> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> +					     LCORE_BUFFER_SIZE);
> +#endif
> +		RTE_VERIFY(lcore_buffer != NULL);
> +
> +		offset = 0;
> +	}
> +
> +	handle = RTE_PTR_ADD(lcore_buffer, offset);
> +
> +	offset += size;
> +
> +	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
> +		memset(value, 0, size);
> +
> +	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
> +		"%"PRIuPTR"-byte alignment", size, align);

Currently the data is malloc'ed with libc functions, I think mainly to support INIT macros that run before main().
But this introduces the following problems:
1\ It can't benefit from huge pages. This patch may reserve many 1 MB regions, one per lcore; if we could place them in huge pages, it would reduce the TLB miss rate, especially for frequently accessed data.
2\ It can't work across multiple processes. Many of the current per-lcore data structures also don't support multi-process, but I think it is worth doing, and it would help with service recovery when a sub-process fails and reboots.

...


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v2 1/6] eal: add static per-lcore memory allocation facility
  2024-09-12  2:33                               ` fengchengwen
@ 2024-09-12  5:35                                 ` Mattias Rönnblom
  2024-09-12  7:05                                   ` fengchengwen
  2024-09-12  7:28                                   ` Jerin Jacob
  0 siblings, 2 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-12  5:35 UTC (permalink / raw)
  To: fengchengwen, Mattias Rönnblom, dev
  Cc: Morten Brørup, Stephen Hemminger, Konstantin Ananyev,
	David Marchand

On 2024-09-12 04:33, fengchengwen wrote:
> On 2024/9/12 1:04, Mattias Rönnblom wrote:
>> Introduce DPDK per-lcore id variables, or lcore variables for short.
>>
>> ...
>>
>> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
>> +
>> +static void *lcore_buffer;
>> +static size_t offset = RTE_MAX_LCORE_VAR;
>> +
>> +static void *
>> +lcore_var_alloc(size_t size, size_t align)
>> +{
>> +	void *handle;
>> +	void *value;
>> +
>> +	offset = RTE_ALIGN_CEIL(offset, align);
>> +
>> +	if (offset + size > RTE_MAX_LCORE_VAR) {
>> +#ifdef RTE_EXEC_ENV_WINDOWS
>> +		lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
>> +					       RTE_CACHE_LINE_SIZE);
>> +#else
>> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
>> +					     LCORE_BUFFER_SIZE);
>> +#endif
>> +		RTE_VERIFY(lcore_buffer != NULL);
>> +
>> +		offset = 0;
>> +	}
>> +
>> +	handle = RTE_PTR_ADD(lcore_buffer, offset);
>> +
>> +	offset += size;
>> +
>> +	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
>> +		memset(value, 0, size);
>> +
>> +	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
>> +		"%"PRIuPTR"-byte alignment", size, align);
> 
> Currently the data is malloc'ed with libc functions, I think mainly to support INIT macros that run before main().
> But this introduces the following problems:
> 1\ It can't benefit from huge pages. This patch may reserve many 1 MB regions, one per lcore; if we could place them in huge pages, it would reduce the TLB miss rate, especially for frequently accessed data.

This mechanism is for small allocations, the sum of which is also
expected to be small (although the system won't break if they aren't).

If you have large allocations, you are better off using lazy huge page 
allocations further down the initialization process. Otherwise, you will 
end up using memory for RTE_MAX_LCORE instances, rather than the actual 
lcore count, which could be substantially smaller.

But sure, everything else being equal, you could have used huge pages 
for these lcore variable values. But everything isn't equal.
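
A sketch of what I mean, for a hypothetical module with a large
per-lcore state (rte_zmalloc() draws from the DPDK heap, which is
hugepage-backed in typical configurations):

	#include <errno.h>

	#include <rte_lcore.h>
	#include <rte_malloc.h>

	struct big_state {
		unsigned char data[4 * 1024 * 1024];
	};

	static struct big_state *big_states[RTE_MAX_LCORE];

	/* called after rte_eal_init(), so instances are only allocated
	 * for the lcores actually present
	 */
	static int
	big_init(void)
	{
		unsigned int lcore_id;

		RTE_LCORE_FOREACH(lcore_id) {
			big_states[lcore_id] = rte_zmalloc(NULL,
					sizeof(struct big_state),
					RTE_CACHE_LINE_SIZE);
			if (big_states[lcore_id] == NULL)
				return -ENOMEM;
		}

		return 0;
	}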

> 2\ It can't work across multiple processes. Many of the current per-lcore data structures also don't support multi-process, but I think it is worth doing, and it would help with service recovery when a sub-process fails and reboots.
> 
> ...
> 

Not sure I think that's a downside. Further cementing that anti-pattern 
into DPDK seems to be a bad idea to me.

Lcore variables don't *introduce* any of these issues, since the
mechanisms they're replacing also have these shortcomings (if you think
about them as such - I'm not sure I do).

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v2 1/6] eal: add static per-lcore memory allocation facility
  2024-09-12  5:35                                 ` Mattias Rönnblom
@ 2024-09-12  7:05                                   ` fengchengwen
  2024-09-12  7:28                                   ` Jerin Jacob
  1 sibling, 0 replies; 185+ messages in thread
From: fengchengwen @ 2024-09-12  7:05 UTC (permalink / raw)
  To: Mattias Rönnblom, Mattias Rönnblom, dev
  Cc: Morten Brørup, Stephen Hemminger, Konstantin Ananyev,
	David Marchand

On 2024/9/12 13:35, Mattias Rönnblom wrote:
> On 2024-09-12 04:33, fengchengwen wrote:
>> On 2024/9/12 1:04, Mattias Rönnblom wrote:
>>> Introduce DPDK per-lcore id variables, or lcore variables for short.
>>>
>>> ...
>>>
>>
>> Currently the data is malloc'ed with libc functions, I think mainly to support INIT macros that run before main().
>> But this introduces the following problems:
>> 1\ It can't benefit from huge pages. This patch may reserve many 1 MB regions, one per lcore; if we could place them in huge pages, it would reduce the TLB miss rate, especially for frequently accessed data.
> 
> This mechanism is for small allocations, the sum of which is also expected to be small (although the system won't break if they aren't).
> 
> If you have large allocations, you are better off using lazy huge page allocations further down the initialization process. Otherwise, you will end up using memory for RTE_MAX_LCORE instances, rather than the actual lcore count, which could be substantially smaller.

Yes, it may cost too much memory if allocated from hugepage memory.

> 
> But sure, everything else being equal, you could have used huge pages for these lcore variable values. But everything isn't equal.
> 
>> 2\ It can't work across multiple processes. Many of the current per-lcore data structures also don't support multi-process, but I think it is worth doing, and it would help with service recovery when a sub-process fails and reboots.
>>
>> ...
>>
> 
> Not sure I think that's a downside. Further cementing that anti-pattern into DPDK seems to be a bad idea to me.
> 
> Lcore variables don't *introduce* any of these issues, since the mechanisms they're replacing also have these shortcomings (if you think about them as such - I'm not sure I do).

Got it.

This feature is an enhancement over the current per-lcore data schemes, bringing together scattered data from the point of view of a single core,
and currently it seems hard to extend it to support hugepage memory.
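
For reference, the address arithmetic implied by the allocator quoted
above (inferred from lcore_var_alloc(); the authoritative definitions
are the macros in <rte_lcore_var.h>) is roughly:

	value(lcore_id, handle) = (char *)handle
				  + lcore_id * RTE_MAX_LCORE_VAR

i.e., each lcore_buffer is carved into RTE_MAX_LCORE sections of
RTE_MAX_LCORE_VAR bytes, and a handle is simply the address of the
lcore id 0 copy of the variable. With the default RTE_MAX_LCORE_VAR of
1048576, each buffer covers RTE_MAX_LCORE MiB of demand-paged libc
heap, of which only the prefix actually handed out by the allocator is
touched (zeroed) in each per-lcore section.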

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v2 1/6] eal: add static per-lcore memory allocation facility
  2024-09-12  5:35                                 ` Mattias Rönnblom
  2024-09-12  7:05                                   ` fengchengwen
@ 2024-09-12  7:28                                   ` Jerin Jacob
  1 sibling, 0 replies; 185+ messages in thread
From: Jerin Jacob @ 2024-09-12  7:28 UTC (permalink / raw)
  To: Mattias Rönnblom, Anatoly Burakov
  Cc: fengchengwen, Mattias Rönnblom, dev, Morten Brørup,
	Stephen Hemminger, Konstantin Ananyev, David Marchand

On Thu, Sep 12, 2024 at 11:05 AM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>
> On 2024-09-12 04:33, fengchengwen wrote:
> > On 2024/9/12 1:04, Mattias Rönnblom wrote:
> >> Introduce DPDK per-lcore id variables, or lcore variables for short.
> >>
> >> An lcore variable has one value for every current and future lcore
> >> id-equipped thread.
> >>
> >> The primary <rte_lcore_var.h> use case is for statically allocating
> >> small, frequently-accessed data structures, for which one instance
> >> should exist for each lcore.
> >>
> >> Lcore variables are similar to thread-local storage (TLS, e.g., C11
> >> _Thread_local), but decoupling the values' lifetime from that of the
> >> threads.
> >>
> >> Lcore variables are also similar in functionality to that provided by
> >> the FreeBSD kernel's DPCPU_*() family of macros and the associated
> >> build-time machinery. DPCPU uses linker scripts, which effectively
> >> prevents the reuse of its, otherwise seemingly viable, approach.
> >>
> >> The currently-prevailing way to solve the same problem as lcore
> >> variables is to keep a module's per-lcore data as an RTE_MAX_LCORE-sized
> >> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> >> lcore variables over this approach is that data related to the same
> >> lcore is now close (spatially, in memory), rather than data used by
> >> the same module, which in turn avoids excessive use of padding,
> >> polluting caches with unused data.
> >>
> >> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> >>
> >> --
> >>

> >> +
> >> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
> >> +
> >> +static void *lcore_buffer;
> >> +static size_t offset = RTE_MAX_LCORE_VAR;
> >> +
> >> +static void *
> >> +lcore_var_alloc(size_t size, size_t align)
> >> +{
> >> +    void *handle;
> >> +    void *value;
> >> +
> >> +    offset = RTE_ALIGN_CEIL(offset, align);
> >> +
> >> +    if (offset + size > RTE_MAX_LCORE_VAR) {
> >> +#ifdef RTE_EXEC_ENV_WINDOWS
> >> +            lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
> >> +                                           RTE_CACHE_LINE_SIZE);
> >> +#else
> >> +            lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> >> +                                         LCORE_BUFFER_SIZE);
> >> +#endif
> >> +            RTE_VERIFY(lcore_buffer != NULL);
> >> +
> >> +            offset = 0;
> >> +    }
> >> +
> >> +    handle = RTE_PTR_ADD(lcore_buffer, offset);
> >> +
> >> +    offset += size;
> >> +
> >> +    RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
> >> +            memset(value, 0, size);
> >> +
> >> +    EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
> >> +            "%"PRIuPTR"-byte alignment", size, align);
> >
> > Currently the data is allocated with a libc function; I think that's mainly to support the INIT macros, which run before main().
> > But it introduces the following problems:
> > 1\ It can't benefit from huge pages. This patch may reserve many 1 MB regions for each lcore; if we could place them in huge pages, it would reduce the TLB miss rate, especially for frequently-accessed data.
>
> This mechanism is for small allocations, the sum of which is also
> expected to be small (although the system won't break if they aren't).
>
> If you have large allocations, you are better off using lazy huge page
> allocations further down the initialization process. Otherwise, you will
> end up using memory for RTE_MAX_LCORE instances, rather than the actual
> lcore count, which could be substantially smaller.

+ @Anatoly Burakov

If I am not wrong, the DPDK huge page memory allocator (rte_malloc()) may
have overhead similar to glibc's. Meaning, a hugepage is allocated only
when needed and the existing space is exhausted.
If so, why not use rte_malloc() when it is available?



>
> But sure, everything else being equal, you could have used huge pages
> for these lcore variable values. But everything isn't equal.
>
> > 2\ It can't work across multiple processes. Much of the current per-lcore data also doesn't support multi-process, but I think it's worth doing, and it would help with service recovery when a secondary process fails and reboots.
> >
> > ...
> >
>
> Not sure I think that's a downside. Further cementing that anti-pattern
> into DPDK seems to be a bad idea to me.
>
> Lcore variables don't *introduce* any of these issues, since the
> mechanisms they're replacing have the same shortcomings (if you think
> of them as such - I'm not sure I do).
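
Regarding the rte_malloc() suggestion above: as noted earlier in the
thread, lcore variables are typically allocated from RTE_INIT()
constructors, which run before main() and thus before rte_eal_init()
has set up the hugepage heaps that back rte_malloc(). A minimal sketch
of that ordering, with hypothetical names:

#include <rte_eal.h>
#include <rte_lcore_var.h>

/* hypothetical per-lcore module state */
struct foo_state {
	int a;
};

static RTE_LCORE_VAR_HANDLE(struct foo_state, foo_states);

/* Expands to an RTE_INIT() constructor; at this point only the libc
 * heap exists, which is why lcore_var_alloc() draws from it.
 */
RTE_LCORE_VAR_INIT(foo_states);

int
main(int argc, char **argv)
{
	rte_eal_init(argc, argv); /* rte_malloc() usable only from here on */

	return 0;
}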

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v2 2/6] eal: add lcore variable test suite
  2024-09-11 17:04                             ` [PATCH v2 2/6] eal: add lcore variable test suite Mattias Rönnblom
@ 2024-09-12  7:35                               ` Jerin Jacob
  2024-09-12  8:56                                 ` Mattias Rönnblom
  0 siblings, 1 reply; 185+ messages in thread
From: Jerin Jacob @ 2024-09-12  7:35 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand

On Wed, Sep 11, 2024 at 11:08 PM Mattias Rönnblom
<mattias.ronnblom@ericsson.com> wrote:
>
> Add test suite to exercise the <rte_lcore_var.h> API.
>
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
>
> --
>
> RFC v5:
>  * Adapt tests to reflect the removal of the GET() and SET() macros.
>
> RFC v4:
>  * Check all lcore id's values for all variables in the many variables
>    test case.
>  * Introduce test case for max-sized lcore variables.
>
> RFC v2:
>  * Improve alignment-related test coverage.
> ---
>  app/test/meson.build      |   1 +
>  app/test/test_lcore_var.c | 432 ++++++++++++++++++++++++++++++++++++++
>  2 files changed, 433 insertions(+)
>  create mode 100644 app/test/test_lcore_var.c
>
> diff --git a/app/test/meson.build b/app/test/meson.build
> index e29258e6ec..48279522f0 100644
> --- a/app/test/meson.build
> +++ b/app/test/meson.build
> @@ -103,6 +103,7 @@ source_file_deps = {
>      'test_ipsec_sad.c': ['ipsec'],
>      'test_kvargs.c': ['kvargs'],
>      'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
> +    'test_lcore_var.c': [],
>      'test_lcores.c': [],
>      'test_link_bonding.c': ['ethdev', 'net_bond',
> +}
> +
> +REGISTER_FAST_TEST(lcore_var_autotest, true, false, test_lcore_var);

IMO, it would be good to add a perf test suite for these operations,
like other library calls have. It could be compared with TLS for the
same operation, so that end users can decide whether to use the scheme
based on their use case, and we get a performance test case to avoid
future regressions in this library.

It may not show any difference in numbers now, but once we have
self-monitoring performance counters[1] it can in the future.
[1]
https://patches.dpdk.org/project/dpdk/patch/20230201131757.1787527-1-tduszynski@marvell.com/




> --
> 2.34.1
>

^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v3 0/7] Lcore variables
  2024-09-11 17:04                             ` [PATCH v2 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-09-12  2:33                               ` fengchengwen
@ 2024-09-12  8:44                               ` Mattias Rönnblom
  2024-09-12  8:44                                 ` [PATCH v3 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
                                                   ` (6 more replies)
  2024-09-12  9:10                               ` [PATCH v2 1/6] eal: add static per-lcore memory allocation facility Morten Brørup
  2 siblings, 7 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-12  8:44 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

This patch set introduces a new API <rte_lcore_var.h> for static
per-lcore id data allocation.

Please refer to the <rte_lcore_var.h> API documentation for both a
rationale for this new API, and a comparison to the alternatives
available.

The adoption of this API would affect many different DPDK modules, but
the author updated only a few, mostly to serve as examples in this
RFC, and to iron out some, but surely not all, wrinkles in the API.

The question on how to best allocate static per-lcore memory has been
up several times on the dev mailing list, for example in the thread on
"random: use per lcore state" RFC by Stephen Hemminger.

Lcore variables are surely not the answer to all your per-lcore-data
needs, since they only allow for more-or-less static allocation. In the
author's opinion, they do however provide a reasonably simple, clean,
and seemingly very performant solution to a real problem.

Mattias Rönnblom (7):
  eal: add static per-lcore memory allocation facility
  eal: add lcore variable functional tests
  eal: add lcore variable performance test
  random: keep PRNG state in lcore variable
  power: keep per-lcore state in lcore variable
  service: keep per-lcore state in lcore variable
  eal: keep per-lcore power intrinsics state in lcore variable

 MAINTAINERS                            |   6 +
 app/test/meson.build                   |   2 +
 app/test/test_lcore_var.c              | 432 +++++++++++++++++++++++++
 app/test/test_lcore_var_perf.c         | 160 +++++++++
 config/rte_config.h                    |   1 +
 doc/api/doxy-api-index.md              |   1 +
 doc/guides/rel_notes/release_24_11.rst |  14 +
 lib/eal/common/eal_common_lcore_var.c  |  78 +++++
 lib/eal/common/meson.build             |   1 +
 lib/eal/common/rte_random.c            |  28 +-
 lib/eal/common/rte_service.c           | 115 ++++---
 lib/eal/include/meson.build            |   1 +
 lib/eal/include/rte_lcore_var.h        | 385 ++++++++++++++++++++++
 lib/eal/version.map                    |   2 +
 lib/eal/x86/rte_power_intrinsics.c     |  17 +-
 lib/power/rte_power_pmd_mgmt.c         |  34 +-
 16 files changed, 1190 insertions(+), 87 deletions(-)
 create mode 100644 app/test/test_lcore_var.c
 create mode 100644 app/test/test_lcore_var_perf.c
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v3 1/7] eal: add static per-lcore memory allocation facility
  2024-09-12  8:44                               ` [PATCH v3 0/7] Lcore variables Mattias Rönnblom
@ 2024-09-12  8:44                                 ` Mattias Rönnblom
  2024-09-16 10:52                                   ` [PATCH v4 0/7] Lcore variables Mattias Rönnblom
  2024-09-12  8:44                                 ` [PATCH v3 2/7] eal: add lcore variable functional tests Mattias Rönnblom
                                                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-12  8:44 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Introduce DPDK per-lcore id variables, or lcore variables for short.

An lcore variable has one value for every current and future lcore
id-equipped thread.

The primary <rte_lcore_var.h> use case is for statically allocating
small, frequently-accessed data structures, for which one instance
should exist for each lcore.

Lcore variables are similar to thread-local storage (TLS, e.g., C11
_Thread_local), but decoupling the values' lifetime from that of the
threads.

Lcore variables are also similar in functionality to that provided by
the FreeBSD kernel's DPCPU_*() family of macros and the associated
build-time machinery. DPCPU uses linker scripts, which effectively
prevents the reuse of its, otherwise seemingly viable, approach.

The currently-prevailing way to solve the same problem as lcore
variables is to keep a module's per-lcore data as an RTE_MAX_LCORE-sized
array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
lcore variables over this approach is that data related to the same
lcore is now close (spatially, in memory), rather than data used by
the same module, which in turn avoids excessive use of padding,
polluting caches with unused data.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

PATCH v2:
 * Add Windows support. (Morten Brørup)
 * Fix lcore variables API index reference. (Morten Brørup)
 * Various improvements of the API documentation. (Morten Brørup)
 * Elimination of unused symbol in version.map. (Morten Brørup)

PATCH:
 * Update MAINTAINERS and release notes.
 * Stop covering included files in extern "C" {}.

RFC v6:
 * Include <stdlib.h> to get aligned_alloc().
 * Tweak documentation (grammar).
 * Provide API-level guarantees that lcore variable values take on an
   initial value of zero.
 * Fix misplaced __rte_cache_aligned in the API doc example.

RFC v5:
 * In Doxygen, consistently use @<cmd> (and not \<cmd>).
 * The RTE_LCORE_VAR_GET() and SET() convenience access macros
   covered an uncommon use case, where the lcore value is of a
   primitive type, rather than a struct, and are thus eliminated
   from the API. (Morten Brørup)
 * In the wake of the GET()/SET() removal, rename RTE_LCORE_VAR_PTR()
   to RTE_LCORE_VAR_VALUE().
 * The underscores are removed from __rte_lcore_var_lcore_ptr() to
   signal that this function is a part of the public API.
 * Macro arguments are documented.

RFC v4:
 * Replace large static array with libc heap-allocated memory. One
   implication of this change is that there no longer exists a fixed
   upper bound for the total amount of memory used by lcore variables.
   RTE_MAX_LCORE_VAR has changed meaning, and now represents the
   maximum size of any individual lcore variable value.
 * Fix issues in example. (Morten Brørup)
 * Improve access macro type checking. (Morten Brørup)
 * Refer to the lcore variable handle as "handle" and not "name" in
   various macros.
 * Document lack of thread safety in rte_lcore_var_alloc().
 * Provide API-level assurance that the lcore variable handle is
   always non-NULL, to allow applications to use NULL to mean
   "not yet allocated".
 * Note zero-sized allocations are not allowed.
 * Give API-level guarantee the lcore variable values are zeroed.

RFC v3:
 * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
 * Update example to reflect FOREACH macro name change (in RFC v2).

RFC v2:
 * Use alignof to derive alignment requirements. (Morten Brørup)
 * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
   *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
 * Allow user-specified alignment, but limit max to cache line size.
---
 MAINTAINERS                            |   6 +
 config/rte_config.h                    |   1 +
 doc/api/doxy-api-index.md              |   1 +
 doc/guides/rel_notes/release_24_11.rst |  14 +
 lib/eal/common/eal_common_lcore_var.c  |  78 +++++
 lib/eal/common/meson.build             |   1 +
 lib/eal/include/meson.build            |   1 +
 lib/eal/include/rte_lcore_var.h        | 385 +++++++++++++++++++++++++
 lib/eal/version.map                    |   2 +
 9 files changed, 489 insertions(+)
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

diff --git a/MAINTAINERS b/MAINTAINERS
index c5a703b5c0..362d9a3f28 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -282,6 +282,12 @@ F: lib/eal/include/rte_random.h
 F: lib/eal/common/rte_random.c
 F: app/test/test_rand_perf.c
 
+Lcore Variables
+M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
+F: lib/eal/include/rte_lcore_var.h
+F: lib/eal/common/eal_common_lcore_var.c
+F: app/test/test_lcore_var.c
+
 ARM v7
 M: Wathsala Vithanage <wathsala.vithanage@arm.com>
 F: config/arm/
diff --git a/config/rte_config.h b/config/rte_config.h
index dd7bb0d35b..311692e498 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -41,6 +41,7 @@
 /* EAL defines */
 #define RTE_CACHE_GUARD_LINES 1
 #define RTE_MAX_HEAPS 32
+#define RTE_MAX_LCORE_VAR 1048576
 #define RTE_MAX_MEMSEG_LISTS 128
 #define RTE_MAX_MEMSEG_PER_LIST 8192
 #define RTE_MAX_MEM_MB_PER_LIST 32768
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index f9f0300126..ed577f14ee 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -99,6 +99,7 @@ The public API headers are grouped by topics:
   [interrupts](@ref rte_interrupts.h),
   [launch](@ref rte_launch.h),
   [lcore](@ref rte_lcore.h),
+  [lcore variables](@ref rte_lcore_var.h),
   [per-lcore](@ref rte_per_lcore.h),
   [service cores](@ref rte_service.h),
   [keepalive](@ref rte_keepalive.h),
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 0ff70d9057..a3884f7491 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -55,6 +55,20 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added EAL per-lcore static memory allocation facility.**
+
+    Added EAL API <rte_lcore_var.h> for statically allocating small,
+    frequently-accessed data structures, for which one instance should
+    exist for each EAL thread and registered non-EAL thread.
+
+    With lcore variables, data is organized spatially on a per-lcore id
+    basis, rather than per library or PMD, avoiding the need for cache
+    aligning (or RTE_CACHE_GUARDing) data structures, which in turn
+    reduces CPU cache internal fragmentation, improving performance.
+
+    Lcore variables are similar to thread-local storage (TLS, e.g.,
+    C11 _Thread_local), but decoupling the values' lifetime from that
+    of the threads.
 
 Removed Items
 -------------
diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
new file mode 100644
index 0000000000..309822039b
--- /dev/null
+++ b/lib/eal/common/eal_common_lcore_var.c
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdlib.h>
+
+#ifdef RTE_EXEC_ENV_WINDOWS
+#include <malloc.h>
+#endif
+
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_log.h>
+
+#include <rte_lcore_var.h>
+
+#include "eal_private.h"
+
+#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
+
+static void *lcore_buffer;
+static size_t offset = RTE_MAX_LCORE_VAR;
+
+static void *
+lcore_var_alloc(size_t size, size_t align)
+{
+	void *handle;
+	void *value;
+
+	offset = RTE_ALIGN_CEIL(offset, align);
+
+	if (offset + size > RTE_MAX_LCORE_VAR) {
+#ifdef RTE_EXEC_ENV_WINDOWS
+		lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
+					       RTE_CACHE_LINE_SIZE);
+#else
+		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
+					     LCORE_BUFFER_SIZE);
+#endif
+		RTE_VERIFY(lcore_buffer != NULL);
+
+		offset = 0;
+	}
+
+	handle = RTE_PTR_ADD(lcore_buffer, offset);
+
+	offset += size;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
+		memset(value, 0, size);
+
+	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
+		"%"PRIuPTR"-byte alignment", size, align);
+
+	return handle;
+}
+
+void *
+rte_lcore_var_alloc(size_t size, size_t align)
+{
+	/* Having the per-lcore buffer size aligned on cache lines,
+	 * as well as having the base pointer aligned on cache line
+	 * size, assures that aligned offsets also translate to aligned
+	 * pointers across all values.
+	 */
+	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
+	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
+	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
+
+	/* '0' means asking for worst-case alignment requirements */
+	if (align == 0)
+		align = alignof(max_align_t);
+
+	RTE_ASSERT(rte_is_power_of_2(align));
+
+	return lcore_var_alloc(size, align);
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 22a626ba6f..d41403680b 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -18,6 +18,7 @@ sources += files(
         'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
+        'eal_common_lcore_var.c',
         'eal_common_mcfg.c',
         'eal_common_memalloc.c',
         'eal_common_memory.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index e94b056d46..9449253e23 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -27,6 +27,7 @@ headers += files(
         'rte_keepalive.h',
         'rte_launch.h',
         'rte_lcore.h',
+        'rte_lcore_var.h',
         'rte_lock_annotations.h',
         'rte_malloc.h',
         'rte_mcslock.h',
diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
new file mode 100644
index 0000000000..ec3ab714a8
--- /dev/null
+++ b/lib/eal/include/rte_lcore_var.h
@@ -0,0 +1,385 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#ifndef _RTE_LCORE_VAR_H_
+#define _RTE_LCORE_VAR_H_
+
+/**
+ * @file
+ *
+ * RTE Lcore variables
+ *
+ * This API provides a mechanism to create and access per-lcore id
+ * variables in a space- and cycle-efficient manner.
+ *
+ * A per-lcore id variable (or lcore variable for short) has one value
+ * for each EAL thread and registered non-EAL thread. There is one
+ * instance for each current and future lcore id-equipped thread, with
+ * a total of RTE_MAX_LCORE instances. The value of an lcore variable
+ * for a particular lcore id is independent from other values (for
+ * other lcore ids) within the same lcore variable.
+ *
+ * In order to access the values of an lcore variable, a handle is
+ * used. The type of the handle is a pointer to the value's type
+ * (e.g., for a @c uint32_t lcore variable, the handle is a
+ * <code>uint32_t *</code>). The handle type is used to inform the
+ * access macros of the type of the values. A handle may be passed
+ * between modules and threads just like any pointer, but its value
+ * must be treated as an opaque identifier. An allocated handle
+ * never has the value NULL.
+ *
+ * @b Creation
+ *
+ * An lcore variable is created in two steps:
+ *  1. Define an lcore variable handle by using @ref RTE_LCORE_VAR_HANDLE.
+ *  2. Allocate lcore variable storage and initialize the handle with
+ *     a unique identifier by @ref RTE_LCORE_VAR_ALLOC or
+ *     @ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the time of
+ *     module initialization, but may be done at any time.
+ *
+ * An lcore variable is not tied to the owning thread's lifetime. It's
+ * available for use by any thread immediately after having been
+ * allocated, and continues to be available throughout the lifetime of
+ * the EAL.
+ *
+ * Lcore variables cannot and need not be freed.
+ *
+ * @b Access
+ *
+ * The value of any lcore variable for any lcore id may be accessed
+ * from any thread (including unregistered threads), but it should
+ * only be *frequently* read from or written to by the owner.
+ *
+ * Values of the same lcore variable but owned by two different lcore
+ * ids may be frequently read or written by the owners without risking
+ * false sharing.
+ *
+ * An appropriate synchronization mechanism (e.g., atomic loads and
+ * stores) should be employed to assure there are no data races between
+ * the owning thread and any non-owner threads accessing the same
+ * lcore variable instance.
+ *
+ * The value of the lcore variable for a particular lcore id is
+ * accessed using @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * A common pattern is for an EAL thread or a registered non-EAL
+ * thread to access its own lcore variable value. For this purpose, a
+ * short-hand exists in the form of @ref RTE_LCORE_VAR_VALUE.
+ *
+ * Although the handle (as defined by @ref RTE_LCORE_VAR_HANDLE) is a
+ * pointer with the same type as the value, it may not be directly
+ * dereferenced and must be treated as an opaque identifier.
+ *
+ * Lcore variable handles and value pointers may be freely passed
+ * between different threads.
+ *
+ * @b Storage
+ *
+ * An lcore variable's values may be of a primitive type like @c int,
+ * but would more typically be a @c struct.
+ *
+ * The lcore variable handle introduces a per-variable (not
+ * per-value/per-lcore id) overhead of @c sizeof(void *) bytes, so
+ * there are some memory footprint gains to be made by organizing all
+ * per-lcore id data for a particular module as one lcore variable
+ * (e.g., as a struct).
+ *
+ * An application may choose to define an lcore variable handle, which
+ * it then never allocates.
+ *
+ * The size of an lcore variable's value must be less than the DPDK
+ * build-time constant @c RTE_MAX_LCORE_VAR.
+ *
+ * Lcore variables are stored in a series of lcore buffers, which
+ * are allocated from the libc heap. Heap allocation failures are
+ * treated as fatal.
+ *
+ * Lcore variables should generally *not* be @ref __rte_cache_aligned
+ * and need *not* include a @ref RTE_CACHE_GUARD field, since these
+ * constructs are designed to avoid false sharing. In the
+ * case of an lcore variable instance, the thread most recently
+ * accessing nearby data structures should almost-always be the lcore
+ * variables' owner. Adding padding will increase the effective memory
+ * working set size, potentially reducing performance.
+ *
+ * Lcore variable values take on an initial value of zero.
+ *
+ * @b Example
+ *
+ * Below is an example of the use of an lcore variable:
+ *
+ * @code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ * };
+ *
+ * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
+ *
+ * long foo_get_a_plus_b(void)
+ * {
+ *         struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(lcore_states);
+ *
+ *         return state->a + state->b;
+ * }
+ *
+ * RTE_INIT(rte_foo_init)
+ * {
+ *         RTE_LCORE_VAR_ALLOC(lcore_states);
+ *
+ *         struct foo_lcore_state *state;
+ *         RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
+ *                 (initialize 'state')
+ *         }
+ *
+ *         (other initialization)
+ * }
+ * @endcode
+ *
+ *
+ * @b Alternatives
+ *
+ * Lcore variables are designed to replace a pattern exemplified below:
+ * @code{.c}
+ * struct __rte_cache_aligned foo_lcore_state {
+ *         int a;
+ *         long b;
+ *         RTE_CACHE_GUARD;
+ * };
+ *
+ * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
+ * @endcode
+ *
+ * This scheme is simple and effective, but has one drawback: the data
+ * is organized so that objects related to all lcores for a particular
+ * module are kept close in memory. At a bare minimum, this requires
+ * sizing data structures (e.g., using `__rte_cache_aligned`) to an
+ * even number of cache lines to avoid false sharing. With CPU
+ * hardware prefetching and memory loads resulting from speculative
+ * execution (functions which seemingly are getting more eager faster
+ * than they are getting more intelligent), one or more "guard" cache
+ * lines may be required to separate one lcore's data from another's.
+ *
+ * Lcore variables have the upside of working with, not against, the
+ * CPU's assumptions, and for example next-line prefetchers may well
+ * work the way their designers intended (i.e., to the benefit, not
+ * detriment, of system performance).
+ *
+ * Another alternative to @ref rte_lcore_var.h is the @ref
+ * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
+ * e.g., GCC __thread or C11 _Thread_local). The main differences
+ * between by using the various forms of TLS (e.g., @ref
+ * RTE_DEFINE_PER_LCORE or _Thread_local) and the use of lcore
+ * variables are:
+ *
+ *   * The existence and non-existence of a thread-local variable
+ *     instance follow that of the particular thread. The data cannot be
+ *     accessed before the thread has been created, nor after it has
+ *     exited. As a result, thread-local variables must be initialized in
+ *     a "lazy" manner (e.g., at the point of thread creation). Lcore
+ *     variables may be accessed immediately after having been
+ *     allocated (which may be prior to any thread beyond the main
+ *     thread running).
+ *   * A thread-local variable is duplicated across all threads in the
+ *     process, including unregistered non-EAL threads (i.e.,
+ *     "regular" threads). For DPDK applications heavily relying on
+ *     multi-threading (in conjunction with DPDK's "one thread per core"
+ *     pattern), either by having many concurrent threads or
+ *     creating/destroying threads at a high rate, an excessive use of
+ *     thread-local variables may cause inefficiencies (e.g.,
+ *     increased thread creation overhead due to thread-local storage
+ *     initialization or increased total RAM footprint usage). Lcore
+ *     variables *only* exist for threads with an lcore id.
+ *   * Whether data in thread-local storage may be shared between
+ *     threads (i.e., whether a pointer to a thread-local variable can
+ *     be passed to and successfully dereferenced by a non-owning
+ *     thread) depends on
+ *     the details of the TLS implementation. With GCC __thread and
+ *     GCC _Thread_local, such data sharing is supported. In the C11
+ *     standard, the result of accessing another thread's
+ *     _Thread_local object is implementation-defined. Lcore variable
+ *     instances may be accessed reliably by any thread.
+ */
+
+#include <stddef.h>
+#include <stdalign.h>
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_lcore.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Given the lcore variable type, produces the type of the lcore
+ * variable handle.
+ */
+#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
+	type *
+
+/**
+ * Define an lcore variable handle.
+ *
+ * This macro defines a variable which is used as a handle to access
+ * the various instances of a per-lcore id variable.
+ *
+ * The aim with this macro is to make clear at the point of
+ * declaration that this is an lcore handle, rather than a regular
+ * pointer.
+ *
+ * Add @b static as a prefix in case the lcore variable is only to be
+ * accessed from a particular translation unit.
+ */
+#define RTE_LCORE_VAR_HANDLE(type, name)	\
+	RTE_LCORE_VAR_HANDLE_TYPE(type) name
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, align)	\
+	handle = rte_lcore_var_alloc(size, align)
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle,
+ * with values aligned for any type of object.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE(handle, size)	\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, 0)
+
+/**
+ * Allocate space for an lcore variable of the size and alignment requirements
+ * suggested by the handle pointer type, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC(handle)					\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, sizeof(*(handle)),	\
+				       alignof(typeof(*(handle))))
+
+/**
+ * Allocate an explicitly-sized, explicitly-aligned lcore variable by
+ * means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
+	}
+
+/**
+ * Allocate an explicitly-sized lcore variable by means of a @ref
+ * RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
+	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
+
+/**
+ * Allocate an lcore variable by means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT(name)					\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC(name);				\
+	}
+
+/**
+ * Get void pointer to lcore variable instance with the specified
+ * lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
+ */
+static inline void *
+rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
+{
+	return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
+}
+
+/**
+ * Get pointer to lcore variable instance with the specified lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle)			\
+	((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
+
+/**
+ * Get pointer to lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_VALUE(handle) \
+	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
+
+/**
+ * Iterate over each lcore id's value for an lcore variable.
+ *
+ * @param value
+ *   A pointer successively set to point to lcore variable value
+ *   corresponding to every lcore id (up to @c RTE_MAX_LCORE).
+ * @param handle
+ *   The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle)			\
+	for (unsigned int lcore_id =					\
+		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
+	     lcore_id < RTE_MAX_LCORE;					\
+	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))
+
+/**
+ * Allocate space in the per-lcore id buffers for an lcore variable.
+ *
+ * The pointer returned is only an opaque identifier of the variable. To
+ * get an actual pointer to a particular instance of the variable use
+ * @ref RTE_LCORE_VAR_VALUE or @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * The lcore variable values' memory is set to zero.
+ *
+ * The allocation is always successful, barring a fatal exhaustion of
+ * the per-lcore id buffer space.
+ *
+ * rte_lcore_var_alloc() is not multi-thread safe.
+ *
+ * @param size
+ *   The size (in bytes) of the variable's per-lcore id value. Must be > 0.
+ * @param align
+ *   If 0, the values will be suitably aligned for any kind of type
+ *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
+ *   on a multiple of *align*, which must be a power of 2 and equal or
+ *   less than @c RTE_CACHE_LINE_SIZE.
+ * @return
+ *   The variable's handle, stored in a void pointer value. The value
+ *   is always non-NULL.
+ */
+__rte_experimental
+void *
+rte_lcore_var_alloc(size_t size, size_t align);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LCORE_VAR_H_ */
diff --git a/lib/eal/version.map b/lib/eal/version.map
index e3ff412683..0c80bf7331 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -396,6 +396,8 @@ EXPERIMENTAL {
 
 	# added in 24.03
 	rte_vfio_get_device_info; # WINDOWS_NO_EXPORT
+
+	rte_lcore_var_alloc;
 };
 
 INTERNAL {
-- 
2.34.1
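
To make the address layout implemented by lcore_var_alloc() and
rte_lcore_var_lcore_ptr() above concrete: the heap-allocated buffer
holds RTE_MAX_LCORE equally-sized slices, one per lcore id, and a
handle is simply the address of the variable's storage within lcore
0's slice. A sketch restating the patch's pointer arithmetic:

/*
 * lcore_buffer: | lcore 0 slice | lcore 1 slice | ...
 *                <-- RTE_MAX_LCORE_VAR --> bytes per slice
 *
 * handle   = lcore_buffer + offset
 * value(N) = handle + N * RTE_MAX_LCORE_VAR
 */
static inline void *
value_for_lcore(void *handle, unsigned int lcore_id)
{
	return (unsigned char *)handle +
		(size_t)lcore_id * RTE_MAX_LCORE_VAR;
}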


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v3 2/7] eal: add lcore variable functional tests
  2024-09-12  8:44                               ` [PATCH v3 0/7] Lcore variables Mattias Rönnblom
  2024-09-12  8:44                                 ` [PATCH v3 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-09-12  8:44                                 ` Mattias Rönnblom
  2024-09-12  8:44                                 ` [PATCH v3 3/7] eal: add lcore variable performance test Mattias Rönnblom
                                                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-12  8:44 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Add functional test suite to exercise the <rte_lcore_var.h> API.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

RFC v5:
 * Adapt tests to reflect the removal of the GET() and SET() macros.

RFC v4:
 * Check all lcore id's values for all variables in the many variables
   test case.
 * Introduce test case for max-sized lcore variables.

RFC v2:
 * Improve alignment-related test coverage.
---
 app/test/meson.build      |   1 +
 app/test/test_lcore_var.c | 432 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 433 insertions(+)
 create mode 100644 app/test/test_lcore_var.c

diff --git a/app/test/meson.build b/app/test/meson.build
index e29258e6ec..48279522f0 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -103,6 +103,7 @@ source_file_deps = {
     'test_ipsec_sad.c': ['ipsec'],
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
+    'test_lcore_var.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var.c b/app/test/test_lcore_var.c
new file mode 100644
index 0000000000..e07d13460f
--- /dev/null
+++ b/app/test/test_lcore_var.c
@@ -0,0 +1,432 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_launch.h>
+#include <rte_lcore_var.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+#define MIN_LCORES 2
+
+RTE_LCORE_VAR_HANDLE(int, test_int);
+RTE_LCORE_VAR_HANDLE(char, test_char);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized);
+RTE_LCORE_VAR_HANDLE(short, test_short);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized_aligned);
+
+struct int_checker_state {
+	int old_value;
+	int new_value;
+	bool success;
+};
+
+static void
+rand_blk(void *blk, size_t size)
+{
+	size_t i;
+
+	for (i = 0; i < size; i++)
+		((unsigned char *)blk)[i] = (unsigned char)rte_rand();
+}
+
+static bool
+is_ptr_aligned(const void *ptr, size_t align)
+{
+	return ptr != NULL ? (uintptr_t)ptr % align == 0 : false;
+}
+
+static int
+check_int(void *arg)
+{
+	struct int_checker_state *state = arg;
+
+	int *ptr = RTE_LCORE_VAR_VALUE(test_int);
+
+	bool naturally_aligned = is_ptr_aligned(ptr, sizeof(int));
+
+	bool equal = *(RTE_LCORE_VAR_VALUE(test_int)) == state->old_value;
+
+	state->success = equal && naturally_aligned;
+
+	*ptr = state->new_value;
+
+	return 0;
+}
+
+RTE_LCORE_VAR_INIT(test_int);
+RTE_LCORE_VAR_INIT(test_char);
+RTE_LCORE_VAR_INIT_SIZE(test_long_sized, 32);
+RTE_LCORE_VAR_INIT(test_short);
+RTE_LCORE_VAR_INIT_SIZE_ALIGN(test_long_sized_aligned, sizeof(long),
+			      RTE_CACHE_LINE_SIZE);
+
+static int
+test_int_lvar(void)
+{
+	unsigned int lcore_id;
+
+	struct int_checker_state states[RTE_MAX_LCORE] = {};
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+
+		state->old_value = (int)rte_rand();
+		state->new_value = (int)rte_rand();
+
+		*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int) =
+			state->old_value;
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_int, &states[lcore_id], lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+		int value;
+
+		TEST_ASSERT(state->success, "Unexpected value "
+			    "encountered on lcore %d", lcore_id);
+
+		value = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int);
+		TEST_ASSERT_EQUAL(state->new_value, value,
+				  "Lcore %d failed to update int", lcore_id);
+	}
+
+	/* take the opportunity to test the foreach macro */
+	int *v;
+	lcore_id = 0;
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_int) {
+		TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
+				  "Unexpected value on lcore %d during "
+				  "iteration", lcore_id);
+		lcore_id++;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_sized_alignment(void)
+{
+	long *v;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized) {
+		TEST_ASSERT(is_ptr_aligned(v, alignof(long)),
+			    "Type-derived alignment failed");
+	}
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized_aligned) {
+		TEST_ASSERT(is_ptr_aligned(v, RTE_CACHE_LINE_SIZE),
+			    "Explicit alignment failed");
+	}
+
+	return TEST_SUCCESS;
+}
+
+/* private, larger, struct */
+#define TEST_STRUCT_DATA_SIZE 1234
+
+struct test_struct {
+	uint8_t data[TEST_STRUCT_DATA_SIZE];
+};
+
+static RTE_LCORE_VAR_HANDLE(char, before_struct);
+static RTE_LCORE_VAR_HANDLE(struct test_struct, test_struct);
+static RTE_LCORE_VAR_HANDLE(char, after_struct);
+
+struct struct_checker_state {
+	struct test_struct old_value;
+	struct test_struct new_value;
+	bool success;
+};
+
+static int check_struct(void *arg)
+{
+	struct struct_checker_state *state = arg;
+
+	struct test_struct *lcore_struct = RTE_LCORE_VAR_VALUE(test_struct);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_struct, alignof(struct test_struct));
+
+	bool equal = memcmp(lcore_struct->data, state->old_value.data,
+			    TEST_STRUCT_DATA_SIZE) == 0;
+
+	state->success = equal && properly_aligned;
+
+	memcpy(lcore_struct->data, state->new_value.data,
+	       TEST_STRUCT_DATA_SIZE);
+
+	return 0;
+}
+
+static int
+test_struct_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_struct);
+	RTE_LCORE_VAR_ALLOC(test_struct);
+	RTE_LCORE_VAR_ALLOC(after_struct);
+
+	struct struct_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+
+		rand_blk(state->old_value.data, TEST_STRUCT_DATA_SIZE);
+		rand_blk(state->new_value.data, TEST_STRUCT_DATA_SIZE);
+
+		memcpy(RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct)->data,
+		       state->old_value.data, TEST_STRUCT_DATA_SIZE);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_struct, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+		struct test_struct *lstruct =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = memcmp(lstruct->data, state->new_value.data,
+				    TEST_STRUCT_DATA_SIZE) == 0;
+
+		TEST_ASSERT(equal, "Lcore %d failed to update struct",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_struct);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_struct);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "struct was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "struct was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define TEST_ARRAY_SIZE 99
+
+typedef uint16_t test_array_t[TEST_ARRAY_SIZE];
+
+static void test_array_init_rand(test_array_t a)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		a[i] = (uint16_t)rte_rand();
+}
+
+static bool test_array_equal(test_array_t a, test_array_t b)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++) {
+		if (a[i] != b[i])
+			return false;
+	}
+	return true;
+}
+
+static void test_array_copy(test_array_t dst, const test_array_t src)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		dst[i] = src[i];
+}
+
+static RTE_LCORE_VAR_HANDLE(char, before_array);
+static RTE_LCORE_VAR_HANDLE(test_array_t, test_array);
+static RTE_LCORE_VAR_HANDLE(char, after_array);
+
+struct array_checker_state {
+	test_array_t old_value;
+	test_array_t new_value;
+	bool success;
+};
+
+static int check_array(void *arg)
+{
+	struct array_checker_state *state = arg;
+
+	test_array_t *lcore_array = RTE_LCORE_VAR_VALUE(test_array);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_array, alignof(test_array_t));
+
+	bool equal = test_array_equal(*lcore_array, state->old_value);
+
+	state->success = equal && properly_aligned;
+
+	test_array_copy(*lcore_array, state->new_value);
+
+	return 0;
+}
+
+static int
+test_array_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_array);
+	RTE_LCORE_VAR_ALLOC(test_array);
+	RTE_LCORE_VAR_ALLOC(after_array);
+
+	struct array_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+
+		test_array_init_rand(state->new_value);
+		test_array_init_rand(state->old_value);
+
+		test_array_copy(*RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+							   test_array),
+				state->old_value);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_array, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+		test_array_t *larray =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_array);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = test_array_equal(*larray, state->new_value);
+
+		TEST_ASSERT(equal, "Lcore %d failed to update array",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_array);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_array);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "array was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "array was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define MANY_LVARS (2 * RTE_MAX_LCORE_VAR / sizeof(uint32_t))
+
+static int
+test_many_lvars(void)
+{
+	uint32_t **handlers = malloc(sizeof(uint32_t *) * MANY_LVARS);
+	unsigned int i;
+
+	TEST_ASSERT(handlers != NULL, "Unable to allocate memory");
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		RTE_LCORE_VAR_ALLOC(handlers[i]);
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t *v =
+				RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handlers[i]);
+			*v = (uint32_t)(i * lcore_id);
+		}
+	}
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t v = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+								handlers[i]);
+			TEST_ASSERT_EQUAL((uint32_t)(i * lcore_id), v,
+					  "Unexpected lcore variable value on "
+					  "lcore %d", lcore_id);
+		}
+	}
+
+	free(handlers);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_large_lvar(void)
+{
+	RTE_LCORE_VAR_HANDLE(unsigned char, large);
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC_SIZE(large, RTE_MAX_LCORE_VAR);
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+
+		memset(ptr, (unsigned char)lcore_id, RTE_MAX_LCORE_VAR);
+	}
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+		size_t i;
+
+		for (i = 0; i < RTE_MAX_LCORE_VAR; i++)
+			TEST_ASSERT_EQUAL(ptr[i], (unsigned char)lcore_id,
+					  "Large lcore variable value is "
+					  "corrupted on lcore %d.",
+					  lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_int_lvar),
+		TEST_CASE(test_sized_alignment),
+		TEST_CASE(test_struct_lvar),
+		TEST_CASE(test_array_lvar),
+		TEST_CASE(test_many_lvars),
+		TEST_CASE(test_large_lvar),
+		TEST_CASES_END()
+	},
+};
+
+static int test_lcore_var(void)
+{
+	if (rte_lcore_count() < MIN_LCORES) {
+		printf("Not enough cores for lcore_var_autotest; expecting at "
+		       "least %d.\n", MIN_LCORES);
+		return TEST_SKIPPED;
+	}
+
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
+REGISTER_FAST_TEST(lcore_var_autotest, true, false, test_lcore_var);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v3 3/7] eal: add lcore variable performance test
  2024-09-12  8:44                               ` [PATCH v3 0/7] Lcore variables Mattias Rönnblom
  2024-09-12  8:44                                 ` [PATCH v3 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-09-12  8:44                                 ` [PATCH v3 2/7] eal: add lcore variable functional tests Mattias Rönnblom
@ 2024-09-12  8:44                                 ` Mattias Rönnblom
  2024-09-12  9:39                                   ` Morten Brørup
  2024-09-12 13:09                                   ` Jerin Jacob
  2024-09-12  8:44                                 ` [PATCH v3 4/7] random: keep PRNG state in lcore variable Mattias Rönnblom
                                                   ` (3 subsequent siblings)
  6 siblings, 2 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-12  8:44 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Add basic micro benchmark for lcore variables, in an attempt to assure
that the overhead isn't significantly greater than alternative
approaches, in scenarios where the benefits aren't expected to show up
(i.e., when plenty of cache is available compared to the working set
size of the per-lcore data).

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 app/test/meson.build           |   1 +
 app/test/test_lcore_var_perf.c | 160 +++++++++++++++++++++++++++++++++
 2 files changed, 161 insertions(+)
 create mode 100644 app/test/test_lcore_var_perf.c

diff --git a/app/test/meson.build b/app/test/meson.build
index 48279522f0..d4e0c59900 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -104,6 +104,7 @@ source_file_deps = {
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
     'test_lcore_var.c': [],
+    'test_lcore_var_perf.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var_perf.c b/app/test/test_lcore_var_perf.c
new file mode 100644
index 0000000000..ea1d7ba90b
--- /dev/null
+++ b/app/test/test_lcore_var_perf.c
@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <stdio.h>
+
+#include <rte_cycles.h>
+#include <rte_lcore_var.h>
+#include <rte_per_lcore.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+struct lcore_state {
+	uint64_t a;
+	uint64_t b;
+	uint64_t sum;
+};
+
+static void
+init(struct lcore_state *state)
+{
+	state->a = rte_rand();
+	state->b = rte_rand();
+	state->sum = 0;
+}
+
+static __rte_always_inline void
+update(struct lcore_state *state)
+{
+	state->sum += state->a * state->b;
+}
+
+static RTE_DEFINE_PER_LCORE(struct lcore_state, tls_lcore_state);
+
+static void
+tls_init(void)
+{
+	init(&RTE_PER_LCORE(tls_lcore_state));
+}
+
+static __rte_noinline void
+tls_update(void)
+{
+	update(&RTE_PER_LCORE(tls_lcore_state));
+}
+
+struct __rte_cache_aligned lcore_state_aligned {
+	uint64_t a;
+	uint64_t b;
+	uint64_t sum;
+};
+
+static struct lcore_state_aligned sarray_lcore_state[RTE_MAX_LCORE];
+
+static void
+sarray_init(void)
+{
+	struct lcore_state *state =
+		(struct lcore_state *)&sarray_lcore_state[rte_lcore_id()];
+
+	init(state);
+}
+
+static __rte_noinline void
+sarray_update(void)
+{
+	struct lcore_state *state =
+		(struct lcore_state *)&sarray_lcore_state[rte_lcore_id()];
+
+	update(state);
+}
+
+RTE_LCORE_VAR_HANDLE(struct lcore_state, lvar_lcore_state);
+
+static void
+lvar_init(void)
+{
+	RTE_LCORE_VAR_ALLOC(lvar_lcore_state);
+
+	struct lcore_state *state = RTE_LCORE_VAR_VALUE(lvar_lcore_state);
+
+	init(state);
+}
+
+static __rte_noinline void
+lvar_update(void)
+{
+	struct lcore_state *state = RTE_LCORE_VAR_VALUE(lvar_lcore_state);
+
+	update(state);
+}
+
+#define ITERATIONS UINT64_C(10000000)
+
+static double
+benchmark_access_method(void (*init_fun)(void), void (*update_fun)(void))
+{
+	uint64_t i;
+	uint64_t start;
+	uint64_t end;
+	double latency;
+
+	init_fun();
+
+	start = rte_get_timer_cycles();
+
+	for (i = 0; i < ITERATIONS; i++)
+		update_fun();
+
+	end = rte_get_timer_cycles();
+
+	latency = ((end - start) / (double)rte_get_timer_hz()) / ITERATIONS;
+
+	return latency;
+}
+
+static int
+test_lcore_var_access(void)
+{
+	/* Note: the potential performance benefit of lcore variables
+	 * compared to thread-local storage or statically sized, lcore
+	 * id-indexed arrays is not shorter latencies in a scenario
+	 * with low cache pressure, but rather fewer cache misses in a
+	 * real-world scenario with extensive cache usage. These tests
+	 * just try to assure that the lcore variable overhead is not
+	 * significantly greater than that of the alternatives, when
+	 * the per-lcore data is in L1.
+	 */
+	double tls_latency;
+	double sarray_latency;
+	double lvar_latency;
+
+	tls_latency = benchmark_access_method(tls_init, tls_update);
+	sarray_latency = benchmark_access_method(sarray_init, sarray_update);
+	lvar_latency = benchmark_access_method(lvar_init, lvar_update);
+
+	printf("Latencies [ns/update]\n");
+	printf("Thread-local storage  Static array  Lcore variables\n");
+	printf("%20.1f %13.1f %16.1f\n", tls_latency * 1e9,
+	       sarray_latency * 1e9, lvar_latency * 1e9);
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable perf autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_lcore_var_access),
+		TEST_CASES_END()
+	},
+};
+
+static int
+test_lcore_var_perf(void)
+{
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
+REGISTER_PERF_TEST(lcore_var_perf_autotest, test_lcore_var_perf);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v3 4/7] random: keep PRNG state in lcore variable
  2024-09-12  8:44                               ` [PATCH v3 0/7] Lcore variables Mattias Rönnblom
                                                   ` (2 preceding siblings ...)
  2024-09-12  8:44                                 ` [PATCH v3 3/7] eal: add lcore variable performance test Mattias Rönnblom
@ 2024-09-12  8:44                                 ` Mattias Rönnblom
  2024-09-12  8:44                                 ` [PATCH v3 5/7] power: keep per-lcore " Mattias Rönnblom
                                                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-12  8:44 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Replace keeping PRNG state in a RTE_MAX_LCORE-sized static array of
cache-aligned and RTE_CACHE_GUARDed struct instances with keeping the
same state in a more cache-friendly lcore variable.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

RFC v3:
 * Remove cache alignment on unregistered threads' rte_rand_state.
   (Morten Brørup)
---
 lib/eal/common/rte_random.c | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index 90e91b3c4f..a8d00308dd 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -11,6 +11,7 @@
 #include <rte_branch_prediction.h>
 #include <rte_cycles.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_random.h>
 
 struct __rte_cache_aligned rte_rand_state {
@@ -19,14 +20,12 @@ struct __rte_cache_aligned rte_rand_state {
 	uint64_t z3;
 	uint64_t z4;
 	uint64_t z5;
-	RTE_CACHE_GUARD;
 };
 
-/* One instance each for every lcore id-equipped thread, and one
- * additional instance to be shared by all others threads (i.e., all
- * unregistered non-EAL threads).
- */
-static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];
+RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);
+
+/* instance to be shared by all unregistered non-EAL threads */
+static struct rte_rand_state unregistered_rand_state;
 
 static uint32_t
 __rte_rand_lcg32(uint32_t *seed)
@@ -85,8 +84,14 @@ rte_srand(uint64_t seed)
 	unsigned int lcore_id;
 
 	/* add lcore_id to seed to avoid having the same sequence */
-	for (lcore_id = 0; lcore_id < RTE_DIM(rand_states); lcore_id++)
-		__rte_srand_lfsr258(seed + lcore_id, &rand_states[lcore_id]);
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		struct rte_rand_state *lcore_state =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, rand_state);
+
+		__rte_srand_lfsr258(seed + lcore_id, lcore_state);
+	}
+
+	__rte_srand_lfsr258(seed + lcore_id, &unregistered_rand_state);
 }
 
 static __rte_always_inline uint64_t
@@ -124,11 +129,10 @@ struct rte_rand_state *__rte_rand_get_state(void)
 
 	idx = rte_lcore_id();
 
-	/* last instance reserved for unregistered non-EAL threads */
 	if (unlikely(idx == LCORE_ID_ANY))
-		idx = RTE_MAX_LCORE;
+		return &unregistered_rand_state;
 
-	return &rand_states[idx];
+	return RTE_LCORE_VAR_VALUE(rand_state);
 }
 
 uint64_t
@@ -228,6 +232,8 @@ RTE_INIT(rte_rand_init)
 {
 	uint64_t seed;
 
+	RTE_LCORE_VAR_ALLOC(rand_state);
+
 	seed = __rte_random_initial_seed();
 
 	rte_srand(seed);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v3 5/7] power: keep per-lcore state in lcore variable
  2024-09-12  8:44                               ` [PATCH v3 0/7] Lcore variables Mattias Rönnblom
                                                   ` (3 preceding siblings ...)
  2024-09-12  8:44                                 ` [PATCH v3 4/7] random: keep PRNG state in lcore variable Mattias Rönnblom
@ 2024-09-12  8:44                                 ` Mattias Rönnblom
  2024-09-12  8:44                                 ` [PATCH v3 6/7] service: " Mattias Rönnblom
  2024-09-12  8:44                                 ` [PATCH v3 7/7] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-12  8:44 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

RFC v3:
 * Replace for loop with FOREACH macro.
---
 lib/power/rte_power_pmd_mgmt.c | 34 ++++++++++++++++------------------
 1 file changed, 16 insertions(+), 18 deletions(-)

diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index b1c18a5f56..a5139dd4f7 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -5,6 +5,7 @@
 #include <stdlib.h>
 
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_cycles.h>
 #include <rte_cpuflags.h>
 #include <rte_malloc.h>
@@ -69,7 +70,7 @@ struct __rte_cache_aligned pmd_core_cfg {
 	uint64_t sleep_target;
 	/**< Prevent a queue from triggering sleep multiple times */
 };
-static struct pmd_core_cfg lcore_cfgs[RTE_MAX_LCORE];
+static RTE_LCORE_VAR_HANDLE(struct pmd_core_cfg, lcore_cfgs);
 
 static inline bool
 queue_equal(const union queue *l, const union queue *r)
@@ -252,12 +253,11 @@ clb_multiwait(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	/* early exit */
 	if (likely(!empty))
@@ -317,13 +317,12 @@ clb_pause(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 	uint32_t pause_duration = rte_power_pmd_mgmt_get_pause_duration();
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	if (likely(!empty))
 		/* early exit */
@@ -358,9 +357,8 @@ clb_scale_freq(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	const bool empty = nb_rx == 0;
-	struct pmd_core_cfg *lcore_conf = &lcore_cfgs[lcore];
+	struct pmd_core_cfg *lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 	struct queue_list_entry *queue_conf = arg;
 
 	if (likely(!empty)) {
@@ -518,7 +516,7 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 		goto end;
 	}
 
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -619,7 +617,7 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 	}
 
 	/* no need to check queue id as wrong queue id would not be enabled */
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -769,21 +767,21 @@ rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore)
 }
 
 RTE_INIT(rte_power_ethdev_pmgmt_init) {
-	size_t i;
-	int j;
+	struct pmd_core_cfg *lcore_cfg;
+	int i;
+
+	RTE_LCORE_VAR_ALLOC(lcore_cfgs);
 
 	/* initialize all tailqs */
-	for (i = 0; i < RTE_DIM(lcore_cfgs); i++) {
-		struct pmd_core_cfg *cfg = &lcore_cfgs[i];
-		TAILQ_INIT(&cfg->head);
-	}
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_cfg, lcore_cfgs)
+		TAILQ_INIT(&lcore_cfg->head);
 
 	/* initialize config defaults */
 	emptypoll_max = 512;
 	pause_duration = 1;
 	/* scaling defaults out of range to ensure not used unless set by user or app */
-	for (j = 0; j < RTE_MAX_LCORE; j++) {
-		scale_freq_min[j] = 0;
-		scale_freq_max[j] = UINT32_MAX;
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		scale_freq_min[i] = 0;
+		scale_freq_max[i] = UINT32_MAX;
 	}
 }
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v3 6/7] service: keep per-lcore state in lcore variable
  2024-09-12  8:44                               ` [PATCH v3 0/7] Lcore variables Mattias Rönnblom
                                                   ` (4 preceding siblings ...)
  2024-09-12  8:44                                 ` [PATCH v3 5/7] power: keep per-lcore " Mattias Rönnblom
@ 2024-09-12  8:44                                 ` Mattias Rönnblom
  2024-09-12  8:44                                 ` [PATCH v3 7/7] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-12  8:44 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

RFC v6:
 * Remove a now-redundant lcore variable value memset().

RFC v5:
 * Fix lcore value pointer bug introduced by RFC v4.

RFC v4:
 * Remove strange-looking lcore value lookup potentially containing
   invalid lcore id. (Morten Brørup)
 * Replace misplaced tab with space. (Morten Brørup)
---
 lib/eal/common/rte_service.c | 115 +++++++++++++++++++----------------
 1 file changed, 63 insertions(+), 52 deletions(-)

diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index 56379930b6..03379f1588 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -11,6 +11,7 @@
 
 #include <eal_trace_internal.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
@@ -75,7 +76,7 @@ struct __rte_cache_aligned core_state {
 
 static uint32_t rte_service_count;
 static struct rte_service_spec_impl *rte_services;
-static struct core_state *lcore_states;
+static RTE_LCORE_VAR_HANDLE(struct core_state, lcore_states);
 static uint32_t rte_service_library_initialized;
 
 int32_t
@@ -101,12 +102,8 @@ rte_service_init(void)
 		goto fail_mem;
 	}
 
-	lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
-			sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
-	if (!lcore_states) {
-		EAL_LOG(ERR, "error allocating core states array");
-		goto fail_mem;
-	}
+	if (lcore_states == NULL)
+		RTE_LCORE_VAR_ALLOC(lcore_states);
 
 	int i;
 	struct rte_config *cfg = rte_eal_get_configuration();
@@ -122,7 +119,6 @@ rte_service_init(void)
 	return 0;
 fail_mem:
 	rte_free(rte_services);
-	rte_free(lcore_states);
 	return -ENOMEM;
 }
 
@@ -136,7 +132,6 @@ rte_service_finalize(void)
 	rte_eal_mp_wait_lcore();
 
 	rte_free(rte_services);
-	rte_free(lcore_states);
 
 	rte_service_library_initialized = 0;
 }
@@ -286,7 +281,6 @@ rte_service_component_register(const struct rte_service_spec *spec,
 int32_t
 rte_service_component_unregister(uint32_t id)
 {
-	uint32_t i;
 	struct rte_service_spec_impl *s;
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
 
@@ -294,9 +288,10 @@ rte_service_component_unregister(uint32_t id)
 
 	s->internal_flags &= ~(SERVICE_F_REGISTERED);
 
+	struct core_state *cs;
 	/* clear the run-bit in all cores */
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		lcore_states[i].service_mask &= ~(UINT64_C(1) << id);
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		cs->service_mask &= ~(UINT64_C(1) << id);
 
 	memset(&rte_services[id], 0, sizeof(struct rte_service_spec_impl));
 
@@ -454,7 +449,10 @@ rte_service_may_be_active(uint32_t id)
 		return -EINVAL;
 
 	for (i = 0; i < lcore_count; i++) {
-		if (lcore_states[ids[i]].service_active_on_lcore[id])
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(ids[i], lcore_states);
+
+		if (cs->service_active_on_lcore[id])
 			return 1;
 	}
 
@@ -464,7 +462,7 @@ rte_service_may_be_active(uint32_t id)
 int32_t
 rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)
 {
-	struct core_state *cs = &lcore_states[rte_lcore_id()];
+	struct core_state *cs =	RTE_LCORE_VAR_VALUE(lcore_states);
 	struct rte_service_spec_impl *s;
 
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
@@ -486,8 +484,7 @@ service_runner_func(void *arg)
 {
 	RTE_SET_USED(arg);
 	uint8_t i;
-	const int lcore = rte_lcore_id();
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_VALUE(lcore_states);
 
 	rte_atomic_store_explicit(&cs->thread_active, 1, rte_memory_order_seq_cst);
 
@@ -533,13 +530,15 @@ service_runner_func(void *arg)
 int32_t
 rte_service_lcore_may_be_active(uint32_t lcore)
 {
-	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+	if (lcore >= RTE_MAX_LCORE || !cs->is_service_core)
 		return -EINVAL;
 
 	/* Load thread_active using ACQUIRE to avoid instructions dependent on
 	 * the result being re-ordered before this load completes.
 	 */
-	return rte_atomic_load_explicit(&lcore_states[lcore].thread_active,
+	return rte_atomic_load_explicit(&cs->thread_active,
 			       rte_memory_order_acquire);
 }
 
@@ -547,9 +546,11 @@ int32_t
 rte_service_lcore_count(void)
 {
 	int32_t count = 0;
-	uint32_t i;
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		count += lcore_states[i].is_service_core;
+
+	struct core_state *cs;
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		count += cs->is_service_core;
+
 	return count;
 }
 
@@ -566,7 +567,8 @@ rte_service_lcore_list(uint32_t array[], uint32_t n)
 	uint32_t i;
 	uint32_t idx = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		struct core_state *cs = &lcore_states[i];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
 		if (cs->is_service_core) {
 			array[idx] = i;
 			idx++;
@@ -582,7 +584,7 @@ rte_service_lcore_count_services(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -634,30 +636,31 @@ rte_service_start_with_defaults(void)
 static int32_t
 service_update(uint32_t sid, uint32_t lcore, uint32_t *set, uint32_t *enabled)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	/* validate ID, or return error value */
 	if (!service_valid(sid) || lcore >= RTE_MAX_LCORE ||
-			!lcore_states[lcore].is_service_core)
+			!cs->is_service_core)
 		return -EINVAL;
 
 	uint64_t sid_mask = UINT64_C(1) << sid;
 	if (set) {
-		uint64_t lcore_mapped = lcore_states[lcore].service_mask &
-			sid_mask;
+		uint64_t lcore_mapped = cs->service_mask & sid_mask;
 
 		if (*set && !lcore_mapped) {
-			lcore_states[lcore].service_mask |= sid_mask;
+			cs->service_mask |= sid_mask;
 			rte_atomic_fetch_add_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 		if (!*set && lcore_mapped) {
-			lcore_states[lcore].service_mask &= ~(sid_mask);
+			cs->service_mask &= ~(sid_mask);
 			rte_atomic_fetch_sub_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 	}
 
 	if (enabled)
-		*enabled = !!(lcore_states[lcore].service_mask & (sid_mask));
+		*enabled = !!(cs->service_mask & (sid_mask));
 
 	return 0;
 }
@@ -685,13 +688,14 @@ set_lcore_state(uint32_t lcore, int32_t state)
 {
 	/* mark core state in hugepage backed config */
 	struct rte_config *cfg = rte_eal_get_configuration();
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	cfg->lcore_role[lcore] = state;
 
 	/* mark state in process local lcore_config */
 	lcore_config[lcore].core_role = state;
 
 	/* update per-lcore optimized state tracking */
-	lcore_states[lcore].is_service_core = (state == ROLE_SERVICE);
+	cs->is_service_core = (state == ROLE_SERVICE);
 
 	rte_eal_trace_service_lcore_state_change(lcore, state);
 }
@@ -702,14 +706,16 @@ rte_service_lcore_reset_all(void)
 	/* loop over cores, reset all to mask 0 */
 	uint32_t i;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		if (lcore_states[i].is_service_core) {
-			lcore_states[i].service_mask = 0;
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
+		if (cs->is_service_core) {
+			cs->service_mask = 0;
 			set_lcore_state(i, ROLE_RTE);
 			/* runstate act as guard variable Use
 			 * store-release memory order here to synchronize
 			 * with load-acquire in runstate read functions.
 			 */
-			rte_atomic_store_explicit(&lcore_states[i].runstate,
+			rte_atomic_store_explicit(&cs->runstate,
 				RUNSTATE_STOPPED, rte_memory_order_release);
 		}
 	}
@@ -725,17 +731,19 @@ rte_service_lcore_add(uint32_t lcore)
 {
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
-	if (lcore_states[lcore].is_service_core)
+
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+	if (cs->is_service_core)
 		return -EALREADY;
 
 	set_lcore_state(lcore, ROLE_SERVICE);
 
 	/* ensure that after adding a core the mask and state are defaults */
-	lcore_states[lcore].service_mask = 0;
+	cs->service_mask = 0;
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	return rte_eal_wait_lcore(lcore);
@@ -747,7 +755,7 @@ rte_service_lcore_del(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -771,7 +779,7 @@ rte_service_lcore_start(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -801,6 +809,8 @@ rte_service_lcore_start(uint32_t lcore)
 int32_t
 rte_service_lcore_stop(uint32_t lcore)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
@@ -808,12 +818,11 @@ rte_service_lcore_stop(uint32_t lcore)
 	 * memory order here to synchronize with store-release
 	 * in runstate update functions.
 	 */
-	if (rte_atomic_load_explicit(&lcore_states[lcore].runstate, rte_memory_order_acquire) ==
+	if (rte_atomic_load_explicit(&cs->runstate, rte_memory_order_acquire) ==
 			RUNSTATE_STOPPED)
 		return -EALREADY;
 
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
 	uint64_t service_mask = cs->service_mask;
 
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
@@ -834,7 +843,7 @@ rte_service_lcore_stop(uint32_t lcore)
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	rte_eal_trace_service_lcore_stop(lcore);
@@ -845,7 +854,7 @@ rte_service_lcore_stop(uint32_t lcore)
 static uint64_t
 lcore_attr_get_loops(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->loops, rte_memory_order_relaxed);
 }
@@ -853,7 +862,7 @@ lcore_attr_get_loops(unsigned int lcore)
 static uint64_t
 lcore_attr_get_cycles(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->cycles, rte_memory_order_relaxed);
 }
@@ -861,7 +870,7 @@ lcore_attr_get_cycles(unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].calls,
 		rte_memory_order_relaxed);
@@ -870,7 +879,7 @@ lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_cycles(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].cycles,
 		rte_memory_order_relaxed);
@@ -886,7 +895,10 @@ attr_get(uint32_t id, lcore_attr_get_fun lcore_attr_get)
 	uint64_t sum = 0;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		if (lcore_states[lcore].is_service_core)
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+		if (cs->is_service_core)
 			sum += lcore_attr_get(id, lcore);
 	}
 
@@ -930,12 +942,11 @@ int32_t
 rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 			   uint64_t *attr_value)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE || !attr_value)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -960,7 +971,8 @@ rte_service_attr_reset_all(uint32_t id)
 		return -EINVAL;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		struct core_state *cs = &lcore_states[lcore];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 		cs->service_stats[id] = (struct service_stats) {};
 	}
@@ -971,12 +983,11 @@ rte_service_attr_reset_all(uint32_t id)
 int32_t
 rte_service_lcore_attr_reset_all(uint32_t lcore)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -1011,7 +1022,7 @@ static void
 service_dump_calls_per_lcore(FILE *f, uint32_t lcore)
 {
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	fprintf(f, "%02d\t", lcore);
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v3 7/7] eal: keep per-lcore power intrinsics state in lcore variable
  2024-09-12  8:44                               ` [PATCH v3 0/7] Lcore variables Mattias Rönnblom
                                                   ` (5 preceding siblings ...)
  2024-09-12  8:44                                 ` [PATCH v3 6/7] service: " Mattias Rönnblom
@ 2024-09-12  8:44                                 ` Mattias Rönnblom
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-12  8:44 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Keep per-lcore power intrinsics state in a lcore variable to reduce
cache working set size and avoid any CPU next-line-prefetching causing
false sharing.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/x86/rte_power_intrinsics.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index 6d9b64240c..f4ba2c8ecb 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -6,6 +6,7 @@
 
 #include <rte_common.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_rtm.h>
 #include <rte_spinlock.h>
 
@@ -14,10 +15,14 @@
 /*
  * Per-lcore structure holding current status of C0.2 sleeps.
  */
-static alignas(RTE_CACHE_LINE_SIZE) struct power_wait_status {
+struct power_wait_status {
 	rte_spinlock_t lock;
 	volatile void *monitor_addr; /**< NULL if not currently sleeping */
-} wait_status[RTE_MAX_LCORE];
+};
+
+RTE_LCORE_VAR_HANDLE(struct power_wait_status, wait_status);
+
+RTE_LCORE_VAR_INIT(wait_status);
 
 /*
  * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
@@ -172,7 +177,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 	if (pmc->fn == NULL)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/* update sleep address */
 	rte_spinlock_lock(&s->lock);
@@ -264,7 +269,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 	if (lcore_id >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/*
 	 * There is a race condition between sleep, wakeup and locking, but we
@@ -303,8 +308,8 @@ int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
 {
-	const unsigned int lcore_id = rte_lcore_id();
-	struct power_wait_status *s = &wait_status[lcore_id];
+	struct power_wait_status *s = RTE_LCORE_VAR_VALUE(wait_status);
+
 	uint32_t i, rc;
 
 	/* check if supported */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v2 2/6] eal: add lcore variable test suite
  2024-09-12  7:35                               ` Jerin Jacob
@ 2024-09-12  8:56                                 ` Mattias Rönnblom
  0 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-12  8:56 UTC (permalink / raw)
  To: Jerin Jacob, Mattias Rönnblom
  Cc: dev, Morten Brørup, Stephen Hemminger, Konstantin Ananyev,
	David Marchand

On 2024-09-12 09:35, Jerin Jacob wrote:
> On Wed, Sep 11, 2024 at 11:08 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
>>
>> Add test suite to exercise the <rte_lcore_var.h> API.
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> Acked-by: Morten Brørup <mb@smartsharesystems.com>
>>
>> --
>>
>> RFC v5:
>>   * Adapt tests to reflect the removal of the GET() and SET() macros.
>>
>> RFC v4:
>>   * Check all lcore id's values for all variables in the many variables
>>     test case.
>>   * Introduce test case for max-sized lcore variables.
>>
>> RFC v2:
>>   * Improve alignment-related test coverage.
>> ---
>>   app/test/meson.build      |   1 +
>>   app/test/test_lcore_var.c | 432 ++++++++++++++++++++++++++++++++++++++
>>   2 files changed, 433 insertions(+)
>>   create mode 100644 app/test/test_lcore_var.c
>>
>> diff --git a/app/test/meson.build b/app/test/meson.build
>> index e29258e6ec..48279522f0 100644
>> --- a/app/test/meson.build
>> +++ b/app/test/meson.build
>> @@ -103,6 +103,7 @@ source_file_deps = {
>>       'test_ipsec_sad.c': ['ipsec'],
>>       'test_kvargs.c': ['kvargs'],
>>       'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
>> +    'test_lcore_var.c': [],
>>       'test_lcores.c': [],
>>       'test_link_bonding.c': ['ethdev', 'net_bond',
>> +}
>> +
>> +REGISTER_FAST_TEST(lcore_var_autotest, true, false, test_lcore_var);
> 
> IMO, it would be good to add a perf test suite for the operations,
> like other library calls have. It may be compared with TLS on the
> same operation, so that end users can decide whether to use the
> scheme based on their use case, and we get a performance test case
> to avoid future regressions in this library.
> 

OK. I've added a micro benchmark.

> It may not show any difference in numbers now, but once we have
> self-monitoring performance counters [1], it can in the future.
> [1]
> https://patches.dpdk.org/project/dpdk/patch/20230201131757.1787527-1-tduszynski@marvell.com/
> 
> 
> 
> 
>> --
>> 2.34.1
>>

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v2 1/6] eal: add static per-lcore memory allocation facility
  2024-09-11 17:04                             ` [PATCH v2 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-09-12  2:33                               ` fengchengwen
  2024-09-12  8:44                               ` [PATCH v3 0/7] Lcore variables Mattias Rönnblom
@ 2024-09-12  9:10                               ` Morten Brørup
  2024-09-12 13:16                                 ` Jerin Jacob
  2 siblings, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-09-12  9:10 UTC (permalink / raw)
  To: Mattias Rönnblom, dev, Jerin Jacob, Chengwen Feng
  Cc: Mattias Rönnblom, Stephen Hemminger, Konstantin Ananyev,
	David Marchand, Anatoly Burakov

> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)

Considering hugepages...

Lcore variables may be allocated before DPDK's memory allocator (rte_malloc()) is ready, so rte_malloc() cannot be used for lcore variables.

And lcore variables are not usable (shared) for DPDK multi-process, so the lcore_buffer could be allocated through the O/S APIs as anonymous hugepages, instead of using rte_malloc().

The alternative, using rte_malloc(), would disallow allocating lcore variables before DPDK's memory allocator has been initialized, which I think is too late.

Anyway, hugepage backing is not a "must have" here, it is a "nice to have". It can be added to the lcore variables subsystem at a later time.


Here are some thoughts about optimizing for TLB entry usage...

If lcore variables use hugepages, and LCORE_BUFFER_SIZE matches the hugepage size (2 MB), all the lcore variables will only consume 1 hugepage TLB entry.
However, this may limit the max size of an lcore variable (RTE_MAX_LCORE_VAR) too much, if the system supports many lcores (RTE_MAX_LCORE).
E.g. with 1024 lcores, the max size of an lcore variable would be 2048 bytes.
And with 128 lcores, the max size of an lcore variable would be 16 KB.

So if we want to optimize for hugepage TLB entry usage, the question becomes: What is a reasonable max size of an lcore variable?

And although hugepage backing is only a "nice to have", the max size of an lcore variable (RTE_MAX_LCORE_VAR) is part of the API/ABI, so we should consider it now, if we want to optimize for hugepage TLB entry usage in the future.
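
If we go down that path, here is a minimal sketch of the allocation,
assuming Linux-only MAP_HUGETLB (the fallback to regular pages is my
own assumption, not something the patch does):

#include <sys/mman.h>

static void *
lcore_buffer_alloc_huge(size_t size)
{
	/* try to back the buffer with anonymous hugepages */
	void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	/* fall back to regular pages if no hugepages are available */
	if (buf == MAP_FAILED)
		buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	return buf == MAP_FAILED ? NULL : buf;
}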


A few more comments below, not related to hugepages.

> +
> +static void *lcore_buffer;
> +static size_t offset = RTE_MAX_LCORE_VAR;
> +
> +static void *
> +lcore_var_alloc(size_t size, size_t align)
> +{
> +	void *handle;
> +	void *value;
> +
> +	offset = RTE_ALIGN_CEIL(offset, align);
> +
> +	if (offset + size > RTE_MAX_LCORE_VAR) {
> +#ifdef RTE_EXEC_ENV_WINDOWS
> +		lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
> +					       RTE_CACHE_LINE_SIZE);
> +#else
> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> +					     LCORE_BUFFER_SIZE);
> +#endif
> +		RTE_VERIFY(lcore_buffer != NULL);
> +
> +		offset = 0;
> +	}
> +
> +	handle = RTE_PTR_ADD(lcore_buffer, offset);
> +
> +	offset += size;
> +
> +	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
> +		memset(value, 0, size);
> +
> +	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
> +		"%"PRIuPTR"-byte alignment", size, align);
> +
> +	return handle;
> +}
> +
> +void *
> +rte_lcore_var_alloc(size_t size, size_t align)
> +{
> +	/* Having the per-lcore buffer size aligned on cache lines,
> +	 * as well as having the base pointer aligned on cache line
> +	 * size, assures that aligned offsets also translate to aligned
> +	 * pointers across all values.
> +	 */
> +	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
> +	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
> +	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);

This specific RTE_ASSERT() should be upgraded to RTE_VERIFY(), so it is checked in non-debug builds too.
The code is slow path and not inline, and if this check doesn't pass, accessing the lcore variable will cause a buffer overrun. Prefer failing early.
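
I.e. something like:

-	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
+	RTE_VERIFY(size <= RTE_MAX_LCORE_VAR);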

> +
> +	/* '0' means asking for worst-case alignment requirements */
> +	if (align == 0)
> +		align = alignof(max_align_t);
> +
> +	RTE_ASSERT(rte_is_power_of_2(align));
> +
> +	return lcore_var_alloc(size, align);
> +}


> +/**
> + * Allocate space in the per-lcore id buffers for an lcore variable.
> + *
> + * The pointer returned is only an opaque identifier of the variable. To
> + * get an actual pointer to a particular instance of the variable use
> + * @ref RTE_LCORE_VAR_VALUE or @ref RTE_LCORE_VAR_LCORE_VALUE.
> + *
> + * The lcore variable values' memory is set to zero.
> + *
> + * The allocation is always successful, barring a fatal exhaustion of
> + * the per-lcore id buffer space.
> + *
> + * rte_lcore_var_alloc() is not multi-thread safe.
> + *
> + * @param size
> + *   The size (in bytes) of the variable's per-lcore id value. Must be > 0.
> + * @param align
> + *   If 0, the values will be suitably aligned for any kind of type
> + *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
> + *   on a multiple of *align*, which must be a power of 2 and equal or
> + *   less than @c RTE_CACHE_LINE_SIZE.
> + * @return
> + *   The variable's handle, stored in a void pointer value. The value
> + *   is always non-NULL.
> + */
> +__rte_experimental

I don't know how useful these are, but consider adding:
#ifndef RTE_TOOLCHAIN_MSVC
__attribute__((malloc))
__attribute__((alloc_size(1)))
__attribute__((alloc_align(2)))
__attribute__((returns_nonnull))
#endif

> +void *
> +rte_lcore_var_alloc(size_t size, size_t align);


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v3 3/7] eal: add lcore variable performance test
  2024-09-12  8:44                                 ` [PATCH v3 3/7] eal: add lcore variable performance test Mattias Rönnblom
@ 2024-09-12  9:39                                   ` Morten Brørup
  2024-09-12 13:01                                     ` Mattias Rönnblom
  2024-09-12 13:09                                   ` Jerin Jacob
  1 sibling, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-09-12  9:39 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: hofors, Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Jerin Jacob

> +struct lcore_state {
> +	uint64_t a;
> +	uint64_t b;
> +	uint64_t sum;
> +};
> +
> +static __rte_always_inline void
> +update(struct lcore_state *state)
> +{
> +	state->sum += state->a * state->b;
> +}
> +
> +static RTE_DEFINE_PER_LCORE(struct lcore_state, tls_lcore_state);
> +
> +static __rte_noinline void
> +tls_update(void)
> +{
> +	update(&RTE_PER_LCORE(tls_lcore_state));

I would normally access TLS variables directly, not through a pointer, i.e.:

RTE_PER_LCORE(tls_lcore_state.sum) += RTE_PER_LCORE(tls_lcore_state.a) * RTE_PER_LCORE(tls_lcore_state.b);

On the other hand, then it wouldn't be 1:1 comparable to the two other test cases.

Besides, I expect the compiler to optimize away the indirect access, and produce the same output (as for the alternative implementation) anyway.

No change requested. Just noticing.

> +}
> +
> +struct __rte_cache_aligned lcore_state_aligned {
> +	uint64_t a;
> +	uint64_t b;
> +	uint64_t sum;

Please add RTE_CACHE_GUARD here, for 100 % matching the common design pattern.

> +};
> +
> +static struct lcore_state_aligned sarray_lcore_state[RTE_MAX_LCORE];


> +	printf("Latencies [ns/update]\n");
> +	printf("Thread-local storage  Static array  Lcore variables\n");
> +	printf("%20.1f %13.1f %16.1f\n", tls_latency * 1e9,
> +	       sarray_latency * 1e9, lvar_latency * 1e9);

I prefer cycles over ns. Perhaps you could show both?
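
E.g. something like this (just a sketch, converting the timer cycle
count with rte_get_timer_hz()):

	double cycles = (end - start) / (double)ITERATIONS;
	double ns = 1e9 * cycles / rte_get_timer_hz();

	printf("%.1f cycles (%.1f ns) per update\n", cycles, ns);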


With RTE_CACHE_GUARD added where mentioned,

Acked-by: Morten Brørup <mb@smartsharesystems.com>


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v3 3/7] eal: add lcore variable performance test
  2024-09-12  9:39                                   ` Morten Brørup
@ 2024-09-12 13:01                                     ` Mattias Rönnblom
  0 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-12 13:01 UTC (permalink / raw)
  To: Morten Brørup, Mattias Rönnblom, dev
  Cc: Stephen Hemminger, Konstantin Ananyev, David Marchand, Jerin Jacob

On 2024-09-12 11:39, Morten Brørup wrote:
>> +struct lcore_state {
>> +	uint64_t a;
>> +	uint64_t b;
>> +	uint64_t sum;
>> +};
>> +
>> +static __rte_always_inline void
>> +update(struct lcore_state *state)
>> +{
>> +	state->sum += state->a * state->b;
>> +}
>> +
>> +static RTE_DEFINE_PER_LCORE(struct lcore_state, tls_lcore_state);
>> +
>> +static __rte_noinline void
>> +tls_update(void)
>> +{
>> +	update(&RTE_PER_LCORE(tls_lcore_state));
> 
> I would normally access TLS variables directly, not through a pointer, i.e.:
> 
> RTE_PER_LCORE(tls_lcore_state.sum) += RTE_PER_LCORE(tls_lcore_state.a) * RTE_PER_LCORE(tls_lcore_state.b);
> 
> On the other hand, then it wouldn't be 1:1 comparable to the two other test cases.
> 
> Besides, I expect the compiler to optimize away the indirect access, and produce the same output (as for the alternative implementation) anyway.
> 
> No change requested. Just noticing.
> 
>> +}
>> +
>> +struct __rte_cache_aligned lcore_state_aligned {
>> +	uint64_t a;
>> +	uint64_t b;
>> +	uint64_t sum;
> 
> Please add RTE_CACHE_GUARD here, for 100 % matching the common design pattern.
> 

Will do.

>> +};
>> +
>> +static struct lcore_state_aligned sarray_lcore_state[RTE_MAX_LCORE];
> 
> 
>> +	printf("Latencies [ns/update]\n");
>> +	printf("Thread-local storage  Static array  Lcore variables\n");
>> +	printf("%20.1f %13.1f %16.1f\n", tls_latency * 1e9,
>> +	       sarray_latency * 1e9, lvar_latency * 1e9);
> 
> I prefer cycles over ns. Perhaps you could show both?
> 

That makes you an x86 guy. :) Only on x86 do those cycles make any
sense.

I didn't want to use cycles since it would be a very small value on 
certain (e.g., old ARM) platforms.

But elsewhere in the perf tests TSC cycles are used, so maybe I should
switch to using them nevertheless.

> 
> With RTE_CACHE_GUARD added where mentioned,
> 
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> 

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v3 3/7] eal: add lcore variable performance test
  2024-09-12  8:44                                 ` [PATCH v3 3/7] eal: add lcore variable performance test Mattias Rönnblom
  2024-09-12  9:39                                   ` Morten Brørup
@ 2024-09-12 13:09                                   ` Jerin Jacob
  2024-09-12 13:20                                     ` Mattias Rönnblom
  1 sibling, 1 reply; 185+ messages in thread
From: Jerin Jacob @ 2024-09-12 13:09 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: dev, hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob

On Thu, Sep 12, 2024 at 2:34 PM Mattias Rönnblom
<mattias.ronnblom@ericsson.com> wrote:
>
> Add basic micro benchmark for lcore variables, in an attempt to assure
> that the overhead isn't significantly greater than alternative
> approaches, in scenarios where the benefits aren't expected to show up
> (i.e., when plenty of cache is available compared to the working set
> size of the per-lcore data).
>
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> ---
>  app/test/meson.build           |   1 +
>  app/test/test_lcore_var_perf.c | 160 +++++++++++++++++++++++++++++++++
>  2 files changed, 161 insertions(+)
>  create mode 100644 app/test/test_lcore_var_perf.c


> +static double
> +benchmark_access_method(void (*init_fun)(void), void (*update_fun)(void))
> +{
> +       uint64_t i;
> +       uint64_t start;
> +       uint64_t end;
> +       double latency;
> +
> +       init_fun();
> +
> +       start = rte_get_timer_cycles();
> +
> +       for (i = 0; i < ITERATIONS; i++)
> +               update_fun();
> +
> +       end = rte_get_timer_cycles();

Use precise variant. rte_rdtsc_precise() or so to be accurate

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v2 1/6] eal: add static per-lcore memory allocation facility
  2024-09-12  9:10                               ` [PATCH v2 1/6] eal: add static per-lcore memory allocation facility Morten Brørup
@ 2024-09-12 13:16                                 ` Jerin Jacob
  2024-09-12 13:41                                   ` Morten Brørup
  0 siblings, 1 reply; 185+ messages in thread
From: Jerin Jacob @ 2024-09-12 13:16 UTC (permalink / raw)
  To: Morten Brørup
  Cc: Mattias Rönnblom, dev, Chengwen Feng, Mattias Rönnblom,
	Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Anatoly Burakov

On Thu, Sep 12, 2024 at 2:40 PM Morten Brørup <mb@smartsharesystems.com> wrote:
>
> > +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
>
> Considering hugepages...
>
> Lcore variables may be allocated before DPDK's memory allocator (rte_malloc()) is ready, so rte_malloc() cannot be used for lcore variables.
>
> And lcore variables are not usable (shared) for DPDK multi-process, so the lcore_buffer could be allocated through the O/S APIs as anonymous hugepages, instead of using rte_malloc().
>
> The alternative, using rte_malloc(), would disallow allocating lcore variables before DPDK's memory allocator has been initialized, which I think is too late.

I thought it is not. A lot of the subsystems are initialized after the
memory subsystem is initialized.
[1] is the example given in the documentation. I thought RTE_INIT needs
to be replaced if the subsystem is initialized after the memory
subsystem (which is the case for most of the libraries).
The trace library had a similar situation. It is managed like [2]:



[1]
 * struct foo_lcore_state {
 *         int a;
 *         long b;
 * };
 *
 * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
 *
 * long foo_get_a_plus_b(void)
 * {
 *         struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(lcore_states);
 *
 *         return state->a + state->b;
 * }
 *
 * RTE_INIT(rte_foo_init)
 * {
 *         RTE_LCORE_VAR_ALLOC(lcore_states);
 *
 *         struct foo_lcore_state *state;
 *         RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
 *                 (initialize 'state')
 *         }
 *
 *         (other initialization)
 * }


[2]


        /* First attempt from huge page */
        header = eal_malloc_no_trace(NULL, trace_mem_sz(trace->buff_len), 8);
        if (header) {
                trace->lcore_meta[count].area = TRACE_AREA_HUGEPAGE;
                goto found;
        }

        /* Second attempt from heap */
        header = malloc(trace_mem_sz(trace->buff_len));
        if (header == NULL) {
                trace_crit("trace mem malloc attempt failed");
                header = NULL;
                goto fail;

        }

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v3 3/7] eal: add lcore variable performance test
  2024-09-12 13:09                                   ` Jerin Jacob
@ 2024-09-12 13:20                                     ` Mattias Rönnblom
  2024-09-12 15:11                                       ` Jerin Jacob
  0 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-12 13:20 UTC (permalink / raw)
  To: Jerin Jacob, Mattias Rönnblom
  Cc: dev, Morten Brørup, Stephen Hemminger, Konstantin Ananyev,
	David Marchand, Jerin Jacob

On 2024-09-12 15:09, Jerin Jacob wrote:
> On Thu, Sep 12, 2024 at 2:34 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
>>
>> Add basic micro benchmark for lcore variables, in an attempt to assure
>> that the overhead isn't significantly greater than alternative
>> approaches, in scenarios where the benefits aren't expected to show up
>> (i.e., when plenty of cache is available compared to the working set
>> size of the per-lcore data).
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> ---
>>   app/test/meson.build           |   1 +
>>   app/test/test_lcore_var_perf.c | 160 +++++++++++++++++++++++++++++++++
>>   2 files changed, 161 insertions(+)
>>   create mode 100644 app/test/test_lcore_var_perf.c
> 
> 
>> +static double
>> +benchmark_access_method(void (*init_fun)(void), void (*update_fun)(void))
>> +{
>> +       uint64_t i;
>> +       uint64_t start;
>> +       uint64_t end;
>> +       double latency;
>> +
>> +       init_fun();
>> +
>> +       start = rte_get_timer_cycles();
>> +
>> +       for (i = 0; i < ITERATIONS; i++)
>> +               update_fun();
>> +
>> +       end = rte_get_timer_cycles();
> 
> Use precise variant. rte_rdtsc_precise() or so to be accurate

With 1e7 iterations, do you need rte_rdtsc_precise()? I suspect not.

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v2 1/6] eal: add static per-lcore memory allocation facility
  2024-09-12 13:16                                 ` Jerin Jacob
@ 2024-09-12 13:41                                   ` Morten Brørup
  2024-09-12 15:22                                     ` Jerin Jacob
  0 siblings, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-09-12 13:41 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Mattias Rönnblom, dev, Chengwen Feng, Mattias Rönnblom,
	Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Anatoly Burakov

> From: Jerin Jacob [mailto:jerinjacobk@gmail.com]
> Sent: Thursday, 12 September 2024 15.17
> 
> On Thu, Sep 12, 2024 at 2:40 PM Morten Brørup <mb@smartsharesystems.com>
> wrote:
> >
> > > +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
> >
> > Considering hugepages...
> >
> > Lcore variables may be allocated before DPDK's memory allocator
> > (rte_malloc()) is ready, so rte_malloc() cannot be used for lcore variables.
> >
> > And lcore variables are not usable (shared) for DPDK multi-process, so the
> > lcore_buffer could be allocated through the O/S APIs as anonymous hugepages,
> > instead of using rte_malloc().
> >
> > The alternative, using rte_malloc(), would disallow allocating lcore
> > variables before DPDK's memory allocator has been initialized, which I think
> > is too late.
> 
> I thought it is not. A lot of the subsystems are initialized after the
> memory subsystem is initialized.
> [1] is the example given in the documentation. I thought RTE_INIT needs
> to be replaced if the subsystem is initialized after the memory
> subsystem (which is the case for most of the libraries).

The list of RTE_INIT functions is called before main(). It is not very useful.

Yes, it would be good to replace (or supplement) RTE_INIT_PRIO by something similar, which calls the list of "INIT" functions at the appropriate time during EAL initialization.

DPDK should then use this "INIT" list for all its initialization, so the init function of new features (such as this, and trace) can be inserted at the correct location in the list.

> The trace library had a similar situation. It is managed like [2]:

Yes, if we insist on using rte_malloc() for lcore variables, the alternative is to prohibit establishing lcore variables in functions called through RTE_INIT.
Although I don't like this alternative, it might be viable.

> 
> 
> 
> [1]
>  * struct foo_lcore_state {
>  *         int a;
>  *         long b;
>  * };
>  *
>  * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
>  *
>  * long foo_get_a_plus_b(void)
>  * {
>  *         struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(lcore_states);
>  *
>  *         return state->a + state->b;
>  * }
>  *
>  * RTE_INIT(rte_foo_init)
>  * {
>  *         RTE_LCORE_VAR_ALLOC(lcore_states);
>  *
>  *         struct foo_lcore_state *state;
>  *         RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
>  *                 (initialize 'state')
>  *         }
>  *
>  *         (other initialization)
>  * }
> 
> 
> [2]
> 
> 
>         /* First attempt from huge page */
>         header = eal_malloc_no_trace(NULL, trace_mem_sz(trace->buff_len), 8);
>         if (header) {
>                 trace->lcore_meta[count].area = TRACE_AREA_HUGEPAGE;
>                 goto found;
>         }
> 
>         /* Second attempt from heap */
>         header = malloc(trace_mem_sz(trace->buff_len));
>         if (header == NULL) {
>                 trace_crit("trace mem malloc attempt failed");
>                 header = NULL;
>                 goto fail;
> 
>         }

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v3 3/7] eal: add lcore variable performance test
  2024-09-12 13:20                                     ` Mattias Rönnblom
@ 2024-09-12 15:11                                       ` Jerin Jacob
  2024-09-13  6:47                                         ` Mattias Rönnblom
  0 siblings, 1 reply; 185+ messages in thread
From: Jerin Jacob @ 2024-09-12 15:11 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: Mattias Rönnblom, dev, Morten Brørup,
	Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Jerin Jacob

On Thu, Sep 12, 2024 at 6:50 PM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>
> On 2024-09-12 15:09, Jerin Jacob wrote:
> > On Thu, Sep 12, 2024 at 2:34 PM Mattias Rönnblom
> > <mattias.ronnblom@ericsson.com> wrote:
> >>
> >> Add basic micro benchmark for lcore variables, in an attempt to assure
> >> that the overhead isn't significantly greater than alternative
> >> approaches, in scenarios where the benefits aren't expected to show up
> >> (i.e., when plenty of cache is available compared to the working set
> >> size of the per-lcore data).
> >>
> >> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >> ---
> >>   app/test/meson.build           |   1 +
> >>   app/test/test_lcore_var_perf.c | 160 +++++++++++++++++++++++++++++++++
> >>   2 files changed, 161 insertions(+)
> >>   create mode 100644 app/test/test_lcore_var_perf.c
> >
> >
> >> +static double
> >> +benchmark_access_method(void (*init_fun)(void), void (*update_fun)(void))
> >> +{
> >> +       uint64_t i;
> >> +       uint64_t start;
> >> +       uint64_t end;
> >> +       double latency;
> >> +
> >> +       init_fun();
> >> +
> >> +       start = rte_get_timer_cycles();
> >> +
> >> +       for (i = 0; i < ITERATIONS; i++)
> >> +               update_fun();
> >> +
> >> +       end = rte_get_timer_cycles();
> >
> > Use precise variant. rte_rdtsc_precise() or so to be accurate
>
> With 1e7 iterations, do you need rte_rdtsc_precise()? I suspect not.

I was thinking about it in another way: with 1e7 iterations, the
additional barrier of the precise variant will be amortized, and we
get more _deterministic_ behavior, especially in case we print cycles
and need to catch regressions.
Furthermore, you may consider replacing rte_random() in the fast path
with a running number or so, if it is not deterministic in cycle
computation.

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v2 1/6] eal: add static per-lcore memory allocation facility
  2024-09-12 13:41                                   ` Morten Brørup
@ 2024-09-12 15:22                                     ` Jerin Jacob
  2024-09-18 10:11                                       ` Jerin Jacob
  0 siblings, 1 reply; 185+ messages in thread
From: Jerin Jacob @ 2024-09-12 15:22 UTC (permalink / raw)
  To: Morten Brørup
  Cc: Mattias Rönnblom, dev, Chengwen Feng, Mattias Rönnblom,
	Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Anatoly Burakov

On Thu, Sep 12, 2024 at 7:11 PM Morten Brørup <mb@smartsharesystems.com> wrote:
>
> > From: Jerin Jacob [mailto:jerinjacobk@gmail.com]
> > Sent: Thursday, 12 September 2024 15.17
> >
> > On Thu, Sep 12, 2024 at 2:40 PM Morten Brørup <mb@smartsharesystems.com>
> > wrote:
> > >
> > > > +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
> > >
> > > Considering hugepages...
> > >
> > > Lcore variables may be allocated before DPDK's memory allocator
> > > (rte_malloc()) is ready, so rte_malloc() cannot be used for lcore variables.
> > >
> > > And lcore variables are not usable (shared) for DPDK multi-process, so the
> > > lcore_buffer could be allocated through the O/S APIs as anonymous hugepages,
> > > instead of using rte_malloc().
> > >
> > > The alternative, using rte_malloc(), would disallow allocating lcore
> > > variables before DPDK's memory allocator has been initialized, which I think
> > > is too late.
> >
> > I thought it is not. A lot of the subsystems are initialized after the
> > memory subsystem is initialized.
> > [1] is the example given in the documentation. I thought RTE_INIT needs
> > to be replaced if the subsystem is initialized after the memory
> > subsystem (which is the case for most of the libraries).
>
> The list of RTE_INIT functions is called before main(). It is not very useful.
>
> Yes, it would be good to replace (or supplement) RTE_INIT_PRIO by something similar, which calls the list of "INIT" functions at the appropriate time during EAL initialization.
>
> DPDK should then use this "INIT" list for all its initialization, so the init function of new features (such as this, and trace) can be inserted at the correct location in the list.
>
> > The trace library had a similar situation. It is managed like [2]:
>
> Yes, if we insist on using rte_malloc() for lcore variables, the alternative is to prohibit establishing lcore variables in functions called through RTE_INIT.

I was not insisting on using ONLY rte_malloc(), since rte_malloc() can
be called before rte_eal_init() (it will return NULL). The alloc
routine can first check whether rte_malloc() is available and, if not,
switch over to glibc.
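
Something like this (only a sketch of the idea):

#include <stdlib.h>
#include <rte_malloc.h>

static void *
lcore_buffer_alloc(void)
{
	/* rte_malloc() returns NULL before the EAL memory subsystem
	 * is initialized; fall back to the libc heap in that case.
	 */
	void *buf = rte_malloc(NULL, LCORE_BUFFER_SIZE,
			       RTE_CACHE_LINE_SIZE);

	if (buf == NULL)
		buf = aligned_alloc(RTE_CACHE_LINE_SIZE,
				    LCORE_BUFFER_SIZE);

	return buf;
}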

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v3 3/7] eal: add lcore variable performance test
  2024-09-12 15:11                                       ` Jerin Jacob
@ 2024-09-13  6:47                                         ` Mattias Rönnblom
  2024-09-13 11:23                                           ` Jerin Jacob
  0 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-13  6:47 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Mattias Rönnblom, dev, Morten Brørup,
	Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Jerin Jacob

On 2024-09-12 17:11, Jerin Jacob wrote:
> On Thu, Sep 12, 2024 at 6:50 PM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>>
>> On 2024-09-12 15:09, Jerin Jacob wrote:
>>> On Thu, Sep 12, 2024 at 2:34 PM Mattias Rönnblom
>>> <mattias.ronnblom@ericsson.com> wrote:
>>>>
>>>> Add basic micro benchmark for lcore variables, in an attempt to assure
>>>> that the overhead isn't significantly greater than alternative
>>>> approaches, in scenarios where the benefits aren't expected to show up
>>>> (i.e., when plenty of cache is available compared to the working set
>>>> size of the per-lcore data).
>>>>
>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>>> ---
>>>>    app/test/meson.build           |   1 +
>>>>    app/test/test_lcore_var_perf.c | 160 +++++++++++++++++++++++++++++++++
>>>>    2 files changed, 161 insertions(+)
>>>>    create mode 100644 app/test/test_lcore_var_perf.c
>>>
>>>
>>>> +static double
>>>> +benchmark_access_method(void (*init_fun)(void), void (*update_fun)(void))
>>>> +{
>>>> +       uint64_t i;
>>>> +       uint64_t start;
>>>> +       uint64_t end;
>>>> +       double latency;
>>>> +
>>>> +       init_fun();
>>>> +
>>>> +       start = rte_get_timer_cycles();
>>>> +
>>>> +       for (i = 0; i < ITERATIONS; i++)
>>>> +               update_fun();
>>>> +
>>>> +       end = rte_get_timer_cycles();
>>>
>>> Use precise variant. rte_rdtsc_precise() or so to be accurate
>>
>> With 1e7 iterations, do you need rte_rdtsc_precise()? I suspect not.
> 
> I was thinking about it in another way: with 1e7 iterations, the
> additional barrier of the precise variant will be amortized, and we
> get more _deterministic_ behavior, especially in case we print cycles
> and need to catch regressions.

If you time a section of code which spends ~40000000 cycles, it doesn't 
matter if you add or remove a few cycles at the beginning and the end.

The rte_rdtsc_precise() is both better (more precise in the sense of 
more serialization), and worse (because it's more costly, and thus more 
intrusive).

You can use rte_rdtsc_precise(), rte_rdtsc(), or gettimeofday(). It 
doesn't matter.

> Furthermore, you may consider replacing rte_random() in the fast path
> with a running number or so, if it is not deterministic in cycle
> computation.

rte_rand() is not used in the fast path. I don't understand what you 
mean by "running number".

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v3 3/7] eal: add lcore variable performance test
  2024-09-13  6:47                                         ` Mattias Rönnblom
@ 2024-09-13 11:23                                           ` Jerin Jacob
  2024-09-13 14:40                                             ` Morten Brørup
  2024-09-16 10:50                                             ` Mattias Rönnblom
  0 siblings, 2 replies; 185+ messages in thread
From: Jerin Jacob @ 2024-09-13 11:23 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: Mattias Rönnblom, dev, Morten Brørup,
	Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Jerin Jacob

On Fri, Sep 13, 2024 at 12:17 PM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>
> On 2024-09-12 17:11, Jerin Jacob wrote:
> > On Thu, Sep 12, 2024 at 6:50 PM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
> >>
> >> On 2024-09-12 15:09, Jerin Jacob wrote:
> >>> On Thu, Sep 12, 2024 at 2:34 PM Mattias Rönnblom
> >>> <mattias.ronnblom@ericsson.com> wrote:
> >>>>
> >>>> Add basic micro benchmark for lcore variables, in an attempt to assure
> >>>> that the overhead isn't significantly greater than alternative
> >>>> approaches, in scenarios where the benefits aren't expected to show up
> >>>> (i.e., when plenty of cache is available compared to the working set
> >>>> size of the per-lcore data).
> >>>>
> >>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >>>> ---
> >>>>    app/test/meson.build           |   1 +
> >>>>    app/test/test_lcore_var_perf.c | 160 +++++++++++++++++++++++++++++++++
> >>>>    2 files changed, 161 insertions(+)
> >>>>    create mode 100644 app/test/test_lcore_var_perf.c
> >>>
> >>>
> >>>> +static double
> >>>> +benchmark_access_method(void (*init_fun)(void), void (*update_fun)(void))
> >>>> +{
> >>>> +       uint64_t i;
> >>>> +       uint64_t start;
> >>>> +       uint64_t end;
> >>>> +       double latency;
> >>>> +
> >>>> +       init_fun();
> >>>> +
> >>>> +       start = rte_get_timer_cycles();
> >>>> +
> >>>> +       for (i = 0; i < ITERATIONS; i++)
> >>>> +               update_fun();
> >>>> +
> >>>> +       end = rte_get_timer_cycles();
> >>>
> >>> Use precise variant. rte_rdtsc_precise() or so to be accurate
> >>
> >> With 1e7 iterations, do you need rte_rdtsc_precise()? I suspect not.
> >
> > I was thinking about it in another way: with 1e7 iterations, the
> > additional barrier of the precise variant will be amortized, and we
> > get more _deterministic_ behavior, especially in case we print cycles
> > and need to catch regressions.
>
> If you time a section of code which spends ~40000000 cycles, it doesn't
> matter if you add or remove a few cycles at the beginning and the end.
>
> The rte_rdtsc_precise() is both better (more precise in the sense of
> more serialization), and worse (because it's more costly, and thus more
> intrusive).

We can calibrate the overhead to remove the cost.

>
> You can use rte_rdtsc_precise(), rte_rdtsc(), or gettimeofday(). It
> doesn't matter.

Yes. In this setup, but it is pretty inaccurate PER iteration. Please
refer to the below patch to see the difference.

Patch 1: Convert nanoseconds to cycles per iteration
------------------------------------------------------------------

diff --git a/app/test/test_lcore_var_perf.c b/app/test/test_lcore_var_perf.c
index ea1d7ba90b52..b8d25400f593 100644
--- a/app/test/test_lcore_var_perf.c
+++ b/app/test/test_lcore_var_perf.c
@@ -110,7 +110,7 @@ benchmark_access_method(void (*init_fun)(void),
void (*update_fun)(void))

        end = rte_get_timer_cycles();

-       latency = ((end - start) / (double)rte_get_timer_hz()) / ITERATIONS;
+       latency = ((end - start)) / ITERATIONS;

        return latency;
 }
@@ -137,8 +137,7 @@ test_lcore_var_access(void)

-       printf("Latencies [ns/update]\n");
+       printf("Latencies [cycles/update]\n");
        printf("Thread-local storage  Static array  Lcore variables\n");
-       printf("%20.1f %13.1f %16.1f\n", tls_latency * 1e9,
-              sarray_latency * 1e9, lvar_latency * 1e9);
+       printf("%20.1f %13.1f %16.1f\n", tls_latency, sarray_latency,
lvar_latency);

        return TEST_SUCCESS;
 }


Patch 2: Change to precise with calibration
-----------------------------------------------------------

diff --git a/app/test/test_lcore_var_perf.c b/app/test/test_lcore_var_perf.c
index ea1d7ba90b52..8142ecd56241 100644
--- a/app/test/test_lcore_var_perf.c
+++ b/app/test/test_lcore_var_perf.c
@@ -96,23 +96,28 @@ lvar_update(void)
 static double
 benchmark_access_method(void (*init_fun)(void), void (*update_fun)(void))
 {
-       uint64_t i;
+       double tsc_latency;
+       double latency;
        uint64_t start;
        uint64_t end;
-       double latency;
+       uint64_t i;

-       init_fun();
+       /* calculate rte_rdtsc_precise overhead */
+       start = rte_rdtsc_precise();
+       end = rte_rdtsc_precise();
+       tsc_latency = (end - start);

-       start = rte_get_timer_cycles();
+       init_fun();

-       for (i = 0; i < ITERATIONS; i++)
+       latency = 0;
+       for (i = 0; i < ITERATIONS; i++) {
+               start = rte_rdtsc_precise();
                update_fun();
+               end = rte_rdtsc_precise();
+               latency += (end - start) - tsc_latency;
+       }

-       end = rte_get_timer_cycles();
-
-       latency = ((end - start) / (double)rte_get_timer_hz()) / ITERATIONS;
-
-       return latency;
+       return latency / (double)ITERATIONS;
 }

 static int
@@ -135,10 +140,9 @@ test_lcore_var_access(void)
        sarray_latency = benchmark_access_method(sarray_init, sarray_update);
        lvar_latency = benchmark_access_method(lvar_init, lvar_update);

-       printf("Latencies [ns/update]\n");
+       printf("Latencies [cycles/update]\n");
        printf("Thread-local storage  Static array  Lcore variables\n");
-       printf("%20.1f %13.1f %16.1f\n", tls_latency * 1e9,
-              sarray_latency * 1e9, lvar_latency * 1e9);
+       printf("%20.1f %13.1f %16.1f\n", tls_latency, sarray_latency,
lvar_latency);

        return TEST_SUCCESS;
 }

ARM N2 core with patch 1(aka current scheme)
-----------------------------------

 + ------------------------------------------------------- +
 + Test Suite : lcore variable perf autotest
 + ------------------------------------------------------- +
Latencies [cycles/update]
Thread-local storage  Static array  Lcore variables
                 7.0           7.0              7.0


ARM N2 core with patch 2
-----------------------------------

 + ------------------------------------------------------- +
 + Test Suite : lcore variable perf autotest
 + ------------------------------------------------------- +
Latencies [cycles/update]
Thread-local storage  Static array  Lcore variables
                11.4          15.5             15.5

x86 i9 core with patch 1(aka current scheme)
------------------------------------------------------------

 + ------------------------------------------------------- +
 + Test Suite : lcore variable perf autotest
 + ------------------------------------------------------- +
Latencies [ns/update]
Thread-local storage  Static array  Lcore variables
                 5.0           6.0              6.0

x86 i9 core with patch 2
--------------------------------
 + ------------------------------------------------------- +
 + Test Suite : lcore variable perf autotest
 + ------------------------------------------------------- +
Latencies [cycles/update]
Thread-local storage  Static array  Lcore variables
                 5.3          10.6             11.7





>
> > Furthermore, you may consider replacing rte_random() in fast path to
> > running number or so if it is not deterministic in cycle computation.
>
> rte_rand() is not used in the fast path. I don't understand what you

I missed that. Ignore this comment.

> mean by "running number".

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v3 3/7] eal: add lcore variable performance test
  2024-09-13 11:23                                           ` Jerin Jacob
@ 2024-09-13 14:40                                             ` Morten Brørup
  2024-09-16  8:12                                               ` Jerin Jacob
  2024-09-16 10:50                                             ` Mattias Rönnblom
  1 sibling, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-09-13 14:40 UTC (permalink / raw)
  To: Jerin Jacob, Mattias Rönnblom
  Cc: Mattias Rönnblom, dev, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob

> From: Jerin Jacob [mailto:jerinjacobk@gmail.com]
> Sent: Friday, 13 September 2024 13.24
> 
> On Fri, Sep 13, 2024 at 12:17 PM Mattias Rönnblom <hofors@lysator.liu.se>
> wrote:
> >
> > On 2024-09-12 17:11, Jerin Jacob wrote:
> > > On Thu, Sep 12, 2024 at 6:50 PM Mattias Rönnblom <hofors@lysator.liu.se>
> wrote:
> > >>
> > >> On 2024-09-12 15:09, Jerin Jacob wrote:
> > >>> On Thu, Sep 12, 2024 at 2:34 PM Mattias Rönnblom
> > >>> <mattias.ronnblom@ericsson.com> wrote:
> > >>>> +static double
> > >>>> +benchmark_access_method(void (*init_fun)(void), void
> (*update_fun)(void))
> > >>>> +{
> > >>>> +       uint64_t i;
> > >>>> +       uint64_t start;
> > >>>> +       uint64_t end;
> > >>>> +       double latency;
> > >>>> +
> > >>>> +       init_fun();
> > >>>> +
> > >>>> +       start = rte_get_timer_cycles();
> > >>>> +
> > >>>> +       for (i = 0; i < ITERATIONS; i++)
> > >>>> +               update_fun();
> > >>>> +
> > >>>> +       end = rte_get_timer_cycles();
> > >>>
> > >>> Use precise variant. rte_rdtsc_precise() or so to be accurate
> > >>
> > >> With 1e7 iterations, do you need rte_rdtsc_precise()? I suspect not.
> > >
> > > I was thinking in another way, with 1e7 iteration, the additional
> > > barrier on precise will be amortized, and we get more _deterministic_
> > > behavior e.s.p in case if we print cycles and if we need to catch
> > > regressions.
> >
> > If you time a section of code which spends ~40000000 cycles, it doesn't
> > matter if you add or remove a few cycles at the beginning and the end.
> >
> > The rte_rdtsc_precise() is both better (more precise in the sense of
> > more serialization), and worse (because it's more costly, and thus more
> > intrusive).
> 
> We can calibrate the overhead to remove the cost.
> 
> >
> > You can use rte_rdtsc_precise(), rte_rdtsc(), or gettimeofday(). It
> > doesn't matter.
> 
> Yes. In this setup and it is pretty inaccurate PER iteration. Please
> refer to the below patch to see the difference.

No, Mattias is right. The time is sampled once before the loop, then the function is executed 10 million (ITERATIONS) times in the loop, and then the time is sampled once again.

So the overhead and inaccuracy of the timing function are amortized across the 10 million calls to the function being measured, and become insignificant.
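
To put rough numbers on it (assuming, as a ballpark and not a measurement, that one timer read costs a few tens of cycles): with the ~40000000-cycle loop discussed above, two reads at ~40 cycles each amount to 80 / 40000000, i.e., roughly 0.0002 % of the measured interval.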

Other perf tests also do it this way, and also use rte_get_timer_cycles(). E.g. the mempool_perf test.

Another detail: The for loop itself may cost a few cycles, which may not be irrelevant when measuring a function using very few cycles. If the compiler doesn't unroll the loop, it should be done manually:

        for (i = 0; i < ITERATIONS / 100; i++) {
                update_fun();
                update_fun();
                ... repeated 100 times
        }
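
A minimal sketch of how the manual unrolling could be expressed with a repetition macro (REPEAT_10 is a hypothetical name, not existing DPDK code):

        #define REPEAT_10(fn) \
                do { fn(); fn(); fn(); fn(); fn(); \
                     fn(); fn(); fn(); fn(); fn(); } while (0)

        for (i = 0; i < ITERATIONS / 10; i++)
                REPEAT_10(update_fun);

This keeps the loop overhead down to one compare-and-branch per ten calls to the function under test.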


> 
> Patch 1: Make nanoseconds to cycles per iteration
> ------------------------------------------------------------------
> 
> diff --git a/app/test/test_lcore_var_perf.c b/app/test/test_lcore_var_perf.c
> index ea1d7ba90b52..b8d25400f593 100644
> --- a/app/test/test_lcore_var_perf.c
> +++ b/app/test/test_lcore_var_perf.c
> @@ -110,7 +110,7 @@ benchmark_access_method(void (*init_fun)(void),
> void (*update_fun)(void))
> 
>         end = rte_get_timer_cycles();
> 
> -       latency = ((end - start) / (double)rte_get_timer_hz()) / ITERATIONS;
> +       latency = ((end - start)) / ITERATIONS;

This calculation uses integer arithmetic, which will round down the resulting latency.
Please use floating point arithmetic: latency = (end - start) / (double)ITERATIONS;

> 
>         return latency;
>  }
> @@ -137,8 +137,7 @@ test_lcore_var_access(void)
> 
> -       printf("Latencies [ns/update]\n");
> +       printf("Latencies [cycles/update]\n");
>         printf("Thread-local storage  Static array  Lcore variables\n");
> -       printf("%20.1f %13.1f %16.1f\n", tls_latency * 1e9,
> -              sarray_latency * 1e9, lvar_latency * 1e9);
> +       printf("%20.1f %13.1f %16.1f\n", tls_latency, sarray_latency,
> lvar_latency);
> 
>         return TEST_SUCCESS;
>  }

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v3 3/7] eal: add lcore variable performance test
  2024-09-13 14:40                                             ` Morten Brørup
@ 2024-09-16  8:12                                               ` Jerin Jacob
  2024-09-16  9:51                                                 ` Morten Brørup
  0 siblings, 1 reply; 185+ messages in thread
From: Jerin Jacob @ 2024-09-16  8:12 UTC (permalink / raw)
  To: Morten Brørup
  Cc: Mattias Rönnblom, Mattias Rönnblom, dev,
	Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Jerin Jacob

On Fri, Sep 13, 2024 at 8:10 PM Morten Brørup <mb@smartsharesystems.com> wrote:
>
> > From: Jerin Jacob [mailto:jerinjacobk@gmail.com]
> > Sent: Friday, 13 September 2024 13.24
> >
> > On Fri, Sep 13, 2024 at 12:17 PM Mattias Rönnblom <hofors@lysator.liu.se>
> > wrote:
> > >
> > > On 2024-09-12 17:11, Jerin Jacob wrote:
> > > > On Thu, Sep 12, 2024 at 6:50 PM Mattias Rönnblom <hofors@lysator.liu.se>
> > wrote:
> > > >>
> > > >> On 2024-09-12 15:09, Jerin Jacob wrote:
> > > >>> On Thu, Sep 12, 2024 at 2:34 PM Mattias Rönnblom
> > > >>> <mattias.ronnblom@ericsson.com> wrote:
> > > >>>> +static double
> > > >>>> +benchmark_access_method(void (*init_fun)(void), void
> > (*update_fun)(void))
> > > >>>> +{
> > > >>>> +       uint64_t i;
> > > >>>> +       uint64_t start;
> > > >>>> +       uint64_t end;
> > > >>>> +       double latency;
> > > >>>> +
> > > >>>> +       init_fun();
> > > >>>> +
> > > >>>> +       start = rte_get_timer_cycles();
> > > >>>> +
> > > >>>> +       for (i = 0; i < ITERATIONS; i++)
> > > >>>> +               update_fun();
> > > >>>> +
> > > >>>> +       end = rte_get_timer_cycles();
> > > >>>
> > > >>> Use precise variant. rte_rdtsc_precise() or so to be accurate
> > > >>
> > > >> With 1e7 iterations, do you need rte_rdtsc_precise()? I suspect not.
> > > >
> > > > I was thinking in another way, with 1e7 iteration, the additional
> > > > barrier on precise will be amortized, and we get more _deterministic_
> > > > behavior e.s.p in case if we print cycles and if we need to catch
> > > > regressions.
> > >
> > > If you time a section of code which spends ~40000000 cycles, it doesn't
> > > matter if you add or remove a few cycles at the beginning and the end.
> > >
> > > The rte_rdtsc_precise() is both better (more precise in the sense of
> > > more serialization), and worse (because it's more costly, and thus more
> > > intrusive).
> >
> > We can calibrate the overhead to remove the cost.
> >
> > >
> > > You can use rte_rdtsc_precise(), rte_rdtsc(), or gettimeofday(). It
> > > doesn't matter.
> >
> > Yes. In this setup and it is pretty inaccurate PER iteration. Please
> > refer to the below patch to see the difference.
>
> No, Mattias is right. The time is sampled once before the loop, then the function is executed 10 million (ITERATIONS) times in the loop, and then the time is sampled once again.

No. I am not disagreeing. That's why I said, “Yes. In this setup”.

All I am saying is that there is a more accurate way of doing the
measurement for this test, along with “data”, at
https://mails.dpdk.org/archives/dev/2024-September/301227.html


>
> So the overhead and accuracy of the timing function is amortized across the 10 million calls to the function being measured, and becomes insignificant.
>
> Other perf tests also do it this way, and also use rte_get_timer_cycles(). E.g. the mempool_perf test.
>
> Another detail: The for loop itself may cost a few cycles, which may not be irrelevant when measuring a function using very few cycles. If the compiler doesn't unroll the loop, it should be done manually:
>
>         for (i = 0; i < ITERATIONS / 100; i++) {
>                 update_fun();
>                 update_fun();
>                 ... repeated 100 times

I have done a similar scheme for trace perf for inline function test
at https://github.com/DPDK/dpdk/blob/main/app/test/test_trace_perf.c#L30

Either the above scheme or the below scheme needs to be used as
mentioned in https://mails.dpdk.org/archives/dev/2024-September/301227.html

+       for (i = 0; i < ITERATIONS; i++) {
+               start = rte_rdtsc_precise();
                update_fun();
+               end = rte_rdtsc_precise();
+               latency += (end - start) - tsc_latency;
+       }




>         }
>
>
> >
> > Patch 1: Make nanoseconds to cycles per iteration
> > ------------------------------------------------------------------
> >
> > diff --git a/app/test/test_lcore_var_perf.c b/app/test/test_lcore_var_perf.c
> > index ea1d7ba90b52..b8d25400f593 100644
> > --- a/app/test/test_lcore_var_perf.c
> > +++ b/app/test/test_lcore_var_perf.c
> > @@ -110,7 +110,7 @@ benchmark_access_method(void (*init_fun)(void),
> > void (*update_fun)(void))
> >
> >         end = rte_get_timer_cycles();
> >
> > -       latency = ((end - start) / (double)rte_get_timer_hz()) / ITERATIONS;
> > +       latency = ((end - start)) / ITERATIONS;
>
> This calculation uses integer arithmetic, which will round down the resulting latency.
> Please use floating point arithmetic: latency = (end - start) / (double)ITERATIONS;

Yup. It is in patch 2
https://mails.dpdk.org/archives/dev/2024-September/301227.html

>
> >
> >         return latency;
> >  }
> > @@ -137,8 +137,7 @@ test_lcore_var_access(void)
> >
> > -       printf("Latencies [ns/update]\n");
> > +       printf("Latencies [cycles/update]\n");
> >         printf("Thread-local storage  Static array  Lcore variables\n");
> > -       printf("%20.1f %13.1f %16.1f\n", tls_latency * 1e9,
> > -              sarray_latency * 1e9, lvar_latency * 1e9);
> > +       printf("%20.1f %13.1f %16.1f\n", tls_latency, sarray_latency,
> > lvar_latency);
> >
> >         return TEST_SUCCESS;
> >  }

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v3 3/7] eal: add lcore variable performance test
  2024-09-16  8:12                                               ` Jerin Jacob
@ 2024-09-16  9:51                                                 ` Morten Brørup
  0 siblings, 0 replies; 185+ messages in thread
From: Morten Brørup @ 2024-09-16  9:51 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Mattias Rönnblom, Mattias Rönnblom, dev,
	Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Jerin Jacob

> From: Jerin Jacob [mailto:jerinjacobk@gmail.com]
> Sent: Monday, 16 September 2024 10.12
> 
> On Fri, Sep 13, 2024 at 8:10 PM Morten Brørup <mb@smartsharesystems.com>
> wrote:
> >
> > > From: Jerin Jacob [mailto:jerinjacobk@gmail.com]
> > > Sent: Friday, 13 September 2024 13.24
> > >
> > > On Fri, Sep 13, 2024 at 12:17 PM Mattias Rönnblom <hofors@lysator.liu.se>
> > > wrote:
> > > >
> > > > On 2024-09-12 17:11, Jerin Jacob wrote:
> > > > > On Thu, Sep 12, 2024 at 6:50 PM Mattias Rönnblom
> <hofors@lysator.liu.se>
> > > wrote:
> > > > >>
> > > > >> On 2024-09-12 15:09, Jerin Jacob wrote:
> > > > >>> On Thu, Sep 12, 2024 at 2:34 PM Mattias Rönnblom
> > > > >>> <mattias.ronnblom@ericsson.com> wrote:
> > > > >>>> +static double
> > > > >>>> +benchmark_access_method(void (*init_fun)(void), void
> > > (*update_fun)(void))
> > > > >>>> +{
> > > > >>>> +       uint64_t i;
> > > > >>>> +       uint64_t start;
> > > > >>>> +       uint64_t end;
> > > > >>>> +       double latency;
> > > > >>>> +
> > > > >>>> +       init_fun();
> > > > >>>> +
> > > > >>>> +       start = rte_get_timer_cycles();
> > > > >>>> +
> > > > >>>> +       for (i = 0; i < ITERATIONS; i++)
> > > > >>>> +               update_fun();
> > > > >>>> +
> > > > >>>> +       end = rte_get_timer_cycles();
> > > > >>>
> > > > >>> Use precise variant. rte_rdtsc_precise() or so to be accurate
> > > > >>
> > > > >> With 1e7 iterations, do you need rte_rdtsc_precise()? I suspect not.
> > > > >
> > > > > I was thinking in another way, with 1e7 iteration, the additional
> > > > > barrier on precise will be amortized, and we get more _deterministic_
> > > > > behavior e.s.p in case if we print cycles and if we need to catch
> > > > > regressions.
> > > >
> > > > If you time a section of code which spends ~40000000 cycles, it doesn't
> > > > matter if you add or remove a few cycles at the beginning and the end.
> > > >
> > > > The rte_rdtsc_precise() is both better (more precise in the sense of
> > > > more serialization), and worse (because it's more costly, and thus more
> > > > intrusive).
> > >
> > > We can calibrate the overhead to remove the cost.
> > >
> > > >
> > > > You can use rte_rdtsc_precise(), rte_rdtsc(), or gettimeofday(). It
> > > > doesn't matter.
> > >
> > > Yes. In this setup and it is pretty inaccurate PER iteration. Please
> > > refer to the below patch to see the difference.
> >
> > No, Mattias is right. The time is sampled once before the loop, then the
> function is executed 10 million (ITERATIONS) times in the loop, and then the
> time is sampled once again.
> 
> No. I am not disagreeing. That why I said, “Yes. In this setup”.

Sorry, I misunderstood. Then we're all on the same page here. :-)

> 
> All I am saying, there is a more accurate way of doing measurement for
> this test along with “data” at
> https://mails.dpdk.org/archives/dev/2024-September/301227.html
> 
> 
> >
> > So the overhead and accuracy of the timing function is amortized across the
> 10 million calls to the function being measured, and becomes insignificant.
> >
> > Other perf tests also do it this way, and also use rte_get_timer_cycles().
> E.g. the mempool_perf test.
> >
> > Another detail: The for loop itself may cost a few cycles, which may not be
> irrelevant when measuring a function using very few cycles. If the compiler
> doesn't unroll the loop, it should be done manually:
> >
> >         for (i = 0; i < ITERATIONS / 100; i++) {
> >                 update_fun();
> >                 update_fun();
> >                 ... repeated 100 times
> 
> I have done a similar scheme for trace perf for inline function test
> at https://github.com/DPDK/dpdk/blob/main/app/test/test_trace_perf.c#L30

Nice macro. :-)

> 
> Either the above scheme or the below scheme needs to be used as
> mentioned in https://mails.dpdk.org/archives/dev/2024-September/301227.html
> 
> +       for (i = 0; i < ITERATIONS; i++) {
> +               start = rte_rdtsc_precise();
>                 update_fun();
> +               end = rte_rdtsc_precise();
> +               latency += (end - start) - tsc_latency;
> +       }
> 

I prefer reading the timestamps outside the loop.
If there is any jitter in the execution time (or cycles used) by rte_rdtsc_precise(), it gets amortized when used outside the loop. If used inside the loop, the jitter adds up, and may affect the result.

On the other hand, I guess using rte_rdtsc_precise() inside the loop may show different results, due to its memory barriers. I don't know; just speculating.

Maybe we want to use both methods to measure this? After all, we are measuring the time to access frequently used variables in hot parts of the code, as implemented by three different design patterns, and performance here is quite important.

And if we want to subtract the overhead from rte_rdtsc_precise() itself - which I think is a good idea if used inside the loop - we probably need another loop to measure that, rather than just calling it twice and subtracting the returned values.
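
A minimal sketch of such a calibration loop (CALIBRATION_ITERATIONS is a hypothetical constant; this is not code from the patch):

#define CALIBRATION_ITERATIONS 10000

static double
tsc_overhead(void)
{
	uint64_t start, end, i;

	start = rte_rdtsc_precise();
	for (i = 0; i < CALIBRATION_ITERATIONS; i++)
		(void)rte_rdtsc_precise();
	end = rte_rdtsc_precise();

	/* average cost of a single rte_rdtsc_precise() call */
	return (end - start) / (double)CALIBRATION_ITERATIONS;
}

The averaged figure could then be subtracted per iteration, instead of a value derived from a single pair of back-to-back calls.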

> 
> 
> 
> >         }
> >
> >
> > >
> > > Patch 1: Make nanoseconds to cycles per iteration
> > > ------------------------------------------------------------------
> > >
> > > diff --git a/app/test/test_lcore_var_perf.c
> b/app/test/test_lcore_var_perf.c
> > > index ea1d7ba90b52..b8d25400f593 100644
> > > --- a/app/test/test_lcore_var_perf.c
> > > +++ b/app/test/test_lcore_var_perf.c
> > > @@ -110,7 +110,7 @@ benchmark_access_method(void (*init_fun)(void),
> > > void (*update_fun)(void))
> > >
> > >         end = rte_get_timer_cycles();
> > >
> > > -       latency = ((end - start) / (double)rte_get_timer_hz()) /
> ITERATIONS;
> > > +       latency = ((end - start)) / ITERATIONS;
> >
> > This calculation uses integer arithmetic, which will round down the
> resulting latency.
> > Please use floating point arithmetic: latency = (end - start) /
> (double)ITERATIONS;
> 
> Yup. It is in patch 2
> https://mails.dpdk.org/archives/dev/2024-September/301227.html

Yep; my comment was mostly meant for Mattias: if he switches from nanoseconds to cycles, he should remember to use floating point calculation here.

> 
> >
> > >
> > >         return latency;
> > >  }
> > > @@ -137,8 +137,7 @@ test_lcore_var_access(void)
> > >
> > > -       printf("Latencies [ns/update]\n");
> > > +       printf("Latencies [cycles/update]\n");
> > >         printf("Thread-local storage  Static array  Lcore variables\n");
> > > -       printf("%20.1f %13.1f %16.1f\n", tls_latency * 1e9,
> > > -              sarray_latency * 1e9, lvar_latency * 1e9);
> > > +       printf("%20.1f %13.1f %16.1f\n", tls_latency, sarray_latency,
> > > lvar_latency);
> > >
> > >         return TEST_SUCCESS;
> > >  }

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v3 3/7] eal: add lcore variable performance test
  2024-09-13 11:23                                           ` Jerin Jacob
  2024-09-13 14:40                                             ` Morten Brørup
@ 2024-09-16 10:50                                             ` Mattias Rönnblom
  2024-09-18 10:04                                               ` Jerin Jacob
  1 sibling, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-16 10:50 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Mattias Rönnblom, dev, Morten Brørup,
	Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Jerin Jacob

On 2024-09-13 13:23, Jerin Jacob wrote:
> On Fri, Sep 13, 2024 at 12:17 PM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>>
>> On 2024-09-12 17:11, Jerin Jacob wrote:
>>> On Thu, Sep 12, 2024 at 6:50 PM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>>>>
>>>> On 2024-09-12 15:09, Jerin Jacob wrote:
>>>>> On Thu, Sep 12, 2024 at 2:34 PM Mattias Rönnblom
>>>>> <mattias.ronnblom@ericsson.com> wrote:
>>>>>>
>>>>>> Add basic micro benchmark for lcore variables, in an attempt to assure
>>>>>> that the overhead isn't significantly greater than alternative
>>>>>> approaches, in scenarios where the benefits aren't expected to show up
>>>>>> (i.e., when plenty of cache is available compared to the working set
>>>>>> size of the per-lcore data).
>>>>>>
>>>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>>>>> ---
>>>>>>     app/test/meson.build           |   1 +
>>>>>>     app/test/test_lcore_var_perf.c | 160 +++++++++++++++++++++++++++++++++
>>>>>>     2 files changed, 161 insertions(+)
>>>>>>     create mode 100644 app/test/test_lcore_var_perf.c
>>>>>
>>>>>
>>>>>> +static double
>>>>>> +benchmark_access_method(void (*init_fun)(void), void (*update_fun)(void))
>>>>>> +{
>>>>>> +       uint64_t i;
>>>>>> +       uint64_t start;
>>>>>> +       uint64_t end;
>>>>>> +       double latency;
>>>>>> +
>>>>>> +       init_fun();
>>>>>> +
>>>>>> +       start = rte_get_timer_cycles();
>>>>>> +
>>>>>> +       for (i = 0; i < ITERATIONS; i++)
>>>>>> +               update_fun();
>>>>>> +
>>>>>> +       end = rte_get_timer_cycles();
>>>>>
>>>>> Use precise variant. rte_rdtsc_precise() or so to be accurate
>>>>
>>>> With 1e7 iterations, do you need rte_rdtsc_precise()? I suspect not.
>>>
>>> I was thinking in another way, with 1e7 iteration, the additional
>>> barrier on precise will be amortized, and we get more _deterministic_
>>> behavior e.s.p in case if we print cycles and if we need to catch
>>> regressions.
>>
>> If you time a section of code which spends ~40000000 cycles, it doesn't
>> matter if you add or remove a few cycles at the beginning and the end.
>>
>> The rte_rdtsc_precise() is both better (more precise in the sense of
>> more serialization), and worse (because it's more costly, and thus more
>> intrusive).
> 
> We can calibrate the overhead to remove the cost.
> 
What you are interested in is primarily the impact on (instruction) 
throughput, not the latency of the sequence of instructions that must be 
retired in order to load the lcore variable values, when you switch from
(say) lcore id-indexed static arrays to lcore variables in your module.

Usually, there is no reason to make a distinction between latency and 
throughput in this context, but as you zoom into very short snippets of 
code being executed, the difference becomes relevant. For example, 
adding a div instruction won't necessarily add 12 cc to your program's 
execution time on a Zen 4, even though that is its latency. Rather, the 
effects may, depending on data dependencies and what other instructions 
are executed in parallel, be much smaller.

So, one could argue the ILP you get with the loop is a feature, not a bug.
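
To illustrate the distinction with a generic sketch (not benchmark code from the patch): a loop whose iterations form a data dependency chain is bound by instruction latency, while independent iterations may overlap and are bound by throughput.

#include <stdint.h>

static uint64_t
latency_bound(uint64_t x, unsigned int n)
{
	unsigned int i;

	/* each step depends on the previous one; latencies add up */
	for (i = 0; i < n; i++)
		x = x * 7 + 1;

	return x;
}

static uint64_t
throughput_bound(unsigned int n)
{
	uint64_t acc0 = 0, acc1 = 0;
	unsigned int i;

	/* two independent chains; the CPU may retire them in parallel */
	for (i = 0; i < n; i += 2) {
		acc0 += i;
		acc1 += i + 1;
	}

	return acc0 + acc1;
}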

With or without per-iteration latency measurements, these benchmarks are 
not very useful at best, and misleading at worst. I will rework them to 
include more than a single module/lcore variable, which I think would be 
somewhat of an improvement.

Even better would be to have some real domain logic, instead of just a 
dummy multiplication.
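
For example (a hypothetical sketch using the macros from this patch set, not a proposed test), the update function could maintain per-lcore packet statistics instead of multiplying a dummy value:

struct lcore_stats {
	uint64_t pkts;
	uint64_t bytes;
};

static RTE_LCORE_VAR_HANDLE(struct lcore_stats, lcore_stats);

/* assumes RTE_LCORE_VAR_ALLOC(lcore_stats) was called, e.g., in an
 * RTE_INIT() constructor
 */
static void
stats_update(uint16_t pkt_len)
{
	struct lcore_stats *stats = RTE_LCORE_VAR_VALUE(lcore_stats);

	stats->pkts++;
	stats->bytes += pkt_len;
}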

>>
>> You can use rte_rdtsc_precise(), rte_rdtsc(), or gettimeofday(). It
>> doesn't matter.
> 
> Yes. In this setup and it is pretty inaccurate PER iteration. Please
> refer to the below patch to see the difference.
> 
> Patch 1: Make nanoseconds to cycles per iteration
> ------------------------------------------------------------------
> 
> diff --git a/app/test/test_lcore_var_perf.c b/app/test/test_lcore_var_perf.c
> index ea1d7ba90b52..b8d25400f593 100644
> --- a/app/test/test_lcore_var_perf.c
> +++ b/app/test/test_lcore_var_perf.c
> @@ -110,7 +110,7 @@ benchmark_access_method(void (*init_fun)(void),
> void (*update_fun)(void))
> 
>          end = rte_get_timer_cycles();
> 
> -       latency = ((end - start) / (double)rte_get_timer_hz()) / ITERATIONS;
> +       latency = ((end - start)) / ITERATIONS;
> 
>          return latency;
>   }
> @@ -137,8 +137,7 @@ test_lcore_var_access(void)
> 
> -       printf("Latencies [ns/update]\n");
> +       printf("Latencies [cycles/update]\n");
>          printf("Thread-local storage  Static array  Lcore variables\n");
> -       printf("%20.1f %13.1f %16.1f\n", tls_latency * 1e9,
> -              sarray_latency * 1e9, lvar_latency * 1e9);
> +       printf("%20.1f %13.1f %16.1f\n", tls_latency, sarray_latency,
> lvar_latency);
> 
>          return TEST_SUCCESS;
>   }
> 
> 
> Patch 2: Change to precise with calibration
> -----------------------------------------------------------
> 
> diff --git a/app/test/test_lcore_var_perf.c b/app/test/test_lcore_var_perf.c
> index ea1d7ba90b52..8142ecd56241 100644
> --- a/app/test/test_lcore_var_perf.c
> +++ b/app/test/test_lcore_var_perf.c
> @@ -96,23 +96,28 @@ lvar_update(void)
>   static double
>   benchmark_access_method(void (*init_fun)(void), void (*update_fun)(void))
>   {
> -       uint64_t i;
> +       double tsc_latency;
> +       double latency;
>          uint64_t start;
>          uint64_t end;
> -       double latency;
> +       uint64_t i;
> 
> -       init_fun();
> +       /* calculate rte_rdtsc_precise overhead */
> +       start = rte_rdtsc_precise();
> +       end = rte_rdtsc_precise();
> +       tsc_latency = (end - start);
> 
> -       start = rte_get_timer_cycles();
> +       init_fun();
> 
> -       for (i = 0; i < ITERATIONS; i++)
> +       latency = 0;
> +       for (i = 0; i < ITERATIONS; i++) {
> +               start = rte_rdtsc_precise();
>                  update_fun();
> +               end = rte_rdtsc_precise();
> +               latency += (end - start) - tsc_latency;
> +       }
> 
> -       end = rte_get_timer_cycles();
> -
> -       latency = ((end - start) / (double)rte_get_timer_hz()) / ITERATIONS;
> -
> -       return latency;
> +       return latency / (double)ITERATIONS;
>   }
> 
>   static int
> @@ -135,10 +140,9 @@ test_lcore_var_access(void)
>          sarray_latency = benchmark_access_method(sarray_init, sarray_update);
>          lvar_latency = benchmark_access_method(lvar_init, lvar_update);
> 
> -       printf("Latencies [ns/update]\n");
> +       printf("Latencies [cycles/update]\n");
>          printf("Thread-local storage  Static array  Lcore variables\n");
> -       printf("%20.1f %13.1f %16.1f\n", tls_latency * 1e9,
> -              sarray_latency * 1e9, lvar_latency * 1e9);
> +       printf("%20.1f %13.1f %16.1f\n", tls_latency, sarray_latency,
> lvar_latency);
> 
>          return TEST_SUCCESS;
>   }
> 
> ARM N2 core with patch 1(aka current scheme)
> -----------------------------------
> 
>   + ------------------------------------------------------- +
>   + Test Suite : lcore variable perf autotest
>   + ------------------------------------------------------- +
> Latencies [cycles/update]
> Thread-local storage  Static array  Lcore variables
>                   7.0           7.0              7.0
> 
> 
> ARM N2 core with patch 2
> -----------------------------------
> 
>   + ------------------------------------------------------- +
>   + Test Suite : lcore variable perf autotest
>   + ------------------------------------------------------- +
> Latencies [cycles/update]
> Thread-local storage  Static array  Lcore variables
>                  11.4          15.5             15.5
> 
> x86 i9 core with patch 1(aka current scheme)
> ------------------------------------------------------------
> 
>   + ------------------------------------------------------- +
>   + Test Suite : lcore variable perf autotest
>   + ------------------------------------------------------- +
> Latencies [ns/update]
> Thread-local storage  Static array  Lcore variables
>                   5.0           6.0              6.0
> 
> x86 i9 core with patch 2
> --------------------------------
>   + ------------------------------------------------------- +
>   + Test Suite : lcore variable perf autotest
>   + ------------------------------------------------------- +
> Latencies [cycles/update]
> Thread-local storage  Static array  Lcore variables
>                   5.3          10.6             11.7
> 
> 
> 
> 
> 
>>
>>> Furthermore, you may consider replacing rte_random() in fast path to
>>> running number or so if it is not deterministic in cycle computation.
>>
>> rte_rand() is not used in the fast path. I don't understand what you
> 
> I missed that. Ignore this comment.
> 
>> mean by "running number".

^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v4 0/7]  Lcore variables
  2024-09-12  8:44                                 ` [PATCH v3 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-09-16 10:52                                   ` Mattias Rönnblom
  2024-09-16 10:52                                     ` [PATCH v4 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
                                                       ` (6 more replies)
  0 siblings, 7 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-16 10:52 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

This patch set introduces a new API <rte_lcore_var.h> for static
per-lcore id data allocation.

Please refer to the <rte_lcore_var.h> API documentation for both a
rationale for this new API, and a comparison to the alternatives
available.

The adoption of this API would affect many different DPDK modules, but
the author updated only a few, mostly to serve as examples in this
RFC, and to iron out some, but surely not all, wrinkles in the API.

The question on how to best allocate static per-lcore memory has been
up several times on the dev mailing list, for example in the thread on
"random: use per lcore state" RFC by Stephen Hemminger.

Lcore variables are surely not the answer to all your per-lcore-data
needs, since it only allows for more-or-less static allocation. In the
author's opinion, it does however provide a reasonably simple and
clean and seemingly very much performant solution to a real problem.

Mattias Rönnblom (7):
  eal: add static per-lcore memory allocation facility
  eal: add lcore variable functional tests
  eal: add lcore variable performance test
  random: keep PRNG state in lcore variable
  power: keep per-lcore state in lcore variable
  service: keep per-lcore state in lcore variable
  eal: keep per-lcore power intrinsics state in lcore variable

 MAINTAINERS                            |   6 +
 app/test/meson.build                   |   2 +
 app/test/test_lcore_var.c              | 432 +++++++++++++++++++++++++
 app/test/test_lcore_var_perf.c         | 244 ++++++++++++++
 config/rte_config.h                    |   1 +
 doc/api/doxy-api-index.md              |   1 +
 doc/guides/rel_notes/release_24_11.rst |  14 +
 lib/eal/common/eal_common_lcore_var.c  |  78 +++++
 lib/eal/common/meson.build             |   1 +
 lib/eal/common/rte_random.c            |  28 +-
 lib/eal/common/rte_service.c           | 115 ++++---
 lib/eal/include/meson.build            |   1 +
 lib/eal/include/rte_lcore_var.h        | 385 ++++++++++++++++++++++
 lib/eal/version.map                    |   2 +
 lib/eal/x86/rte_power_intrinsics.c     |  17 +-
 lib/power/rte_power_pmd_mgmt.c         |  34 +-
 16 files changed, 1274 insertions(+), 87 deletions(-)
 create mode 100644 app/test/test_lcore_var.c
 create mode 100644 app/test/test_lcore_var_perf.c
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v4 1/7] eal: add static per-lcore memory allocation facility
  2024-09-16 10:52                                   ` [PATCH v4 0/7] Lcore variables Mattias Rönnblom
@ 2024-09-16 10:52                                     ` Mattias Rönnblom
  2024-09-16 14:02                                       ` Konstantin Ananyev
  2024-09-17 14:32                                       ` [PATCH v5 0/7] Lcore variables Mattias Rönnblom
  2024-09-16 10:52                                     ` [PATCH v4 2/7] eal: add lcore variable functional tests Mattias Rönnblom
                                                       ` (5 subsequent siblings)
  6 siblings, 2 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-16 10:52 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Introduce DPDK per-lcore id variables, or lcore variables for short.

An lcore variable has one value for every current and future lcore
id-equipped thread.

The primary <rte_lcore_var.h> use case is for statically allocating
small, frequently-accessed data structures, for which one instance
should exist for each lcore.

Lcore variables are similar to thread-local storage (TLS, e.g., C11
_Thread_local), but decouple the values' lifetime from that of the
threads.

Lcore variables are also similar in terms of functionality to that
provided by the FreeBSD kernel's DPCPU_*() family of macros and the
associated build-time machinery. DPCPU uses linker scripts, which
effectively prevents the reuse of its otherwise seemingly viable
approach.

The currently-prevailing way to solve the same problem as lcore
variables is to keep a module's per-lcore data as an
RTE_MAX_LCORE-sized array of cache-aligned, RTE_CACHE_GUARDed
structs. The benefit of lcore variables over this approach is that
data related to the same lcore is now close (spatially, in memory),
rather than data used by the same module, which in turn avoids
excessive use of padding and the cache pollution with unused data
that padding causes.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

PATCH v2:
 * Add Windows support. (Morten Brørup)
 * Fix lcore variables API index reference. (Morten Brørup)
 * Various improvements of the API documentation. (Morten Brørup)
 * Elimination of unused symbol in version.map. (Morten Brørup)

PATCH:
 * Update MAINTAINERS and release notes.
 * Stop covering included files in extern "C" {}.

RFC v6:
 * Include <stdlib.h> to get aligned_alloc().
 * Tweak documentation (grammar).
 * Provide API-level guarantees that lcore variable values take on an
   initial value of zero.
 * Fix misplaced __rte_cache_aligned in the API doc example.

RFC v5:
 * In Doxygen, consistently use @<cmd> (and not \<cmd>).
 * The RTE_LCORE_VAR_GET() and SET() convenience access macros
   covered an uncommon use case, where the lcore value is of a
   primitive type, rather than a struct, and are thus eliminated
   from the API. (Morten Brørup)
 * In the wake of the GET()/SET() removal, rename RTE_LCORE_VAR_PTR()
   to RTE_LCORE_VAR_VALUE().
 * The underscores are removed from __rte_lcore_var_lcore_ptr() to
   signal that this function is a part of the public API.
 * Macro arguments are documented.

RFC v4:
 * Replace large static array with libc heap-allocated memory. One
   implication of this change is that there no longer exists a fixed
   upper bound for the total amount of memory used by lcore variables.
   RTE_MAX_LCORE_VAR has changed meaning, and now represents the
   maximum size of any individual lcore variable value.
 * Fix issues in example. (Morten Brørup)
 * Improve access macro type checking. (Morten Brørup)
 * Refer to the lcore variable handle as "handle" and not "name" in
   various macros.
 * Document lack of thread safety in rte_lcore_var_alloc().
 * Provide API-level assurance that the lcore variable handle is
   always non-NULL, to allow applications to use NULL to mean
   "not yet allocated".
 * Note zero-sized allocations are not allowed.
 * Give API-level guarantee the lcore variable values are zeroed.

RFC v3:
 * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
 * Update example to reflect FOREACH macro name change (in RFC v2).

RFC v2:
 * Use alignof to derive alignment requirements. (Morten Brørup)
 * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
   *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
 * Allow user-specified alignment, but limit max to cache line size.
---
 MAINTAINERS                            |   6 +
 config/rte_config.h                    |   1 +
 doc/api/doxy-api-index.md              |   1 +
 doc/guides/rel_notes/release_24_11.rst |  14 +
 lib/eal/common/eal_common_lcore_var.c  |  78 +++++
 lib/eal/common/meson.build             |   1 +
 lib/eal/include/meson.build            |   1 +
 lib/eal/include/rte_lcore_var.h        | 385 +++++++++++++++++++++++++
 lib/eal/version.map                    |   2 +
 9 files changed, 489 insertions(+)
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

diff --git a/MAINTAINERS b/MAINTAINERS
index c5a703b5c0..362d9a3f28 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -282,6 +282,12 @@ F: lib/eal/include/rte_random.h
 F: lib/eal/common/rte_random.c
 F: app/test/test_rand_perf.c
 
+Lcore Variables
+M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
+F: lib/eal/include/rte_lcore_var.h
+F: lib/eal/common/eal_common_lcore_var.c
+F: app/test/test_lcore_var.c
+
 ARM v7
 M: Wathsala Vithanage <wathsala.vithanage@arm.com>
 F: config/arm/
diff --git a/config/rte_config.h b/config/rte_config.h
index dd7bb0d35b..311692e498 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -41,6 +41,7 @@
 /* EAL defines */
 #define RTE_CACHE_GUARD_LINES 1
 #define RTE_MAX_HEAPS 32
+#define RTE_MAX_LCORE_VAR 1048576
 #define RTE_MAX_MEMSEG_LISTS 128
 #define RTE_MAX_MEMSEG_PER_LIST 8192
 #define RTE_MAX_MEM_MB_PER_LIST 32768
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index f9f0300126..ed577f14ee 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -99,6 +99,7 @@ The public API headers are grouped by topics:
   [interrupts](@ref rte_interrupts.h),
   [launch](@ref rte_launch.h),
   [lcore](@ref rte_lcore.h),
+  [lcore variables](@ref rte_lcore_var.h),
   [per-lcore](@ref rte_per_lcore.h),
   [service cores](@ref rte_service.h),
   [keepalive](@ref rte_keepalive.h),
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 0ff70d9057..a3884f7491 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -55,6 +55,20 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added EAL per-lcore static memory allocation facility.**
+
+    Added EAL API <rte_lcore_var.h> for statically allocating small,
+    frequently-accessed data structures, for which one instance should
+    exist for each EAL thread and registered non-EAL thread.
+
+    With lcore variables, data is organized spatially on a per-lcore id
+    basis, rather than per library or PMD, avoiding the need for cache
+    aligning (or RTE_CACHE_GUARDing) data structures, which in turn
+    reduces CPU cache internal fragmentation, improving performance.
+
+    Lcore variables are similar to thread-local storage (TLS, e.g.,
+    C11 _Thread_local), but decouple the values' lifetime from that
+    of the threads.
 
 Removed Items
 -------------
diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
new file mode 100644
index 0000000000..309822039b
--- /dev/null
+++ b/lib/eal/common/eal_common_lcore_var.c
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdlib.h>
+
+#ifdef RTE_EXEC_ENV_WINDOWS
+#include <malloc.h>
+#endif
+
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_log.h>
+
+#include <rte_lcore_var.h>
+
+#include "eal_private.h"
+
+#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
+
+static void *lcore_buffer;
+static size_t offset = RTE_MAX_LCORE_VAR;
+
+static void *
+lcore_var_alloc(size_t size, size_t align)
+{
+	void *handle;
+	void *value;
+
+	offset = RTE_ALIGN_CEIL(offset, align);
+
+	if (offset + size > RTE_MAX_LCORE_VAR) {
+#ifdef RTE_EXEC_ENV_WINDOWS
+		lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
+					       RTE_CACHE_LINE_SIZE);
+#else
+		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
+					     LCORE_BUFFER_SIZE);
+#endif
+		RTE_VERIFY(lcore_buffer != NULL);
+
+		offset = 0;
+	}
+
+	handle = RTE_PTR_ADD(lcore_buffer, offset);
+
+	offset += size;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
+		memset(value, 0, size);
+
+	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
+		"%"PRIuPTR"-byte alignment", size, align);
+
+	return handle;
+}
+
+void *
+rte_lcore_var_alloc(size_t size, size_t align)
+{
+	/* Having the per-lcore buffer size aligned on cache lines,
+	 * as well as having the base pointer aligned on cache line
+	 * size, assures that aligned offsets also translate to aligned
+	 * pointers across all values.
+	 */
+	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
+	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
+	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
+
+	/* '0' means asking for worst-case alignment requirements */
+	if (align == 0)
+		align = alignof(max_align_t);
+
+	RTE_ASSERT(rte_is_power_of_2(align));
+
+	return lcore_var_alloc(size, align);
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 22a626ba6f..d41403680b 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -18,6 +18,7 @@ sources += files(
         'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
+        'eal_common_lcore_var.c',
         'eal_common_mcfg.c',
         'eal_common_memalloc.c',
         'eal_common_memory.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index e94b056d46..9449253e23 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -27,6 +27,7 @@ headers += files(
         'rte_keepalive.h',
         'rte_launch.h',
         'rte_lcore.h',
+        'rte_lcore_var.h',
         'rte_lock_annotations.h',
         'rte_malloc.h',
         'rte_mcslock.h',
diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
new file mode 100644
index 0000000000..ec3ab714a8
--- /dev/null
+++ b/lib/eal/include/rte_lcore_var.h
@@ -0,0 +1,385 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#ifndef _RTE_LCORE_VAR_H_
+#define _RTE_LCORE_VAR_H_
+
+/**
+ * @file
+ *
+ * RTE Lcore variables
+ *
+ * This API provides a mechanism to create and access per-lcore id
+ * variables in a space- and cycle-efficient manner.
+ *
+ * A per-lcore id variable (or lcore variable for short) has one value
+ * for each EAL thread and registered non-EAL thread. There is one
+ * instance for each current and future lcore id-equipped thread, with
+ * a total of RTE_MAX_LCORE instances. The value of an lcore variable
+ * for a particular lcore id is independent from other values (for
+ * other lcore ids) within the same lcore variable.
+ *
+ * In order to access the values of an lcore variable, a handle is
+ * used. The type of the handle is a pointer to the value's type
+ * (e.g., for a @c uint32_t lcore variable, the handle is a
+ * <code>uint32_t *</code>). The handle type is used to inform the
+ * access macros of the type of the values. A handle may be passed
+ * between modules and threads just like any pointer, but its value
+ * must be treated as an opaque identifier. An allocated handle
+ * never has the value NULL.
+ *
+ * @b Creation
+ *
+ * An lcore variable is created in two steps:
+ *  1. Define an lcore variable handle by using @ref RTE_LCORE_VAR_HANDLE.
+ *  2. Allocate lcore variable storage and initialize the handle with
+ *     a unique identifier by @ref RTE_LCORE_VAR_ALLOC or
+ *     @ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the time of
+ *     module initialization, but may be done at any time.
+ *
+ * An lcore variable is not tied to the owning thread's lifetime. It's
+ * available for use by any thread immediately after having been
+ * allocated, and continues to be available throughout the lifetime of
+ * the EAL.
+ *
+ * Lcore variables cannot and need not be freed.
+ *
+ * @b Access
+ *
+ * The value of any lcore variable for any lcore id may be accessed
+ * from any thread (including unregistered threads), but it should
+ * only be *frequently* read from or written to by the owner.
+ *
+ * Values of the same lcore variable but owned by two different lcore
+ * ids may be frequently read or written by the owners without risking
+ * false sharing.
+ *
+ * An appropriate synchronization mechanism (e.g., atomic loads and
+ * stores) should be employed to assure there are no data races between
+ * the owning thread and any non-owner threads accessing the same
+ * lcore variable instance.
+ *
+ * The value of the lcore variable for a particular lcore id is
+ * accessed using @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * A common pattern is for an EAL thread or a registered non-EAL
+ * thread to access its own lcore variable value. For this purpose, a
+ * short-hand exists in the form of @ref RTE_LCORE_VAR_VALUE.
+ *
+ * Although the handle (as defined by @ref RTE_LCORE_VAR_HANDLE) is a
+ * pointer with the same type as the value, it may not be directly
+ * dereferenced and must be treated as an opaque identifier.
+ *
+ * Lcore variable handles and value pointers may be freely passed
+ * between different threads.
+ *
+ * @b Storage
+ *
+ * An lcore variable's values may be of a primitive type like @c int,
+ * but would more typically be a @c struct.
+ *
+ * The lcore variable handle introduces a per-variable (not
+ * per-value/per-lcore id) overhead of @c sizeof(void *) bytes, so
+ * there are some memory footprint gains to be made by organizing all
+ * per-lcore id data for a particular module as one lcore variable
+ * (e.g., as a struct).
+ *
+ * An application may choose to define an lcore variable handle, which
+ * it then never allocates.
+ *
+ * The size of an lcore variable's value must be less than the DPDK
+ * build-time constant @c RTE_MAX_LCORE_VAR.
+ *
+ * The lcore variable values are stored in a series of lcore buffers, which
+ * are allocated from the libc heap. Heap allocation failures are
+ * treated as fatal.
+ *
+ * Lcore variables should generally *not* be @ref __rte_cache_aligned
+ * and need *not* include a @ref RTE_CACHE_GUARD field, since the use
+ * of these constructs is designed to avoid false sharing. In the
+ * case of an lcore variable instance, the thread most recently
+ * accessing nearby data structures should almost-always be the lcore
+ * variables' owner. Adding padding will increase the effective memory
+ * working set size, potentially reducing performance.
+ *
+ * Lcore variable values take on an initial value of zero.
+ *
+ * @b Example
+ *
+ * Below is an example of the use of an lcore variable:
+ *
+ * @code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ * };
+ *
+ * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
+ *
+ * long foo_get_a_plus_b(void)
+ * {
+ *         struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(lcore_states);
+ *
+ *         return state->a + state->b;
+ * }
+ *
+ * RTE_INIT(rte_foo_init)
+ * {
+ *         RTE_LCORE_VAR_ALLOC(lcore_states);
+ *
+ *         struct foo_lcore_state *state;
+ *         RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
+ *                 (initialize 'state')
+ *         }
+ *
+ *         (other initialization)
+ * }
+ * @endcode
+ *
+ *
+ * @b Alternatives
+ *
+ * Lcore variables are designed to replace a pattern exemplified below:
+ * @code{.c}
+ * struct __rte_cache_aligned foo_lcore_state {
+ *         int a;
+ *         long b;
+ *         RTE_CACHE_GUARD;
+ * };
+ *
+ * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
+ * @endcode
+ *
+ * This scheme is simple and effective, but has one drawback: the data
+ * is organized so that objects related to all lcores for a particular
+ * module are kept close in memory. At a bare minimum, this requires
+ * sizing data structures (e.g., using `__rte_cache_aligned`) to an
+ * even number of cache lines to avoid false sharing. With CPU
+ * hardware prefetching and memory loads resulting from speculative
+ * execution (functions which seemingly are getting more eager faster
+ * than they are getting more intelligent), one or more "guard" cache
+ * lines may be required to separate one lcore's data from another's.
+ *
+ * Lcore variables have the upside of working with, not against, the
+ * CPU's assumptions and for example next-line prefetchers may well
+ * work the way their designers intended (i.e., to the benefit, not
+ * detriment, of system performance).
+ *
+ * Another alternative to @ref rte_lcore_var.h is the @ref
+ * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
+ * e.g., GCC __thread or C11 _Thread_local). The main differences
+ * between using the various forms of TLS (e.g., @ref
+ * RTE_DEFINE_PER_LCORE or _Thread_local) and the use of lcore
+ * variables are:
+ *
+ *   * The existence and non-existence of a thread-local variable
+ *     instance follow that of the particular thread. The data cannot be
+ *     accessed before the thread has been created, nor after it has
+ *     exited. As a result, thread-local variables must be initialized in
+ *     a "lazy" manner (e.g., at the point of thread creation). Lcore
+ *     variables may be accessed immediately after having been
+ *     allocated (which may be prior to any thread beyond the main
+ *     thread is running).
+ *   * A thread-local variable is duplicated across all threads in the
+ *     process, including unregistered non-EAL threads (i.e.,
+ *     "regular" threads). For DPDK applications heavily relying on
+ *     multi-threading (in conjunction with DPDK's "one thread per core"
+ *     pattern), either by having many concurrent threads or
+ *     creating/destroying threads at a high rate, an excessive use of
+ *     thread-local variables may cause inefficiencies (e.g.,
+ *     increased thread creation overhead due to thread-local storage
+ *     initialization or increased total RAM footprint usage). Lcore
+ *     variables *only* exist for threads with an lcore id.
+ *   * Whether data in thread-local storage may be shared between threads
+ *     (i.e., whether a pointer to a thread-local variable can be passed to
+ *     and successfully dereferenced by a non-owning thread) depends on
+ *     the details of the TLS implementation. With GCC __thread and
+ *     GCC _Thread_local, such data sharing is supported. In the C11
+ *     standard, the result of accessing another thread's
+ *     _Thread_local object is implementation-defined. Lcore variable
+ *     instances may be accessed reliably by any thread.
+ */
+
+#include <stddef.h>
+#include <stdalign.h>
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_lcore.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Given the lcore variable type, produces the type of the lcore
+ * variable handle.
+ */
+#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
+	type *
+
+/**
+ * Define an lcore variable handle.
+ *
+ * This macro defines a variable which is used as a handle to access
+ * the various instances of a per-lcore id variable.
+ *
+ * The aim with this macro is to make clear at the point of
+ * declaration that this is an lcore handle, rather than a regular
+ * pointer.
+ *
+ * Add @b static as a prefix in case the lcore variable is only to be
+ * accessed from a particular translation unit.
+ */
+#define RTE_LCORE_VAR_HANDLE(type, name)	\
+	RTE_LCORE_VAR_HANDLE_TYPE(type) name
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, align)	\
+	handle = rte_lcore_var_alloc(size, align)
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle,
+ * with values aligned for any type of object.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE(handle, size)	\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, 0)
+
+/**
+ * Allocate space for an lcore variable of the size and alignment requirements
+ * suggested by the handle pointer type, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC(handle)					\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, sizeof(*(handle)),	\
+				       alignof(typeof(*(handle))))
+
+/**
+ * Allocate an explicitly-sized, explicitly-aligned lcore variable by
+ * means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
+	}
+
+/**
+ * Allocate an explicitly-sized lcore variable by means of a @ref
+ * RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
+	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
+
+/**
+ * Allocate an lcore variable by means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT(name)					\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC(name);				\
+	}
+
+/**
+ * Get void pointer to lcore variable instance with the specified
+ * lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
+ */
+static inline void *
+rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
+{
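+	/* instances are laid out RTE_MAX_LCORE_VAR bytes apart, with
+	 * the handle doubling as the address of the lcore id 0 value
+	 */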
+	return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
+}
+
+/**
+ * Get pointer to lcore variable instance with the specified lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle)			\
+	((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
+
+/**
+ * Get pointer to lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_VALUE(handle) \
+	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
+
+/**
+ * Iterate over each lcore id's value for an lcore variable.
+ *
+ * @param value
+ *   A pointer successively set to point to lcore variable value
+ *   corresponding to every lcore id (up to @c RTE_MAX_LCORE).
+ * @param handle
+ *   The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle)			\
+	for (unsigned int lcore_id =					\
+		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
+	     lcore_id < RTE_MAX_LCORE;					\
+	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))
+
+/**
+ * Allocate space in the per-lcore id buffers for an lcore variable.
+ *
+ * The pointer returned is only an opaque identifier of the variable. To
+ * get an actual pointer to a particular instance of the variable use
+ * @ref RTE_LCORE_VAR_VALUE or @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * The lcore variable values' memory is set to zero.
+ *
+ * The allocation is always successful, barring a fatal exhaustion of
+ * the per-lcore id buffer space.
+ *
+ * rte_lcore_var_alloc() is not multi-thread safe.
+ *
+ * @param size
+ *   The size (in bytes) of the variable's per-lcore id value. Must be > 0.
+ * @param align
+ *   If 0, the values will be suitably aligned for any kind of type
+ *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
+ *   on a multiple of *align*, which must be a power of 2 and equal or
+ *   less than @c RTE_CACHE_LINE_SIZE.
+ * @return
+ *   The variable's handle, stored in a void pointer value. The value
+ *   is always non-NULL.
+ */
+__rte_experimental
+void *
+rte_lcore_var_alloc(size_t size, size_t align);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LCORE_VAR_H_ */
diff --git a/lib/eal/version.map b/lib/eal/version.map
index e3ff412683..0c80bf7331 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -396,6 +396,8 @@ EXPERIMENTAL {
 
 	# added in 24.03
 	rte_vfio_get_device_info; # WINDOWS_NO_EXPORT
+
+	rte_lcore_var_alloc;
 };
 
 INTERNAL {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v4 2/7] eal: add lcore variable functional tests
  2024-09-16 10:52                                   ` [PATCH v4 0/7] Lcore variables Mattias Rönnblom
  2024-09-16 10:52                                     ` [PATCH v4 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-09-16 10:52                                     ` Mattias Rönnblom
  2024-09-16 10:52                                     ` [PATCH v4 3/7] eal: add lcore variable performance test Mattias Rönnblom
                                                       ` (4 subsequent siblings)
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-16 10:52 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Add functional test suite to exercise the <rte_lcore_var.h> API.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

RFC v5:
 * Adapt tests to reflect the removal of the GET() and SET() macros.

RFC v4:
 * Check all lcore id's values for all variables in the many variables
   test case.
 * Introduce test case for max-sized lcore variables.

RFC v2:
 * Improve alignment-related test coverage.
---
 app/test/meson.build      |   1 +
 app/test/test_lcore_var.c | 432 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 433 insertions(+)
 create mode 100644 app/test/test_lcore_var.c

diff --git a/app/test/meson.build b/app/test/meson.build
index e29258e6ec..48279522f0 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -103,6 +103,7 @@ source_file_deps = {
     'test_ipsec_sad.c': ['ipsec'],
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
+    'test_lcore_var.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var.c b/app/test/test_lcore_var.c
new file mode 100644
index 0000000000..e07d13460f
--- /dev/null
+++ b/app/test/test_lcore_var.c
@@ -0,0 +1,432 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_launch.h>
+#include <rte_lcore_var.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+#define MIN_LCORES 2
+
+RTE_LCORE_VAR_HANDLE(int, test_int);
+RTE_LCORE_VAR_HANDLE(char, test_char);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized);
+RTE_LCORE_VAR_HANDLE(short, test_short);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized_aligned);
+
+struct int_checker_state {
+	int old_value;
+	int new_value;
+	bool success;
+};
+
+static void
+rand_blk(void *blk, size_t size)
+{
+	size_t i;
+
+	for (i = 0; i < size; i++)
+		((unsigned char *)blk)[i] = (unsigned char)rte_rand();
+}
+
+static bool
+is_ptr_aligned(const void *ptr, size_t align)
+{
+	return ptr != NULL ? (uintptr_t)ptr % align == 0 : false;
+}
+
+static int
+check_int(void *arg)
+{
+	struct int_checker_state *state = arg;
+
+	int *ptr = RTE_LCORE_VAR_VALUE(test_int);
+
+	bool naturally_aligned = is_ptr_aligned(ptr, sizeof(int));
+
+	bool equal = *(RTE_LCORE_VAR_VALUE(test_int)) == state->old_value;
+
+	state->success = equal && naturally_aligned;
+
+	*ptr = state->new_value;
+
+	return 0;
+}
+
+RTE_LCORE_VAR_INIT(test_int);
+RTE_LCORE_VAR_INIT(test_char);
+RTE_LCORE_VAR_INIT_SIZE(test_long_sized, 32);
+RTE_LCORE_VAR_INIT(test_short);
+RTE_LCORE_VAR_INIT_SIZE_ALIGN(test_long_sized_aligned, sizeof(long),
+			      RTE_CACHE_LINE_SIZE);
+
+static int
+test_int_lvar(void)
+{
+	unsigned int lcore_id;
+
+	struct int_checker_state states[RTE_MAX_LCORE] = {};
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+
+		state->old_value = (int)rte_rand();
+		state->new_value = (int)rte_rand();
+
+		*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int) =
+			state->old_value;
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_int, &states[lcore_id], lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+		int value;
+
+		TEST_ASSERT(state->success, "Unexpected value "
+			    "encountered on lcore %d", lcore_id);
+
+		value = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int);
+		TEST_ASSERT_EQUAL(state->new_value, value,
+				  "Lcore %d failed to update int", lcore_id);
+	}
+
+	/* take the opportunity to test the foreach macro */
+	int *v;
+	lcore_id = 0;
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_int) {
+		TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
+				  "Unexpected value on lcore %d during "
+				  "iteration", lcore_id);
+		lcore_id++;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_sized_alignment(void)
+{
+	long *v;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized) {
+		TEST_ASSERT(is_ptr_aligned(v, alignof(long)),
+			    "Type-derived alignment failed");
+	}
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized_aligned) {
+		TEST_ASSERT(is_ptr_aligned(v, RTE_CACHE_LINE_SIZE),
+			    "Explicit alignment failed");
+	}
+
+	return TEST_SUCCESS;
+}
+
+/* a private, larger struct */
+#define TEST_STRUCT_DATA_SIZE 1234
+
+struct test_struct {
+	uint8_t data[TEST_STRUCT_DATA_SIZE];
+};
+
+static RTE_LCORE_VAR_HANDLE(char, before_struct);
+static RTE_LCORE_VAR_HANDLE(struct test_struct, test_struct);
+static RTE_LCORE_VAR_HANDLE(char, after_struct);
+
+struct struct_checker_state {
+	struct test_struct old_value;
+	struct test_struct new_value;
+	bool success;
+};
+
+static int check_struct(void *arg)
+{
+	struct struct_checker_state *state = arg;
+
+	struct test_struct *lcore_struct = RTE_LCORE_VAR_VALUE(test_struct);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_struct, alignof(struct test_struct));
+
+	bool equal = memcmp(lcore_struct->data, state->old_value.data,
+			    TEST_STRUCT_DATA_SIZE) == 0;
+
+	state->success = equal && properly_aligned;
+
+	memcpy(lcore_struct->data, state->new_value.data,
+	       TEST_STRUCT_DATA_SIZE);
+
+	return 0;
+}
+
+static int
+test_struct_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_struct);
+	RTE_LCORE_VAR_ALLOC(test_struct);
+	RTE_LCORE_VAR_ALLOC(after_struct);
+
+	struct struct_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+
+		rand_blk(state->old_value.data, TEST_STRUCT_DATA_SIZE);
+		rand_blk(state->new_value.data, TEST_STRUCT_DATA_SIZE);
+
+		memcpy(RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct)->data,
+		       state->old_value.data, TEST_STRUCT_DATA_SIZE);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_struct, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+		struct test_struct *lstruct =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = memcmp(lstruct->data, state->new_value.data,
+				    TEST_STRUCT_DATA_SIZE) == 0;
+
+		TEST_ASSERT(equal, "Lcore %d failed to update struct",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_struct);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_struct);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "struct was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "struct was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define TEST_ARRAY_SIZE 99
+
+typedef uint16_t test_array_t[TEST_ARRAY_SIZE];
+
+static void test_array_init_rand(test_array_t a)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		a[i] = (uint16_t)rte_rand();
+}
+
+static bool test_array_equal(test_array_t a, test_array_t b)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++) {
+		if (a[i] != b[i])
+			return false;
+	}
+	return true;
+}
+
+static void test_array_copy(test_array_t dst, const test_array_t src)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		dst[i] = src[i];
+}
+
+static RTE_LCORE_VAR_HANDLE(char, before_array);
+static RTE_LCORE_VAR_HANDLE(test_array_t, test_array);
+static RTE_LCORE_VAR_HANDLE(char, after_array);
+
+struct array_checker_state {
+	test_array_t old_value;
+	test_array_t new_value;
+	bool success;
+};
+
+static int check_array(void *arg)
+{
+	struct array_checker_state *state = arg;
+
+	test_array_t *lcore_array = RTE_LCORE_VAR_VALUE(test_array);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_array, alignof(test_array_t));
+
+	bool equal = test_array_equal(*lcore_array, state->old_value);
+
+	state->success = equal && properly_aligned;
+
+	test_array_copy(*lcore_array, state->new_value);
+
+	return 0;
+}
+
+static int
+test_array_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_array);
+	RTE_LCORE_VAR_ALLOC(test_array);
+	RTE_LCORE_VAR_ALLOC(after_array);
+
+	struct array_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+
+		test_array_init_rand(state->new_value);
+		test_array_init_rand(state->old_value);
+
+		test_array_copy(*RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+							   test_array),
+				state->old_value);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_array, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+		test_array_t *larray =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_array);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = test_array_equal(*larray, state->new_value);
+
+		TEST_ASSERT(equal, "Lcore %d failed to update array",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_array);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_array);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "array was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "array was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define MANY_LVARS (2 * RTE_MAX_LCORE_VAR / sizeof(uint32_t))
+
+static int
+test_many_lvars(void)
+{
+	uint32_t **handlers = malloc(sizeof(uint32_t *) * MANY_LVARS);
+	unsigned int i;
+
+	TEST_ASSERT(handlers != NULL, "Unable to allocate memory");
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		RTE_LCORE_VAR_ALLOC(handlers[i]);
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t *v =
+				RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handlers[i]);
+			*v = (uint32_t)(i * lcore_id);
+		}
+	}
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t v = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+								handlers[i]);
+			TEST_ASSERT_EQUAL((uint32_t)(i * lcore_id), v,
+					  "Unexpected lcore variable value on "
+					  "lcore %d", lcore_id);
+		}
+	}
+
+	free(handlers);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_large_lvar(void)
+{
+	RTE_LCORE_VAR_HANDLE(unsigned char, large);
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC_SIZE(large, RTE_MAX_LCORE_VAR);
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+
+		memset(ptr, (unsigned char)lcore_id, RTE_MAX_LCORE_VAR);
+	}
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+		size_t i;
+
+		for (i = 0; i < RTE_MAX_LCORE_VAR; i++)
+			TEST_ASSERT_EQUAL(ptr[i], (unsigned char)lcore_id,
+					  "Large lcore variable value is "
+					  "corrupted on lcore %d.",
+					  lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_int_lvar),
+		TEST_CASE(test_sized_alignment),
+		TEST_CASE(test_struct_lvar),
+		TEST_CASE(test_array_lvar),
+		TEST_CASE(test_many_lvars),
+		TEST_CASE(test_large_lvar),
+		TEST_CASES_END()
+	},
+};
+
+static int test_lcore_var(void)
+{
+	if (rte_lcore_count() < MIN_LCORES) {
+		printf("Not enough cores for lcore_var_autotest; expecting at "
+		       "least %d.\n", MIN_LCORES);
+		return TEST_SKIPPED;
+	}
+
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
+REGISTER_FAST_TEST(lcore_var_autotest, true, false, test_lcore_var);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v4 3/7] eal: add lcore variable performance test
  2024-09-16 10:52                                   ` [PATCH v4 0/7] Lcore variables Mattias Rönnblom
  2024-09-16 10:52                                     ` [PATCH v4 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-09-16 10:52                                     ` [PATCH v4 2/7] eal: add lcore variable functional tests Mattias Rönnblom
@ 2024-09-16 10:52                                     ` Mattias Rönnblom
  2024-09-16 11:13                                       ` Mattias Rönnblom
  2024-09-16 10:52                                     ` [PATCH v4 4/7] random: keep PRNG state in lcore variable Mattias Rönnblom
                                                       ` (3 subsequent siblings)
  6 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-16 10:52 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Add basic micro benchmark for lcore variables, in an attempt to assure
that the overhead isn't significantly greater than alternative
approaches, in scenarios where the benefits aren't expected to show up
(i.e., when plenty of cache is available compared to the working set
size of the per-lcore data).

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>

--

PATCH v4:
 * Rework the tests to be a little less unrealistic. Instead of a
   single dummy module using a single variable, use a number of
   variables/modules. In this way, differences in cache effects may
   show up.
 * Add RTE_CACHE_GUARD to better mimic that static array pattern.
   (Morten Brørup)
 * Show latencies as TSC cycles. (Morten Brørup)
---
 app/test/meson.build           |   1 +
 app/test/test_lcore_var_perf.c | 244 +++++++++++++++++++++++++++++++++
 2 files changed, 245 insertions(+)
 create mode 100644 app/test/test_lcore_var_perf.c

diff --git a/app/test/meson.build b/app/test/meson.build
index 48279522f0..d4e0c59900 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -104,6 +104,7 @@ source_file_deps = {
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
     'test_lcore_var.c': [],
+    'test_lcore_var_perf.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var_perf.c b/app/test/test_lcore_var_perf.c
new file mode 100644
index 0000000000..8b0abc771c
--- /dev/null
+++ b/app/test/test_lcore_var_perf.c
@@ -0,0 +1,244 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#define MAX_MODS 1024
+
+#include <stdio.h>
+
+#include <rte_bitops.h>
+#include <rte_cycles.h>
+#include <rte_lcore_var.h>
+#include <rte_per_lcore.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+struct mod_lcore_state {
+	uint64_t a;
+	uint64_t b;
+	uint64_t sum;
+};
+
+static void
+mod_init(struct mod_lcore_state *state)
+{
+	state->a = rte_rand();
+	state->b = rte_rand();
+	state->sum = 0;
+}
+
+static __rte_always_inline void
+mod_update(volatile struct mod_lcore_state *state)
+{
+	state->sum += state->a * state->b;
+}
+
+struct __rte_cache_aligned mod_lcore_state_aligned {
+	struct mod_lcore_state mod_state;
+
+	RTE_CACHE_GUARD;
+};
+
+static struct mod_lcore_state_aligned
+sarray_lcore_state[MAX_MODS][RTE_MAX_LCORE];
+
+static void
+sarray_init(void)
+{
+	unsigned int lcore_id = rte_lcore_id();
+	int mod;
+
+	for (mod = 0; mod < MAX_MODS; mod++) {
+		struct mod_lcore_state *mod_state =
+			&sarray_lcore_state[mod][lcore_id].mod_state;
+
+		mod_init(mod_state);
+	}
+}
+
+static __rte_noinline void
+sarray_update(unsigned int mod)
+{
+	unsigned int lcore_id = rte_lcore_id();
+	struct mod_lcore_state *mod_state =
+		&sarray_lcore_state[mod][lcore_id].mod_state;
+
+	mod_update(mod_state);
+}
+
+struct mod_lcore_state_lazy {
+	struct mod_lcore_state mod_state;
+	bool initialized;
+};
+
+/*
+ * Note: it's usually a bad idea to have this much thread-local storage
+ * allocated in a real application, since it will incur a cost on
+ * thread creation and non-lcore thread memory usage.
+ */
+static RTE_DEFINE_PER_LCORE(struct mod_lcore_state_lazy,
+			    tls_lcore_state)[MAX_MODS];
+
+static inline void
+tls_init(struct mod_lcore_state_lazy *state)
+{
+	mod_init(&state->mod_state);
+
+	state->initialized = true;
+}
+
+static __rte_noinline void
+tls_update(unsigned int mod)
+{
+	struct mod_lcore_state_lazy *state =
+		&RTE_PER_LCORE(tls_lcore_state[mod]);
+
+	/* With thread-local storage, initialization must usually be lazy */
+	if (!state->initialized)
+		tls_init(state);
+
+	mod_update(&state->mod_state);
+}
+
+RTE_LCORE_VAR_HANDLE(struct mod_lcore_state, lvar_lcore_state)[MAX_MODS];
+
+static void
+lvar_init(void)
+{
+	unsigned int mod;
+
+	for (mod = 0; mod < MAX_MODS; mod++) {
+		RTE_LCORE_VAR_ALLOC(lvar_lcore_state[mod]);
+
+		struct mod_lcore_state *state =
+			RTE_LCORE_VAR_VALUE(lvar_lcore_state[mod]);
+
+		mod_init(state);
+	}
+}
+
+static __rte_noinline void
+lvar_update(unsigned int mod)
+{
+	struct mod_lcore_state *state =
+		RTE_LCORE_VAR_VALUE(lvar_lcore_state[mod]);
+
+	mod_update(state);
+}
+
+static void
+shuffle(unsigned int *elems, size_t len)
+{
+	size_t i;
+
+	for (i = len - 1; i > 0; i--) {
+		unsigned int other = rte_rand_max(i + 1);
+
+		unsigned int tmp = elems[other];
+		elems[other] = elems[i];
+		elems[i] = tmp;
+	}
+}
+
+#define ITERATIONS UINT64_C(10000000)
+
+static inline double
+benchmark_access(const unsigned int *mods, unsigned int num_mods,
+		 void (*init_fun)(void), void (*update_fun)(unsigned int))
+{
+	unsigned int i;
+	double start;
+	double end;
+	double latency;
+	unsigned int num_mods_mask = num_mods - 1;
+
+	RTE_VERIFY(rte_is_power_of_2(num_mods));
+
+	if (init_fun != NULL)
+		init_fun();
+
+	/* Warm up cache and make sure TLS variables are initialized */
+	for (i = 0; i < num_mods; i++)
+		update_fun(i);
+
+	start = rte_rdtsc();
+
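+	/* num_mods is verified to be a power of 2, so the bitwise AND
+	 * implements a cheap modulo for cycling through the shuffled
+	 * module order */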
+	for (i = 0; i < ITERATIONS; i++)
+		update_fun(mods[i & num_mods_mask]);
+
+	end = rte_rdtsc();
+
+	latency = (end - start) / ITERATIONS;
+
+	return latency;
+}
+
+static void
+test_lcore_var_access_n(unsigned int num_mods)
+{
+	double sarray_latency;
+	double tls_latency;
+	double lvar_latency;
+	unsigned int mods[num_mods];
+	unsigned int i;
+
+	for (i = 0; i < num_mods; i++)
+		mods[i] = i;
+
+	shuffle(mods, num_mods);
+
+	sarray_latency =
+		benchmark_access(mods, num_mods, sarray_init, sarray_update);
+
+	tls_latency =
+		benchmark_access(mods, num_mods, NULL, tls_update);
+
+	lvar_latency =
+		benchmark_access(mods, num_mods, lvar_init, lvar_update);
+
+	printf("%17u %13.1f %13.1f %16.1f\n", num_mods, sarray_latency,
+	       tls_latency, lvar_latency);
+}
+
+/*
+ * The potential performance benefit of lcore variables compared to
+ * the use of statically sized, lcore id-indexed arrays is not
+ * shorter latencies in a scenario with low cache pressure, but rather
+ * fewer cache misses in a real-world scenario, with extensive cache
+ * usage. These tests are a crude simulation of such, using <N> dummy
+ * modules, each with a small, per-lcore state. Note however that
+ * these tests have very little non-lcore/thread local state, which is
+ * unrealistic.
+ */
+
+static int
+test_lcore_var_access(void)
+{
+	unsigned int num_mods = 1;
+
+	printf("Latencies [TSC cycles/update]\n");
+	printf("Modules/Variables  Static array  Thread-local Storage  "
+	       "Lcore variables\n");
+
+	for (num_mods = 1; num_mods <= MAX_MODS; num_mods *= 2)
+		test_lcore_var_access_n(num_mods);
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable perf autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_lcore_var_access),
+		TEST_CASES_END()
+	},
+};
+
+static int
+test_lcore_var_perf(void)
+{
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
+REGISTER_PERF_TEST(lcore_var_perf_autotest, test_lcore_var_perf);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v4 4/7] random: keep PRNG state in lcore variable
  2024-09-16 10:52                                   ` [PATCH v4 0/7] Lcore variables Mattias Rönnblom
                                                       ` (2 preceding siblings ...)
  2024-09-16 10:52                                     ` [PATCH v4 3/7] eal: add lcore variable performance test Mattias Rönnblom
@ 2024-09-16 10:52                                     ` Mattias Rönnblom
  2024-09-16 16:11                                       ` Konstantin Ananyev
  2024-09-16 10:52                                     ` [PATCH v4 5/7] power: keep per-lcore " Mattias Rönnblom
                                                       ` (2 subsequent siblings)
  6 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-16 10:52 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Replace keeping PRNG state in a RTE_MAX_LCORE-sized static array of
cache-aligned and RTE_CACHE_GUARDed struct instances with keeping the
same state in a more cache-friendly lcore variable.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

RFC v3:
 * Remove cache alignment on unregistered threads' rte_rand_state.
   (Morten Brørup)
---
 lib/eal/common/rte_random.c | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index 90e91b3c4f..a8d00308dd 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -11,6 +11,7 @@
 #include <rte_branch_prediction.h>
 #include <rte_cycles.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_random.h>
 
 struct __rte_cache_aligned rte_rand_state {
@@ -19,14 +20,12 @@ struct __rte_cache_aligned rte_rand_state {
 	uint64_t z3;
 	uint64_t z4;
 	uint64_t z5;
-	RTE_CACHE_GUARD;
 };
 
-/* One instance each for every lcore id-equipped thread, and one
- * additional instance to be shared by all others threads (i.e., all
- * unregistered non-EAL threads).
- */
-static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];
+RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);
+
+/* instance to be shared by all unregistered non-EAL threads */
+static struct rte_rand_state unregistered_rand_state;
 
 static uint32_t
 __rte_rand_lcg32(uint32_t *seed)
@@ -85,8 +84,14 @@ rte_srand(uint64_t seed)
 	unsigned int lcore_id;
 
 	/* add lcore_id to seed to avoid having the same sequence */
-	for (lcore_id = 0; lcore_id < RTE_DIM(rand_states); lcore_id++)
-		__rte_srand_lfsr258(seed + lcore_id, &rand_states[lcore_id]);
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		struct rte_rand_state *lcore_state =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, rand_state);
+
+		__rte_srand_lfsr258(seed + lcore_id, lcore_state);
+	}
+
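+	/* at loop exit, lcore_id == RTE_MAX_LCORE, yielding a seed
+	 * distinct from that of any lcore id-equipped thread */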
+	__rte_srand_lfsr258(seed + lcore_id, &unregistered_rand_state);
 }
 
 static __rte_always_inline uint64_t
@@ -124,11 +129,10 @@ struct rte_rand_state *__rte_rand_get_state(void)
 
 	idx = rte_lcore_id();
 
-	/* last instance reserved for unregistered non-EAL threads */
 	if (unlikely(idx == LCORE_ID_ANY))
-		idx = RTE_MAX_LCORE;
+		return &unregistered_rand_state;
 
-	return &rand_states[idx];
+	return RTE_LCORE_VAR_VALUE(rand_state);
 }
 
 uint64_t
@@ -228,6 +232,8 @@ RTE_INIT(rte_rand_init)
 {
 	uint64_t seed;
 
+	RTE_LCORE_VAR_ALLOC(rand_state);
+
 	seed = __rte_random_initial_seed();
 
 	rte_srand(seed);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v4 5/7] power: keep per-lcore state in lcore variable
  2024-09-16 10:52                                   ` [PATCH v4 0/7] Lcore variables Mattias Rönnblom
                                                       ` (3 preceding siblings ...)
  2024-09-16 10:52                                     ` [PATCH v4 4/7] random: keep PRNG state in lcore variable Mattias Rönnblom
@ 2024-09-16 10:52                                     ` Mattias Rönnblom
  2024-09-16 16:12                                       ` Konstantin Ananyev
  2024-09-16 10:52                                     ` [PATCH v4 6/7] service: " Mattias Rönnblom
  2024-09-16 10:52                                     ` [PATCH v4 7/7] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  6 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-16 10:52 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

RFC v3:
 * Replace for loop with FOREACH macro.
---
 lib/power/rte_power_pmd_mgmt.c | 34 ++++++++++++++++------------------
 1 file changed, 16 insertions(+), 18 deletions(-)

diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index b1c18a5f56..a5139dd4f7 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -5,6 +5,7 @@
 #include <stdlib.h>
 
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_cycles.h>
 #include <rte_cpuflags.h>
 #include <rte_malloc.h>
@@ -69,7 +70,7 @@ struct __rte_cache_aligned pmd_core_cfg {
 	uint64_t sleep_target;
 	/**< Prevent a queue from triggering sleep multiple times */
 };
-static struct pmd_core_cfg lcore_cfgs[RTE_MAX_LCORE];
+static RTE_LCORE_VAR_HANDLE(struct pmd_core_cfg, lcore_cfgs);
 
 static inline bool
 queue_equal(const union queue *l, const union queue *r)
@@ -252,12 +253,11 @@ clb_multiwait(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	/* early exit */
 	if (likely(!empty))
@@ -317,13 +317,12 @@ clb_pause(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 	uint32_t pause_duration = rte_power_pmd_mgmt_get_pause_duration();
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	if (likely(!empty))
 		/* early exit */
@@ -358,9 +357,8 @@ clb_scale_freq(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	const bool empty = nb_rx == 0;
-	struct pmd_core_cfg *lcore_conf = &lcore_cfgs[lcore];
+	struct pmd_core_cfg *lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 	struct queue_list_entry *queue_conf = arg;
 
 	if (likely(!empty)) {
@@ -518,7 +516,7 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 		goto end;
 	}
 
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -619,7 +617,7 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 	}
 
 	/* no need to check queue id as wrong queue id would not be enabled */
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -769,21 +767,21 @@ rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore)
 }
 
 RTE_INIT(rte_power_ethdev_pmgmt_init) {
-	size_t i;
-	int j;
+	struct pmd_core_cfg *lcore_cfg;
+	int i;
+
+	RTE_LCORE_VAR_ALLOC(lcore_cfgs);
 
 	/* initialize all tailqs */
-	for (i = 0; i < RTE_DIM(lcore_cfgs); i++) {
-		struct pmd_core_cfg *cfg = &lcore_cfgs[i];
-		TAILQ_INIT(&cfg->head);
-	}
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_cfg, lcore_cfgs)
+		TAILQ_INIT(&lcore_cfg->head);
 
 	/* initialize config defaults */
 	emptypoll_max = 512;
 	pause_duration = 1;
 	/* scaling defaults out of range to ensure not used unless set by user or app */
-	for (j = 0; j < RTE_MAX_LCORE; j++) {
-		scale_freq_min[j] = 0;
-		scale_freq_max[j] = UINT32_MAX;
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		scale_freq_min[i] = 0;
+		scale_freq_max[i] = UINT32_MAX;
 	}
 }
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v4 6/7] service: keep per-lcore state in lcore variable
  2024-09-16 10:52                                   ` [PATCH v4 0/7] Lcore variables Mattias Rönnblom
                                                       ` (4 preceding siblings ...)
  2024-09-16 10:52                                     ` [PATCH v4 5/7] power: keep per-lcore " Mattias Rönnblom
@ 2024-09-16 10:52                                     ` Mattias Rönnblom
  2024-09-16 16:13                                       ` Konstantin Ananyev
  2024-09-16 10:52                                     ` [PATCH v4 7/7] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  6 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-16 10:52 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

RFC v6:
 * Remove a now-redundant lcore variable value memset().

RFC v5:
 * Fix lcore value pointer bug introduced by RFC v4.

RFC v4:
 * Remove strange-looking lcore value lookup potentially containing
   invalid lcore id. (Morten Brørup)
 * Replace misplaced tab with space. (Morten Brørup)
---
 lib/eal/common/rte_service.c | 115 +++++++++++++++++++----------------
 1 file changed, 63 insertions(+), 52 deletions(-)

diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index 56379930b6..03379f1588 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -11,6 +11,7 @@
 
 #include <eal_trace_internal.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
@@ -75,7 +76,7 @@ struct __rte_cache_aligned core_state {
 
 static uint32_t rte_service_count;
 static struct rte_service_spec_impl *rte_services;
-static struct core_state *lcore_states;
+static RTE_LCORE_VAR_HANDLE(struct core_state, lcore_states);
 static uint32_t rte_service_library_initialized;
 
 int32_t
@@ -101,12 +102,8 @@ rte_service_init(void)
 		goto fail_mem;
 	}
 
-	lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
-			sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
-	if (!lcore_states) {
-		EAL_LOG(ERR, "error allocating core states array");
-		goto fail_mem;
-	}
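+	/* lcore variable allocations are never freed, so allocate only
+	 * on the first initialization */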
+	if (lcore_states == NULL)
+		RTE_LCORE_VAR_ALLOC(lcore_states);
 
 	int i;
 	struct rte_config *cfg = rte_eal_get_configuration();
@@ -122,7 +119,6 @@ rte_service_init(void)
 	return 0;
 fail_mem:
 	rte_free(rte_services);
-	rte_free(lcore_states);
 	return -ENOMEM;
 }
 
@@ -136,7 +132,6 @@ rte_service_finalize(void)
 	rte_eal_mp_wait_lcore();
 
 	rte_free(rte_services);
-	rte_free(lcore_states);
 
 	rte_service_library_initialized = 0;
 }
@@ -286,7 +281,6 @@ rte_service_component_register(const struct rte_service_spec *spec,
 int32_t
 rte_service_component_unregister(uint32_t id)
 {
-	uint32_t i;
 	struct rte_service_spec_impl *s;
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
 
@@ -294,9 +288,10 @@ rte_service_component_unregister(uint32_t id)
 
 	s->internal_flags &= ~(SERVICE_F_REGISTERED);
 
+	struct core_state *cs;
 	/* clear the run-bit in all cores */
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		lcore_states[i].service_mask &= ~(UINT64_C(1) << id);
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		cs->service_mask &= ~(UINT64_C(1) << id);
 
 	memset(&rte_services[id], 0, sizeof(struct rte_service_spec_impl));
 
@@ -454,7 +449,10 @@ rte_service_may_be_active(uint32_t id)
 		return -EINVAL;
 
 	for (i = 0; i < lcore_count; i++) {
-		if (lcore_states[ids[i]].service_active_on_lcore[id])
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(ids[i], lcore_states);
+
+		if (cs->service_active_on_lcore[id])
 			return 1;
 	}
 
@@ -464,7 +462,7 @@ rte_service_may_be_active(uint32_t id)
 int32_t
 rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)
 {
-	struct core_state *cs = &lcore_states[rte_lcore_id()];
+	struct core_state *cs =	RTE_LCORE_VAR_VALUE(lcore_states);
 	struct rte_service_spec_impl *s;
 
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
@@ -486,8 +484,7 @@ service_runner_func(void *arg)
 {
 	RTE_SET_USED(arg);
 	uint8_t i;
-	const int lcore = rte_lcore_id();
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_VALUE(lcore_states);
 
 	rte_atomic_store_explicit(&cs->thread_active, 1, rte_memory_order_seq_cst);
 
@@ -533,13 +530,15 @@ service_runner_func(void *arg)
 int32_t
 rte_service_lcore_may_be_active(uint32_t lcore)
 {
-	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+	if (lcore >= RTE_MAX_LCORE || !cs->is_service_core)
 		return -EINVAL;
 
 	/* Load thread_active using ACQUIRE to avoid instructions dependent on
 	 * the result being re-ordered before this load completes.
 	 */
-	return rte_atomic_load_explicit(&lcore_states[lcore].thread_active,
+	return rte_atomic_load_explicit(&cs->thread_active,
 			       rte_memory_order_acquire);
 }
 
@@ -547,9 +546,11 @@ int32_t
 rte_service_lcore_count(void)
 {
 	int32_t count = 0;
-	uint32_t i;
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		count += lcore_states[i].is_service_core;
+
+	struct core_state *cs;
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		count += cs->is_service_core;
+
 	return count;
 }
 
@@ -566,7 +567,8 @@ rte_service_lcore_list(uint32_t array[], uint32_t n)
 	uint32_t i;
 	uint32_t idx = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		struct core_state *cs = &lcore_states[i];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
 		if (cs->is_service_core) {
 			array[idx] = i;
 			idx++;
@@ -582,7 +584,7 @@ rte_service_lcore_count_services(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -634,30 +636,31 @@ rte_service_start_with_defaults(void)
 static int32_t
 service_update(uint32_t sid, uint32_t lcore, uint32_t *set, uint32_t *enabled)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	/* validate ID, or return error value */
 	if (!service_valid(sid) || lcore >= RTE_MAX_LCORE ||
-			!lcore_states[lcore].is_service_core)
+			!cs->is_service_core)
 		return -EINVAL;
 
 	uint64_t sid_mask = UINT64_C(1) << sid;
 	if (set) {
-		uint64_t lcore_mapped = lcore_states[lcore].service_mask &
-			sid_mask;
+		uint64_t lcore_mapped = cs->service_mask & sid_mask;
 
 		if (*set && !lcore_mapped) {
-			lcore_states[lcore].service_mask |= sid_mask;
+			cs->service_mask |= sid_mask;
 			rte_atomic_fetch_add_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 		if (!*set && lcore_mapped) {
-			lcore_states[lcore].service_mask &= ~(sid_mask);
+			cs->service_mask &= ~(sid_mask);
 			rte_atomic_fetch_sub_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 	}
 
 	if (enabled)
-		*enabled = !!(lcore_states[lcore].service_mask & (sid_mask));
+		*enabled = !!(cs->service_mask & (sid_mask));
 
 	return 0;
 }
@@ -685,13 +688,14 @@ set_lcore_state(uint32_t lcore, int32_t state)
 {
 	/* mark core state in hugepage backed config */
 	struct rte_config *cfg = rte_eal_get_configuration();
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	cfg->lcore_role[lcore] = state;
 
 	/* mark state in process local lcore_config */
 	lcore_config[lcore].core_role = state;
 
 	/* update per-lcore optimized state tracking */
-	lcore_states[lcore].is_service_core = (state == ROLE_SERVICE);
+	cs->is_service_core = (state == ROLE_SERVICE);
 
 	rte_eal_trace_service_lcore_state_change(lcore, state);
 }
@@ -702,14 +706,16 @@ rte_service_lcore_reset_all(void)
 	/* loop over cores, reset all to mask 0 */
 	uint32_t i;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		if (lcore_states[i].is_service_core) {
-			lcore_states[i].service_mask = 0;
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
+		if (cs->is_service_core) {
+			cs->service_mask = 0;
 			set_lcore_state(i, ROLE_RTE);
 			/* runstate act as guard variable Use
 			 * store-release memory order here to synchronize
 			 * with load-acquire in runstate read functions.
 			 */
-			rte_atomic_store_explicit(&lcore_states[i].runstate,
+			rte_atomic_store_explicit(&cs->runstate,
 				RUNSTATE_STOPPED, rte_memory_order_release);
 		}
 	}
@@ -725,17 +731,19 @@ rte_service_lcore_add(uint32_t lcore)
 {
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
-	if (lcore_states[lcore].is_service_core)
+
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+	if (cs->is_service_core)
 		return -EALREADY;
 
 	set_lcore_state(lcore, ROLE_SERVICE);
 
 	/* ensure that after adding a core the mask and state are defaults */
-	lcore_states[lcore].service_mask = 0;
+	cs->service_mask = 0;
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	return rte_eal_wait_lcore(lcore);
@@ -747,7 +755,7 @@ rte_service_lcore_del(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -771,7 +779,7 @@ rte_service_lcore_start(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -801,6 +809,8 @@ rte_service_lcore_start(uint32_t lcore)
 int32_t
 rte_service_lcore_stop(uint32_t lcore)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
@@ -808,12 +818,11 @@ rte_service_lcore_stop(uint32_t lcore)
 	 * memory order here to synchronize with store-release
 	 * in runstate update functions.
 	 */
-	if (rte_atomic_load_explicit(&lcore_states[lcore].runstate, rte_memory_order_acquire) ==
+	if (rte_atomic_load_explicit(&cs->runstate, rte_memory_order_acquire) ==
 			RUNSTATE_STOPPED)
 		return -EALREADY;
 
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
 	uint64_t service_mask = cs->service_mask;
 
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
@@ -834,7 +843,7 @@ rte_service_lcore_stop(uint32_t lcore)
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	rte_eal_trace_service_lcore_stop(lcore);
@@ -845,7 +854,7 @@ rte_service_lcore_stop(uint32_t lcore)
 static uint64_t
 lcore_attr_get_loops(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->loops, rte_memory_order_relaxed);
 }
@@ -853,7 +862,7 @@ lcore_attr_get_loops(unsigned int lcore)
 static uint64_t
 lcore_attr_get_cycles(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->cycles, rte_memory_order_relaxed);
 }
@@ -861,7 +870,7 @@ lcore_attr_get_cycles(unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].calls,
 		rte_memory_order_relaxed);
@@ -870,7 +879,7 @@ lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_cycles(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].cycles,
 		rte_memory_order_relaxed);
@@ -886,7 +895,10 @@ attr_get(uint32_t id, lcore_attr_get_fun lcore_attr_get)
 	uint64_t sum = 0;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		if (lcore_states[lcore].is_service_core)
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+		if (cs->is_service_core)
 			sum += lcore_attr_get(id, lcore);
 	}
 
@@ -930,12 +942,11 @@ int32_t
 rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 			   uint64_t *attr_value)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE || !attr_value)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -960,7 +971,8 @@ rte_service_attr_reset_all(uint32_t id)
 		return -EINVAL;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		struct core_state *cs = &lcore_states[lcore];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 		cs->service_stats[id] = (struct service_stats) {};
 	}
@@ -971,12 +983,11 @@ rte_service_attr_reset_all(uint32_t id)
 int32_t
 rte_service_lcore_attr_reset_all(uint32_t lcore)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -1011,7 +1022,7 @@ static void
 service_dump_calls_per_lcore(FILE *f, uint32_t lcore)
 {
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	fprintf(f, "%02d\t", lcore);
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v4 7/7] eal: keep per-lcore power intrinsics state in lcore variable
  2024-09-16 10:52                                   ` [PATCH v4 0/7] Lcore variables Mattias Rönnblom
                                                       ` (5 preceding siblings ...)
  2024-09-16 10:52                                     ` [PATCH v4 6/7] service: " Mattias Rönnblom
@ 2024-09-16 10:52                                     ` Mattias Rönnblom
  2024-09-16 16:14                                       ` Konstantin Ananyev
  6 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-16 10:52 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Keep per-lcore power intrinsics state in a lcore variable to reduce
cache working set size and avoid any CPU next-line-prefetching causing
false sharing.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/x86/rte_power_intrinsics.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index 6d9b64240c..f4ba2c8ecb 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -6,6 +6,7 @@
 
 #include <rte_common.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_rtm.h>
 #include <rte_spinlock.h>
 
@@ -14,10 +15,14 @@
 /*
  * Per-lcore structure holding current status of C0.2 sleeps.
  */
-static alignas(RTE_CACHE_LINE_SIZE) struct power_wait_status {
+struct power_wait_status {
 	rte_spinlock_t lock;
 	volatile void *monitor_addr; /**< NULL if not currently sleeping */
-} wait_status[RTE_MAX_LCORE];
+};
+
+RTE_LCORE_VAR_HANDLE(struct power_wait_status, wait_status);
+
+RTE_LCORE_VAR_INIT(wait_status);
 
 /*
  * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
@@ -172,7 +177,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 	if (pmc->fn == NULL)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/* update sleep address */
 	rte_spinlock_lock(&s->lock);
@@ -264,7 +269,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 	if (lcore_id >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/*
 	 * There is a race condition between sleep, wakeup and locking, but we
@@ -303,8 +308,8 @@ int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
 {
-	const unsigned int lcore_id = rte_lcore_id();
-	struct power_wait_status *s = &wait_status[lcore_id];
+	struct power_wait_status *s = RTE_LCORE_VAR_VALUE(wait_status);
+
 	uint32_t i, rc;
 
 	/* check if supported */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v4 3/7] eal: add lcore variable performance test
  2024-09-16 10:52                                     ` [PATCH v4 3/7] eal: add lcore variable performance test Mattias Rönnblom
@ 2024-09-16 11:13                                       ` Mattias Rönnblom
  2024-09-16 11:54                                         ` Morten Brørup
  0 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-16 11:13 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: Morten Brørup, Stephen Hemminger, Konstantin Ananyev,
	David Marchand, Jerin Jacob

On 2024-09-16 12:52, Mattias Rönnblom wrote:
> Add basic micro benchmark for lcore variables, in an attempt to assure
> that the overhead isn't significantly greater than alternative
> approaches, in scenarios where the benefits aren't expected to show up
> (i.e., when plenty of cache is available compared to the working set
> size of the per-lcore data).
> 

Here are some test results for a Raptor Cove @ 3.2 GHz (GCC 11):

  + ------------------------------------------------------- +
  + Test Suite : lcore variable perf autotest
  + ------------------------------------------------------- +
Latencies [TSC cycles/update]
Modules/Variables  Static array  Thread-local Storage  Lcore variables
                 1           3.9           5.5              3.7
                 2           3.8           5.5              3.8
                 4           4.9           5.5              3.7
                 8           3.8           5.5              3.8
                16          11.3           5.5              3.7
                32          20.9           5.5              3.7
                64          23.5           5.5              3.7
               128          23.2           5.5              3.7
               256          23.5           5.5              3.7
               512          24.1           5.5              3.7
              1024          25.3           5.5              3.9
  + TestCase [ 0] : test_lcore_var_access succeeded
  + ------------------------------------------------------- +
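
Results of this kind can be reproduced with the standard unit test
runner; assuming a meson build tree (the exact binary path varies
with the setup), something like:

   DPDK_TEST=lcore_var_perf_autotest ./build/app/test/dpdk-test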


The reason for TLS being slower than lcore variables (which in turn 
rely on TLS for the lcore id lookup) is the lazy initialization 
conditional imposed on the TLS variant. Were that conditional avoided 
(which is module-dependent, I suppose), TLS would beat lcore variables 
at ~3.0 cycles/update.

I must say I'm surprised to see lcore variables doing this well, at 
these very modest working set sizes. Probably, you can stay at 
near-zero L1 misses with lcore variables (and TLS), but start missing 
the L1 with static arrays.
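
Back-of-the-envelope, assuming 64-byte cache lines, a 48 KiB L1d, and 
the default single-line RTE_CACHE_GUARD: at 1024 modules, the static 
array spreads a given lcore's state across 1024 distinct cache lines 
(64 KiB), which cannot all stay resident, while the same lcore's 1024 
24-byte lcore variable instances pack into roughly 24 KiB, which can.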

> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> 
> --
> 
> PATCH v4:
>   * Rework the tests to be a little less unrealistic. Instead of a
>     single dummy module using a single variable, use a number of
>     variables/modules. In this way, differences in cache effects may
>     show up.
>   * Add RTE_CACHE_GUARD to better mimic that static array pattern.
>     (Morten Brørup)
>   * Show latencies as TSC cycles. (Morten Brørup)
> ---
>   app/test/meson.build           |   1 +
>   app/test/test_lcore_var_perf.c | 244 +++++++++++++++++++++++++++++++++
>   2 files changed, 245 insertions(+)
>   create mode 100644 app/test/test_lcore_var_perf.c
> 
> diff --git a/app/test/meson.build b/app/test/meson.build
> index 48279522f0..d4e0c59900 100644
> --- a/app/test/meson.build
> +++ b/app/test/meson.build
> @@ -104,6 +104,7 @@ source_file_deps = {
>       'test_kvargs.c': ['kvargs'],
>       'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
>       'test_lcore_var.c': [],
> +    'test_lcore_var_perf.c': [],
>       'test_lcores.c': [],
>       'test_link_bonding.c': ['ethdev', 'net_bond',
>           'net'] + packet_burst_generator_deps + virtual_pmd_deps,
> diff --git a/app/test/test_lcore_var_perf.c b/app/test/test_lcore_var_perf.c
> new file mode 100644
> index 0000000000..8b0abc771c
> --- /dev/null
> +++ b/app/test/test_lcore_var_perf.c
> @@ -0,0 +1,244 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2024 Ericsson AB
> + */
> +
> +#define MAX_MODS 1024
> +
> +#include <stdio.h>
> +
> +#include <rte_bitops.h>
> +#include <rte_cycles.h>
> +#include <rte_lcore_var.h>
> +#include <rte_per_lcore.h>
> +#include <rte_random.h>
> +
> +#include "test.h"
> +
> +struct mod_lcore_state {
> +	uint64_t a;
> +	uint64_t b;
> +	uint64_t sum;
> +};
> +
> +static void
> +mod_init(struct mod_lcore_state *state)
> +{
> +	state->a = rte_rand();
> +	state->b = rte_rand();
> +	state->sum = 0;
> +}
> +
> +static __rte_always_inline void
> +mod_update(volatile struct mod_lcore_state *state)
> +{
> +	state->sum += state->a * state->b;
> +}
> +
> +struct __rte_cache_aligned mod_lcore_state_aligned {
> +	struct mod_lcore_state mod_state;
> +
> +	RTE_CACHE_GUARD;
> +};
> +
> +static struct mod_lcore_state_aligned
> +sarray_lcore_state[MAX_MODS][RTE_MAX_LCORE];
> +
> +static void
> +sarray_init(void)
> +{
> +	unsigned int lcore_id = rte_lcore_id();
> +	int mod;
> +
> +	for (mod = 0; mod < MAX_MODS; mod++) {
> +		struct mod_lcore_state *mod_state =
> +			&sarray_lcore_state[mod][lcore_id].mod_state;
> +
> +		mod_init(mod_state);
> +	}
> +}
> +
> +static __rte_noinline void
> +sarray_update(unsigned int mod)
> +{
> +	unsigned int lcore_id = rte_lcore_id();
> +	struct mod_lcore_state *mod_state =
> +		&sarray_lcore_state[mod][lcore_id].mod_state;
> +
> +	mod_update(mod_state);
> +}
> +
> +struct mod_lcore_state_lazy {
> +	struct mod_lcore_state mod_state;
> +	bool initialized;
> +};
> +
> +/*
> + * Note: it's usually a bad idea to have this much thread-local storage
> + * allocated in a real application, since it will incur a cost on
> + * thread creation and non-lcore thread memory usage.
> + */
> +static RTE_DEFINE_PER_LCORE(struct mod_lcore_state_lazy,
> +			    tls_lcore_state)[MAX_MODS];
> +
> +static inline void
> +tls_init(struct mod_lcore_state_lazy *state)
> +{
> +	mod_init(&state->mod_state);
> +
> +	state->initialized = true;
> +}
> +
> +static __rte_noinline void
> +tls_update(unsigned int mod)
> +{
> +	struct mod_lcore_state_lazy *state =
> +		&RTE_PER_LCORE(tls_lcore_state[mod]);
> +
> +	/* With thread-local storage, initialization must usually be lazy */
> +	if (!state->initialized)
> +		tls_init(state);
> +
> +	mod_update(&state->mod_state);
> +}
> +
> +RTE_LCORE_VAR_HANDLE(struct mod_lcore_state, lvar_lcore_state)[MAX_MODS];
> +
> +static void
> +lvar_init(void)
> +{
> +	unsigned int mod;
> +
> +	for (mod = 0; mod < MAX_MODS; mod++) {
> +		RTE_LCORE_VAR_ALLOC(lvar_lcore_state[mod]);
> +
> +		struct mod_lcore_state *state =
> +			RTE_LCORE_VAR_VALUE(lvar_lcore_state[mod]);
> +
> +		mod_init(state);
> +	}
> +}
> +
> +static __rte_noinline void
> +lvar_update(unsigned int mod)
> +{
> +	struct mod_lcore_state *state =
> +		RTE_LCORE_VAR_VALUE(lvar_lcore_state[mod]);
> +
> +	mod_update(state);
> +}
> +
> +static void
> +shuffle(unsigned int *elems, size_t len)
> +{
> +	size_t i;
> +
> +	for (i = len - 1; i > 0; i--) {
> +		unsigned int other = rte_rand_max(i + 1);
> +
> +		unsigned int tmp = elems[other];
> +		elems[other] = elems[i];
> +		elems[i] = tmp;
> +	}
> +}
> +
> +#define ITERATIONS UINT64_C(10000000)
> +
> +static inline double
> +benchmark_access(const unsigned int *mods, unsigned int num_mods,
> +		 void (*init_fun)(void), void (*update_fun)(unsigned int))
> +{
> +	unsigned int i;
> +	double start;
> +	double end;
> +	double latency;
> +	unsigned int num_mods_mask = num_mods - 1;
> +
> +	RTE_VERIFY(rte_is_power_of_2(num_mods));
> +
> +	if (init_fun != NULL)
> +		init_fun();
> +
> +	/* Warm up cache and make sure TLS variables are initialized */
> +	for (i = 0; i < num_mods; i++)
> +		update_fun(i);
> +
> +	start = rte_rdtsc();
> +
> +	for (i = 0; i < ITERATIONS; i++)
> +		update_fun(mods[i & num_mods_mask]);
> +
> +	end = rte_rdtsc();
> +
> +	latency = (end - start) / ITERATIONS;
> +
> +	return latency;
> +}
> +
> +static void
> +test_lcore_var_access_n(unsigned int num_mods)
> +{
> +	double sarray_latency;
> +	double tls_latency;
> +	double lvar_latency;
> +	unsigned int mods[num_mods];
> +	unsigned int i;
> +
> +	for (i = 0; i < num_mods; i++)
> +		mods[i] = i;
> +
> +	shuffle(mods, num_mods);
> +
> +	sarray_latency =
> +		benchmark_access(mods, num_mods, sarray_init, sarray_update);
> +
> +	tls_latency =
> +		benchmark_access(mods, num_mods, NULL, tls_update);
> +
> +	lvar_latency =
> +		benchmark_access(mods, num_mods, lvar_init, lvar_update);
> +
> +	printf("%17u %13.1f %13.1f %16.1f\n", num_mods, sarray_latency,
> +	       tls_latency, lvar_latency);
> +}
> +
> +/*
> + * The potential performance benefit of lcore variables compared to
> + * the use of statically sized, lcore id-indexed arrays is not
> + * shorter latencies in a scenario with low cache pressure, but rather
> + * fewer cache misses in a real-world scenario, with extensive cache
> + * usage. These tests are a crude simulation of such, using <N> dummy
> + * modules, each with a small, per-lcore state. Note however that
> + * these tests have very little non-lcore/thread local state, which is
> + * unrealistic.
> + */
> +
> +static int
> +test_lcore_var_access(void)
> +{
> +	unsigned int num_mods = 1;
> +
> +	printf("Latencies [TSC cycles/update]\n");
> +	printf("Modules/Variables  Static array  Thread-local Storage  "
> +	       "Lcore variables\n");
> +
> +	for (num_mods = 1; num_mods <= MAX_MODS; num_mods *= 2)
> +		test_lcore_var_access_n(num_mods);
> +
> +	return TEST_SUCCESS;
> +}
> +
> +static struct unit_test_suite lcore_var_testsuite = {
> +	.suite_name = "lcore variable perf autotest",
> +	.unit_test_cases = {
> +		TEST_CASE(test_lcore_var_access),
> +		TEST_CASES_END()
> +	},
> +};
> +
> +static int
> +test_lcore_var_perf(void)
> +{
> +	return unit_test_suite_runner(&lcore_var_testsuite);
> +}
> +
> +REGISTER_PERF_TEST(lcore_var_perf_autotest, test_lcore_var_perf);

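For reproducing the numbers above: once the patch is applied, the suite
can be run from the interactive prompt of the dpdk-test binary (the path
depends on the build setup):

    $ ./<build dir>/app/dpdk-test
    RTE>> lcore_var_perf_autotest
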
^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v4 3/7] eal: add lcore variable performance test
  2024-09-16 11:13                                       ` Mattias Rönnblom
@ 2024-09-16 11:54                                         ` Morten Brørup
  2024-09-16 16:12                                           ` Mattias Rönnblom
  0 siblings, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-09-16 11:54 UTC (permalink / raw)
  To: Mattias Rönnblom, Mattias Rönnblom, dev
  Cc: Stephen Hemminger, Konstantin Ananyev, David Marchand, Jerin Jacob

> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> Sent: Monday, 16 September 2024 13.13
> 
> On 2024-09-16 12:52, Mattias Rönnblom wrote:
> > Add basic micro benchmark for lcore variables, in an attempt to assure
> > that the overhead isn't significantly greater than alternative
> > approaches, in scenarios where the benefits aren't expected to show up
> > (i.e., when plenty of cache is available compared to the working set
> > size of the per-lcore data).
> >
> 
> Here are some test results for a Raptor Cove @ 3,2 GHz (GCC 11):
> 
>   + ------------------------------------------------------- +
>   + Test Suite : lcore variable perf autotest
>   + ------------------------------------------------------- +
> Latencies [TSC cycles/update]
> Modules/Variables  Static array  Thread-local Storage  Lcore variables
>                  1           3.9           5.5              3.7
>                  2           3.8           5.5              3.8
>                  4           4.9           5.5              3.7
>                  8           3.8           5.5              3.8
>                 16          11.3           5.5              3.7
>                 32          20.9           5.5              3.7
>                 64          23.5           5.5              3.7
>                128          23.2           5.5              3.7
>                256          23.5           5.5              3.7
>                512          24.1           5.5              3.7
>               1024          25.3           5.5              3.9
>   + TestCase [ 0] : test_lcore_var_access succeeded
>   + ------------------------------------------------------- +
> 
> 
> The reason for TLS being slower than lcore variables (which in turn
> rely on TLS for the lcore id lookup) is the lazy initialization
> conditional imposed on that variant. Could that conditional be avoided
> (which is module-dependent, I suppose), it beats lcore variables at
> ~3.0 cycles/update.

I think you should not assume lazy initialization of TLS in your benchmark.
Our application uses TLS, and when spinning up a new thread, we call a per-lcore init function of each module before calling the per-lcore run function. This design pattern is also described in Figure 1.4 [1] in the Programmer's Guide.

[1]: https://doc.dpdk.org/guides/prog_guide/env_abstraction_layer.html
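
A rough sketch of that pattern (all names below are hypothetical), with
worker_main() being what gets handed to rte_eal_remote_launch():

static int
worker_main(void *arg)
{
	/* each module's per-lcore init runs in the new thread itself,
	 * so the fast path needs no lazy-initialization conditional
	 */
	module_a_thread_init(); /* hypothetical */
	module_b_thread_init(); /* hypothetical */

	return module_run_loop(arg); /* hypothetical */
}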

> 
>> I must say I'm surprised to see lcore variables doing this well, at
> these very modest working set sizes. Probably, you can stay at near-zero
> L1 misses with lcore variables (and TLS), but start missing the L1 with
> static arrays.


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v4 1/7] eal: add static per-lcore memory allocation facility
  2024-09-16 10:52                                     ` [PATCH v4 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-09-16 14:02                                       ` Konstantin Ananyev
  2024-09-16 17:39                                         ` Morten Brørup
  2024-09-17 14:28                                         ` Mattias Rönnblom
  2024-09-17 14:32                                       ` [PATCH v5 0/7] Lcore variables Mattias Rönnblom
  1 sibling, 2 replies; 185+ messages in thread
From: Konstantin Ananyev @ 2024-09-16 14:02 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob



> Introduce DPDK per-lcore id variables, or lcore variables for short.
> 
> An lcore variable has one value for every current and future lcore
> id-equipped thread.
> 
> The primary <rte_lcore_var.h> use case is for statically allocating
> small, frequently-accessed data structures, for which one instance
> should exist for each lcore.
> 
> Lcore variables are similar to thread-local storage (TLS, e.g., C11
> _Thread_local), but decoupling the values' lifetime from that of the
> threads.
> 
> Lcore variables are also similar to the functionality provided by the
> FreeBSD kernel's DPCPU_*() family of macros and the associated
> build-time machinery. DPCPU uses linker scripts, which effectively
> prevents the reuse of its, otherwise seemingly viable, approach.
> 
> The currently-prevailing way to solve the same problem as lcore
> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> lcore variables over this approach is that data related to the same
> lcore now is close (spatially, in memory), rather than data used by
> the same module, which in turn avoids excessive use of padding,
> polluting caches with unused data.
> 
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>

LGTM in general, few small questions (mostly nits), see below. 
 
> --- /dev/null
> +++ b/lib/eal/common/eal_common_lcore_var.c
> @@ -0,0 +1,78 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2024 Ericsson AB
> + */
> +
> +#include <inttypes.h>
> +#include <stdlib.h>
> +
> +#ifdef RTE_EXEC_ENV_WINDOWS
> +#include <malloc.h>
> +#endif
> +
> +#include <rte_common.h>
> +#include <rte_debug.h>
> +#include <rte_log.h>
> +
> +#include <rte_lcore_var.h>
> +
> +#include "eal_private.h"
> +
> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
> +
> +static void *lcore_buffer;
> +static size_t offset = RTE_MAX_LCORE_VAR;
> +
> +static void *
> +lcore_var_alloc(size_t size, size_t align)
> +{
> +	void *handle;
> +	void *value;
> +
> +	offset = RTE_ALIGN_CEIL(offset, align);
> +
> +	if (offset + size > RTE_MAX_LCORE_VAR) {
> +#ifdef RTE_EXEC_ENV_WINDOWS
> +		lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
> +					       RTE_CACHE_LINE_SIZE);
> +#else
> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> +					     LCORE_BUFFER_SIZE);
> +#endif

I don't remember whether that question already came up:
For debugging and health-checking purposes - would it make sense to link all
lcore_buffer values into a linked list?
So a user/developer/some tool can walk over it to check that a provided handle
value is really a valid lcore_var, etc.
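
A minimal sketch of the kind of bookkeeping I have in mind (names
hypothetical, not part of the patch; <stdlib.h> is already included in
that file), to be called from lcore_var_alloc() whenever a new chunk is
allocated:

struct lcore_buffer_entry {
	void *buffer; /* base address of an LCORE_BUFFER_SIZE chunk */
	struct lcore_buffer_entry *next;
};

static struct lcore_buffer_entry *lcore_buffers;

static void
record_lcore_buffer(void *buffer)
{
	struct lcore_buffer_entry *entry = malloc(sizeof(*entry));

	RTE_VERIFY(entry != NULL);

	entry->buffer = buffer;
	entry->next = lcore_buffers;
	lcore_buffers = entry;
}

A validation tool could then walk the list and check that a handle falls
within [buffer, buffer + LCORE_BUFFER_SIZE) of some recorded chunk.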

> +		RTE_VERIFY(lcore_buffer != NULL);
> +
> +		offset = 0;
> +	}
> +
> +	handle = RTE_PTR_ADD(lcore_buffer, offset);
> +
> +	offset += size;
> +
> +	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
> +		memset(value, 0, size);
> +
> +	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
> +		"%"PRIuPTR"-byte alignment", size, align);
> +
> +	return handle;
> +}
> +
> +void *
> +rte_lcore_var_alloc(size_t size, size_t align)
> +{
> +	/* Having the per-lcore buffer size aligned on cache lines,
> +	 * as well as having the base pointer aligned on cache line
> +	 * size, assures that aligned offsets also translate to aligned
> +	 * pointers across all values.
> +	 */
> +	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
> +	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
> +	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
> +
> +	/* '0' means asking for worst-case alignment requirements */
> +	if (align == 0)
> +		align = alignof(max_align_t);
> +
> +	RTE_ASSERT(rte_is_power_of_2(align));
> +
> +	return lcore_var_alloc(size, align);
> +}

....

> diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
> new file mode 100644
> index 0000000000..ec3ab714a8
> --- /dev/null
> +++ b/lib/eal/include/rte_lcore_var.h

... 

> +/**
> + * Given the lcore variable type, produces the type of the lcore
> + * variable handle.
> + */
> +#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
> +	type *
> +
> +/**
> + * Define an lcore variable handle.
> + *
> + * This macro defines a variable which is used as a handle to access
> + * the various instances of a per-lcore id variable.
> + *
> + * The aim with this macro is to make clear at the point of
> + * declaration that this is an lcore handle, rather than a regular
> + * pointer.
> + *
> + * Add @b static as a prefix in case the lcore variable is only to be
> + * accessed from a particular translation unit.
> + */
> +#define RTE_LCORE_VAR_HANDLE(type, name)	\
> +	RTE_LCORE_VAR_HANDLE_TYPE(type) name
> +
> +/**
> + * Allocate space for an lcore variable, and initialize its handle.
> + *
> + * The values of the lcore variable are initialized to zero.
> + */
> +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, align)	\
> +	handle = rte_lcore_var_alloc(size, align)
> +
> +/**
> + * Allocate space for an lcore variable, and initialize its handle,
> + * with values aligned for any type of object.
> + *
> + * The values of the lcore variable are initialized to zero.
> + */
> +#define RTE_LCORE_VAR_ALLOC_SIZE(handle, size)	\
> +	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, 0)
> +
> +/**
> + * Allocate space for an lcore variable of the size and alignment requirements
> + * suggested by the handle pointer type, and initialize its handle.
> + *
> + * The values of the lcore variable are initialized to zero.
> + */
> +#define RTE_LCORE_VAR_ALLOC(handle)					\
> +	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, sizeof(*(handle)),	\
> +				       alignof(typeof(*(handle))))
> +
> +/**
> + * Allocate an explicitly-sized, explicitly-aligned lcore variable by
> + * means of a @ref RTE_INIT constructor.
> + *
> + * The values of the lcore variable are initialized to zero.
> + */
> +#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
> +	RTE_INIT(rte_lcore_var_init_ ## name)				\
> +	{								\
> +		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
> +	}
> +
> +/**
> + * Allocate an explicitly-sized lcore variable by means of a @ref
> + * RTE_INIT constructor.
> + *
> + * The values of the lcore variable are initialized to zero.
> + */
> +#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
> +	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
> +
> +/**
> + * Allocate an lcore variable by means of a @ref RTE_INIT constructor.
> + *
> + * The values of the lcore variable are initialized to zero.
> + */
> +#define RTE_LCORE_VAR_INIT(name)					\
> +	RTE_INIT(rte_lcore_var_init_ ## name)				\
> +	{								\
> +		RTE_LCORE_VAR_ALLOC(name);				\
> +	}
> +
> +/**
> + * Get void pointer to lcore variable instance with the specified
> + * lcore id.
> + *
> + * @param lcore_id
> + *   The lcore id specifying which of the @c RTE_MAX_LCORE value
> + *   instances should be accessed. The lcore id need not be valid
> + *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
> + *   is also not valid (and thus should not be dereferenced).
> + * @param handle
> + *   The lcore variable handle.
> + */
> +static inline void *
> +rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
> +{
> +	return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
> +}
> +
> +/**
> + * Get pointer to lcore variable instance with the specified lcore id.
> + *
> + * @param lcore_id
> + *   The lcore id specifying which of the @c RTE_MAX_LCORE value
> + *   instances should be accessed. The lcore id need not be valid
> + *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
> + *   is also not valid (and thus should not be dereferenced).
> + * @param handle
> + *   The lcore variable handle.
> + */
> +#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle)			\
> +	((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
> +
> +/**
> + * Get pointer to lcore variable instance of the current thread.
> + *
> + * May only be used by EAL threads and registered non-EAL threads.
> + */
> +#define RTE_LCORE_VAR_VALUE(handle) \
> +	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)

Would it make sense to check that rte_lcore_id() !=  LCORE_ID_ANY?
After all if people do not want this extra check, they can probably use
RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
explicitly.
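
Something along these lines would do it (a sketch only; the _SAFE name
is hypothetical):

/* Panic rather than hand back an out-of-bounds pointer when called
 * from an unregistered thread.
 */
#define RTE_LCORE_VAR_VALUE_SAFE(handle)				\
	__extension__ ({						\
		unsigned int _lcore_id = rte_lcore_id();		\
		RTE_VERIFY(_lcore_id != LCORE_ID_ANY);			\
		RTE_LCORE_VAR_LCORE_VALUE(_lcore_id, handle);		\
	})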

> +
> +/**
> + * Iterate over each lcore id's value for an lcore variable.
> + *
> + * @param value
> + *   A pointer successively set to point to lcore variable value
> + *   corresponding to every lcore id (up to @c RTE_MAX_LCORE).
> + * @param handle
> + *   The lcore variable handle.
> + */
> +#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle)			\
> +	for (unsigned int lcore_id =					\
> +		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
> +	     lcore_id < RTE_MAX_LCORE;					\
> +	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))

Might it be a bit better (and safer) to make lcore_id a macro parameter?
I.e.:
#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle, lcore_id) \
for ((lcore_id) = ...
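
Spelled out, the suggested variant would be something like (sketch):

#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle, lcore_id)		\
	for ((lcore_id) =						\
		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
	     (lcore_id) < RTE_MAX_LCORE;				\
	     (lcore_id)++,						\
		     (value) = RTE_LCORE_VAR_LCORE_VALUE((lcore_id), handle))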


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v4 4/7] random: keep PRNG state in lcore variable
  2024-09-16 10:52                                     ` [PATCH v4 4/7] random: keep PRNG state in lcore variable Mattias Rönnblom
@ 2024-09-16 16:11                                       ` Konstantin Ananyev
  0 siblings, 0 replies; 185+ messages in thread
From: Konstantin Ananyev @ 2024-09-16 16:11 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob



> -----Original Message-----
> From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Sent: Monday, September 16, 2024 11:52 AM
> To: dev@dpdk.org
> Cc: hofors@lysator.liu.se; Morten Brørup <mb@smartsharesystems.com>; Stephen Hemminger <stephen@networkplumber.org>;
> Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>; David Marchand <david.marchand@redhat.com>; Jerin Jacob
> <jerinj@marvell.com>; Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Subject: [PATCH v4 4/7] random: keep PRNG state in lcore variable
> 
> Replace keeping PRNG state in a RTE_MAX_LCORE-sized static array of
> cache-aligned and RTE_CACHE_GUARDed struct instances with keeping the
> same state in a more cache-friendly lcore variable.
> 
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> 
> --
> 
> RFC v3:
>  * Remove cache alignment on unregistered threads' rte_rand_state.
>    (Morten Brørup)
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> 

> 2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v4 5/7] power: keep per-lcore state in lcore variable
  2024-09-16 10:52                                     ` [PATCH v4 5/7] power: keep per-lcore " Mattias Rönnblom
@ 2024-09-16 16:12                                       ` Konstantin Ananyev
  0 siblings, 0 replies; 185+ messages in thread
From: Konstantin Ananyev @ 2024-09-16 16:12 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob


> Replace static array of cache-aligned structs with an lcore variable,
> to slightly benefit code simplicity and performance.
> 
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> 
> --
> 
> RFC v3:
>  * Replace for loop with FOREACH macro.
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

> 2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v4 3/7] eal: add lcore variable performance test
  2024-09-16 11:54                                         ` Morten Brørup
@ 2024-09-16 16:12                                           ` Mattias Rönnblom
  2024-09-16 17:19                                             ` Morten Brørup
  0 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-16 16:12 UTC (permalink / raw)
  To: Morten Brørup, Mattias Rönnblom, dev
  Cc: Stephen Hemminger, Konstantin Ananyev, David Marchand, Jerin Jacob

On 2024-09-16 13:54, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
>> Sent: Monday, 16 September 2024 13.13
>>
>> On 2024-09-16 12:52, Mattias Rönnblom wrote:
>>> Add basic micro benchmark for lcore variables, in an attempt to assure
>>> that the overhead isn't significantly greater than alternative
>>> approaches, in scenarios where the benefits aren't expected to show up
>>> (i.e., when plenty of cache is available compared to the working set
>>> size of the per-lcore data).
>>>
>>
>> Here are some test results for a Raptor Cove @ 3,2 GHz (GCC 11):
>>
>>    + ------------------------------------------------------- +
>>    + Test Suite : lcore variable perf autotest
>>    + ------------------------------------------------------- +
>> Latencies [TSC cycles/update]
>> Modules/Variables  Static array  Thread-local Storage  Lcore variables
>>                   1           3.9           5.5              3.7
>>                   2           3.8           5.5              3.8
>>                   4           4.9           5.5              3.7
>>                   8           3.8           5.5              3.8
>>                  16          11.3           5.5              3.7
>>                  32          20.9           5.5              3.7
>>                  64          23.5           5.5              3.7
>>                 128          23.2           5.5              3.7
>>                 256          23.5           5.5              3.7
>>                 512          24.1           5.5              3.7
>>                1024          25.3           5.5              3.9
>>    + TestCase [ 0] : test_lcore_var_access succeeded
>>    + ------------------------------------------------------- +
>>
>>
>> The reason for TLS being slower than lcore variables (which in turn
>> rely on TLS for the lcore id lookup) is the lazy initialization
>> conditional imposed on that variant. Could that conditional be avoided
>> (which is module-dependent, I suppose), it beats lcore variables at
>> ~3.0 cycles/update.
> 
> I think you should not assume lazy initialization of TLS in your benchmark.
> Our application uses TLS, and when spinning up a new thread, we call an per-lcore init function of each module before calling the per-lcore run function. This design pattern is also described in Figure 1.4 [1] in the Programmer's Guide.
> 
> [1]: https://doc.dpdk.org/guides/prog_guide/env_abstraction_layer.html
> 

Per-lcore init functions may be an option, and also may not, depending 
on what API you need to adhere to. But maybe I should add a non-lazy TLS 
variant as well.

I should probably add some information on lcore variables in the EAL 
programmer's guide as well.

Non-lazy TLS would be a more viable option if there were proper 
framework support for it. Now, I'm not sure there is a better way to do 
it in a DPDK library than how it's done for tracing, where there's an 
explicit call per thread created. Other DPDK-internal users of 
RTE_PER_LCORE seem to depend on lazy initialization.

>>
>> I must say I'm surprised to see lcore variables doing this well, at
>> these very modest working set sizes. Probably, you can stay at near-zero
>> L1 misses with lcore variables (and TLS), but start missing the L1 with
>> static arrays.
> 

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v4 6/7] service: keep per-lcore state in lcore variable
  2024-09-16 10:52                                     ` [PATCH v4 6/7] service: " Mattias Rönnblom
@ 2024-09-16 16:13                                       ` Konstantin Ananyev
  0 siblings, 0 replies; 185+ messages in thread
From: Konstantin Ananyev @ 2024-09-16 16:13 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob


> Replace static array of cache-aligned structs with an lcore variable,
> to slightly benefit code simplicity and performance.
> 
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> 
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com> 

> 2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v4 7/7] eal: keep per-lcore power intrinsics state in lcore variable
  2024-09-16 10:52                                     ` [PATCH v4 7/7] eal: keep per-lcore power intrinsics " Mattias Rönnblom
@ 2024-09-16 16:14                                       ` Konstantin Ananyev
  0 siblings, 0 replies; 185+ messages in thread
From: Konstantin Ananyev @ 2024-09-16 16:14 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob


> Keep per-lcore power intrinsics state in a lcore variable to reduce
> cache working set size and avoid any CPU next-line-prefetching causing
> false sharing.
> 
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

> 2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v4 3/7] eal: add lcore variable performance test
  2024-09-16 16:12                                           ` Mattias Rönnblom
@ 2024-09-16 17:19                                             ` Morten Brørup
  0 siblings, 0 replies; 185+ messages in thread
From: Morten Brørup @ 2024-09-16 17:19 UTC (permalink / raw)
  To: Mattias Rönnblom, Mattias Rönnblom, dev
  Cc: Stephen Hemminger, Konstantin Ananyev, David Marchand, Jerin Jacob

> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> Sent: Monday, 16 September 2024 18.13
> 
> On 2024-09-16 13:54, Morten Brørup wrote:
> >> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> >> Sent: Monday, 16 September 2024 13.13
> >>
> >> The reason for TLS being slower than lcore variables (which in turn
> >> rely on TLS for the lcore id lookup) is the lazy initialization
> >> conditional imposed on that variant. Could that conditional be avoided
> >> (which is module-dependent, I suppose), it beats lcore variables at
> >> ~3.0 cycles/update.
> >
> > I think you should not assume lazy initialization of TLS in your
> > benchmark.
> > Our application uses TLS, and when spinning up a new thread, we call
> > a per-lcore init function of each module before calling the per-lcore
> > run function. This design pattern is also described in Figure 1.4 [1]
> > in the Programmer's Guide.
> >
> > [1]: https://doc.dpdk.org/guides/prog_guide/env_abstraction_layer.html
> >
> 
> Per-lcore init functions may be an option, and also may not, depending
> on what API you need to adhere to. But maybe I should add a non-lazy TLS
> variant as well.

Certainly. Both, or just non-lazy is fine with me.

> 
> I should probably add some information on lcore variables in the EAL
> programmer's guide as well.

+1

> 
> Non-lazy TLS would be a more viable option if there were proper
> framework support for it.

The framework should provide RTE_LCORE_INIT macros for modules to define per-lcore init functions, which EAL should call when EAL creates additional threads. And they should obviously be called from within the newly created thread, not from the main thread.
And if some per-lcore init function only needs to do its work for worker threads, the init function can check the thread type as the first thing.
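
Purely as a sketch of the idea (no such facility exists in EAL today;
the registration call below is hypothetical):

typedef void (*rte_lcore_init_fn)(void);

/* hypothetical EAL call, recording a callback to be invoked from
 * within each newly created thread
 */
void rte_lcore_init_register(rte_lcore_init_fn fn);

#define RTE_LCORE_INIT(fn)				\
	RTE_INIT(rte_lcore_init_reg_ ## fn)		\
	{						\
		rte_lcore_init_register(fn);		\
	}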

> Now, I'm not sure there is a better way to do
> it in a DPDK library than how it's done for tracing, where there's an
> explicit call per thread created. Other DPDK-internal users of
> RTE_PER_LCORE seem to depend on lazy initialization.

The framework lacks the per-thread init feature, so it's implemented differently in different modules. Don't get distracted by how the trace module does it. Just imagine the framework offering some generic mechanism to do it.


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v4 1/7] eal: add static per-lcore memory allocation facility
  2024-09-16 14:02                                       ` Konstantin Ananyev
@ 2024-09-16 17:39                                         ` Morten Brørup
  2024-09-16 23:19                                           ` Konstantin Ananyev
  2024-09-17 14:28                                         ` Mattias Rönnblom
  1 sibling, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-09-16 17:39 UTC (permalink / raw)
  To: Konstantin Ananyev, Mattias Rönnblom, dev
  Cc: hofors, Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Jerin Jacob

> From: Konstantin Ananyev [mailto:konstantin.ananyev@huawei.com]
> Sent: Monday, 16 September 2024 16.02
> 
> > Introduce DPDK per-lcore id variables, or lcore variables for short.
> >
> > An lcore variable has one value for every current and future lcore
> > id-equipped thread.
> >
> > The primary <rte_lcore_var.h> use case is for statically allocating
> > small, frequently-accessed data structures, for which one instance
> > should exist for each lcore.
> >
> > Lcore variables are similar to thread-local storage (TLS, e.g., C11
> > _Thread_local), but decoupling the values' lifetime from that of the
> > threads.
> >
> > Lcore variables are also similar in terms of functionality provided by
> > FreeBSD kernel's DPCPU_*() family of macros and the associated
> > build-time machinery. DPCPU uses linker scripts, which effectively
> > prevents the reuse of its, otherwise seemingly viable, approach.
> >
> > The currently-prevailing way to solve the same problem as lcore
> > variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
> > array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> > lcore variables over this approach is that data related to the same
> > lcore now is close (spatially, in memory), rather than data used by
> > the same module, which in turn avoids excessive use of padding,
> > polluting caches with unused data.
> >
> > Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> > Acked-by: Morten Brørup <mb@smartsharesystems.com>
> 
> LGTM in general, few small questions (mostly nits), see below.
> 
> > --- /dev/null
> > +++ b/lib/eal/common/eal_common_lcore_var.c
> > @@ -0,0 +1,78 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2024 Ericsson AB
> > + */
> > +
> > +#include <inttypes.h>
> > +#include <stdlib.h>
> > +
> > +#ifdef RTE_EXEC_ENV_WINDOWS
> > +#include <malloc.h>
> > +#endif
> > +
> > +#include <rte_common.h>
> > +#include <rte_debug.h>
> > +#include <rte_log.h>
> > +
> > +#include <rte_lcore_var.h>
> > +
> > +#include "eal_private.h"
> > +
> > +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
> > +
> > +static void *lcore_buffer;
> > +static size_t offset = RTE_MAX_LCORE_VAR;
> > +
> > +static void *
> > +lcore_var_alloc(size_t size, size_t align)
> > +{
> > +	void *handle;
> > +	void *value;
> > +
> > +	offset = RTE_ALIGN_CEIL(offset, align);
> > +
> > +	if (offset + size > RTE_MAX_LCORE_VAR) {
> > +#ifdef RTE_EXEC_ENV_WINDOWS
> > +		lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
> > +					       RTE_CACHE_LINE_SIZE);
> > +#else
> > +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> > +					     LCORE_BUFFER_SIZE);
> > +#endif
> 
> I don't remember whether that question already came up:
> For debugging and health-checking purposes - would it make sense to link
> all lcore_buffer values into a linked list?
> So a user/developer/some tool can walk over it to check that a provided
> handle value is really a valid lcore_var, etc.

Nice idea.
Such a list, along with an accompanying dump function, can be added later.

> 
> > +		RTE_VERIFY(lcore_buffer != NULL);
> > +
> > +		offset = 0;
> > +	}
> > +
> > +	handle = RTE_PTR_ADD(lcore_buffer, offset);
> > +
> > +	offset += size;
> > +
> > +	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
> > +		memset(value, 0, size);
> > +
> > +	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with
> a "
> > +		"%"PRIuPTR"-byte alignment", size, align);
> > +
> > +	return handle;
> > +}
> > +
> > +void *
> > +rte_lcore_var_alloc(size_t size, size_t align)
> > +{
> > +	/* Having the per-lcore buffer size aligned on cache lines,
> > +	 * as well as having the base pointer aligned on cache line
> > +	 * size, assures that aligned offsets also translate to aligned
> > +	 * pointers across all values.
> > +	 */
> > +	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
> > +	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
> > +	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
> > +
> > +	/* '0' means asking for worst-case alignment requirements */
> > +	if (align == 0)
> > +		align = alignof(max_align_t);
> > +
> > +	RTE_ASSERT(rte_is_power_of_2(align));
> > +
> > +	return lcore_var_alloc(size, align);
> > +}
> 
> ....
> 
> > diff --git a/lib/eal/include/rte_lcore_var.h
> b/lib/eal/include/rte_lcore_var.h
> > new file mode 100644
> > index 0000000000..ec3ab714a8
> > --- /dev/null
> > +++ b/lib/eal/include/rte_lcore_var.h
> 
> ...
> 
> > +/**
> > + * Given the lcore variable type, produces the type of the lcore
> > + * variable handle.
> > + */
> > +#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
> > +	type *
> > +
> > +/**
> > + * Define an lcore variable handle.
> > + *
> > + * This macro defines a variable which is used as a handle to access
> > + * the various instances of a per-lcore id variable.
> > + *
> > + * The aim with this macro is to make clear at the point of
> > + * declaration that this is an lcore handle, rather than a regular
> > + * pointer.
> > + *
> > + * Add @b static as a prefix in case the lcore variable is only to be
> > + * accessed from a particular translation unit.
> > + */
> > +#define RTE_LCORE_VAR_HANDLE(type, name)	\
> > +	RTE_LCORE_VAR_HANDLE_TYPE(type) name
> > +
> > +/**
> > + * Allocate space for an lcore variable, and initialize its handle.
> > + *
> > + * The values of the lcore variable are initialized to zero.
> > + */
> > +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, align)	\
> > +	handle = rte_lcore_var_alloc(size, align)
> > +
> > +/**
> > + * Allocate space for an lcore variable, and initialize its handle,
> > + * with values aligned for any type of object.
> > + *
> > + * The values of the lcore variable are initialized to zero.
> > + */
> > +#define RTE_LCORE_VAR_ALLOC_SIZE(handle, size)	\
> > +	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, 0)
> > +
> > +/**
> > + * Allocate space for an lcore variable of the size and alignment
> requirements
> > + * suggested by the handle pointer type, and initialize its handle.
> > + *
> > + * The values of the lcore variable are initialized to zero.
> > + */
> > +#define RTE_LCORE_VAR_ALLOC(handle)					\
> > +	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, sizeof(*(handle)),	\
> > +				       alignof(typeof(*(handle))))
> > +
> > +/**
> > + * Allocate an explicitly-sized, explicitly-aligned lcore variable by
> > + * means of a @ref RTE_INIT constructor.
> > + *
> > + * The values of the lcore variable are initialized to zero.
> > + */
> > +#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
> > +	RTE_INIT(rte_lcore_var_init_ ## name)				\
> > +	{								\
> > +		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
> > +	}
> > +
> > +/**
> > + * Allocate an explicitly-sized lcore variable by means of a @ref
> > + * RTE_INIT constructor.
> > + *
> > + * The values of the lcore variable are initialized to zero.
> > + */
> > +#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
> > +	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
> > +
> > +/**
> > + * Allocate an lcore variable by means of a @ref RTE_INIT
> constructor.
> > + *
> > + * The values of the lcore variable are initialized to zero.
> > + */
> > +#define RTE_LCORE_VAR_INIT(name)					\
> > +	RTE_INIT(rte_lcore_var_init_ ## name)				\
> > +	{								\
> > +		RTE_LCORE_VAR_ALLOC(name);				\
> > +	}
> > +
> > +/**
> > + * Get void pointer to lcore variable instance with the specified
> > + * lcore id.
> > + *
> > + * @param lcore_id
> > + *   The lcore id specifying which of the @c RTE_MAX_LCORE value
> > + *   instances should be accessed. The lcore id need not be valid
> > + *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the
> pointer
> > + *   is also not valid (and thus should not be dereferenced).
> > + * @param handle
> > + *   The lcore variable handle.
> > + */
> > +static inline void *
> > +rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
> > +{
> > +	return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
> > +}
> > +
> > +/**
> > + * Get pointer to lcore variable instance with the specified lcore
> id.
> > + *
> > + * @param lcore_id
> > + *   The lcore id specifying which of the @c RTE_MAX_LCORE value
> > + *   instances should be accessed. The lcore id need not be valid
> > + *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the
> pointer
> > + *   is also not valid (and thus should not be dereferenced).
> > + * @param handle
> > + *   The lcore variable handle.
> > + */
> > +#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle)			\
> > +	((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
> > +
> > +/**
> > + * Get pointer to lcore variable instance of the current thread.
> > + *
> > + * May only be used by EAL threads and registered non-EAL threads.
> > + */
> > +#define RTE_LCORE_VAR_VALUE(handle) \
> > +	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
> 
> Would it make sense to check that rte_lcore_id() !=  LCORE_ID_ANY?
> After all if people do not want this extra check, they can probably use
> RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
> explicitly.

Not generally. I prefer keeping it brief.
We could add a _SAFE variant with this extra check, like LIST_FOREACH has LIST_FOREACH_SAFE (although for a different purpose).

Come to think of it: In the name of brevity, consider renaming RTE_LCORE_VAR_VALUE to RTE_LCORE_VAR. (And RTE_LCORE_VAR_FOREACH_VALUE to RTE_LCORE_VAR_FOREACH.) We want to see these everywhere in the code.

> 
> > +
> > +/**
> > + * Iterate over each lcore id's value for an lcore variable.
> > + *
> > + * @param value
> > + *   A pointer successively set to point to lcore variable value
> > + *   corresponding to every lcore id (up to @c RTE_MAX_LCORE).
> > + * @param handle
> > + *   The lcore variable handle.
> > + */
> > +#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle)			\
> > +	for (unsigned int lcore_id =					\
> > +		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0);
> \
> > +	     lcore_id < RTE_MAX_LCORE;					\
> > +	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
> handle))
> 
> > Might it be a bit better (and safer) to make lcore_id a macro parameter?
> > I.e.:
> > #define RTE_LCORE_VAR_FOREACH_VALUE(value, handle, lcore_id) \
> > for ((lcore_id) = ...

The same thought has struck me, so I checked the scope of lcore_id.
The scope of lcore_id remains limited to the for loop, i.e. it is available inside the for loop, but not after it.
IMO this suffices, and lcore_id doesn't need to be a macro parameter.
Maybe renaming lcore_id to _lcore_id would be an improvement, if lcore_id is already defined and used for other purposes within the for loop.


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v4 1/7] eal: add static per-lcore memory allocation facility
  2024-09-16 17:39                                         ` Morten Brørup
@ 2024-09-16 23:19                                           ` Konstantin Ananyev
  2024-09-17  7:12                                             ` Morten Brørup
  0 siblings, 1 reply; 185+ messages in thread
From: Konstantin Ananyev @ 2024-09-16 23:19 UTC (permalink / raw)
  To: Morten Brørup, Mattias Rönnblom, dev
  Cc: hofors, Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Jerin Jacob


> > > +/**
> > > + * Get pointer to lcore variable instance of the current thread.
> > > + *
> > > + * May only be used by EAL threads and registered non-EAL threads.
> > > + */
> > > +#define RTE_LCORE_VAR_VALUE(handle) \
> > > +	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
> >
> > Would it make sense to check that rte_lcore_id() !=  LCORE_ID_ANY?
> > After all if people do not want this extra check, they can probably use
> > RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
> > explicitly.
> 
> Not generally. I prefer keeping it brief.
> We could add a _SAFE variant with this extra check, like LIST_FOREACH has LIST_FOREACH_SAFE (although for a different purpose).
> 
> Come to think of it: In the name of brevity, consider renaming RTE_LCORE_VAR_VALUE to RTE_LCORE_VAR. (And
> RTE_LCORE_VAR_FOREACH_VALUE to RTE_LCORE_VAR_FOREACH.) We want to see these everywhere in the code.

Well, it is not about brevity...
I just feel uncomfortable that our own public macro doesn't check the value
returned by rte_lcore_id() and introduces a possible out-of-bounds memory access.

 
> >
> > > +
> > > +/**
> > > + * Iterate over each lcore id's value for an lcore variable.
> > > + *
> > > + * @param value
> > > + *   A pointer successively set to point to lcore variable value
> > > + *   corresponding to every lcore id (up to @c RTE_MAX_LCORE).
> > > + * @param handle
> > > + *   The lcore variable handle.
> > > + */
> > > +#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle)			\
> > > +	for (unsigned int lcore_id =					\
> > > +		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0);
> > \
> > > +	     lcore_id < RTE_MAX_LCORE;					\
> > > +	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
> > handle))
> >
> > Might it be a bit better (and safer) to make lcore_id a macro parameter?
> > I.e.:
> > #define RTE_LCORE_VAR_FOREACH_VALUE(value, handle, lcore_id) \
> > for ((lcore_id) = ...
> 
> The same thought has struck me, so I checked the scope of lcore_id.
> The scope of lcore_id remains limited to the for loop, i.e. it is available inside the for loop, but not after it.

A variable with the same name (and type) can be defined by the user before the
loop, with the intention to use it inside the loop.
Just like it happens here (in patch #2):
+	unsigned int lcore_id;
.....
+	/* take the opportunity to test the foreach macro */
+	int *v;
+	lcore_id = 0;
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_int) {
+		TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
+				  "Unexpected value on lcore %d during "
+				  "iteration", lcore_id);
+		lcore_id++;
+	}
+


> IMO this suffices, and lcore_id doesn't need to be a macro parameter.
> Maybe renaming lcore_id to _lcore_id would be an improvement, if lcore_id is already defined and used for other purposes within
> the for loop.


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v4 1/7] eal: add static per-lcore memory allocation facility
  2024-09-16 23:19                                           ` Konstantin Ananyev
@ 2024-09-17  7:12                                             ` Morten Brørup
  2024-09-17  8:09                                               ` Konstantin Ananyev
  0 siblings, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-09-17  7:12 UTC (permalink / raw)
  To: Konstantin Ananyev, Mattias Rönnblom, dev
  Cc: hofors, Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Jerin Jacob

> From: Konstantin Ananyev [mailto:konstantin.ananyev@huawei.com]
> Sent: Tuesday, 17 September 2024 01.20
> 
> > > > +/**
> > > > + * Get pointer to lcore variable instance of the current thread.
> > > > + *
> > > > + * May only be used by EAL threads and registered non-EAL threads.
> > > > + */
> > > > +#define RTE_LCORE_VAR_VALUE(handle) \
> > > > +	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
> > >
> > > Would it make sense to check that rte_lcore_id() !=  LCORE_ID_ANY?
> > > After all if people do not want this extra check, they can probably use
> > > RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
> > > explicitly.
> >
> > Not generally. I prefer keeping it brief.
> > We could add a _SAFE variant with this extra check, like LIST_FOREACH has
> LIST_FOREACH_SAFE (although for a different purpose).
> >
> > Come to think of it: In the name of brevity, consider renaming
> RTE_LCORE_VAR_VALUE to RTE_LCORE_VAR. (And
> > RTE_LCORE_VAR_FOREACH_VALUE to RTE_LCORE_VAR_FOREACH.) We want to see these
> everywhere in the code.
> 
> Well, it is not about brevity...
> I just feel uncomfortable that our own public macro doesn't check the value
> returned by rte_lcore_id() and introduces a possible out-of-bounds memory
> access.

For performance reasons, we generally don't check parameter validity in fast path functions/macros; lots of code in DPDK uses ptr->array[rte_lcore_id()] without checking rte_lcore_id() validity.
We shouldn't do it here either.

There's a secondary benefit:
RTE_LCORE_VAR_VALUE() returns a pointer, so this macro can always be used.
Especially, the pointer can be initialized with other variables at the start of a function:
struct mystruct * const state = RTE_LCORE_VAR_VALUE(state_handle);
The out-of-bound memory access will occur if dereferencing the pointer.

> 
> 
> > >
> > > > +
> > > > +/**
> > > > + * Iterate over each lcore id's value for an lcore variable.
> > > > + *
> > > > + * @param value
> > > > + *   A pointer successively set to point to lcore variable value
> > > > + *   corresponding to every lcore id (up to @c RTE_MAX_LCORE).
> > > > + * @param handle
> > > > + *   The lcore variable handle.
> > > > + */
> > > > +#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle)			\
> > > > +	for (unsigned int lcore_id =					\
> > > > +		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)),
> 0);
> > > \
> > > > +	     lcore_id < RTE_MAX_LCORE;					\
> > > > +	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
> > > handle))
> > >
> > > Might it be a bit better (and safer) to make lcore_id a macro parameter?
> > > I.e.:
> > > #define RTE_LCORE_VAR_FOREACH_VALUE(value, handle, lcore_id) \
> > > for ((lcore_id) = ...
> >
> > The same thought has struck me, so I checked the scope of lcore_id.
> > The scope of lcore_id remains limited to the for loop, i.e. it is available
> inside the for loop, but not after it.
> 
> A variable with the same name (and type) can be defined by the user before
> the loop, with the intention to use it inside the loop.
> Just like it happens here (in patch #2):
> +	unsigned int lcore_id;
> .....
> +	/* take the opportunity to test the foreach macro */
> +	int *v;
> +	lcore_id = 0;
> +	RTE_LCORE_VAR_FOREACH_VALUE(v, test_int) {
> +		TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
> +				  "Unexpected value on lcore %d during "
> +				  "iteration", lcore_id);
> +		lcore_id++;
> +	}
> +
> 

You convinced me here, Konstantin.
Adding the iterator (lcore_id) as a macro parameter reduces the risk of bugs, and has no real disadvantages.

> 
> > IMO this suffices, and lcore_id doesn't need to be a macro parameter.
> > Maybe renaming lcore_id to _lcore_id would be an improvement, if lcore_id is
> already defined and used for other purposes within
> > the for loop.

PS:
We discussed the _VALUE postfix previously, Mattias, and I agreed to it. But now that I have become more familiar with the code, I think the _VALUE postfix should be dropped.
I'm usually in favor of long variable/function/macro names, arguing that they improve code readability.
But I don't think the _VALUE postfix really improves readability.
Especially when RTE_LCORE_VAR() has become widely used, and everyone is familiar with it, a long name (RTE_LCORE_VAR_VALUE()) will be more annoying than helpful.


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v4 1/7] eal: add static per-lcore memory allocation facility
  2024-09-17  7:12                                             ` Morten Brørup
@ 2024-09-17  8:09                                               ` Konstantin Ananyev
  0 siblings, 0 replies; 185+ messages in thread
From: Konstantin Ananyev @ 2024-09-17  8:09 UTC (permalink / raw)
  To: Morten Brørup, Mattias Rönnblom, dev
  Cc: hofors, Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Jerin Jacob



> > From: Konstantin Ananyev [mailto:konstantin.ananyev@huawei.com]
> > Sent: Tuesday, 17 September 2024 01.20
> >
> > > > > +/**
> > > > > + * Get pointer to lcore variable instance of the current thread.
> > > > > + *
> > > > > + * May only be used by EAL threads and registered non-EAL threads.
> > > > > + */
> > > > > +#define RTE_LCORE_VAR_VALUE(handle) \
> > > > > +	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
> > > >
> > > > Would it make sense to check that rte_lcore_id() !=  LCORE_ID_ANY?
> > > > After all if people do not want this extra check, they can probably use
> > > > RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
> > > > explicitly.
> > >
> > > Not generally. I prefer keeping it brief.
> > > We could add a _SAFE variant with this extra check, like LIST_FOREACH has
> > LIST_FOREACH_SAFE (although for a different purpose).
> > >
> > > Come to think of it: In the name of brevity, consider renaming
> > RTE_LCORE_VAR_VALUE to RTE_LCORE_VAR. (And
> > > RTE_LCORE_VAR_FOREACH_VALUE to RTE_LCORE_VAR_FOREACH.) We want to see these
> > everywhere in the code.
> >
> > Well, it is not about brevity...
> > I just feel  uncomfortable that our own public macro doesn't check value
> > returned by rte_lcore_id() and introduce a possible out-of-bound memory
> > access.
> 
> For performance reasons, we generally don't check parameter validity in fast
> path functions/macros; lots of code in DPDK uses ptr->array[rte_lcore_id()]
> without checking rte_lcore_id() validity.

Yes, there are plenty of such places inside DPDK...
Ok, I'll leave it for the author to decide; after all, there is a clear comment
in front of it forbidding use of that macro from non-EAL threads.
Hope users will read it before using ;)

> We shouldn't do it here either.
> 
> There's a secondary benefit:
> RTE_LCORE_VAR_VALUE() returns a pointer, so this macro can always be used.
> Especially, the pointer can be initialized with other variables at the start of a function:
> struct mystruct * const state = RTE_LCORE_VAR_VALUE(state_handle);
> The out-of-bound memory access will occur if dereferencing the pointer.
> 

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v4 1/7] eal: add static per-lcore memory allocation facility
  2024-09-16 14:02                                       ` Konstantin Ananyev
  2024-09-16 17:39                                         ` Morten Brørup
@ 2024-09-17 14:28                                         ` Mattias Rönnblom
  2024-09-17 16:11                                           ` Konstantin Ananyev
  2024-09-17 16:29                                           ` Konstantin Ananyev
  1 sibling, 2 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-17 14:28 UTC (permalink / raw)
  To: Konstantin Ananyev, Mattias Rönnblom, dev
  Cc: Morten Brørup, Stephen Hemminger, Konstantin Ananyev,
	David Marchand, Jerin Jacob

On 2024-09-16 16:02, Konstantin Ananyev wrote:
> 
> 
>> Introduce DPDK per-lcore id variables, or lcore variables for short.
>>
>> An lcore variable has one value for every current and future lcore
>> id-equipped thread.
>>
>> The primary <rte_lcore_var.h> use case is for statically allocating
>> small, frequently-accessed data structures, for which one instance
>> should exist for each lcore.
>>
>> Lcore variables are similar to thread-local storage (TLS, e.g., C11
>> _Thread_local), but decoupling the values' lifetime from that of the
>> threads.
>>
>> Lcore variables are also similar in terms of functionality provided by
>> FreeBSD kernel's DPCPU_*() family of macros and the associated
>> build-time machinery. DPCPU uses linker scripts, which effectively
>> prevents the reuse of its, otherwise seemingly viable, approach.
>>
>> The currently-prevailing way to solve the same problem as lcore
>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
>> lcore variables over this approach is that data related to the same
>> lcore now is close (spatially, in memory), rather than data used by
>> the same module, which in turn avoids excessive use of padding,
>> polluting caches with unused data.
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> 
> LGTM in general, few small questions (mostly nits), see below.
>   
>> --- /dev/null
>> +++ b/lib/eal/common/eal_common_lcore_var.c
>> @@ -0,0 +1,78 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2024 Ericsson AB
>> + */
>> +
>> +#include <inttypes.h>
>> +#include <stdlib.h>
>> +
>> +#ifdef RTE_EXEC_ENV_WINDOWS
>> +#include <malloc.h>
>> +#endif
>> +
>> +#include <rte_common.h>
>> +#include <rte_debug.h>
>> +#include <rte_log.h>
>> +
>> +#include <rte_lcore_var.h>
>> +
>> +#include "eal_private.h"
>> +
>> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
>> +
>> +static void *lcore_buffer;
>> +static size_t offset = RTE_MAX_LCORE_VAR;
>> +
>> +static void *
>> +lcore_var_alloc(size_t size, size_t align)
>> +{
>> +	void *handle;
>> +	void *value;
>> +
>> +	offset = RTE_ALIGN_CEIL(offset, align);
>> +
>> +	if (offset + size > RTE_MAX_LCORE_VAR) {
>> +#ifdef RTE_EXEC_ENV_WINDOWS
>> +		lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
>> +					       RTE_CACHE_LINE_SIZE);
>> +#else
>> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
>> +					     LCORE_BUFFER_SIZE);
>> +#endif
> 
> I don't remember whether that question already came up:
> For debugging and health-checking purposes - would it make sense to link all
> lcore_buffer values into a linked list?
> So a user/developer/some tool can walk over it to check that a provided handle
> value is really a valid lcore_var, etc.
> 

At least you could add some basic statistics, like the total size 
allocated by lcore variables, and the number of variables.

One could also add tracing.
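
E.g., something as simple as this sketch (counter names hypothetical):

static size_t lcore_var_total_size;  /* total bytes allocated, per lcore */
static unsigned int lcore_var_count; /* number of lcore variables */

/* ...updated in lcore_var_alloc(): */
	lcore_var_total_size += size;
	lcore_var_count++;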

>> +		RTE_VERIFY(lcore_buffer != NULL);
>> +
>> +		offset = 0;
>> +	}
>> +
>> +	handle = RTE_PTR_ADD(lcore_buffer, offset);
>> +
>> +	offset += size;
>> +
>> +	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
>> +		memset(value, 0, size);
>> +
>> +	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
>> +		"%"PRIuPTR"-byte alignment", size, align);
>> +
>> +	return handle;
>> +}
>> +
>> +void *
>> +rte_lcore_var_alloc(size_t size, size_t align)
>> +{
>> +	/* Having the per-lcore buffer size aligned on cache lines,
>> +	 * as well as having the base pointer aligned on cache line
>> +	 * size, assures that aligned offsets also translate to aligned
>> +	 * pointers across all values.
>> +	 */
>> +	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
>> +	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
>> +	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
>> +
>> +	/* '0' means asking for worst-case alignment requirements */
>> +	if (align == 0)
>> +		align = alignof(max_align_t);
>> +
>> +	RTE_ASSERT(rte_is_power_of_2(align));
>> +
>> +	return lcore_var_alloc(size, align);
>> +}
> 
> ....
> 
>> diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
>> new file mode 100644
>> index 0000000000..ec3ab714a8
>> --- /dev/null
>> +++ b/lib/eal/include/rte_lcore_var.h
> 
> ...
> 
>> +/**
>> + * Given the lcore variable type, produces the type of the lcore
>> + * variable handle.
>> + */
>> +#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
>> +	type *
>> +
>> +/**
>> + * Define an lcore variable handle.
>> + *
>> + * This macro defines a variable which is used as a handle to access
>> + * the various instances of a per-lcore id variable.
>> + *
>> + * The aim with this macro is to make clear at the point of
>> + * declaration that this is an lcore handle, rather than a regular
>> + * pointer.
>> + *
>> + * Add @b static as a prefix in case the lcore variable is only to be
>> + * accessed from a particular translation unit.
>> + */
>> +#define RTE_LCORE_VAR_HANDLE(type, name)	\
>> +	RTE_LCORE_VAR_HANDLE_TYPE(type) name
>> +
>> +/**
>> + * Allocate space for an lcore variable, and initialize its handle.
>> + *
>> + * The values of the lcore variable are initialized to zero.
>> + */
>> +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, align)	\
>> +	handle = rte_lcore_var_alloc(size, align)
>> +
>> +/**
>> + * Allocate space for an lcore variable, and initialize its handle,
>> + * with values aligned for any type of object.
>> + *
>> + * The values of the lcore variable are initialized to zero.
>> + */
>> +#define RTE_LCORE_VAR_ALLOC_SIZE(handle, size)	\
>> +	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, 0)
>> +
>> +/**
>> + * Allocate space for an lcore variable of the size and alignment requirements
>> + * suggested by the handle pointer type, and initialize its handle.
>> + *
>> + * The values of the lcore variable are initialized to zero.
>> + */
>> +#define RTE_LCORE_VAR_ALLOC(handle)					\
>> +	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, sizeof(*(handle)),	\
>> +				       alignof(typeof(*(handle))))
>> +
>> +/**
>> + * Allocate an explicitly-sized, explicitly-aligned lcore variable by
>> + * means of a @ref RTE_INIT constructor.
>> + *
>> + * The values of the lcore variable are initialized to zero.
>> + */
>> +#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
>> +	RTE_INIT(rte_lcore_var_init_ ## name)				\
>> +	{								\
>> +		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
>> +	}
>> +
>> +/**
>> + * Allocate an explicitly-sized lcore variable by means of a @ref
>> + * RTE_INIT constructor.
>> + *
>> + * The values of the lcore variable are initialized to zero.
>> + */
>> +#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
>> +	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
>> +
>> +/**
>> + * Allocate an lcore variable by means of a @ref RTE_INIT constructor.
>> + *
>> + * The values of the lcore variable are initialized to zero.
>> + */
>> +#define RTE_LCORE_VAR_INIT(name)					\
>> +	RTE_INIT(rte_lcore_var_init_ ## name)				\
>> +	{								\
>> +		RTE_LCORE_VAR_ALLOC(name);				\
>> +	}
>> +
>> +/**
>> + * Get void pointer to lcore variable instance with the specified
>> + * lcore id.
>> + *
>> + * @param lcore_id
>> + *   The lcore id specifying which of the @c RTE_MAX_LCORE value
>> + *   instances should be accessed. The lcore id need not be valid
>> + *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
>> + *   is also not valid (and thus should not be dereferenced).
>> + * @param handle
>> + *   The lcore variable handle.
>> + */
>> +static inline void *
>> +rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
>> +{
>> +	return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
>> +}
>> +
>> +/**
>> + * Get pointer to lcore variable instance with the specified lcore id.
>> + *
>> + * @param lcore_id
>> + *   The lcore id specifying which of the @c RTE_MAX_LCORE value
>> + *   instances should be accessed. The lcore id need not be valid
>> + *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
>> + *   is also not valid (and thus should not be dereferenced).
>> + * @param handle
>> + *   The lcore variable handle.
>> + */
>> +#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle)			\
>> +	((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
>> +
>> +/**
>> + * Get pointer to lcore variable instance of the current thread.
>> + *
>> + * May only be used by EAL threads and registered non-EAL threads.
>> + */
>> +#define RTE_LCORE_VAR_VALUE(handle) \
>> +	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
> 
> Would it make sense to check that rte_lcore_id() !=  LCORE_ID_ANY?
> After all, if people do not want this extra check, they can probably use
> RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
> explicitly.
> 

It would make sense if it were an RTE_ASSERT(). Otherwise, I don't
think so. Attempting to gracefully handle API violations is bad
practice, imo.
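
For illustration, an assert-only check (a hypothetical sketch, not
part of this patch set) could take the form of a wrapper around the
existing accessor:

static inline void *
rte_lcore_var_checked_ptr(void *handle)
{
	unsigned int lcore_id = rte_lcore_id();

	/* owner-style access from an unregistered thread is an
	 * API violation
	 */
	RTE_ASSERT(lcore_id != LCORE_ID_ANY);

	return rte_lcore_var_lcore_ptr(lcore_id, handle);
}

Since RTE_ASSERT() expands to nothing unless RTE_ENABLE_ASSERT is
defined, default builds would see no fast-path cost.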

>> +
>> +/**
>> + * Iterate over each lcore id's value for an lcore variable.
>> + *
>> + * @param value
>> + *   A pointer successively set to point to lcore variable value
>> + *   corresponding to every lcore id (up to @c RTE_MAX_LCORE).
>> + * @param handle
>> + *   The lcore variable handle.
>> + */
>> +#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle)			\
>> +	for (unsigned int lcore_id =					\
>> +		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
>> +	     lcore_id < RTE_MAX_LCORE;					\
>> +	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))
> 
> Might be a bit better (and safer) to make lcore_id a macro parameter?
> I.e.:
> #define RTE_LCORE_VAR_FOREACH_VALUE(value, handle, lcore_id) \
> for ((lcore_id) = ...
> 

Why?
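
(For reference, the suggested variant would presumably read something
like the following sketch, with the caller supplying the iteration
variable:

#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle, lcore_id)		\
	for ((lcore_id) =						\
		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
	     (lcore_id) < RTE_MAX_LCORE;				\
	     (lcore_id)++, (value) = RTE_LCORE_VAR_LCORE_VALUE((lcore_id), handle))

trading the macro-internal declaration for a third macro argument.)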

^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v5 0/7] Lcore variables
  2024-09-16 10:52                                     ` [PATCH v4 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-09-16 14:02                                       ` Konstantin Ananyev
@ 2024-09-17 14:32                                       ` Mattias Rönnblom
  2024-09-17 14:32                                         ` [PATCH v5 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
                                                           ` (6 more replies)
  1 sibling, 7 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-17 14:32 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

This patch set introduces a new API <rte_lcore_var.h> for static
per-lcore id data allocation.

Please refer to the <rte_lcore_var.h> API documentation for both a
rationale for this new API, and a comparison to the alternatives
available.

The adoption of this API would affect many different DPDK modules, but
the author updated only a few, mostly to serve as examples in this
patch set, and to iron out some, but surely not all, wrinkles in the API.

The question of how best to allocate static per-lcore memory has come
up several times on the dev mailing list, for example in the thread on
the "random: use per lcore state" RFC by Stephen Hemminger.

Lcore variables are surely not the answer to all your per-lcore-data
needs, since they only allow for more-or-less static allocation. In the
author's opinion, they do however provide a reasonably simple, clean
and seemingly very performant solution to a real problem.

Mattias Rönnblom (7):
  eal: add static per-lcore memory allocation facility
  eal: add lcore variable functional tests
  eal: add lcore variable performance test
  random: keep PRNG state in lcore variable
  power: keep per-lcore state in lcore variable
  service: keep per-lcore state in lcore variable
  eal: keep per-lcore power intrinsics state in lcore variable

 MAINTAINERS                                   |   6 +
 app/test/meson.build                          |   2 +
 app/test/test_lcore_var.c                     | 432 ++++++++++++++++++
 app/test/test_lcore_var_perf.c                | 257 +++++++++++
 config/rte_config.h                           |   1 +
 doc/api/doxy-api-index.md                     |   1 +
 .../prog_guide/env_abstraction_layer.rst      |  45 +-
 doc/guides/rel_notes/release_24_11.rst        |  14 +
 lib/eal/common/eal_common_lcore_var.c         |  78 ++++
 lib/eal/common/meson.build                    |   1 +
 lib/eal/common/rte_random.c                   |  28 +-
 lib/eal/common/rte_service.c                  | 115 ++---
 lib/eal/include/meson.build                   |   1 +
 lib/eal/include/rte_lcore_var.h               | 385 ++++++++++++++++
 lib/eal/version.map                           |   2 +
 lib/eal/x86/rte_power_intrinsics.c            |  17 +-
 lib/power/rte_power_pmd_mgmt.c                |  34 +-
 17 files changed, 1326 insertions(+), 93 deletions(-)
 create mode 100644 app/test/test_lcore_var.c
 create mode 100644 app/test/test_lcore_var_perf.c
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v5 1/7] eal: add static per-lcore memory allocation facility
  2024-09-17 14:32                                       ` [PATCH v5 0/7] Lcore variables Mattias Rönnblom
@ 2024-09-17 14:32                                         ` Mattias Rönnblom
  2024-09-18  8:00                                           ` [PATCH v6 0/7] Lcore variables Mattias Rönnblom
  2024-09-17 14:32                                         ` [PATCH v5 2/7] eal: add lcore variable functional tests Mattias Rönnblom
                                                           ` (5 subsequent siblings)
  6 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-17 14:32 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Introduce DPDK per-lcore id variables, or lcore variables for short.

An lcore variable has one value for every current and future lcore
id-equipped thread.

The primary <rte_lcore_var.h> use case is for statically allocating
small, frequently-accessed data structures, for which one instance
should exist for each lcore.

Lcore variables are similar to thread-local storage (TLS, e.g., C11
_Thread_local), but with the values' lifetime decoupled from that of
the threads.

Lcore variables are also similar in functionality to the FreeBSD
kernel's DPCPU_*() family of macros and the associated build-time
machinery. DPCPU uses linker scripts, which effectively prevents the
reuse of its otherwise seemingly viable approach.

The currently prevailing way to solve the same problem as lcore
variables is to keep a module's per-lcore data in an
RTE_MAX_LCORE-sized array of cache-aligned, RTE_CACHE_GUARDed structs.
The benefit of lcore variables over this approach is that data related
to the same lcore is now kept close (spatially, in memory), rather
than data used by the same module. This in turn avoids excessive use
of padding, which pollutes caches with unused data.
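
In concrete terms (a sketch of the mechanism, mirroring the
implementation below): each lcore id owns a contiguous,
RTE_MAX_LCORE_VAR-sized slice of a heap-allocated buffer, and a handle
is the address of a variable's value within lcore id 0's slice.
Accessing an instance amounts to a single add:

	/* value for a given lcore id, given the variable's handle */
	void *value = RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);

so the values of all variables belonging to one lcore id end up packed
together, rather than being spread out one module at a time.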

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

PATCH v5:
 * Update EAL programming guide.

PATCH v2:
 * Add Windows support. (Morten Brørup)
 * Fix lcore variables API index reference. (Morten Brørup)
 * Various improvements of the API documentation. (Morten Brørup)
 * Elimination of unused symbol in version.map. (Morten Brørup)

PATCH:
 * Update MAINTAINERS and release notes.
 * Stop covering included files in extern "C" {}.

RFC v6:
 * Include <stdlib.h> to get aligned_alloc().
 * Tweak documentation (grammar).
 * Provide API-level guarantees that lcore variable values take on an
   initial value of zero.
 * Fix misplaced __rte_cache_aligned in the API doc example.

RFC v5:
 * In Doxygen, consistently use @<cmd> (and not \<cmd>).
 * The RTE_LCORE_VAR_GET() and SET() convenience access macros
   covered an uncommon use case, where the lcore value is of a
   primitive type rather than a struct, and are thus eliminated
   from the API. (Morten Brørup)
 * In the wake of the GET()/SET() removal, rename RTE_LCORE_VAR_PTR()
   to RTE_LCORE_VAR_VALUE().
 * The underscores are removed from __rte_lcore_var_lcore_ptr() to
   signal that this function is a part of the public API.
 * Macro arguments are documented.

RFC v4:
 * Replace large static array with libc heap-allocated memory. One
   implication of this change is that there no longer exists a fixed
   upper bound for the total amount of memory used by lcore variables.
   RTE_MAX_LCORE_VAR has changed meaning, and now represents the
   maximum size of any individual lcore variable value.
 * Fix issues in example. (Morten Brørup)
 * Improve access macro type checking. (Morten Brørup)
 * Refer to the lcore variable handle as "handle" and not "name" in
   various macros.
 * Document lack of thread safety in rte_lcore_var_alloc().
 * Provide API-level assurance that the lcore variable handle is
   always non-NULL, to allow applications to use NULL to mean
   "not yet allocated".
 * Note zero-sized allocations are not allowed.
 * Give API-level guarantee the lcore variable values are zeroed.

RFC v3:
 * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
 * Update example to reflect FOREACH macro name change (in RFC v2).

RFC v2:
 * Use alignof to derive alignment requirements. (Morten Brørup)
 * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
   *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
 * Allow user-specified alignment, but limit max to cache line size.
---
 MAINTAINERS                                   |   6 +
 config/rte_config.h                           |   1 +
 doc/api/doxy-api-index.md                     |   1 +
 .../prog_guide/env_abstraction_layer.rst      |  45 +-
 doc/guides/rel_notes/release_24_11.rst        |  14 +
 lib/eal/common/eal_common_lcore_var.c         |  78 ++++
 lib/eal/common/meson.build                    |   1 +
 lib/eal/include/meson.build                   |   1 +
 lib/eal/include/rte_lcore_var.h               | 385 ++++++++++++++++++
 lib/eal/version.map                           |   2 +
 10 files changed, 528 insertions(+), 6 deletions(-)
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

diff --git a/MAINTAINERS b/MAINTAINERS
index c5a703b5c0..362d9a3f28 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -282,6 +282,12 @@ F: lib/eal/include/rte_random.h
 F: lib/eal/common/rte_random.c
 F: app/test/test_rand_perf.c
 
+Lcore Variables
+M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
+F: lib/eal/include/rte_lcore_var.h
+F: lib/eal/common/eal_common_lcore_var.c
+F: app/test/test_lcore_var.c
+
 ARM v7
 M: Wathsala Vithanage <wathsala.vithanage@arm.com>
 F: config/arm/
diff --git a/config/rte_config.h b/config/rte_config.h
index dd7bb0d35b..311692e498 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -41,6 +41,7 @@
 /* EAL defines */
 #define RTE_CACHE_GUARD_LINES 1
 #define RTE_MAX_HEAPS 32
+#define RTE_MAX_LCORE_VAR 1048576
 #define RTE_MAX_MEMSEG_LISTS 128
 #define RTE_MAX_MEMSEG_PER_LIST 8192
 #define RTE_MAX_MEM_MB_PER_LIST 32768
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index f9f0300126..ed577f14ee 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -99,6 +99,7 @@ The public API headers are grouped by topics:
   [interrupts](@ref rte_interrupts.h),
   [launch](@ref rte_launch.h),
   [lcore](@ref rte_lcore.h),
+  [lcore variables](@ref rte_lcore_var.h),
   [per-lcore](@ref rte_per_lcore.h),
   [service cores](@ref rte_service.h),
   [keepalive](@ref rte_keepalive.h),
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 9559c12a98..12b49672a6 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -433,12 +433,45 @@ with them once they're registered.
 Per-lcore and Shared Variables
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. note::
-
-    lcore refers to a logical execution unit of the processor, sometimes called a hardware *thread*.
-
-Shared variables are the default behavior.
-Per-lcore variables are implemented using *Thread Local Storage* (TLS) to provide per-thread local storage.
+By default, static variables, blocks allocated on the DPDK heap, and
+other types of memory are shared by all DPDK threads.
+
+An application, a DPDK library or PMD may opt to keep per-thread
+state.
+
+Per-thread data may be maintained using either *lcore variables*
+(``rte_lcore_var.h``), *thread-local storage (TLS)*
+(``rte_per_lcore.h``), or a static array of ``RTE_MAX_LCORE``
+elements, indexed by ``rte_lcore_id()``. These methods allow
+per-lcore data to be a largely module-internal affair, not
+directly visible in the module's API. Another possibility is to deal
+explicitly with per-thread aspects in the API (e.g., the ports of the
+Eventdev API).
+
+Lcore variables are suitable for small objects statically allocated
+at the time of module or application initialization. An lcore
+variable takes on one value for each lcore id-equipped thread (i.e.,
+for EAL threads and registered non-EAL threads, in total
+``RTE_MAX_LCORE`` instances). The lifetime of lcore variables is
+detached from that of the owning threads, and they may thus be
+initialized prior to the owner having been created.
+
+Variables with thread-local storage are allocated at the time of
+thread creation, and exist until the thread terminates, for every
+thread in the process. Only very small objects should be allocated in
+TLS, since large TLS objects significantly slow down thread creation
+and may needlessly increase the memory footprint of applications that
+make extensive use of unregistered threads.
+
+A common but now largely obsolete DPDK pattern is to use a static
+array sized according to the maximum number of lcore id-equipped
+threads (i.e., with ``RTE_MAX_LCORE`` elements). To avoid *false
+sharing*, each element must both be cache-aligned and include a
+``RTE_CACHE_GUARD``. Such extensive use of padding causes internal
+fragmentation (i.e., unused space) and lowers cache hit rates.
+
+For more discussion of per-lcore state, see the ``rte_lcore_var.h``
+API documentation.
 
 Logs
 ~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 0ff70d9057..a3884f7491 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -55,6 +55,20 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added EAL per-lcore static memory allocation facility.**
+
+    Added EAL API <rte_lcore_var.h> for statically allocating small,
+    frequently-accessed data structures, for which one instance should
+    exist for each EAL thread and registered non-EAL thread.
+
+    With lcore variables, data is organized spatially on a per-lcore id
+    basis, rather than per library or PMD, avoiding the need for cache
+    aligning (or RTE_CACHE_GUARDing) data structures, which in turn
+    reduces CPU cache internal fragmentation, improving performance.
+
+    Lcore variables are similar to thread-local storage (TLS, e.g.,
+    C11 _Thread_local), but with the values' lifetime decoupled from
+    that of the threads.
 
 Removed Items
 -------------
diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
new file mode 100644
index 0000000000..309822039b
--- /dev/null
+++ b/lib/eal/common/eal_common_lcore_var.c
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdlib.h>
+
+#ifdef RTE_EXEC_ENV_WINDOWS
+#include <malloc.h>
+#endif
+
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_log.h>
+
+#include <rte_lcore_var.h>
+
+#include "eal_private.h"
+
+#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
+
+static void *lcore_buffer;
+static size_t offset = RTE_MAX_LCORE_VAR;
+
+static void *
+lcore_var_alloc(size_t size, size_t align)
+{
+	void *handle;
+	void *value;
+
+	offset = RTE_ALIGN_CEIL(offset, align);
+
+	if (offset + size > RTE_MAX_LCORE_VAR) {
+#ifdef RTE_EXEC_ENV_WINDOWS
+		lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
+					       RTE_CACHE_LINE_SIZE);
+#else
+		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
+					     LCORE_BUFFER_SIZE);
+#endif
+		RTE_VERIFY(lcore_buffer != NULL);
+
+		offset = 0;
+	}
+
+	handle = RTE_PTR_ADD(lcore_buffer, offset);
+
+	offset += size;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
+		memset(value, 0, size);
+
+	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
+		"%"PRIuPTR"-byte alignment", size, align);
+
+	return handle;
+}
+
+void *
+rte_lcore_var_alloc(size_t size, size_t align)
+{
+	/* Having the per-lcore buffer size aligned on cache lines,
+	 * as well as having the base pointer aligned on the cache
+	 * line size, assures that aligned offsets also translate to
+	 * aligned pointers across all values.
+	 */
+	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
+	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
+	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
+
+	/* '0' means asking for worst-case alignment requirements */
+	if (align == 0)
+		align = alignof(max_align_t);
+
+	RTE_ASSERT(rte_is_power_of_2(align));
+
+	return lcore_var_alloc(size, align);
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 22a626ba6f..d41403680b 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -18,6 +18,7 @@ sources += files(
         'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
+        'eal_common_lcore_var.c',
         'eal_common_mcfg.c',
         'eal_common_memalloc.c',
         'eal_common_memory.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index e94b056d46..9449253e23 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -27,6 +27,7 @@ headers += files(
         'rte_keepalive.h',
         'rte_launch.h',
         'rte_lcore.h',
+        'rte_lcore_var.h',
         'rte_lock_annotations.h',
         'rte_malloc.h',
         'rte_mcslock.h',
diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
new file mode 100644
index 0000000000..ec3ab714a8
--- /dev/null
+++ b/lib/eal/include/rte_lcore_var.h
@@ -0,0 +1,385 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#ifndef _RTE_LCORE_VAR_H_
+#define _RTE_LCORE_VAR_H_
+
+/**
+ * @file
+ *
+ * RTE Lcore variables
+ *
+ * This API provides a mechanism to create and access per-lcore id
+ * variables in a space- and cycle-efficient manner.
+ *
+ * A per-lcore id variable (or lcore variable for short) has one value
+ * for each EAL thread and registered non-EAL thread. There is one
+ * instance for each current and future lcore id-equipped thread, with
+ * a total of RTE_MAX_LCORE instances. The value of an lcore variable
+ * for a particular lcore id is independent from other values (for
+ * other lcore ids) within the same lcore variable.
+ *
+ * In order to access the values of an lcore variable, a handle is
+ * used. The type of the handle is a pointer to the value's type
+ * (e.g., for @c uint32_t lcore variable, the handle is a
+ * <code>uint32_t *</code>. The handle type is used to inform the
+ * access macros the type of the values. A handle may be passed
+ * between modules and threads just like any pointer, but its value
+ * must be treated as a an opaque identifier. An allocated handle
+ * never has the value NULL.
+ *
+ * @b Creation
+ *
+ * An lcore variable is created in two steps:
+ *  1. Define an lcore variable handle by using @ref RTE_LCORE_VAR_HANDLE.
+ *  2. Allocate lcore variable storage and initialize the handle with
+ *     a unique identifier by @ref RTE_LCORE_VAR_ALLOC or
+ *     @ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the
+ *     time of module initialization, but may be done at any time.
+ *
+ * An lcore variable is not tied to the owning thread's lifetime. It's
+ * available for use by any thread immediately after having been
+ * allocated, and continues to be available throughout the lifetime of
+ * the EAL.
+ *
+ * Lcore variables cannot and need not be freed.
+ *
+ * @b Access
+ *
+ * The value of any lcore variable for any lcore id may be accessed
+ * from any thread (including unregistered threads), but it should
+ * only be *frequently* read from or written to by the owner.
+ *
+ * Values of the same lcore variable but owned by two different lcore
+ * ids may be frequently read or written by the owners without risking
+ * false sharing.
+ *
+ * An appropriate synchronization mechanism (e.g., atomic loads and
+ * stores) should be employed to assure there are no data races between
+ * the owning thread and any non-owner threads accessing the same
+ * lcore variable instance.
+ *
+ * The value of the lcore variable for a particular lcore id is
+ * accessed using @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * A common pattern is for an EAL thread or a registered non-EAL
+ * thread to access its own lcore variable value. For this purpose, a
+ * short-hand exists in the form of @ref RTE_LCORE_VAR_VALUE.
+ *
+ * Although the handle (as defined by @ref RTE_LCORE_VAR_HANDLE) is a
+ * pointer with the same type as the value, it may not be directly
+ * dereferenced and must be treated as an opaque identifier.
+ *
+ * Lcore variable handles and value pointers may be freely passed
+ * between different threads.
+ *
+ * @b Storage
+ *
+ * An lcore variable's values may be of a primitive type like @c int,
+ * but would more typically be a @c struct.
+ *
+ * The lcore variable handle introduces a per-variable (not
+ * per-value/per-lcore id) overhead of @c sizeof(void *) bytes, so
+ * there are some memory footprint gains to be made by organizing all
+ * per-lcore id data for a particular module as one lcore variable
+ * (e.g., as a struct).
+ *
+ * An application may choose to define an lcore variable handle, which
+ * it then never allocates.
+ *
+ * The size of an lcore variable's value must be less than the DPDK
+ * build-time constant @c RTE_MAX_LCORE_VAR.
+ *
+ * Lcore variable values are stored in a series of lcore buffers, which
+ * are allocated from the libc heap. Heap allocation failures are
+ * treated as fatal.
+ *
+ * Lcore variables should generally *not* be @ref __rte_cache_aligned
+ * and need *not* include a @ref RTE_CACHE_GUARD field, since these
+ * constructs are designed to avoid false sharing. In the
+ * case of an lcore variable instance, the thread most recently
+ * accessing nearby data structures should almost always be the lcore
+ * variable's owner. Adding padding will increase the effective memory
+ * working set size, potentially reducing performance.
+ *
+ * Lcore variable values take on an initial value of zero.
+ *
+ * @b Example
+ *
+ * Below is an example of the use of an lcore variable:
+ *
+ * @code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ * };
+ *
+ * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
+ *
+ * long foo_get_a_plus_b(void)
+ * {
+ *         struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(lcore_states);
+ *
+ *         return state->a + state->b;
+ * }
+ *
+ * RTE_INIT(rte_foo_init)
+ * {
+ *         RTE_LCORE_VAR_ALLOC(lcore_states);
+ *
+ *         struct foo_lcore_state *state;
+ *         RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
+ *                 (initialize 'state')
+ *         }
+ *
+ *         (other initialization)
+ * }
+ * @endcode
+ *
+ *
+ * @b Alternatives
+ *
+ * Lcore variables are designed to replace a pattern exemplified below:
+ * @code{.c}
+ * struct __rte_cache_aligned foo_lcore_state {
+ *         int a;
+ *         long b;
+ *         RTE_CACHE_GUARD;
+ * };
+ *
+ * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
+ * @endcode
+ *
+ * This scheme is simple and effective, but has one drawback: the data
+ * is organized so that objects related to all lcores for a particular
+ * module are kept close in memory. At a bare minimum, this requires
+ * sizing data structures (e.g., using `__rte_cache_aligned`) to an
+ * even number of cache lines to avoid false sharing. With CPU
+ * hardware prefetching and memory loads resulting from speculative
+ * execution (functions which seemingly are getting more eager faster
+ * than they are getting more intelligent), one or more "guard" cache
+ * lines may be required to separate one lcore's data from another's.
+ *
+ * Lcore variables have the upside of working with, not against, the
+ * CPU's assumptions; for example, next-line prefetchers may well
+ * work the way their designers intended (i.e., to the benefit, not
+ * detriment, of system performance).
+ *
+ * Another alternative to @ref rte_lcore_var.h is the @ref
+ * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
+ * e.g., GCC __thread or C11 _Thread_local). The main differences
+ * between using the various forms of TLS (e.g., @ref
+ * RTE_DEFINE_PER_LCORE or _Thread_local) and the use of lcore
+ * variables are:
+ *
+ *   * The existence and non-existence of a thread-local variable
+ *     instance follows that of the particular thread's. The data cannot be
+ *     accessed before the thread has been created, nor after it has
+ *     exited. As a result, thread-local variables must be initialized in
+ *     a "lazy" manner (e.g., at the point of thread creation). Lcore
+ *     variables may be accessed immediately after having been
+ *     allocated (which may be prior to any thread beyond the main
+ *     thread is running).
+ *   * A thread-local variable is duplicated across all threads in the
+ *     process, including unregistered non-EAL threads (i.e.,
+ *     "regular" threads). For DPDK applications heavily relying on
+ *     multi-threading (in conjunction to DPDK's "one thread per core"
+ *     pattern), either by having many concurrent threads or
+ *     creating/destroying threads at a high rate, an excessive use of
+ *     thread-local variables may cause inefficiencies (e.g.,
+ *     increased thread creation overhead due to thread-local storage
+ *     initialization or increased total RAM footprint usage). Lcore
+ *     variables *only* exist for threads with an lcore id.
+ *   * Whether data in thread-local storage may be shared between threads
+ *     (i.e., whether a pointer to a thread-local variable can be passed
+ *     to and successfully dereferenced by a non-owning thread) depends on
+ *     the details of the TLS implementation. With GCC __thread and
+ *     GCC _Thread_local, such data sharing is supported. In the C11
+ *     standard, the result of accessing another thread's
+ *     _Thread_local object is implementation-defined. Lcore variable
+ *     instances may be accessed reliably by any thread.
+ */
+
+#include <stddef.h>
+#include <stdalign.h>
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_lcore.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Given the lcore variable type, produces the type of the lcore
+ * variable handle.
+ */
+#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
+	type *
+
+/**
+ * Define an lcore variable handle.
+ *
+ * This macro defines a variable which is used as a handle to access
+ * the various instances of a per-lcore id variable.
+ *
+ * The aim with this macro is to make clear at the point of
+ * declaration that this is an lcore handle, rather than a regular
+ * pointer.
+ *
+ * Add @b static as a prefix in case the lcore variable is only to be
+ * accessed from a particular translation unit.
+ */
+#define RTE_LCORE_VAR_HANDLE(type, name)	\
+	RTE_LCORE_VAR_HANDLE_TYPE(type) name
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, align)	\
+	handle = rte_lcore_var_alloc(size, align)
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle,
+ * with values aligned for any type of object.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE(handle, size)	\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, 0)
+
+/**
+ * Allocate space for an lcore variable of the size and alignment requirements
+ * suggested by the handle pointer type, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC(handle)					\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, sizeof(*(handle)),	\
+				       alignof(typeof(*(handle))))
+
+/**
+ * Allocate an explicitly-sized, explicitly-aligned lcore variable by
+ * means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
+	}
+
+/**
+ * Allocate an explicitly-sized lcore variable by means of a @ref
+ * RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
+	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
+
+/**
+ * Allocate an lcore variable by means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT(name)					\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC(name);				\
+	}
+
+/**
+ * Get void pointer to lcore variable instance with the specified
+ * lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
+ */
+static inline void *
+rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
+{
+	return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
+}
+
+/**
+ * Get pointer to lcore variable instance with the specified lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle)			\
+	((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
+
+/**
+ * Get pointer to lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_VALUE(handle) \
+	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
+
+/**
+ * Iterate over each lcore id's value for an lcore variable.
+ *
+ * @param value
+ *   A pointer successively set to point to lcore variable value
+ *   corresponding to every lcore id (up to @c RTE_MAX_LCORE).
+ * @param handle
+ *   The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle)			\
+	for (unsigned int lcore_id =					\
+		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
+	     lcore_id < RTE_MAX_LCORE;					\
+	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))
+
+/**
+ * Allocate space in the per-lcore id buffers for an lcore variable.
+ *
+ * The pointer returned is only an opaque identifier of the variable. To
+ * get an actual pointer to a particular instance of the variable use
+ * @ref RTE_LCORE_VAR_VALUE or @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * The lcore variable values' memory is set to zero.
+ *
+ * The allocation is always successful, barring a fatal exhaustion of
+ * the per-lcore id buffer space.
+ *
+ * rte_lcore_var_alloc() is not multi-thread safe.
+ *
+ * @param size
+ *   The size (in bytes) of the variable's per-lcore id value. Must be > 0.
+ * @param align
+ *   If 0, the values will be suitably aligned for any kind of type
+ *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
+ *   on a multiple of *align*, which must be a power of 2 and equal or
+ *   less than @c RTE_CACHE_LINE_SIZE.
+ * @return
+ *   The variable's handle, stored in a void pointer value. The value
+ *   is always non-NULL.
+ */
+__rte_experimental
+void *
+rte_lcore_var_alloc(size_t size, size_t align);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LCORE_VAR_H_ */
diff --git a/lib/eal/version.map b/lib/eal/version.map
index e3ff412683..0c80bf7331 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -396,6 +396,8 @@ EXPERIMENTAL {
 
 	# added in 24.03
 	rte_vfio_get_device_info; # WINDOWS_NO_EXPORT
+
+	rte_lcore_var_alloc;
 };
 
 INTERNAL {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v5 2/7] eal: add lcore variable functional tests
  2024-09-17 14:32                                       ` [PATCH v5 0/7] Lcore variables Mattias Rönnblom
  2024-09-17 14:32                                         ` [PATCH v5 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-09-17 14:32                                         ` Mattias Rönnblom
  2024-09-17 14:32                                         ` [PATCH v5 3/7] eal: add lcore variable performance test Mattias Rönnblom
                                                           ` (4 subsequent siblings)
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-17 14:32 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Add functional test suite to exercise the <rte_lcore_var.h> API.
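
The core of most test cases follows the same pattern, sketched here
(gist of check_int() below) for the int variable case: the main lcore
seeds each worker's instance with a random value, each worker verifies
and overwrites its own instance, and the main lcore then checks the
updates:

	/* on each worker lcore */
	int *v = RTE_LCORE_VAR_VALUE(test_int);

	state->success = (*v == state->old_value);
	*v = state->new_value;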

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

RFC v5:
 * Adapt tests to reflect the removal of the GET() and SET() macros.

RFC v4:
 * Check all lcore id's values for all variables in the many variables
   test case.
 * Introduce test case for max-sized lcore variables.

RFC v2:
 * Improve alignment-related test coverage.
---
 app/test/meson.build      |   1 +
 app/test/test_lcore_var.c | 432 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 433 insertions(+)
 create mode 100644 app/test/test_lcore_var.c

diff --git a/app/test/meson.build b/app/test/meson.build
index e29258e6ec..48279522f0 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -103,6 +103,7 @@ source_file_deps = {
     'test_ipsec_sad.c': ['ipsec'],
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
+    'test_lcore_var.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var.c b/app/test/test_lcore_var.c
new file mode 100644
index 0000000000..e07d13460f
--- /dev/null
+++ b/app/test/test_lcore_var.c
@@ -0,0 +1,432 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_launch.h>
+#include <rte_lcore_var.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+#define MIN_LCORES 2
+
+RTE_LCORE_VAR_HANDLE(int, test_int);
+RTE_LCORE_VAR_HANDLE(char, test_char);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized);
+RTE_LCORE_VAR_HANDLE(short, test_short);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized_aligned);
+
+struct int_checker_state {
+	int old_value;
+	int new_value;
+	bool success;
+};
+
+static void
+rand_blk(void *blk, size_t size)
+{
+	size_t i;
+
+	for (i = 0; i < size; i++)
+		((unsigned char *)blk)[i] = (unsigned char)rte_rand();
+}
+
+static bool
+is_ptr_aligned(const void *ptr, size_t align)
+{
+	return ptr != NULL ? (uintptr_t)ptr % align == 0 : false;
+}
+
+static int
+check_int(void *arg)
+{
+	struct int_checker_state *state = arg;
+
+	int *ptr = RTE_LCORE_VAR_VALUE(test_int);
+
+	bool naturally_aligned = is_ptr_aligned(ptr, sizeof(int));
+
+	bool equal = *(RTE_LCORE_VAR_VALUE(test_int)) == state->old_value;
+
+	state->success = equal && naturally_aligned;
+
+	*ptr = state->new_value;
+
+	return 0;
+}
+
+RTE_LCORE_VAR_INIT(test_int);
+RTE_LCORE_VAR_INIT(test_char);
+RTE_LCORE_VAR_INIT_SIZE(test_long_sized, 32);
+RTE_LCORE_VAR_INIT(test_short);
+RTE_LCORE_VAR_INIT_SIZE_ALIGN(test_long_sized_aligned, sizeof(long),
+			      RTE_CACHE_LINE_SIZE);
+
+static int
+test_int_lvar(void)
+{
+	unsigned int lcore_id;
+
+	struct int_checker_state states[RTE_MAX_LCORE] = {};
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+
+		state->old_value = (int)rte_rand();
+		state->new_value = (int)rte_rand();
+
+		*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int) =
+			state->old_value;
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_int, &states[lcore_id], lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+		int value;
+
+		TEST_ASSERT(state->success, "Unexpected value "
+			    "encountered on lcore %d", lcore_id);
+
+		value = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int);
+		TEST_ASSERT_EQUAL(state->new_value, value,
+				  "Lcore %d failed to update int", lcore_id);
+	}
+
+	/* take the opportunity to test the foreach macro */
+	int *v;
+	lcore_id = 0;
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_int) {
+		TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
+				  "Unexpected value on lcore %d during "
+				  "iteration", lcore_id);
+		lcore_id++;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_sized_alignment(void)
+{
+	long *v;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized) {
+		TEST_ASSERT(is_ptr_aligned(v, alignof(long)),
+			    "Type-derived alignment failed");
+	}
+
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_long_sized_aligned) {
+		TEST_ASSERT(is_ptr_aligned(v, RTE_CACHE_LINE_SIZE),
+			    "Explicit alignment failed");
+	}
+
+	return TEST_SUCCESS;
+}
+
+/* private, larger, struct */
+#define TEST_STRUCT_DATA_SIZE 1234
+
+struct test_struct {
+	uint8_t data[TEST_STRUCT_DATA_SIZE];
+};
+
+static RTE_LCORE_VAR_HANDLE(char, before_struct);
+static RTE_LCORE_VAR_HANDLE(struct test_struct, test_struct);
+static RTE_LCORE_VAR_HANDLE(char, after_struct);
+
+struct struct_checker_state {
+	struct test_struct old_value;
+	struct test_struct new_value;
+	bool success;
+};
+
+static int check_struct(void *arg)
+{
+	struct struct_checker_state *state = arg;
+
+	struct test_struct *lcore_struct = RTE_LCORE_VAR_VALUE(test_struct);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_struct, alignof(struct test_struct));
+
+	bool equal = memcmp(lcore_struct->data, state->old_value.data,
+			    TEST_STRUCT_DATA_SIZE) == 0;
+
+	state->success = equal && properly_aligned;
+
+	memcpy(lcore_struct->data, state->new_value.data,
+	       TEST_STRUCT_DATA_SIZE);
+
+	return 0;
+}
+
+static int
+test_struct_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_struct);
+	RTE_LCORE_VAR_ALLOC(test_struct);
+	RTE_LCORE_VAR_ALLOC(after_struct);
+
+	struct struct_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+
+		rand_blk(state->old_value.data, TEST_STRUCT_DATA_SIZE);
+		rand_blk(state->new_value.data, TEST_STRUCT_DATA_SIZE);
+
+		memcpy(RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct)->data,
+		       state->old_value.data, TEST_STRUCT_DATA_SIZE);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_struct, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+		struct test_struct *lstruct =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = memcmp(lstruct->data, state->new_value.data,
+				    TEST_STRUCT_DATA_SIZE) == 0;
+
+		TEST_ASSERT(equal, "Lcore %d failed to update struct",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_struct);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_struct);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "struct was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "struct was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define TEST_ARRAY_SIZE 99
+
+typedef uint16_t test_array_t[TEST_ARRAY_SIZE];
+
+static void test_array_init_rand(test_array_t a)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		a[i] = (uint16_t)rte_rand();
+}
+
+static bool test_array_equal(test_array_t a, test_array_t b)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++) {
+		if (a[i] != b[i])
+			return false;
+	}
+	return true;
+}
+
+static void test_array_copy(test_array_t dst, const test_array_t src)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		dst[i] = src[i];
+}
+
+static RTE_LCORE_VAR_HANDLE(char, before_array);
+static RTE_LCORE_VAR_HANDLE(test_array_t, test_array);
+static RTE_LCORE_VAR_HANDLE(char, after_array);
+
+struct array_checker_state {
+	test_array_t old_value;
+	test_array_t new_value;
+	bool success;
+};
+
+static int check_array(void *arg)
+{
+	struct array_checker_state *state = arg;
+
+	test_array_t *lcore_array = RTE_LCORE_VAR_VALUE(test_array);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_array, alignof(test_array_t));
+
+	bool equal = test_array_equal(*lcore_array, state->old_value);
+
+	state->success = equal && properly_aligned;
+
+	test_array_copy(*lcore_array, state->new_value);
+
+	return 0;
+}
+
+static int
+test_array_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_array);
+	RTE_LCORE_VAR_ALLOC(test_array);
+	RTE_LCORE_VAR_ALLOC(after_array);
+
+	struct array_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+
+		test_array_init_rand(state->new_value);
+		test_array_init_rand(state->old_value);
+
+		test_array_copy(*RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+							   test_array),
+				state->old_value);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_array, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+		test_array_t *larray =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_array);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = test_array_equal(*larray, state->new_value);
+
+		TEST_ASSERT(equal, "Lcore %d failed to update array",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_array);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_array);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "array was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "array was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define MANY_LVARS (2 * RTE_MAX_LCORE_VAR / sizeof(uint32_t))
+
+static int
+test_many_lvars(void)
+{
+	uint32_t **handlers = malloc(sizeof(uint32_t *) * MANY_LVARS);
+	unsigned int i;
+
+	TEST_ASSERT(handlers != NULL, "Unable to allocate memory");
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		RTE_LCORE_VAR_ALLOC(handlers[i]);
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t *v =
+				RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handlers[i]);
+			*v = (uint32_t)(i * lcore_id);
+		}
+	}
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t v = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+								handlers[i]);
+			TEST_ASSERT_EQUAL((uint32_t)(i * lcore_id), v,
+					  "Unexpected lcore variable value on "
+					  "lcore %d", lcore_id);
+		}
+	}
+
+	free(handlers);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_large_lvar(void)
+{
+	RTE_LCORE_VAR_HANDLE(unsigned char, large);
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC_SIZE(large, RTE_MAX_LCORE_VAR);
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+
+		memset(ptr, (unsigned char)lcore_id, RTE_MAX_LCORE_VAR);
+	}
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+		size_t i;
+
+		for (i = 0; i < RTE_MAX_LCORE_VAR; i++)
+			TEST_ASSERT_EQUAL(ptr[i], (unsigned char)lcore_id,
+					  "Large lcore variable value is "
+					  "corrupted on lcore %d.",
+					  lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_int_lvar),
+		TEST_CASE(test_sized_alignment),
+		TEST_CASE(test_struct_lvar),
+		TEST_CASE(test_array_lvar),
+		TEST_CASE(test_many_lvars),
+		TEST_CASE(test_large_lvar),
+		TEST_CASES_END()
+	},
+};
+
+static int test_lcore_var(void)
+{
+	if (rte_lcore_count() < MIN_LCORES) {
+		printf("Not enough cores for lcore_var_autotest; expecting at "
+		       "least %d.\n", MIN_LCORES);
+		return TEST_SKIPPED;
+	}
+
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
+REGISTER_FAST_TEST(lcore_var_autotest, true, false, test_lcore_var);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v5 3/7] eal: add lcore variable performance test
  2024-09-17 14:32                                       ` [PATCH v5 0/7] Lcore variables Mattias Rönnblom
  2024-09-17 14:32                                         ` [PATCH v5 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-09-17 14:32                                         ` [PATCH v5 2/7] eal: add lcore variable functional tests Mattias Rönnblom
@ 2024-09-17 14:32                                         ` Mattias Rönnblom
  2024-09-17 15:40                                           ` Morten Brørup
  2024-09-17 14:32                                         ` [PATCH v5 4/7] random: keep PRNG state in lcore variable Mattias Rönnblom
                                                           ` (3 subsequent siblings)
  6 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-17 14:32 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Add a basic micro benchmark for lcore variables, in an attempt to
assure that the overhead isn't significantly greater than that of
alternative approaches, in scenarios where the benefits aren't
expected to show up (i.e., when plenty of cache is available compared
to the working set size of the per-lcore data).
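
The gist of the measurement loop (see the test below): per-module
state is updated in a tight loop over shuffled module indices, and the
cost is reported as TSC cycles per update:

	start = rte_rdtsc();
	for (i = 0; i < ITERATIONS; i++)
		update_fun(mods[i & num_mods_mask]);
	latency = (rte_rdtsc() - start) / (double)ITERATIONS;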

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>

--

PATCH v5:
 * Add variant of thread-local storage with initialization performed
   at the time of thread creation to the benchmark scenarios. (Morten
   Brørup)

PATCH v4:
 * Rework the tests to be a little less unrealistic. Instead of a
   single dummy module using a single variable, use a number of
   variables/modules. In this way, differences in cache effects may
   show up.
 * Add RTE_CACHE_GUARD to better mimic that static array pattern.
   (Morten Brørup)
 * Show latencies as TSC cycles. (Morten Brørup)
---
 app/test/meson.build           |   1 +
 app/test/test_lcore_var_perf.c | 257 +++++++++++++++++++++++++++++++++
 2 files changed, 258 insertions(+)
 create mode 100644 app/test/test_lcore_var_perf.c

diff --git a/app/test/meson.build b/app/test/meson.build
index 48279522f0..d4e0c59900 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -104,6 +104,7 @@ source_file_deps = {
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
     'test_lcore_var.c': [],
+    'test_lcore_var_perf.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var_perf.c b/app/test/test_lcore_var_perf.c
new file mode 100644
index 0000000000..538286d01b
--- /dev/null
+++ b/app/test/test_lcore_var_perf.c
@@ -0,0 +1,257 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#define MAX_MODS 1024
+
+#include <stdio.h>
+
+#include <rte_bitops.h>
+#include <rte_cycles.h>
+#include <rte_lcore_var.h>
+#include <rte_per_lcore.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+struct mod_lcore_state {
+	uint64_t a;
+	uint64_t b;
+	uint64_t sum;
+};
+
+static void
+mod_init(struct mod_lcore_state *state)
+{
+	state->a = rte_rand();
+	state->b = rte_rand();
+	state->sum = 0;
+}
+
+static __rte_always_inline void
+mod_update(volatile struct mod_lcore_state *state)
+{
+	state->sum += state->a * state->b;
+}
+
+struct __rte_cache_aligned mod_lcore_state_aligned {
+	struct mod_lcore_state mod_state;
+
+	RTE_CACHE_GUARD;
+};
+
+static struct mod_lcore_state_aligned
+sarray_lcore_state[MAX_MODS][RTE_MAX_LCORE];
+
+static void
+sarray_init(void)
+{
+	unsigned int lcore_id = rte_lcore_id();
+	int mod;
+
+	for (mod = 0; mod < MAX_MODS; mod++) {
+		struct mod_lcore_state *mod_state =
+			&sarray_lcore_state[mod][lcore_id].mod_state;
+
+		mod_init(mod_state);
+	}
+}
+
+static __rte_noinline void
+sarray_update(unsigned int mod)
+{
+	unsigned int lcore_id = rte_lcore_id();
+	struct mod_lcore_state *mod_state =
+		&sarray_lcore_state[mod][lcore_id].mod_state;
+
+	mod_update(mod_state);
+}
+
+struct mod_lcore_state_lazy {
+	struct mod_lcore_state mod_state;
+	bool initialized;
+};
+
+/*
+ * Note: it's usually a bad idea to have this much thread-local storage
+ * allocated in a real application, since it will incur a cost on
+ * thread creation and non-lcore thread memory usage.
+ */
+static RTE_DEFINE_PER_LCORE(struct mod_lcore_state_lazy,
+			    tls_lcore_state)[MAX_MODS];
+
+static inline void
+tls_init(struct mod_lcore_state_lazy *state)
+{
+	mod_init(&state->mod_state);
+
+	state->initialized = true;
+}
+
+static __rte_noinline void
+tls_lazy_update(unsigned int mod)
+{
+	struct mod_lcore_state_lazy *state =
+		&RTE_PER_LCORE(tls_lcore_state[mod]);
+
+	/* With thread-local storage, initialization must usually be lazy */
+	if (!state->initialized)
+		tls_init(state);
+
+	mod_update(&state->mod_state);
+}
+
+static __rte_noinline void
+tls_update(unsigned int mod)
+{
+	struct mod_lcore_state_lazy *state =
+		&RTE_PER_LCORE(tls_lcore_state[mod]);
+
+	mod_update(&state->mod_state);
+}
+
+RTE_LCORE_VAR_HANDLE(struct mod_lcore_state, lvar_lcore_state)[MAX_MODS];
+
+static void
+lvar_init(void)
+{
+	unsigned int mod;
+
+	for (mod = 0; mod < MAX_MODS; mod++) {
+		RTE_LCORE_VAR_ALLOC(lvar_lcore_state[mod]);
+
+		struct mod_lcore_state *state =
+			RTE_LCORE_VAR_VALUE(lvar_lcore_state[mod]);
+
+		mod_init(state);
+	}
+}
+
+static __rte_noinline void
+lvar_update(unsigned int mod)
+{
+	struct mod_lcore_state *state =
+		RTE_LCORE_VAR_VALUE(lvar_lcore_state[mod]);
+
+	mod_update(state);
+}
+
+static void
+shuffle(unsigned int *elems, size_t len)
+{
+	size_t i;
+
+	for (i = len - 1; i > 0; i--) {
+		unsigned int other = rte_rand_max(i + 1);
+
+		unsigned int tmp = elems[other];
+		elems[other] = elems[i];
+		elems[i] = tmp;
+	}
+}
+
+#define ITERATIONS UINT64_C(10000000)
+
+static inline double
+benchmark_access(const unsigned int *mods, unsigned int num_mods,
+		 void (*init_fun)(void), void (*update_fun)(unsigned int))
+{
+	unsigned int i;
+	double start;
+	double end;
+	double latency;
+	unsigned int num_mods_mask = num_mods - 1;
+
+	RTE_VERIFY(rte_is_power_of_2(num_mods));
+
+	if (init_fun != NULL)
+		init_fun();
+
+	/* Warm up cache and make sure TLS variables are initialized */
+	for (i = 0; i < num_mods; i++)
+		update_fun(i);
+
+	start = rte_rdtsc();
+
+	for (i = 0; i < ITERATIONS; i++)
+		update_fun(mods[i & num_mods_mask]);
+
+	end = rte_rdtsc();
+
+	latency = (end - start) / ITERATIONS;
+
+	return latency;
+}
+
+static void
+test_lcore_var_access_n(unsigned int num_mods)
+{
+	double sarray_latency;
+	double tls_latency;
+	double lazy_tls_latency;
+	double lvar_latency;
+	unsigned int mods[num_mods];
+	unsigned int i;
+
+	for (i = 0; i < num_mods; i++)
+		mods[i] = i;
+
+	shuffle(mods, num_mods);
+
+	sarray_latency =
+		benchmark_access(mods, num_mods, sarray_init, sarray_update);
+
+	tls_latency =
+		benchmark_access(mods, num_mods, NULL, tls_update);
+
+	lazy_tls_latency =
+		benchmark_access(mods, num_mods, NULL, tls_lazy_update);
+
+	lvar_latency =
+		benchmark_access(mods, num_mods, lvar_init, lvar_update);
+
+	printf("%17u %8.1f %14.1f %15.1f %10.1f\n", num_mods, sarray_latency,
+	       tls_latency, lazy_tls_latency, lvar_latency);
+}
+
+/*
+ * The potential performance benefit of lcore variables compared to
+ * the use of statically sized, lcore id-indexed arrays is not
+ * shorter latencies in a scenario with low cache pressure, but rather
+ * fewer cache misses in a real-world scenario, with extensive cache
+ * usage. These tests are a crude simulation of such, using <N> dummy
+ * modules, each with a small, per-lcore state. Note however that
+ * these tests have very little non-lcore/thread-local state, which is
+ * unrealistic.
+ */
+
+static int
+test_lcore_var_access(void)
+{
+	unsigned int num_mods = 1;
+
+	printf("- Latencies [TSC cycles/update] -\n");
+	printf("Number of           Static   Thread-local    Thread-local      Lcore\n");
+	printf("Modules/Variables    Array        Storage  Storage (Lazy)  Variables\n");
+
+	for (num_mods = 1; num_mods <= MAX_MODS; num_mods *= 2)
+		test_lcore_var_access_n(num_mods);
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable perf autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_lcore_var_access),
+		TEST_CASES_END()
+	},
+};
+
+static int
+test_lcore_var_perf(void)
+{
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
+REGISTER_PERF_TEST(lcore_var_perf_autotest, test_lcore_var_perf);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread
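
For reference, the access pattern exercised by the benchmark above
boils down to the minimal sketch below, built from the macros this
series introduces. The module name, struct and handle are
illustrative only, not taken from any patch:

#include <stdint.h>

#include <rte_common.h>
#include <rte_lcore_var.h>

/* per-lcore state of a hypothetical module */
struct foo_lcore_state {
	uint64_t count;
};

/* handle; NULL until allocated */
static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, foo_state);

RTE_INIT(foo_init)
{
	/* allocates one (zeroed) value per lcore id */
	RTE_LCORE_VAR_ALLOC(foo_state);
}

/* called from an EAL thread or a registered non-EAL thread */
static void
foo_update(void)
{
	struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(foo_state);

	state->count++;
}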

* [PATCH v5 4/7] random: keep PRNG state in lcore variable
  2024-09-17 14:32                                       ` [PATCH v5 0/7] Lcore variables Mattias Rönnblom
                                                           ` (2 preceding siblings ...)
  2024-09-17 14:32                                         ` [PATCH v5 3/7] eal: add lcore variable performance test Mattias Rönnblom
@ 2024-09-17 14:32                                         ` Mattias Rönnblom
  2024-09-17 14:32                                         ` [PATCH v5 5/7] power: keep per-lcore " Mattias Rönnblom
                                                           ` (2 subsequent siblings)
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-17 14:32 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom, Konstantin Ananyev

Replace keeping PRNG state in a RTE_MAX_LCORE-sized static array of
cache-aligned and RTE_CACHE_GUARDed struct instances with keeping the
same state in a more cache-friendly lcore variable.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

--

RFC v3:
 * Remove cache alignment on unregistered threads' rte_rand_state.
   (Morten Brørup)
---
 lib/eal/common/rte_random.c | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index 90e91b3c4f..a8d00308dd 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -11,6 +11,7 @@
 #include <rte_branch_prediction.h>
 #include <rte_cycles.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_random.h>
 
 struct __rte_cache_aligned rte_rand_state {
@@ -19,14 +20,12 @@ struct __rte_cache_aligned rte_rand_state {
 	uint64_t z3;
 	uint64_t z4;
 	uint64_t z5;
-	RTE_CACHE_GUARD;
 };
 
-/* One instance each for every lcore id-equipped thread, and one
- * additional instance to be shared by all others threads (i.e., all
- * unregistered non-EAL threads).
- */
-static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];
+RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);
+
+/* instance to be shared by all unregistered non-EAL threads */
+static struct rte_rand_state unregistered_rand_state;
 
 static uint32_t
 __rte_rand_lcg32(uint32_t *seed)
@@ -85,8 +84,14 @@ rte_srand(uint64_t seed)
 	unsigned int lcore_id;
 
 	/* add lcore_id to seed to avoid having the same sequence */
-	for (lcore_id = 0; lcore_id < RTE_DIM(rand_states); lcore_id++)
-		__rte_srand_lfsr258(seed + lcore_id, &rand_states[lcore_id]);
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		struct rte_rand_state *lcore_state =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, rand_state);
+
+		__rte_srand_lfsr258(seed + lcore_id, lcore_state);
+	}
+
+	__rte_srand_lfsr258(seed + lcore_id, &unregistered_rand_state);
 }
 
 static __rte_always_inline uint64_t
@@ -124,11 +129,10 @@ struct rte_rand_state *__rte_rand_get_state(void)
 
 	idx = rte_lcore_id();
 
-	/* last instance reserved for unregistered non-EAL threads */
 	if (unlikely(idx == LCORE_ID_ANY))
-		idx = RTE_MAX_LCORE;
+		return &unregistered_rand_state;
 
-	return &rand_states[idx];
+	return RTE_LCORE_VAR_VALUE(rand_state);
 }
 
 uint64_t
@@ -228,6 +232,8 @@ RTE_INIT(rte_rand_init)
 {
 	uint64_t seed;
 
+	RTE_LCORE_VAR_ALLOC(rand_state);
+
 	seed = __rte_random_initial_seed();
 
 	rte_srand(seed);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread
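
The fallback for unregistered non-EAL threads seen in
__rte_rand_get_state() above is a reusable pattern. A sketch of the
same approach for a hypothetical module (all names illustrative):

#include <stdint.h>

#include <rte_branch_prediction.h>
#include <rte_lcore.h>
#include <rte_lcore_var.h>

struct mod_state {
	uint64_t counter;
};

static RTE_LCORE_VAR_HANDLE(struct mod_state, mod_states);

/* single instance shared by all unregistered non-EAL threads */
static struct mod_state unregistered_mod_state;

static struct mod_state *
mod_get_state(void)
{
	unsigned int lcore_id = rte_lcore_id();

	/* threads without an lcore id have no lcore variable value */
	if (unlikely(lcore_id == LCORE_ID_ANY))
		return &unregistered_mod_state;

	return RTE_LCORE_VAR_VALUE(mod_states);
}

As in the patch, any multi-thread safety requirements for the shared
unregistered-thread instance remain the module's responsibility.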

* [PATCH v5 5/7] power: keep per-lcore state in lcore variable
  2024-09-17 14:32                                       ` [PATCH v5 0/7] Lcore variables Mattias Rönnblom
                                                           ` (3 preceding siblings ...)
  2024-09-17 14:32                                         ` [PATCH v5 4/7] random: keep PRNG state in lcore variable Mattias Rönnblom
@ 2024-09-17 14:32                                         ` Mattias Rönnblom
  2024-09-17 14:32                                         ` [PATCH v5 6/7] service: " Mattias Rönnblom
  2024-09-17 14:32                                         ` [PATCH v5 7/7] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-17 14:32 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom, Konstantin Ananyev

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

--

RFC v3:
 * Replace for loop with FOREACH macro.
---
 lib/power/rte_power_pmd_mgmt.c | 34 ++++++++++++++++------------------
 1 file changed, 16 insertions(+), 18 deletions(-)

diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index b1c18a5f56..a5139dd4f7 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -5,6 +5,7 @@
 #include <stdlib.h>
 
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_cycles.h>
 #include <rte_cpuflags.h>
 #include <rte_malloc.h>
@@ -69,7 +70,7 @@ struct __rte_cache_aligned pmd_core_cfg {
 	uint64_t sleep_target;
 	/**< Prevent a queue from triggering sleep multiple times */
 };
-static struct pmd_core_cfg lcore_cfgs[RTE_MAX_LCORE];
+static RTE_LCORE_VAR_HANDLE(struct pmd_core_cfg, lcore_cfgs);
 
 static inline bool
 queue_equal(const union queue *l, const union queue *r)
@@ -252,12 +253,11 @@ clb_multiwait(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	/* early exit */
 	if (likely(!empty))
@@ -317,13 +317,12 @@ clb_pause(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 	uint32_t pause_duration = rte_power_pmd_mgmt_get_pause_duration();
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	if (likely(!empty))
 		/* early exit */
@@ -358,9 +357,8 @@ clb_scale_freq(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	const bool empty = nb_rx == 0;
-	struct pmd_core_cfg *lcore_conf = &lcore_cfgs[lcore];
+	struct pmd_core_cfg *lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 	struct queue_list_entry *queue_conf = arg;
 
 	if (likely(!empty)) {
@@ -518,7 +516,7 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 		goto end;
 	}
 
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -619,7 +617,7 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 	}
 
 	/* no need to check queue id as wrong queue id would not be enabled */
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -769,21 +767,21 @@ rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore)
 }
 
 RTE_INIT(rte_power_ethdev_pmgmt_init) {
-	size_t i;
-	int j;
+	struct pmd_core_cfg *lcore_cfg;
+	int i;
+
+	RTE_LCORE_VAR_ALLOC(lcore_cfgs);
 
 	/* initialize all tailqs */
-	for (i = 0; i < RTE_DIM(lcore_cfgs); i++) {
-		struct pmd_core_cfg *cfg = &lcore_cfgs[i];
-		TAILQ_INIT(&cfg->head);
-	}
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_cfg, lcore_cfgs)
+		TAILQ_INIT(&lcore_cfg->head);
 
 	/* initialize config defaults */
 	emptypoll_max = 512;
 	pause_duration = 1;
 	/* scaling defaults out of range to ensure not used unless set by user or app */
-	for (j = 0; j < RTE_MAX_LCORE; j++) {
-		scale_freq_min[j] = 0;
-		scale_freq_max[j] = UINT32_MAX;
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		scale_freq_min[i] = 0;
+		scale_freq_max[i] = UINT32_MAX;
 	}
 }
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread
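
The FOREACH-based loop in RTE_INIT() above is the general way to give
lcore variable values non-zero defaults at allocation time. A sketch
using the two-argument form of the macro from this version of the
series (struct and handle names are illustrative):

#include <stdint.h>

#include <rte_common.h>
#include <rte_lcore_var.h>

struct mod_cfg {
	uint32_t scale_max;
};

static RTE_LCORE_VAR_HANDLE(struct mod_cfg, mod_cfgs);

RTE_INIT(mod_cfg_init)
{
	struct mod_cfg *cfg;

	RTE_LCORE_VAR_ALLOC(mod_cfgs);

	/* values start out zeroed; set any non-zero defaults */
	RTE_LCORE_VAR_FOREACH_VALUE(cfg, mod_cfgs)
		cfg->scale_max = UINT32_MAX;
}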

* [PATCH v5 6/7] service: keep per-lcore state in lcore variable
  2024-09-17 14:32                                       ` [PATCH v5 0/7] Lcore variables Mattias Rönnblom
                                                           ` (4 preceding siblings ...)
  2024-09-17 14:32                                         ` [PATCH v5 5/7] power: keep per-lcore " Mattias Rönnblom
@ 2024-09-17 14:32                                         ` Mattias Rönnblom
  2024-09-17 14:32                                         ` [PATCH v5 7/7] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-17 14:32 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom, Konstantin Ananyev

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

--

RFC v6:
 * Remove a now-redundant lcore variable value memset().

RFC v5:
 * Fix lcore value pointer bug introduced by RFC v4.

RFC v4:
 * Remove strange-looking lcore value lookup potentially containing
   invalid lcore id. (Morten Brørup)
 * Replace misplaced tab with space. (Morten Brørup)
---
 lib/eal/common/rte_service.c | 115 +++++++++++++++++++----------------
 1 file changed, 63 insertions(+), 52 deletions(-)

diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index 56379930b6..03379f1588 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -11,6 +11,7 @@
 
 #include <eal_trace_internal.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
@@ -75,7 +76,7 @@ struct __rte_cache_aligned core_state {
 
 static uint32_t rte_service_count;
 static struct rte_service_spec_impl *rte_services;
-static struct core_state *lcore_states;
+static RTE_LCORE_VAR_HANDLE(struct core_state, lcore_states);
 static uint32_t rte_service_library_initialized;
 
 int32_t
@@ -101,12 +102,8 @@ rte_service_init(void)
 		goto fail_mem;
 	}
 
-	lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
-			sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
-	if (!lcore_states) {
-		EAL_LOG(ERR, "error allocating core states array");
-		goto fail_mem;
-	}
+	if (lcore_states == NULL)
+		RTE_LCORE_VAR_ALLOC(lcore_states);
 
 	int i;
 	struct rte_config *cfg = rte_eal_get_configuration();
@@ -122,7 +119,6 @@ rte_service_init(void)
 	return 0;
 fail_mem:
 	rte_free(rte_services);
-	rte_free(lcore_states);
 	return -ENOMEM;
 }
 
@@ -136,7 +132,6 @@ rte_service_finalize(void)
 	rte_eal_mp_wait_lcore();
 
 	rte_free(rte_services);
-	rte_free(lcore_states);
 
 	rte_service_library_initialized = 0;
 }
@@ -286,7 +281,6 @@ rte_service_component_register(const struct rte_service_spec *spec,
 int32_t
 rte_service_component_unregister(uint32_t id)
 {
-	uint32_t i;
 	struct rte_service_spec_impl *s;
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
 
@@ -294,9 +288,10 @@ rte_service_component_unregister(uint32_t id)
 
 	s->internal_flags &= ~(SERVICE_F_REGISTERED);
 
+	struct core_state *cs;
 	/* clear the run-bit in all cores */
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		lcore_states[i].service_mask &= ~(UINT64_C(1) << id);
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		cs->service_mask &= ~(UINT64_C(1) << id);
 
 	memset(&rte_services[id], 0, sizeof(struct rte_service_spec_impl));
 
@@ -454,7 +449,10 @@ rte_service_may_be_active(uint32_t id)
 		return -EINVAL;
 
 	for (i = 0; i < lcore_count; i++) {
-		if (lcore_states[ids[i]].service_active_on_lcore[id])
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(ids[i], lcore_states);
+
+		if (cs->service_active_on_lcore[id])
 			return 1;
 	}
 
@@ -464,7 +462,7 @@ rte_service_may_be_active(uint32_t id)
 int32_t
 rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)
 {
-	struct core_state *cs = &lcore_states[rte_lcore_id()];
+	struct core_state *cs =	RTE_LCORE_VAR_VALUE(lcore_states);
 	struct rte_service_spec_impl *s;
 
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
@@ -486,8 +484,7 @@ service_runner_func(void *arg)
 {
 	RTE_SET_USED(arg);
 	uint8_t i;
-	const int lcore = rte_lcore_id();
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_VALUE(lcore_states);
 
 	rte_atomic_store_explicit(&cs->thread_active, 1, rte_memory_order_seq_cst);
 
@@ -533,13 +530,15 @@ service_runner_func(void *arg)
 int32_t
 rte_service_lcore_may_be_active(uint32_t lcore)
 {
-	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+	if (lcore >= RTE_MAX_LCORE || !cs->is_service_core)
 		return -EINVAL;
 
 	/* Load thread_active using ACQUIRE to avoid instructions dependent on
 	 * the result being re-ordered before this load completes.
 	 */
-	return rte_atomic_load_explicit(&lcore_states[lcore].thread_active,
+	return rte_atomic_load_explicit(&cs->thread_active,
 			       rte_memory_order_acquire);
 }
 
@@ -547,9 +546,11 @@ int32_t
 rte_service_lcore_count(void)
 {
 	int32_t count = 0;
-	uint32_t i;
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		count += lcore_states[i].is_service_core;
+
+	struct core_state *cs;
+	RTE_LCORE_VAR_FOREACH_VALUE(cs, lcore_states)
+		count += cs->is_service_core;
+
 	return count;
 }
 
@@ -566,7 +567,8 @@ rte_service_lcore_list(uint32_t array[], uint32_t n)
 	uint32_t i;
 	uint32_t idx = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		struct core_state *cs = &lcore_states[i];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
 		if (cs->is_service_core) {
 			array[idx] = i;
 			idx++;
@@ -582,7 +584,7 @@ rte_service_lcore_count_services(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -634,30 +636,31 @@ rte_service_start_with_defaults(void)
 static int32_t
 service_update(uint32_t sid, uint32_t lcore, uint32_t *set, uint32_t *enabled)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	/* validate ID, or return error value */
 	if (!service_valid(sid) || lcore >= RTE_MAX_LCORE ||
-			!lcore_states[lcore].is_service_core)
+			!cs->is_service_core)
 		return -EINVAL;
 
 	uint64_t sid_mask = UINT64_C(1) << sid;
 	if (set) {
-		uint64_t lcore_mapped = lcore_states[lcore].service_mask &
-			sid_mask;
+		uint64_t lcore_mapped = cs->service_mask & sid_mask;
 
 		if (*set && !lcore_mapped) {
-			lcore_states[lcore].service_mask |= sid_mask;
+			cs->service_mask |= sid_mask;
 			rte_atomic_fetch_add_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 		if (!*set && lcore_mapped) {
-			lcore_states[lcore].service_mask &= ~(sid_mask);
+			cs->service_mask &= ~(sid_mask);
 			rte_atomic_fetch_sub_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 	}
 
 	if (enabled)
-		*enabled = !!(lcore_states[lcore].service_mask & (sid_mask));
+		*enabled = !!(cs->service_mask & (sid_mask));
 
 	return 0;
 }
@@ -685,13 +688,14 @@ set_lcore_state(uint32_t lcore, int32_t state)
 {
 	/* mark core state in hugepage backed config */
 	struct rte_config *cfg = rte_eal_get_configuration();
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	cfg->lcore_role[lcore] = state;
 
 	/* mark state in process local lcore_config */
 	lcore_config[lcore].core_role = state;
 
 	/* update per-lcore optimized state tracking */
-	lcore_states[lcore].is_service_core = (state == ROLE_SERVICE);
+	cs->is_service_core = (state == ROLE_SERVICE);
 
 	rte_eal_trace_service_lcore_state_change(lcore, state);
 }
@@ -702,14 +706,16 @@ rte_service_lcore_reset_all(void)
 	/* loop over cores, reset all to mask 0 */
 	uint32_t i;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		if (lcore_states[i].is_service_core) {
-			lcore_states[i].service_mask = 0;
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
+		if (cs->is_service_core) {
+			cs->service_mask = 0;
 			set_lcore_state(i, ROLE_RTE);
 			/* runstate act as guard variable Use
 			 * store-release memory order here to synchronize
 			 * with load-acquire in runstate read functions.
 			 */
-			rte_atomic_store_explicit(&lcore_states[i].runstate,
+			rte_atomic_store_explicit(&cs->runstate,
 				RUNSTATE_STOPPED, rte_memory_order_release);
 		}
 	}
@@ -725,17 +731,19 @@ rte_service_lcore_add(uint32_t lcore)
 {
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
-	if (lcore_states[lcore].is_service_core)
+
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+	if (cs->is_service_core)
 		return -EALREADY;
 
 	set_lcore_state(lcore, ROLE_SERVICE);
 
 	/* ensure that after adding a core the mask and state are defaults */
-	lcore_states[lcore].service_mask = 0;
+	cs->service_mask = 0;
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	return rte_eal_wait_lcore(lcore);
@@ -747,7 +755,7 @@ rte_service_lcore_del(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -771,7 +779,7 @@ rte_service_lcore_start(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -801,6 +809,8 @@ rte_service_lcore_start(uint32_t lcore)
 int32_t
 rte_service_lcore_stop(uint32_t lcore)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
@@ -808,12 +818,11 @@ rte_service_lcore_stop(uint32_t lcore)
 	 * memory order here to synchronize with store-release
 	 * in runstate update functions.
 	 */
-	if (rte_atomic_load_explicit(&lcore_states[lcore].runstate, rte_memory_order_acquire) ==
+	if (rte_atomic_load_explicit(&cs->runstate, rte_memory_order_acquire) ==
 			RUNSTATE_STOPPED)
 		return -EALREADY;
 
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
 	uint64_t service_mask = cs->service_mask;
 
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
@@ -834,7 +843,7 @@ rte_service_lcore_stop(uint32_t lcore)
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	rte_eal_trace_service_lcore_stop(lcore);
@@ -845,7 +854,7 @@ rte_service_lcore_stop(uint32_t lcore)
 static uint64_t
 lcore_attr_get_loops(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->loops, rte_memory_order_relaxed);
 }
@@ -853,7 +862,7 @@ lcore_attr_get_loops(unsigned int lcore)
 static uint64_t
 lcore_attr_get_cycles(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->cycles, rte_memory_order_relaxed);
 }
@@ -861,7 +870,7 @@ lcore_attr_get_cycles(unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].calls,
 		rte_memory_order_relaxed);
@@ -870,7 +879,7 @@ lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_cycles(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].cycles,
 		rte_memory_order_relaxed);
@@ -886,7 +895,10 @@ attr_get(uint32_t id, lcore_attr_get_fun lcore_attr_get)
 	uint64_t sum = 0;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		if (lcore_states[lcore].is_service_core)
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+		if (cs->is_service_core)
 			sum += lcore_attr_get(id, lcore);
 	}
 
@@ -930,12 +942,11 @@ int32_t
 rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 			   uint64_t *attr_value)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE || !attr_value)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -960,7 +971,8 @@ rte_service_attr_reset_all(uint32_t id)
 		return -EINVAL;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		struct core_state *cs = &lcore_states[lcore];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 		cs->service_stats[id] = (struct service_stats) {};
 	}
@@ -971,12 +983,11 @@ rte_service_attr_reset_all(uint32_t id)
 int32_t
 rte_service_lcore_attr_reset_all(uint32_t lcore)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -1011,7 +1022,7 @@ static void
 service_dump_calls_per_lcore(FILE *f, uint32_t lcore)
 {
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	fprintf(f, "%02d\t", lcore);
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread
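
Summing an attribute over every lcore id, as attr_get() does above,
needs only the by-id accessor. A condensed sketch (names
illustrative; the handle is assumed to have been allocated):

#include <stdint.h>

#include <rte_lcore.h>
#include <rte_lcore_var.h>

struct mod_state {
	uint64_t calls;
};

static RTE_LCORE_VAR_HANDLE(struct mod_state, mod_states);

static uint64_t
mod_total_calls(void)
{
	unsigned int lcore_id;
	uint64_t sum = 0;

	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
		const struct mod_state *state =
			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, mod_states);

		/* counters written by other threads should, as in
		 * the patch, be read with rte_atomic_load_explicit()
		 */
		sum += state->calls;
	}

	return sum;
}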

* [PATCH v5 7/7] eal: keep per-lcore power intrinsics state in lcore variable
  2024-09-17 14:32                                       ` [PATCH v5 0/7] Lcore variables Mattias Rönnblom
                                                           ` (5 preceding siblings ...)
  2024-09-17 14:32                                         ` [PATCH v5 6/7] service: " Mattias Rönnblom
@ 2024-09-17 14:32                                         ` Mattias Rönnblom
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-17 14:32 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom, Konstantin Ananyev

Keep per-lcore power intrinsics state in a lcore variable to reduce
cache working set size and avoid any CPU next-line-prefetching causing
false sharing.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
---
 lib/eal/x86/rte_power_intrinsics.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index 6d9b64240c..f4ba2c8ecb 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -6,6 +6,7 @@
 
 #include <rte_common.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_rtm.h>
 #include <rte_spinlock.h>
 
@@ -14,10 +15,14 @@
 /*
  * Per-lcore structure holding current status of C0.2 sleeps.
  */
-static alignas(RTE_CACHE_LINE_SIZE) struct power_wait_status {
+struct power_wait_status {
 	rte_spinlock_t lock;
 	volatile void *monitor_addr; /**< NULL if not currently sleeping */
-} wait_status[RTE_MAX_LCORE];
+};
+
+RTE_LCORE_VAR_HANDLE(struct power_wait_status, wait_status);
+
+RTE_LCORE_VAR_INIT(wait_status);
 
 /*
  * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
@@ -172,7 +177,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 	if (pmc->fn == NULL)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/* update sleep address */
 	rte_spinlock_lock(&s->lock);
@@ -264,7 +269,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 	if (lcore_id >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/*
 	 * There is a race condition between sleep, wakeup and locking, but we
@@ -303,8 +308,8 @@ int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
 {
-	const unsigned int lcore_id = rte_lcore_id();
-	struct power_wait_status *s = &wait_status[lcore_id];
+	struct power_wait_status *s = RTE_LCORE_VAR_VALUE(wait_status);
+
 	uint32_t i, rc;
 
 	/* check if supported */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread
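
RTE_LCORE_VAR_INIT, as used above, appears to fold the allocation
into a constructor, so no explicit RTE_INIT() function is needed when
the zeroed initial values suffice. A sketch with illustrative names
(a zeroed rte_spinlock_t is in its unlocked state, so no further
initialization is required):

#include <rte_lcore_var.h>
#include <rte_spinlock.h>

struct mod_sleep_status {
	rte_spinlock_t lock;
	volatile void *monitor_addr;
};

static RTE_LCORE_VAR_HANDLE(struct mod_sleep_status, mod_status);

/* allocates the (zeroed) values for all lcore ids at startup */
RTE_LCORE_VAR_INIT(mod_status);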

* RE: [PATCH v5 3/7] eal: add lcore variable performance test
  2024-09-17 14:32                                         ` [PATCH v5 3/7] eal: add lcore variable performance test Mattias Rönnblom
@ 2024-09-17 15:40                                           ` Morten Brørup
  2024-09-18  6:05                                             ` Mattias Rönnblom
  0 siblings, 1 reply; 185+ messages in thread
From: Morten Brørup @ 2024-09-17 15:40 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: hofors, Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Jerin Jacob

> +	start = rte_rdtsc();
> +
> +	for (i = 0; i < ITERATIONS; i++)
> +		update_fun(mods[i & num_mods_mask]);

This indexing adds more instructions to be executed than just the update function.
The added overhead is the same for all tested access methods, so the absolute difference in latency (i.e. measured in cycles) is still perfectly valid.
Just mentioning it; no change required.

> +
> +	end = rte_rdtsc();
> +
> +	latency = (end - start) / ITERATIONS;

This calculation is integer; add (double) somewhere to make it floating point.

> +
> +	return latency;
> +}


^ permalink raw reply	[flat|nested] 185+ messages in thread
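
Assuming the v5 code declares start and end as integer cycle counts
(e.g., uint64_t), a minimal version of the suggested fix is to cast
before dividing; the placement below is one option among several:

uint64_t start;
uint64_t end;
double latency;

start = rte_rdtsc();
/* ... benchmark loop ... */
end = rte_rdtsc();

/* the cast forces a floating-point division */
latency = (double)(end - start) / ITERATIONS;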

* RE: [PATCH v4 1/7] eal: add static per-lcore memory allocation facility
  2024-09-17 14:28                                         ` Mattias Rönnblom
@ 2024-09-17 16:11                                           ` Konstantin Ananyev
  2024-09-18  7:00                                             ` Mattias Rönnblom
  2024-09-17 16:29                                           ` Konstantin Ananyev
  1 sibling, 1 reply; 185+ messages in thread
From: Konstantin Ananyev @ 2024-09-17 16:11 UTC (permalink / raw)
  To: Mattias Rönnblom, Mattias Rönnblom, dev
  Cc: Morten Brørup, Stephen Hemminger, Konstantin Ananyev,
	David Marchand, Jerin Jacob

> >> +
> >> +/**
> >> + * Get pointer to lcore variable instance with the specified lcore id.
> >> + *
> >> + * @param lcore_id
> >> + *   The lcore id specifying which of the @c RTE_MAX_LCORE value
> >> + *   instances should be accessed. The lcore id need not be valid
> >> + *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
> >> + *   is also not valid (and thus should not be dereferenced).
> >> + * @param handle
> >> + *   The lcore variable handle.
> >> + */
> >> +#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle)			\
> >> +	((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
> >> +
> >> +/**
> >> + * Get pointer to lcore variable instance of the current thread.
> >> + *
> >> + * May only be used by EAL threads and registered non-EAL threads.
> >> + */
> >> +#define RTE_LCORE_VAR_VALUE(handle) \
> >> +	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
> >
> > Would it make sense to check that rte_lcore_id() !=  LCORE_ID_ANY?
> > After all if people do not want this extra check, they can probably use
> > RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
> > explicitly.
> >
> 
> It would make sense, if it was an RTE_ASSERT(). Otherwise, I don't think
> so. Attempting to gracefully handle API violations is bad practice, imo.

Ok, RTE_ASSERT() might be a good compromise.
As I said in another mail for that thread, I wouldn't insist here.

> 
> >> +
> >> +/**
> >> + * Iterate over each lcore id's value for an lcore variable.
> >> + *
> >> + * @param value
> >> + *   A pointer successively set to point to lcore variable value
> >> + *   corresponding to every lcore id (up to @c RTE_MAX_LCORE).
> >> + * @param handle
> >> + *   The lcore variable handle.
> >> + */
> >> +#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle)			\
> >> +	for (unsigned int lcore_id =					\
> >> +		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
> >> +	     lcore_id < RTE_MAX_LCORE;					\
> >> +	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))
> >
> > Might be a bit better (and safer) to make lcore_id a macro parameter?
> > I.E.:
> > define RTE_LCORE_VAR_FOREACH_VALUE(value, handle, lcore_id) \
> > for ((lcore_id) = ...
> >
> 
> Why?

Variable with the same name (and type) can be defined by user before the loop,
With the intention to use it inside the loop.
Just like it happens here (in patch #2):
+	unsigned int lcore_id;
.....
+	/* take the opportunity to test the foreach macro */
+	int *v;
+	lcore_id = 0;
+	RTE_LCORE_VAR_FOREACH_VALUE(v, test_int) {
+		TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
+				  "Unexpected value on lcore %d during "
+				  "iteration", lcore_id);
+		lcore_id++;
+	}
+
 




^ permalink raw reply	[flat|nested] 185+ messages in thread
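
The hazard, condensed: the v5 macro's for-init declares its own
lcore_id, so inside the loop body that name binds to the macro's
counter, shadowing any same-named variable the caller declared. A
sketch of the resulting bug:

unsigned int lcore_id = 0;	/* the caller's variable */
int *v;

RTE_LCORE_VAR_FOREACH_VALUE(v, test_int) {
	/*
	 * 'lcore_id' here is the macro's internal loop counter,
	 * not the variable declared above. The increment below
	 * therefore advances the iteration an extra step, silently
	 * skipping every other lcore id, while the caller's
	 * variable stays at zero.
	 */
	lcore_id++;
}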

* RE: [PATCH v4 1/7] eal: add static per-lcore memory allocation facility
  2024-09-17 14:28                                         ` Mattias Rönnblom
  2024-09-17 16:11                                           ` Konstantin Ananyev
@ 2024-09-17 16:29                                           ` Konstantin Ananyev
  2024-09-18  7:50                                             ` Mattias Rönnblom
  1 sibling, 1 reply; 185+ messages in thread
From: Konstantin Ananyev @ 2024-09-17 16:29 UTC (permalink / raw)
  To: Mattias Rönnblom, Mattias Rönnblom, dev
  Cc: Morten Brørup, Stephen Hemminger, Konstantin Ananyev,
	David Marchand, Jerin Jacob


> >> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
> >> +
> >> +static void *lcore_buffer;
> >> +static size_t offset = RTE_MAX_LCORE_VAR;
> >> +
> >> +static void *
> >> +lcore_var_alloc(size_t size, size_t align)
> >> +{
> >> +	void *handle;
> >> +	void *value;
> >> +
> >> +	offset = RTE_ALIGN_CEIL(offset, align);
> >> +
> >> +	if (offset + size > RTE_MAX_LCORE_VAR) {
> >> +#ifdef RTE_EXEC_ENV_WINDOWS
> >> +		lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
> >> +					       RTE_CACHE_LINE_SIZE);
> >> +#else
> >> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> >> +					     LCORE_BUFFER_SIZE);
> >> +#endif
> >
> > Don't remember did that question already arise or not:
> > For debugging and health-checking purposes - would it make sense to link all
> > lcore_buffer values into a linked list?
> > So user/developer/some tool can walk over it to check that provided handle value
> > is really a valid lcore_var, etc.
> >
> 
> At least you could add some basic statistics, like the total size
> allocated by lcore variables, and the number of variables.

My thought was more about easing debugging/health-checking,
but yes, some stats can also be collected.

> One could also add tracing.
> 
> >> +		RTE_VERIFY(lcore_buffer != NULL);
> >> +
> >> +		offset = 0;
> >> +	}
> >> +

^ permalink raw reply	[flat|nested] 185+ messages in thread
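
A sketch of the bookkeeping Konstantin has in mind; this is
hypothetical and not part of any patch in the series. Each backing
buffer is linked into a list, which a debug facility could walk to
check whether a handle points into lcore variable storage:

#include <stdbool.h>
#include <stdlib.h>
#include <sys/queue.h>

#include <rte_common.h>
#include <rte_debug.h>

struct lcore_buffer_entry {
	void *buffer;
	SLIST_ENTRY(lcore_buffer_entry) next;
};

static SLIST_HEAD(, lcore_buffer_entry) lcore_buffers =
	SLIST_HEAD_INITIALIZER(lcore_buffers);

/* would be called from lcore_var_alloc() for each new buffer */
static void
track_lcore_buffer(void *buffer)
{
	struct lcore_buffer_entry *entry = malloc(sizeof(*entry));

	RTE_VERIFY(entry != NULL);

	entry->buffer = buffer;
	SLIST_INSERT_HEAD(&lcore_buffers, entry, next);
}

/* a handle is valid only if it points into a tracked buffer */
static bool
lcore_var_handle_is_valid(const void *handle)
{
	struct lcore_buffer_entry *entry;

	SLIST_FOREACH(entry, &lcore_buffers, next)
		if (handle >= entry->buffer &&
		    handle < RTE_PTR_ADD(entry->buffer,
					 LCORE_BUFFER_SIZE))
			return true;

	return false;
}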

* Re: [PATCH v5 3/7] eal: add lcore variable performance test
  2024-09-17 15:40                                           ` Morten Brørup
@ 2024-09-18  6:05                                             ` Mattias Rönnblom
  0 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  6:05 UTC (permalink / raw)
  To: Morten Brørup, Mattias Rönnblom, dev
  Cc: Stephen Hemminger, Konstantin Ananyev, David Marchand, Jerin Jacob

On 2024-09-17 17:40, Morten Brørup wrote:
>> +	start = rte_rdtsc();
>> +
>> +	for (i = 0; i < ITERATIONS; i++)
>> +		update_fun(mods[i & num_mods_mask]);
> 
> This indexing adds more instructions to be executed than just the update function.
> The added overhead is the same for all tested access methods, so the absolute difference in latency (i.e. measured in cycles) is still perfectly valid.
> Just mentioning it; no change required.
> 
>> +
>> +	end = rte_rdtsc();
>> +
>> +	latency = (end - start) / ITERATIONS;
> 
> This calculation is integer; add (double) somewhere to make it floating point.
> 

Indeed, it is. Will fix.

>> +
>> +	return latency;
>> +}
> 

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v4 1/7] eal: add static per-lcore memory allocation facility
  2024-09-17 16:11                                           ` Konstantin Ananyev
@ 2024-09-18  7:00                                             ` Mattias Rönnblom
  0 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  7:00 UTC (permalink / raw)
  To: Konstantin Ananyev, Mattias Rönnblom, dev
  Cc: Morten Brørup, Stephen Hemminger, Konstantin Ananyev,
	David Marchand, Jerin Jacob

On 2024-09-17 18:11, Konstantin Ananyev wrote:
>>>> +
>>>> +/**
>>>> + * Get pointer to lcore variable instance with the specified lcore id.
>>>> + *
>>>> + * @param lcore_id
>>>> + *   The lcore id specifying which of the @c RTE_MAX_LCORE value
>>>> + *   instances should be accessed. The lcore id need not be valid
>>>> + *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
>>>> + *   is also not valid (and thus should not be dereferenced).
>>>> + * @param handle
>>>> + *   The lcore variable handle.
>>>> + */
>>>> +#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle)			\
>>>> +	((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
>>>> +
>>>> +/**
>>>> + * Get pointer to lcore variable instance of the current thread.
>>>> + *
>>>> + * May only be used by EAL threads and registered non-EAL threads.
>>>> + */
>>>> +#define RTE_LCORE_VAR_VALUE(handle) \
>>>> +	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
>>>
>>> Would it make sense to check that rte_lcore_id() !=  LCORE_ID_ANY?
>>> After all if people do not want this extra check, they can probably use
>>> RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
>>> explicitly.
>>>
>>
>> It would make sense, if it was an RTE_ASSERT(). Otherwise, I don't think
>> so. Attempting to gracefully handle API violations is bad practice, imo.
> 
> Ok, RTE_ASSERT() might be a good compromise.
> As I said in another mail for that thread, I wouldn't insist here.
> 

After having a closer look at this issue, I'm not so sure any more.
Such an assertion would disallow the use of the macros to retrieve a 
potentially-invalid pointer, which is then never used, in case it is 
invalid.

>>
>>>> +
>>>> +/**
>>>> + * Iterate over each lcore id's value for an lcore variable.
>>>> + *
>>>> + * @param value
>>>> + *   A pointer successively set to point to lcore variable value
>>>> + *   corresponding to every lcore id (up to @c RTE_MAX_LCORE).
>>>> + * @param handle
>>>> + *   The lcore variable handle.
>>>> + */
>>>> +#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle)			\
>>>> +	for (unsigned int lcore_id =					\
>>>> +		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
>>>> +	     lcore_id < RTE_MAX_LCORE;					\
>>>> +	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))
>>>
>>> Might be a bit better (and safer) to make lcore_id a macro parameter?
>>> I.E.:
>>> define RTE_LCORE_VAR_FOREACH_VALUE(value, handle, lcore_id) \
>>> for ((lcore_id) = ...
>>>
>>
>> Why?
> 
> Variable with the same name (and type) can be defined by user before the loop,
> With the intention to use it inside the loop.
> Just like it happens here (in patch #2):
> +	unsigned int lcore_id;
> .....
> +	/* take the opportunity to test the foreach macro */
> +	int *v;
> +	lcore_id = 0;
> +	RTE_LCORE_VAR_FOREACH_VALUE(v, test_int) {
> +		TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
> +				  "Unexpected value on lcore %d during "
> +				  "iteration", lcore_id);
> +		lcore_id++;
> +	}
> +
>   
> 

Indeed. I'll change it. I suppose you could also have issues if you 
nested the macro, although those could be solved by using something like 
__COUNTER__ to create a unique name.

Supplying the variable name does defeat part of the purpose of the 
RTE_LCORE_VAR_FOREACH_VALUE.

> 
> 

^ permalink raw reply	[flat|nested] 185+ messages in thread
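
With the macro form that emerged from this discussion (the v6 patch
below has the allocator use it), the caller supplies the loop
variable, and the test loop needs no separate counter. A sketch,
assuming the functional test is updated accordingly:

unsigned int lcore_id;
int *v;

RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, v, test_int) {
	TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
			  "Unexpected value on lcore %d during iteration",
			  lcore_id);
}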

* Re: [PATCH v4 1/7] eal: add static per-lcore memory allocation facility
  2024-09-17 16:29                                           ` Konstantin Ananyev
@ 2024-09-18  7:50                                             ` Mattias Rönnblom
  0 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  7:50 UTC (permalink / raw)
  To: Konstantin Ananyev, Mattias Rönnblom, dev
  Cc: Morten Brørup, Stephen Hemminger, Konstantin Ananyev,
	David Marchand, Jerin Jacob

On 2024-09-17 18:29, Konstantin Ananyev wrote:
> 
>>>> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
>>>> +
>>>> +static void *lcore_buffer;
>>>> +static size_t offset = RTE_MAX_LCORE_VAR;
>>>> +
>>>> +static void *
>>>> +lcore_var_alloc(size_t size, size_t align)
>>>> +{
>>>> +	void *handle;
>>>> +	void *value;
>>>> +
>>>> +	offset = RTE_ALIGN_CEIL(offset, align);
>>>> +
>>>> +	if (offset + size > RTE_MAX_LCORE_VAR) {
>>>> +#ifdef RTE_EXEC_ENV_WINDOWS
>>>> +		lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
>>>> +					       RTE_CACHE_LINE_SIZE);
>>>> +#else
>>>> +		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
>>>> +					     LCORE_BUFFER_SIZE);
>>>> +#endif
>>>
>>> Don't remember did that question already arise or not:
>>> For debugging and health-checking purposes - would it make sense to link all
>>> lcore_buffer values into a linked list?
>>> So user/developer/some tool can walk over it to check that provided handle value
>>> is really a valid lcore_var, etc.
>>>
>>
>> At least you could add some basic statistics, like the total size
>> allocated by lcore variables, and the number of variables.
> 
> My thought was more about easing debugging/health-checking,
> but yes, some stats can also be collected.
> 

Statistics could be used for debugging and maybe some kind of 
rudimentary sanity check.

Maintaining per-variable state is not necessarily something you want to 
do, at least not close (spatially) to the lcore variable values.

In summary, I'm yet to form an opinion what, if anything, we should have 
here to help debugging. To avoid bloat, I would suggest this being 
deferred up to a point where we have more experience with lcore variables.

>> One could also add tracing.
>>
>>>> +		RTE_VERIFY(lcore_buffer != NULL);
>>>> +
>>>> +		offset = 0;
>>>> +	}
>>>> +

^ permalink raw reply	[flat|nested] 185+ messages in thread
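
A sketch of the kind of rudimentary statistics discussed, kept well
away from the values themselves; the counters and the dump function
are hypothetical, not part of any patch:

#include <stdio.h>

/* would be updated from lcore_var_alloc() */
static size_t lcore_var_total_size;
static unsigned int lcore_var_num_variables;

void
lcore_var_dump_stats(FILE *f)
{
	fprintf(f, "lcore variables: %u, consuming %zu bytes per lcore id\n",
		lcore_var_num_variables, lcore_var_total_size);
}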

* [PATCH v6 0/7] Lcore variables
  2024-09-17 14:32                                         ` [PATCH v5 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-09-18  8:00                                           ` Mattias Rönnblom
  2024-09-18  8:00                                             ` [PATCH v6 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
                                                               ` (6 more replies)
  0 siblings, 7 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  8:00 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

This patch set introduces a new API <rte_lcore_var.h> for static
per-lcore id data allocation.

Please refer to the <rte_lcore_var.h> API documentation for both a
rationale for this new API, and a comparison to the alternatives
available.

The adoption of this API would affect many different DPDK modules, but
the author updated only a few, mostly to serve as examples in this
RFC, and to iron out some, but surely not all, wrinkles in the API.

The question on how to best allocate static per-lcore memory has been
up several times on the dev mailing list, for example in the thread on
"random: use per lcore state" RFC by Stephen Hemminger.

Lcore variables are surely not the answer to all your per-lcore-data
needs, since they only allow for more-or-less static allocation. In
the author's opinion, they do however provide a reasonably simple,
clean and seemingly very performant solution to a real problem.

Mattias Rönnblom (7):
  eal: add static per-lcore memory allocation facility
  eal: add lcore variable functional tests
  eal: add lcore variable performance test
  random: keep PRNG state in lcore variable
  power: keep per-lcore state in lcore variable
  service: keep per-lcore state in lcore variable
  eal: keep per-lcore power intrinsics state in lcore variable

 MAINTAINERS                                   |   6 +
 app/test/meson.build                          |   2 +
 app/test/test_lcore_var.c                     | 436 ++++++++++++++++++
 app/test/test_lcore_var_perf.c                | 257 +++++++++++
 config/rte_config.h                           |   1 +
 doc/api/doxy-api-index.md                     |   1 +
 .../prog_guide/env_abstraction_layer.rst      |  45 +-
 doc/guides/rel_notes/release_24_11.rst        |  14 +
 lib/eal/common/eal_common_lcore_var.c         |  79 ++++
 lib/eal/common/meson.build                    |   1 +
 lib/eal/common/rte_random.c                   |  28 +-
 lib/eal/common/rte_service.c                  | 115 ++---
 lib/eal/include/meson.build                   |   1 +
 lib/eal/include/rte_lcore_var.h               | 388 ++++++++++++++++
 lib/eal/version.map                           |   2 +
 lib/eal/x86/rte_power_intrinsics.c            |  17 +-
 lib/power/rte_power_pmd_mgmt.c                |  35 +-
 17 files changed, 1335 insertions(+), 93 deletions(-)
 create mode 100644 app/test/test_lcore_var.c
 create mode 100644 app/test/test_lcore_var_perf.c
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v6 1/7] eal: add static per-lcore memory allocation facility
  2024-09-18  8:00                                           ` [PATCH v6 0/7] Lcore variables Mattias Rönnblom
@ 2024-09-18  8:00                                             ` Mattias Rönnblom
  2024-09-18  8:24                                               ` Konstantin Ananyev
  2024-09-18  8:26                                               ` [PATCH v7 0/7] Lcore variables Mattias Rönnblom
  2024-09-18  8:00                                             ` [PATCH v6 2/7] eal: add lcore variable functional tests Mattias Rönnblom
                                                               ` (5 subsequent siblings)
  6 siblings, 2 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  8:00 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Introduce DPDK per-lcore id variables, or lcore variables for short.

An lcore variable has one value for every current and future lcore
id-equipped thread.

The primary <rte_lcore_var.h> use case is for statically allocating
small, frequently-accessed data structures, for which one instance
should exist for each lcore.

Lcore variables are similar to thread-local storage (TLS, e.g., C11
_Thread_local), but decouple the values' lifetime from that of the
threads.

Lcore variables are also similar in terms of functionality to the
FreeBSD kernel's DPCPU_*() family of macros and the associated
build-time machinery. DPCPU uses linker scripts, which effectively
prevents the reuse of its, otherwise seemingly viable, approach.

The currently-prevailing way to solve the same problem as lcore
variables is to keep a module's per-lcore data as an
RTE_MAX_LCORE-sized array of cache-aligned, RTE_CACHE_GUARDed
structs. The benefit of lcore variables over this approach is that
data related to the same lcore now is close (spatially, in memory),
rather than data used by the same module, which in turn avoids
excessive use of padding, polluting caches with unused data.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

PATCH v6:
 * Have API user provide the loop variable in the FOREACH macro, to
   avoid subtle bugs where the loop variable name clashes with some
   other user-defined variable. (Konstantin Ananyev)

PATCH v5:
 * Update EAL programming guide.

PATCH v2:
 * Add Windows support. (Morten Brørup)
 * Fix lcore variables API index reference. (Morten Brørup)
 * Various improvements of the API documentation. (Morten Brørup)
 * Elimination of unused symbol in version.map. (Morten Brørup)

PATCH:
 * Update MAINTAINERS and release notes.
 * Stop covering included files in extern "C" {}.

RFC v6:
 * Include <stdlib.h> to get aligned_alloc().
 * Tweak documentation (grammar).
 * Provide API-level guarantees that lcore variable values take on an
   initial value of zero.
 * Fix misplaced __rte_cache_aligned in the API doc example.

RFC v5:
 * In Doxygen, consistently use @<cmd> (and not \<cmd>).
 * The RTE_LCORE_VAR_GET() and SET() convenience access macros
   covered an uncommon use case, where the lcore value is of a
   primitive type, rather than a struct, and are thus eliminated
   from the API. (Morten Brørup)
 * In the wake of the GET()/SET() removal, rename RTE_LCORE_VAR_PTR()
   to RTE_LCORE_VAR_VALUE().
 * The underscores are removed from __rte_lcore_var_lcore_ptr() to
   signal that this function is a part of the public API.
 * Macro arguments are documented.

RFC v4:
 * Replace large static array with libc heap-allocated memory. One
   implication of this change is that there no longer exists a fixed
   upper bound for the total amount of memory used by lcore variables.
   RTE_MAX_LCORE_VAR has changed meaning, and now represents the
   maximum size of any individual lcore variable value.
 * Fix issues in example. (Morten Brørup)
 * Improve access macro type checking. (Morten Brørup)
 * Refer to the lcore variable handle as "handle" and not "name" in
   various macros.
 * Document lack of thread safety in rte_lcore_var_alloc().
 * Provide API-level assurance the lcore variable handle is
   always non-NULL, to allow applications to use NULL to mean
   "not yet allocated".
 * Note zero-sized allocations are not allowed.
 * Give API-level guarantee the lcore variable values are zeroed.

RFC v3:
 * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
 * Update example to reflect FOREACH macro name change (in RFC v2).

RFC v2:
 * Use alignof to derive alignment requirements. (Morten Brørup)
 * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
   *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
 * Allow user-specified alignment, but limit max to cache line size.
---
 MAINTAINERS                                   |   6 +
 config/rte_config.h                           |   1 +
 doc/api/doxy-api-index.md                     |   1 +
 .../prog_guide/env_abstraction_layer.rst      |  45 +-
 doc/guides/rel_notes/release_24_11.rst        |  14 +
 lib/eal/common/eal_common_lcore_var.c         |  79 ++++
 lib/eal/common/meson.build                    |   1 +
 lib/eal/include/meson.build                   |   1 +
 lib/eal/include/rte_lcore_var.h               | 388 ++++++++++++++++++
 lib/eal/version.map                           |   2 +
 10 files changed, 532 insertions(+), 6 deletions(-)
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

diff --git a/MAINTAINERS b/MAINTAINERS
index c5a703b5c0..362d9a3f28 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -282,6 +282,12 @@ F: lib/eal/include/rte_random.h
 F: lib/eal/common/rte_random.c
 F: app/test/test_rand_perf.c
 
+Lcore Variables
+M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
+F: lib/eal/include/rte_lcore_var.h
+F: lib/eal/common/eal_common_lcore_var.c
+F: app/test/test_lcore_var.c
+
 ARM v7
 M: Wathsala Vithanage <wathsala.vithanage@arm.com>
 F: config/arm/
diff --git a/config/rte_config.h b/config/rte_config.h
index dd7bb0d35b..311692e498 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -41,6 +41,7 @@
 /* EAL defines */
 #define RTE_CACHE_GUARD_LINES 1
 #define RTE_MAX_HEAPS 32
+#define RTE_MAX_LCORE_VAR 1048576
 #define RTE_MAX_MEMSEG_LISTS 128
 #define RTE_MAX_MEMSEG_PER_LIST 8192
 #define RTE_MAX_MEM_MB_PER_LIST 32768
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index f9f0300126..ed577f14ee 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -99,6 +99,7 @@ The public API headers are grouped by topics:
   [interrupts](@ref rte_interrupts.h),
   [launch](@ref rte_launch.h),
   [lcore](@ref rte_lcore.h),
+  [lcore variables](@ref rte_lcore_var.h),
   [per-lcore](@ref rte_per_lcore.h),
   [service cores](@ref rte_service.h),
   [keepalive](@ref rte_keepalive.h),
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 9559c12a98..12b49672a6 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -433,12 +433,45 @@ with them once they're registered.
 Per-lcore and Shared Variables
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. note::
-
-    lcore refers to a logical execution unit of the processor, sometimes called a hardware *thread*.
-
-Shared variables are the default behavior.
-Per-lcore variables are implemented using *Thread Local Storage* (TLS) to provide per-thread local storage.
+By default, static variables, blocks allocated on the DPDK heap, and
+other types of memory are shared by all DPDK threads.
+
+An application, a DPDK library or PMD may opt to keep per-thread
+state.
+
+Per-thread data may be maintained using either *lcore variables*
+(``rte_lcore_var.h``), *thread-local storage (TLS)*
+(``rte_per_lcore.h``), or a static array of ``RTE_MAX_LCORE``
+elements, indexed by ``rte_lcore_id()``. These methods allow
+per-lcore data to be a largely module-internal affair, not
+directly visible in the module's API. Another possibility is to deal
+explicitly with per-thread aspects in the API (e.g., the ports of the
+Eventdev API).
+
+Lcore variables are suitable for small objects statically allocated
+at the time of module or application initialization. An lcore
+variable takes on one value for each lcore id-equipped thread (i.e.,
+for EAL threads and registered non-EAL threads, in total
+``RTE_MAX_LCORE`` instances). The lifetime of an lcore variable is
+detached from that of the owning threads, and it may thus be
+initialized prior to the owners having been created.
+
+Variables with thread-local storage are allocated at the time of
+thread creation, and exist until the thread terminates, for every
+thread in the process. Only very small objects should be allocated in
+TLS, since large TLS objects significantly slow down thread creation
+and may needlessly increase the memory footprint of applications that
+make extensive use of unregistered threads.
+
+A common but now largely obsolete DPDK pattern is to use a static
+array sized according to the maximum number of lcore id-equipped
+threads (i.e., with ``RTE_MAX_LCORE`` elements). To avoid *false
+sharing*, each element must both be cache-aligned and include an
+``RTE_CACHE_GUARD``. Such extensive use of padding causes internal
+fragmentation (i.e., unused space) and lower cache hit rates.
+
+For further discussion of per-lcore state, see the ``rte_lcore_var.h``
+API documentation.
 
 Logs
 ~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 0ff70d9057..a3884f7491 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -55,6 +55,20 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added EAL per-lcore static memory allocation facility.**
+
+    Added EAL API <rte_lcore_var.h> for statically allocating small,
+    frequently-accessed data structures, for which one instance should
+    exist for each EAL thread and registered non-EAL thread.
+
+    With lcore variables, data is organized spatially on a per-lcore id
+    basis, rather than per library or PMD, avoiding the need for cache
+    aligning (or RTE_CACHE_GUARDing) data structures, which in turn
+    reduces CPU cache internal fragmentation, improving performance.
+
+    Lcore variables are similar to thread-local storage (TLS, e.g.,
+    C11 _Thread_local), but decouple the values' lifetime from that
+    of the threads.
 
 Removed Items
 -------------
diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
new file mode 100644
index 0000000000..6b7690795e
--- /dev/null
+++ b/lib/eal/common/eal_common_lcore_var.c
@@ -0,0 +1,79 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdlib.h>
+
+#ifdef RTE_EXEC_ENV_WINDOWS
+#include <malloc.h>
+#endif
+
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_log.h>
+
+#include <rte_lcore_var.h>
+
+#include "eal_private.h"
+
+#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
+
+static void *lcore_buffer;
+static size_t offset = RTE_MAX_LCORE_VAR;
+
+static void *
+lcore_var_alloc(size_t size, size_t align)
+{
+	void *handle;
+	unsigned int lcore_id;
+	void *value;
+
+	offset = RTE_ALIGN_CEIL(offset, align);
+
+	if (offset + size > RTE_MAX_LCORE_VAR) {
+#ifdef RTE_EXEC_ENV_WINDOWS
+		lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
+					       RTE_CACHE_LINE_SIZE);
+#else
+		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
+					     LCORE_BUFFER_SIZE);
+#endif
+		RTE_VERIFY(lcore_buffer != NULL);
+
+		offset = 0;
+	}
+
+	handle = RTE_PTR_ADD(lcore_buffer, offset);
+
+	offset += size;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, value, handle)
+		memset(value, 0, size);
+
+	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
+		"%"PRIuPTR"-byte alignment", size, align);
+
+	return handle;
+}
+
+void *
+rte_lcore_var_alloc(size_t size, size_t align)
+{
+	/* Having the per-lcore buffer size aligned on the cache line
+	 * size, as well as having the base pointer aligned on the
+	 * cache line size, assures that aligned offsets also
+	 * translate to aligned pointers across all values.
+	 */
+	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
+	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
+	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
+
+	/* '0' means asking for worst-case alignment requirements */
+	if (align == 0)
+		align = alignof(max_align_t);
+
+	RTE_ASSERT(rte_is_power_of_2(align));
+
+	return lcore_var_alloc(size, align);
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 22a626ba6f..d41403680b 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -18,6 +18,7 @@ sources += files(
         'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
+        'eal_common_lcore_var.c',
         'eal_common_mcfg.c',
         'eal_common_memalloc.c',
         'eal_common_memory.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index e94b056d46..9449253e23 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -27,6 +27,7 @@ headers += files(
         'rte_keepalive.h',
         'rte_launch.h',
         'rte_lcore.h',
+        'rte_lcore_var.h',
         'rte_lock_annotations.h',
         'rte_malloc.h',
         'rte_mcslock.h',
diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
new file mode 100644
index 0000000000..e8db1391fe
--- /dev/null
+++ b/lib/eal/include/rte_lcore_var.h
@@ -0,0 +1,418 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#ifndef _RTE_LCORE_VAR_H_
+#define _RTE_LCORE_VAR_H_
+
+/**
+ * @file
+ *
+ * RTE Lcore variables
+ *
+ * This API provides a mechanism to create and access per-lcore id
+ * variables in a space- and cycle-efficient manner.
+ *
+ * A per-lcore id variable (or lcore variable for short) has one value
+ * for each EAL thread and registered non-EAL thread. There is one
+ * instance for each current and future lcore id-equipped thread, with
+ * a total of RTE_MAX_LCORE instances. The value of an lcore variable
+ * for a particular lcore id is independent from other values (for
+ * other lcore ids) within the same lcore variable.
+ *
+ * In order to access the values of an lcore variable, a handle is
+ * used. The type of the handle is a pointer to the value's type
+ * (e.g., for an @c uint32_t lcore variable, the handle is a
+ * <code>uint32_t *</code>). The handle type is used to inform the
+ * access macros of the type of the values. A handle may be passed
+ * between modules and threads just like any pointer, but its value
+ * must be treated as an opaque identifier. An allocated handle
+ * never has the value NULL.
+ *
+ * @b Creation
+ *
+ * An lcore variable is created in two steps:
+ *  1. Define an lcore variable handle by using @ref RTE_LCORE_VAR_HANDLE.
+ *  2. Allocate lcore variable storage and initialize the handle with
+ *     a unique identifier by @ref RTE_LCORE_VAR_ALLOC or
+ *     @ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the
+ *     time of module initialization, but may be done at any time.
+ *
+ * An lcore variable is not tied to the owning thread's lifetime. It's
+ * available for use by any thread immediately after having been
+ * allocated, and continues to be available throughout the lifetime of
+ * the EAL.
+ *
+ * Lcore variables cannot and need not be freed.
+ *
+ * @b Access
+ *
+ * The value of any lcore variable for any lcore id may be accessed
+ * from any thread (including unregistered threads), but it should
+ * only be *frequently* read from or written to by the owner.
+ *
+ * Values of the same lcore variable but owned by two different lcore
+ * ids may be frequently read or written by the owners without risking
+ * false sharing.
+ *
+ * An appropriate synchronization mechanism (e.g., atomic loads and
+ * stores) should be employed to assure there are no data races between
+ * the owning thread and any non-owner threads accessing the same
+ * lcore variable instance.
+ *
+ * The value of the lcore variable for a particular lcore id is
+ * accessed using @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * A common pattern is for an EAL thread or a registered non-EAL
+ * thread to access its own lcore variable value. For this purpose, a
+ * short-hand exists in the form of @ref RTE_LCORE_VAR_VALUE.
+ *
+ * Although the handle (as defined by @ref RTE_LCORE_VAR_HANDLE) is a
+ * pointer with the same type as the value, it may not be directly
+ * dereferenced and must be treated as an opaque identifier.
+ *
+ * Lcore variable handles and value pointers may be freely passed
+ * between different threads.
+ *
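+ * As an example of cross-thread access, consider a per-lcore packet
+ * counter, frequently updated by its owner, and occasionally read by
+ * some other thread. This is a minimal sketch only; the @c foo_stats
+ * struct and the function names are illustrative, not part of this API:
+ *
+ * @code{.c}
+ * struct foo_stats {
+ *         RTE_ATOMIC(uint64_t) pkts;
+ * };
+ *
+ * static RTE_LCORE_VAR_HANDLE(struct foo_stats, stats);
+ *
+ * void foo_count_pkt(void)
+ * {
+ *         struct foo_stats *s = RTE_LCORE_VAR_VALUE(stats);
+ *
+ *         rte_atomic_fetch_add_explicit(&s->pkts, 1,
+ *                                       rte_memory_order_relaxed);
+ * }
+ *
+ * uint64_t foo_get_pkts(unsigned int lcore_id)
+ * {
+ *         struct foo_stats *s =
+ *                 RTE_LCORE_VAR_LCORE_VALUE(lcore_id, stats);
+ *
+ *         return rte_atomic_load_explicit(&s->pkts,
+ *                                         rte_memory_order_relaxed);
+ * }
+ * @endcode
+ *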
+ * @b Storage
+ *
+ * An lcore variable's values may be of a primitive type like @c int,
+ * but would more typically be a @c struct.
+ *
+ * The lcore variable handle introduces a per-variable (not
+ * per-value/per-lcore id) overhead of @c sizeof(void *) bytes, so
+ * there are some memory footprint gains to be made by organizing all
+ * per-lcore id data for a particular module as one lcore variable
+ * (e.g., as a struct).
+ *
+ * An application may choose to define an lcore variable handle, which
+ * it then never goes on to allocate.
+ *
+ * The size of an lcore variable's value must be less than the DPDK
+ * build-time constant @c RTE_MAX_LCORE_VAR.
+ *
+ * The lcore variable values are stored in a series of lcore buffers,
+ * are allocated from the libc heap. Heap allocation failures are
+ * treated as fatal.
+ *
+ * Lcore variables should generally *not* be @ref __rte_cache_aligned
+ * and need *not* include a @ref RTE_CACHE_GUARD field, since these
+ * constructs are designed to avoid false sharing. In the case of an
+ * lcore variable instance, the thread most recently accessing nearby
+ * data structures should almost always be the lcore variable's
+ * owner. Adding padding will increase the effective memory working
+ * set size, potentially reducing performance.
+ *
+ * Lcore variable values take on an initial value of zero.
+ *
+ * @b Example
+ *
+ * Below is an example of the use of an lcore variable:
+ *
+ * @code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ * };
+ *
+ * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
+ *
+ * long foo_get_a_plus_b(void)
+ * {
+ *         struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(lcore_states);
+ *
+ *         return state->a + state->b;
+ * }
+ *
+ * RTE_INIT(rte_foo_init)
+ * {
+ *         RTE_LCORE_VAR_ALLOC(lcore_states);
+ *
+ *         unsigned int lcore_id;
+ *         struct foo_lcore_state *state;
+ *         RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, state, lcore_states) {
+ *                 (initialize 'state')
+ *         }
+ *
+ *         (other initialization)
+ * }
+ * @endcode
+ *
+ *
+ * @b Alternatives
+ *
+ * Lcore variables are designed to replace a pattern exemplified below:
+ * @code{.c}
+ * struct __rte_cache_aligned foo_lcore_state {
+ *         int a;
+ *         long b;
+ *         RTE_CACHE_GUARD;
+ * };
+ *
+ * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
+ * @endcode
+ *
+ * This scheme is simple and effective, but has one drawback: the data
+ * is organized so that objects related to all lcores for a particular
+ * module are kept close in memory. At a bare minimum, this requires
+ * sizing data structures (e.g., using `__rte_cache_aligned`) to an
+ * even number of cache lines to avoid false sharing. With CPU
+ * hardware prefetching and memory loads resulting from speculative
+ * execution (functions which seemingly are getting more eager faster
+ * than they are getting more intelligent), one or more "guard" cache
+ * lines may be required to separate one lcore's data from another's.
+ *
+ * Lcore variables have the upside of working with, not against, the
+ * CPU's assumptions and for example next-line prefetchers may well
+ * work the way their designers intended (i.e., to the benefit, not
+ * detriment, of system performance).
+ *
+ * Another alternative to @ref rte_lcore_var.h is the @ref
+ * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
+ * e.g., GCC __thread or C11 _Thread_local). The main differences
+ * between by using the various forms of TLS (e.g., @ref
+ * RTE_DEFINE_PER_LCORE or _Thread_local) and the use of lcore
+ * variables are:
+ *
+ *   * The existence and non-existence of a thread-local variable
+ *     instance follow that of the particular thread. The data cannot be
+ *     accessed before the thread has been created, nor after it has
+ *     exited. As a result, thread-local variables must be initialized in
+ *     a "lazy" manner (e.g., at the point of thread creation). Lcore
+ *     variables may be accessed immediately after having been
+ *     allocated (which may be prior to any thread other than the main
+ *     thread running).
+ *   * A thread-local variable is duplicated across all threads in the
+ *     process, including unregistered non-EAL threads (i.e.,
+ *     "regular" threads). For DPDK applications heavily relying on
+ *     multi-threading (in conjunction with DPDK's "one thread per core"
+ *     pattern), either by having many concurrent threads or
+ *     creating/destroying threads at a high rate, an excessive use of
+ *     thread-local variables may cause inefficiencies (e.g.,
+ *     increased thread creation overhead due to thread-local storage
+ *     initialization or increased total RAM footprint usage). Lcore
+ *     variables *only* exist for threads with an lcore id.
+ *   * Whether data in thread-local storage may be shared between
+ *     threads (i.e., whether a pointer to a thread-local variable may
+ *     be passed to and successfully dereferenced by a non-owning
+ *     thread) depends on the TLS implementation. With GCC __thread
+ *     and GCC _Thread_local, such data sharing is supported. In the
+ *     C11 standard, the result of accessing another thread's
+ *     _Thread_local object is implementation-defined. Lcore variable
+ *     instances may be accessed reliably by any thread.
+ */
+
+#include <stddef.h>
+#include <stdalign.h>
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_lcore.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Given the lcore variable type, produces the type of the lcore
+ * variable handle.
+ */
+#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
+	type *
+
+/**
+ * Define an lcore variable handle.
+ *
+ * This macro defines a variable which is used as a handle to access
+ * the various instances of a per-lcore id variable.
+ *
+ * The aim with this macro is to make clear at the point of
+ * declaration that this is an lcore handle, rather than a regular
+ * pointer.
+ *
+ * Add @b static as a prefix in case the lcore variable is only to be
+ * accessed from a particular translation unit.
+ */
+#define RTE_LCORE_VAR_HANDLE(type, name)	\
+	RTE_LCORE_VAR_HANDLE_TYPE(type) name
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, align)	\
+	handle = rte_lcore_var_alloc(size, align)
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle,
+ * with values aligned for any type of object.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE(handle, size)	\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, 0)
+
+/**
+ * Allocate space for an lcore variable of the size and alignment requirements
+ * suggested by the handle pointer type, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC(handle)					\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, sizeof(*(handle)),	\
+				       alignof(typeof(*(handle))))
+
+/**
+ * Allocate an explicitly-sized, explicitly-aligned lcore variable by
+ * means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
+	}
+
+/**
+ * Allocate an explicitly-sized lcore variable by means of a @ref
+ * RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
+	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
+
+/**
+ * Allocate an lcore variable by means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT(name)					\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC(name);				\
+	}
+
+/**
+ * Get void pointer to lcore variable instance with the specified
+ * lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
+ */
+static inline void *
+rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
+{
+	return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
+}
+
+/**
+ * Get pointer to lcore variable instance with the specified lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle)			\
+	((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
+
+/**
+ * Get pointer to lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_VALUE(handle) \
+	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
+
+/**
+ * Iterate over each lcore id's value for an lcore variable.
+ *
+ * @param lcore_id
+ *   An <code>unsigned int</code> variable successively set to the
+ *   lcore id of each valid lcore (up to @c RTE_MAX_LCORE).
+ * @param value
+ *   A pointer variable successively set to point to the lcore variable
+ *   value instance of the current lcore id being processed.
+ * @param handle
+ *   The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, value, handle)		\
+	for (lcore_id = (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
+	     lcore_id < RTE_MAX_LCORE;					\
+	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))
+
+/**
+ * Allocate space in the per-lcore id buffers for an lcore variable.
+ *
+ * The pointer returned is only an opaque identifier of the variable. To
+ * get an actual pointer to a particular instance of the variable use
+ * @ref RTE_LCORE_VAR_VALUE or @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * The lcore variable values' memory is set to zero.
+ *
+ * The allocation is always successful, barring a fatal exhaustion of
+ * the per-lcore id buffer space.
+ *
+ * rte_lcore_var_alloc() is not multi-thread safe.
+ *
+ * @param size
+ *   The size (in bytes) of the variable's per-lcore id value. Must be > 0.
+ * @param align
+ *   If 0, the values will be suitably aligned for any kind of type
+ *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
+ *   on a multiple of *align*, which must be a power of 2 and equal to
+ *   or less than @c RTE_CACHE_LINE_SIZE.
+ * @return
+ *   The variable's handle, stored in a void pointer value. The value
+ *   is always non-NULL.
+ */
+__rte_experimental
+void *
+rte_lcore_var_alloc(size_t size, size_t align);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LCORE_VAR_H_ */
diff --git a/lib/eal/version.map b/lib/eal/version.map
index e3ff412683..0c80bf7331 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -396,6 +396,9 @@ EXPERIMENTAL {
 
 	# added in 24.03
 	rte_vfio_get_device_info; # WINDOWS_NO_EXPORT
+
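+	# added in 24.11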
+	rte_lcore_var_alloc;
 };
 
 INTERNAL {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v6 2/7] eal: add lcore variable functional tests
  2024-09-18  8:00                                           ` [PATCH v6 0/7] Lcore variables Mattias Rönnblom
  2024-09-18  8:00                                             ` [PATCH v6 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-09-18  8:00                                             ` Mattias Rönnblom
  2024-09-18  8:25                                               ` Konstantin Ananyev
  2024-09-18  8:00                                             ` [PATCH v6 3/7] eal: add lcore variable performance test Mattias Rönnblom
                                                               ` (4 subsequent siblings)
  6 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  8:00 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Add functional test suite to exercise the <rte_lcore_var.h> API.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

PATCH v6:
 * Update FOREACH invocations to match new API.

RFC v5:
 * Adapt tests to reflect the removal of the GET() and SET() macros.

RFC v4:
 * Check all lcore id's values for all variables in the many variables
   test case.
 * Introduce test case for max-sized lcore variables.

RFC v2:
 * Improve alignment-related test coverage.
---
 app/test/meson.build      |   1 +
 app/test/test_lcore_var.c | 439 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 440 insertions(+)
 create mode 100644 app/test/test_lcore_var.c

diff --git a/app/test/meson.build b/app/test/meson.build
index e29258e6ec..48279522f0 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -103,6 +103,7 @@ source_file_deps = {
     'test_ipsec_sad.c': ['ipsec'],
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
+    'test_lcore_var.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var.c b/app/test/test_lcore_var.c
new file mode 100644
index 0000000000..2a1f258548
--- /dev/null
+++ b/app/test/test_lcore_var.c
@@ -0,0 +1,439 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_launch.h>
+#include <rte_lcore_var.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+#define MIN_LCORES 2
+
+RTE_LCORE_VAR_HANDLE(int, test_int);
+RTE_LCORE_VAR_HANDLE(char, test_char);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized);
+RTE_LCORE_VAR_HANDLE(short, test_short);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized_aligned);
+
+struct int_checker_state {
+	int old_value;
+	int new_value;
+	bool success;
+};
+
+static void
+rand_blk(void *blk, size_t size)
+{
+	size_t i;
+
+	for (i = 0; i < size; i++)
+		((unsigned char *)blk)[i] = (unsigned char)rte_rand();
+}
+
+static bool
+is_ptr_aligned(const void *ptr, size_t align)
+{
+	return ptr != NULL ? (uintptr_t)ptr % align == 0 : false;
+}
+
+static int
+check_int(void *arg)
+{
+	struct int_checker_state *state = arg;
+
+	int *ptr = RTE_LCORE_VAR_VALUE(test_int);
+
+	bool naturally_aligned = is_ptr_aligned(ptr, sizeof(int));
+
+	bool equal = *(RTE_LCORE_VAR_VALUE(test_int)) == state->old_value;
+
+	state->success = equal && naturally_aligned;
+
+	*ptr = state->new_value;
+
+	return 0;
+}
+
+RTE_LCORE_VAR_INIT(test_int);
+RTE_LCORE_VAR_INIT(test_char);
+RTE_LCORE_VAR_INIT_SIZE(test_long_sized, 32);
+RTE_LCORE_VAR_INIT(test_short);
+RTE_LCORE_VAR_INIT_SIZE_ALIGN(test_long_sized_aligned, sizeof(long),
+			      RTE_CACHE_LINE_SIZE);
+
+static int
+test_int_lvar(void)
+{
+	unsigned int lcore_id;
+
+	struct int_checker_state states[RTE_MAX_LCORE] = {};
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+
+		state->old_value = (int)rte_rand();
+		state->new_value = (int)rte_rand();
+
+		*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int) =
+			state->old_value;
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_int, &states[lcore_id], lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+		int value;
+
+		TEST_ASSERT(state->success, "Unexpected value "
+			    "encountered on lcore %d", lcore_id);
+
+		value = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int);
+		TEST_ASSERT_EQUAL(state->new_value, value,
+				  "Lcore %d failed to update int", lcore_id);
+	}
+
+	/* take the opportunity to test the foreach macro */
+	int *v;
+	unsigned int i = 0;
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, v, test_int) {
+		TEST_ASSERT_EQUAL(i, lcore_id, "Encountered lcore id %d "
+				  "while expecting %d during iteration",
+				  lcore_id, i);
+		TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
+				  "Unexpected value on lcore %d during "
+				  "iteration", lcore_id);
+		i++;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_sized_alignment(void)
+{
+	unsigned int lcore_id;
+	long *v;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, v, test_long_sized) {
+		TEST_ASSERT(is_ptr_aligned(v, alignof(long)),
+			    "Type-derived alignment failed");
+	}
+
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, v, test_long_sized_aligned) {
+		TEST_ASSERT(is_ptr_aligned(v, RTE_CACHE_LINE_SIZE),
+			    "Explicit alignment failed");
+	}
+
+	return TEST_SUCCESS;
+}
+
+/* private, larger, struct */
+#define TEST_STRUCT_DATA_SIZE 1234
+
+struct test_struct {
+	uint8_t data[TEST_STRUCT_DATA_SIZE];
+};
+
+static RTE_LCORE_VAR_HANDLE(char, before_struct);
+static RTE_LCORE_VAR_HANDLE(struct test_struct, test_struct);
+static RTE_LCORE_VAR_HANDLE(char, after_struct);
+
+struct struct_checker_state {
+	struct test_struct old_value;
+	struct test_struct new_value;
+	bool success;
+};
+
+static int check_struct(void *arg)
+{
+	struct struct_checker_state *state = arg;
+
+	struct test_struct *lcore_struct = RTE_LCORE_VAR_VALUE(test_struct);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_struct, alignof(struct test_struct));
+
+	bool equal = memcmp(lcore_struct->data, state->old_value.data,
+			    TEST_STRUCT_DATA_SIZE) == 0;
+
+	state->success = equal && properly_aligned;
+
+	memcpy(lcore_struct->data, state->new_value.data,
+	       TEST_STRUCT_DATA_SIZE);
+
+	return 0;
+}
+
+static int
+test_struct_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_struct);
+	RTE_LCORE_VAR_ALLOC(test_struct);
+	RTE_LCORE_VAR_ALLOC(after_struct);
+
+	struct struct_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+
+		rand_blk(state->old_value.data, TEST_STRUCT_DATA_SIZE);
+		rand_blk(state->new_value.data, TEST_STRUCT_DATA_SIZE);
+
+		memcpy(RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct)->data,
+		       state->old_value.data, TEST_STRUCT_DATA_SIZE);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_struct, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+		struct test_struct *lstruct =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = memcmp(lstruct->data, state->new_value.data,
+				    TEST_STRUCT_DATA_SIZE) == 0;
+
+		TEST_ASSERT(equal, "Lcore %d failed to update struct",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_struct);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_struct);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "struct was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "struct was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define TEST_ARRAY_SIZE 99
+
+typedef uint16_t test_array_t[TEST_ARRAY_SIZE];
+
+static void test_array_init_rand(test_array_t a)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		a[i] = (uint16_t)rte_rand();
+}
+
+static bool test_array_equal(test_array_t a, test_array_t b)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++) {
+		if (a[i] != b[i])
+			return false;
+	}
+	return true;
+}
+
+static void test_array_copy(test_array_t dst, const test_array_t src)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		dst[i] = src[i];
+}
+
+static RTE_LCORE_VAR_HANDLE(char, before_array);
+static RTE_LCORE_VAR_HANDLE(test_array_t, test_array);
+static RTE_LCORE_VAR_HANDLE(char, after_array);
+
+struct array_checker_state {
+	test_array_t old_value;
+	test_array_t new_value;
+	bool success;
+};
+
+static int check_array(void *arg)
+{
+	struct array_checker_state *state = arg;
+
+	test_array_t *lcore_array = RTE_LCORE_VAR_VALUE(test_array);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_array, alignof(test_array_t));
+
+	bool equal = test_array_equal(*lcore_array, state->old_value);
+
+	state->success = equal && properly_aligned;
+
+	test_array_copy(*lcore_array, state->new_value);
+
+	return 0;
+}
+
+static int
+test_array_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_array);
+	RTE_LCORE_VAR_ALLOC(test_array);
+	RTE_LCORE_VAR_ALLOC(after_array);
+
+	struct array_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+
+		test_array_init_rand(state->new_value);
+		test_array_init_rand(state->old_value);
+
+		test_array_copy(*RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+							   test_array),
+				state->old_value);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_array, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+		test_array_t *larray =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_array);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = test_array_equal(*larray, state->new_value);
+
+		TEST_ASSERT(equal, "Lcore %d failed to update array",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_array);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_array);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "array was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "array was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
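+/* Enough 4-byte variables to fill twice the per-lcore id buffer
+ * capacity, forcing the allocator to use more than one lcore buffer.
+ */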
+#define MANY_LVARS (2 * RTE_MAX_LCORE_VAR / sizeof(uint32_t))
+
+static int
+test_many_lvars(void)
+{
+	uint32_t **handlers = malloc(sizeof(uint32_t *) * MANY_LVARS);
+	unsigned int i;
+
+	TEST_ASSERT(handlers != NULL, "Unable to allocate memory");
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		RTE_LCORE_VAR_ALLOC(handlers[i]);
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t *v =
+				RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handlers[i]);
+			*v = (uint32_t)(i * lcore_id);
+		}
+	}
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t v = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+								handlers[i]);
+			TEST_ASSERT_EQUAL((uint32_t)(i * lcore_id), v,
+					  "Unexpected lcore variable value on "
+					  "lcore %d", lcore_id);
+		}
+	}
+
+	free(handlers);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_large_lvar(void)
+{
+	RTE_LCORE_VAR_HANDLE(unsigned char, large);
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC_SIZE(large, RTE_MAX_LCORE_VAR);
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+
+		memset(ptr, (unsigned char)lcore_id, RTE_MAX_LCORE_VAR);
+	}
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+		size_t i;
+
+		for (i = 0; i < RTE_MAX_LCORE_VAR; i++)
+			TEST_ASSERT_EQUAL(ptr[i], (unsigned char)lcore_id,
+					  "Large lcore variable value is "
+					  "corrupted on lcore %d.",
+					  lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_int_lvar),
+		TEST_CASE(test_sized_alignment),
+		TEST_CASE(test_struct_lvar),
+		TEST_CASE(test_array_lvar),
+		TEST_CASE(test_many_lvars),
+		TEST_CASE(test_large_lvar),
+		TEST_CASES_END()
+	},
+};
+
+static int test_lcore_var(void)
+{
+	if (rte_lcore_count() < MIN_LCORES) {
+		printf("Not enough cores for lcore_var_autotest; expecting at "
+		       "least %d.\n", MIN_LCORES);
+		return TEST_SKIPPED;
+	}
+
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
+REGISTER_FAST_TEST(lcore_var_autotest, true, false, test_lcore_var);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v6 3/7] eal: add lcore variable performance test
  2024-09-18  8:00                                           ` [PATCH v6 0/7] Lcore variables Mattias Rönnblom
  2024-09-18  8:00                                             ` [PATCH v6 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-09-18  8:00                                             ` [PATCH v6 2/7] eal: add lcore variable functional tests Mattias Rönnblom
@ 2024-09-18  8:00                                             ` Mattias Rönnblom
  2024-09-18  8:00                                             ` [PATCH v6 4/7] random: keep PRNG state in lcore variable Mattias Rönnblom
                                                               ` (3 subsequent siblings)
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  8:00 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Add basic micro benchmark for lcore variables, in an attempt to assure
that the overhead isn't significantly greater than alternative
approaches, in scenarios where the benefits aren't expected to show up
(i.e., when plenty of cache is available compared to the working set
size of the per-lcore data).

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>

--

PATCH v6:
 * Use floating point math when calculating per-update latency.
   (Morten Brørup)

PATCH v5:
 * Add variant of thread-local storage with initialization performed
   at the time of thread creation to the benchmark scenarios. (Morten
   Brørup)

PATCH v4:
 * Rework the tests to be a little less unrealistic. Instead of a
   single dummy module using a single variable, use a number of
   variables/modules. In this way, differences in cache effects may
   show up.
 * Add RTE_CACHE_GUARD to better mimic that static array pattern.
   (Morten Brørup)
 * Show latencies as TSC cycles. (Morten Brørup)
---
 app/test/meson.build           |   1 +
 app/test/test_lcore_var_perf.c | 261 +++++++++++++++++++++++++++++++++++
 2 files changed, 262 insertions(+)
 create mode 100644 app/test/test_lcore_var_perf.c

diff --git a/app/test/meson.build b/app/test/meson.build
index 48279522f0..d4e0c59900 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -104,6 +104,7 @@ source_file_deps = {
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
     'test_lcore_var.c': [],
+    'test_lcore_var_perf.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var_perf.c b/app/test/test_lcore_var_perf.c
new file mode 100644
index 0000000000..2680bfb6f7
--- /dev/null
+++ b/app/test/test_lcore_var_perf.c
@@ -0,0 +1,257 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#define MAX_MODS 1024
+
+#include <stdio.h>
+
+#include <rte_bitops.h>
+#include <rte_cycles.h>
+#include <rte_lcore_var.h>
+#include <rte_per_lcore.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+struct mod_lcore_state {
+	uint64_t a;
+	uint64_t b;
+	uint64_t sum;
+};
+
+static void
+mod_init(struct mod_lcore_state *state)
+{
+	state->a = rte_rand();
+	state->b = rte_rand();
+	state->sum = 0;
+}
+
+static __rte_always_inline void
+mod_update(volatile struct mod_lcore_state *state)
+{
+	state->sum += state->a * state->b;
+}
+
+struct __rte_cache_aligned mod_lcore_state_aligned {
+	struct mod_lcore_state mod_state;
+
+	RTE_CACHE_GUARD;
+};
+
+static struct mod_lcore_state_aligned
+sarray_lcore_state[MAX_MODS][RTE_MAX_LCORE];
+
+static void
+sarray_init(void)
+{
+	unsigned int lcore_id = rte_lcore_id();
+	int mod;
+
+	for (mod = 0; mod < MAX_MODS; mod++) {
+		struct mod_lcore_state *mod_state =
+			&sarray_lcore_state[mod][lcore_id].mod_state;
+
+		mod_init(mod_state);
+	}
+}
+
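+/* The update functions are intentionally not inlined, so that the
+ * per-lcore value address computation is performed inside the timed
+ * function, as it would be in a lookup internal to another module.
+ */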
+static __rte_noinline void
+sarray_update(unsigned int mod)
+{
+	unsigned int lcore_id = rte_lcore_id();
+	struct mod_lcore_state *mod_state =
+		&sarray_lcore_state[mod][lcore_id].mod_state;
+
+	mod_update(mod_state);
+}
+
+struct mod_lcore_state_lazy {
+	struct mod_lcore_state mod_state;
+	bool initialized;
+};
+
+/*
+ * Note: it's usually a bad idea to have this much thread-local storage
+ * allocated in a real application, since it will incur a cost on
+ * thread creation and non-lcore thread memory usage.
+ */
+static RTE_DEFINE_PER_LCORE(struct mod_lcore_state_lazy,
+			    tls_lcore_state)[MAX_MODS];
+
+static inline void
+tls_init(struct mod_lcore_state_lazy *state)
+{
+	mod_init(&state->mod_state);
+
+	state->initialized = true;
+}
+
+static __rte_noinline void
+tls_lazy_update(unsigned int mod)
+{
+	struct mod_lcore_state_lazy *state =
+		&RTE_PER_LCORE(tls_lcore_state[mod]);
+
+	/* With thread-local storage, initialization must usually be lazy */
+	if (!state->initialized)
+		tls_init(state);
+
+	mod_update(&state->mod_state);
+}
+
+static __rte_noinline void
+tls_update(unsigned int mod)
+{
+	struct mod_lcore_state_lazy *state =
+		&RTE_PER_LCORE(tls_lcore_state[mod]);
+
+	mod_update(&state->mod_state);
+}
+
+RTE_LCORE_VAR_HANDLE(struct mod_lcore_state, lvar_lcore_state)[MAX_MODS];
+
+static void
+lvar_init(void)
+{
+	unsigned int mod;
+
+	for (mod = 0; mod < MAX_MODS; mod++) {
+		RTE_LCORE_VAR_ALLOC(lvar_lcore_state[mod]);
+
+		struct mod_lcore_state *state =
+			RTE_LCORE_VAR_VALUE(lvar_lcore_state[mod]);
+
+		mod_init(state);
+	}
+}
+
+static __rte_noinline void
+lvar_update(unsigned int mod)
+{
+	struct mod_lcore_state *state =
+		RTE_LCORE_VAR_VALUE(lvar_lcore_state[mod]);
+
+	mod_update(state);
+}
+
+static void
+shuffle(unsigned int *elems, size_t len)
+{
+	size_t i;
+
+	for (i = len - 1; i > 0; i--) {
+		unsigned int other = rte_rand_max(i + 1);
+
+		unsigned int tmp = elems[other];
+		elems[other] = elems[i];
+		elems[i] = tmp;
+	}
+}
+
+#define ITERATIONS UINT64_C(10000000)
+
+static inline double
+benchmark_access(const unsigned int *mods, unsigned int num_mods,
+		 void (*init_fun)(void), void (*update_fun)(unsigned int))
+{
+	unsigned int i;
+	double start;
+	double end;
+	double latency;
+	unsigned int num_mods_mask = num_mods - 1;
+
+	RTE_VERIFY(rte_is_power_of_2(num_mods));
+
+	if (init_fun != NULL)
+		init_fun();
+
+	/* Warm up cache and make sure TLS variables are initialized */
+	for (i = 0; i < num_mods; i++)
+		update_fun(i);
+
+	start = rte_rdtsc();
+
+	for (i = 0; i < ITERATIONS; i++)
+		update_fun(mods[i & num_mods_mask]);
+
+	end = rte_rdtsc();
+
+	latency = (end - start) / (double)ITERATIONS;
+
+	return latency;
+}
+
+static void
+test_lcore_var_access_n(unsigned int num_mods)
+{
+	double sarray_latency;
+	double tls_latency;
+	double lazy_tls_latency;
+	double lvar_latency;
+	unsigned int mods[num_mods];
+	unsigned int i;
+
+	for (i = 0; i < num_mods; i++)
+		mods[i] = i;
+
+	shuffle(mods, num_mods);
+
+	sarray_latency =
+		benchmark_access(mods, num_mods, sarray_init, sarray_update);
+
+	tls_latency =
+		benchmark_access(mods, num_mods, NULL, tls_update);
+
+	lazy_tls_latency =
+		benchmark_access(mods, num_mods, NULL, tls_lazy_update);
+
+	lvar_latency =
+		benchmark_access(mods, num_mods, lvar_init, lvar_update);
+
+	printf("%17u %8.1f %14.1f %15.1f %10.1f\n", num_mods, sarray_latency,
+	       tls_latency, lazy_tls_latency, lvar_latency);
+}
+
+/*
+ * The potential performance benefit of lcore variables compared to
+ * the use of statically sized, lcore id-indexed arrays is not
+ * shorter latencies in a scenario with low cache pressure, but rather
+ * fewer cache misses in a real-world scenario, with extensive cache
+ * usage. These tests are a crude simulation of such, using <N> dummy
+ * modules, each with a small, per-lcore state. Note however that
+ * these tests have very little non-lcore/thread local state, which is
+ * unrealistic.
+ */
+
+static int
+test_lcore_var_access(void)
+{
+	unsigned int num_mods = 1;
+
+	printf("- Latencies [TSC cycles/update] -\n");
+	printf("Number of           Static   Thread-local    Thread-local      Lcore\n");
+	printf("Modules/Variables    Array        Storage  Storage (Lazy)  Variables\n");
+
+	for (num_mods = 1; num_mods <= MAX_MODS; num_mods *= 2)
+		test_lcore_var_access_n(num_mods);
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable perf autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_lcore_var_access),
+		TEST_CASES_END()
+	},
+};
+
+static int
+test_lcore_var_perf(void)
+{
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
+REGISTER_PERF_TEST(lcore_var_perf_autotest, test_lcore_var_perf);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v6 4/7] random: keep PRNG state in lcore variable
  2024-09-18  8:00                                           ` [PATCH v6 0/7] Lcore variables Mattias Rönnblom
                                                               ` (2 preceding siblings ...)
  2024-09-18  8:00                                             ` [PATCH v6 3/7] eal: add lcore variable performance test Mattias Rönnblom
@ 2024-09-18  8:00                                             ` Mattias Rönnblom
  2024-09-18  8:00                                             ` [PATCH v6 5/7] power: keep per-lcore " Mattias Rönnblom
                                                               ` (2 subsequent siblings)
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  8:00 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom, Konstantin Ananyev

Replace keeping PRNG state in a RTE_MAX_LCORE-sized static array of
cache-aligned and RTE_CACHE_GUARDed struct instances with keeping the
same state in a more cache-friendly lcore variable.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

--

RFC v3:
 * Remove cache alignment on unregistered threads' rte_rand_state.
   (Morten Brørup)
---
 lib/eal/common/rte_random.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index 90e91b3c4f..a8d00308dd 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -11,6 +11,7 @@
 #include <rte_branch_prediction.h>
 #include <rte_cycles.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_random.h>
 
 struct __rte_cache_aligned rte_rand_state {
@@ -19,14 +20,12 @@ struct __rte_cache_aligned rte_rand_state {
 	uint64_t z3;
 	uint64_t z4;
 	uint64_t z5;
-	RTE_CACHE_GUARD;
 };
 
-/* One instance each for every lcore id-equipped thread, and one
- * additional instance to be shared by all others threads (i.e., all
- * unregistered non-EAL threads).
- */
-static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];
+RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);
+
+/* instance to be shared by all unregistered non-EAL threads */
+static struct rte_rand_state unregistered_rand_state;
 
 static uint32_t
 __rte_rand_lcg32(uint32_t *seed)
@@ -85,8 +84,14 @@ rte_srand(uint64_t seed)
 	unsigned int lcore_id;
 
 	/* add lcore_id to seed to avoid having the same sequence */
-	for (lcore_id = 0; lcore_id < RTE_DIM(rand_states); lcore_id++)
-		__rte_srand_lfsr258(seed + lcore_id, &rand_states[lcore_id]);
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		struct rte_rand_state *lcore_state =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, rand_state);
+
+		__rte_srand_lfsr258(seed + lcore_id, lcore_state);
+	}
+
+	__rte_srand_lfsr258(seed + lcore_id, &unregistered_rand_state);
 }
 
 static __rte_always_inline uint64_t
@@ -124,11 +129,11 @@ struct rte_rand_state *__rte_rand_get_state(void)
 
 	idx = rte_lcore_id();
 
-	/* last instance reserved for unregistered non-EAL threads */
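+	/* unregistered non-EAL threads share a single state instance */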
 	if (unlikely(idx == LCORE_ID_ANY))
-		idx = RTE_MAX_LCORE;
+		return &unregistered_rand_state;
 
-	return &rand_states[idx];
+	return RTE_LCORE_VAR_VALUE(rand_state);
 }
 
 uint64_t
@@ -228,6 +232,8 @@ RTE_INIT(rte_rand_init)
 {
 	uint64_t seed;
 
+	RTE_LCORE_VAR_ALLOC(rand_state);
+
 	seed = __rte_random_initial_seed();
 
 	rte_srand(seed);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v6 5/7] power: keep per-lcore state in lcore variable
  2024-09-18  8:00                                           ` [PATCH v6 0/7] Lcore variables Mattias Rönnblom
                                                               ` (3 preceding siblings ...)
  2024-09-18  8:00                                             ` [PATCH v6 4/7] random: keep PRNG state in lcore variable Mattias Rönnblom
@ 2024-09-18  8:00                                             ` Mattias Rönnblom
  2024-09-18  8:00                                             ` [PATCH v6 6/7] service: " Mattias Rönnblom
  2024-09-18  8:00                                             ` [PATCH v6 7/7] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  8:00 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom, Konstantin Ananyev

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

--

PATCH v6:
 * Update FOREACH invocation to match new API.

RFC v3:
 * Replace for loop with FOREACH macro.
---
 lib/power/rte_power_pmd_mgmt.c | 35 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index b1c18a5f56..a981db4b39 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -5,6 +5,7 @@
 #include <stdlib.h>
 
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_cycles.h>
 #include <rte_cpuflags.h>
 #include <rte_malloc.h>
@@ -69,7 +70,7 @@ struct __rte_cache_aligned pmd_core_cfg {
 	uint64_t sleep_target;
 	/**< Prevent a queue from triggering sleep multiple times */
 };
-static struct pmd_core_cfg lcore_cfgs[RTE_MAX_LCORE];
+static RTE_LCORE_VAR_HANDLE(struct pmd_core_cfg, lcore_cfgs);
 
 static inline bool
 queue_equal(const union queue *l, const union queue *r)
@@ -252,12 +253,11 @@ clb_multiwait(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	/* early exit */
 	if (likely(!empty))
@@ -317,13 +317,12 @@ clb_pause(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 	uint32_t pause_duration = rte_power_pmd_mgmt_get_pause_duration();
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	if (likely(!empty))
 		/* early exit */
@@ -358,9 +357,8 @@ clb_scale_freq(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	const bool empty = nb_rx == 0;
-	struct pmd_core_cfg *lcore_conf = &lcore_cfgs[lcore];
+	struct pmd_core_cfg *lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 	struct queue_list_entry *queue_conf = arg;
 
 	if (likely(!empty)) {
@@ -518,7 +516,7 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 		goto end;
 	}
 
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -619,7 +617,7 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 	}
 
 	/* no need to check queue id as wrong queue id would not be enabled */
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -769,21 +767,22 @@ rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore)
 }
 
 RTE_INIT(rte_power_ethdev_pmgmt_init) {
-	size_t i;
-	int j;
+	unsigned int lcore_id;
+	struct pmd_core_cfg *lcore_cfg;
+	int i;
+
+	RTE_LCORE_VAR_ALLOC(lcore_cfgs);
 
 	/* initialize all tailqs */
-	for (i = 0; i < RTE_DIM(lcore_cfgs); i++) {
-		struct pmd_core_cfg *cfg = &lcore_cfgs[i];
-		TAILQ_INIT(&cfg->head);
-	}
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, lcore_cfg, lcore_cfgs)
+		TAILQ_INIT(&lcore_cfg->head);
 
 	/* initialize config defaults */
 	emptypoll_max = 512;
 	pause_duration = 1;
 	/* scaling defaults out of range to ensure not used unless set by user or app */
-	for (j = 0; j < RTE_MAX_LCORE; j++) {
-		scale_freq_min[j] = 0;
-		scale_freq_max[j] = UINT32_MAX;
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		scale_freq_min[i] = 0;
+		scale_freq_max[i] = UINT32_MAX;
 	}
 }
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v6 6/7] service: keep per-lcore state in lcore variable
  2024-09-18  8:00                                           ` [PATCH v6 0/7] Lcore variables Mattias Rönnblom
                                                               ` (4 preceding siblings ...)
  2024-09-18  8:00                                             ` [PATCH v6 5/7] power: keep per-lcore " Mattias Rönnblom
@ 2024-09-18  8:00                                             ` Mattias Rönnblom
  2024-09-18  8:00                                             ` [PATCH v6 7/7] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  8:00 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom, Konstantin Ananyev

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

--

RFC v6:
 * Remove a now-redundant lcore variable value memset().

RFC v5:
 * Fix lcore value pointer bug introduced by RFC v4.

RFC v4:
 * Remove strange-looking lcore value lookup potentially containing
   invalid lcore id. (Morten Brørup)
 * Replace misplaced tab with space. (Morten Brørup)
---
 lib/eal/common/rte_service.c | 117 +++++++++++++++++++----------------
 1 file changed, 65 insertions(+), 52 deletions(-)

diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index 56379930b6..03379f1588 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -11,6 +11,7 @@
 
 #include <eal_trace_internal.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
@@ -75,7 +76,7 @@ struct __rte_cache_aligned core_state {
 
 static uint32_t rte_service_count;
 static struct rte_service_spec_impl *rte_services;
-static struct core_state *lcore_states;
+static RTE_LCORE_VAR_HANDLE(struct core_state, lcore_states);
 static uint32_t rte_service_library_initialized;
 
 int32_t
@@ -101,12 +102,8 @@ rte_service_init(void)
 		goto fail_mem;
 	}
 
-	lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
-			sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
-	if (!lcore_states) {
-		EAL_LOG(ERR, "error allocating core states array");
-		goto fail_mem;
-	}
+	if (lcore_states == NULL)
+		RTE_LCORE_VAR_ALLOC(lcore_states);
 
 	int i;
 	struct rte_config *cfg = rte_eal_get_configuration();
@@ -122,7 +119,6 @@ rte_service_init(void)
 	return 0;
 fail_mem:
 	rte_free(rte_services);
-	rte_free(lcore_states);
 	return -ENOMEM;
 }
 
@@ -136,7 +132,6 @@ rte_service_finalize(void)
 	rte_eal_mp_wait_lcore();
 
 	rte_free(rte_services);
-	rte_free(lcore_states);
 
 	rte_service_library_initialized = 0;
 }
@@ -286,7 +281,6 @@ rte_service_component_register(const struct rte_service_spec *spec,
 int32_t
 rte_service_component_unregister(uint32_t id)
 {
-	uint32_t i;
 	struct rte_service_spec_impl *s;
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
 
@@ -294,9 +288,11 @@ rte_service_component_unregister(uint32_t id)
 
 	s->internal_flags &= ~(SERVICE_F_REGISTERED);
 
+	unsigned int lcore_id;
+	struct core_state *cs;
 	/* clear the run-bit in all cores */
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		lcore_states[i].service_mask &= ~(UINT64_C(1) << id);
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, cs, lcore_states)
+		cs->service_mask &= ~(UINT64_C(1) << id);
 
 	memset(&rte_services[id], 0, sizeof(struct rte_service_spec_impl));
 
@@ -454,7 +449,10 @@ rte_service_may_be_active(uint32_t id)
 		return -EINVAL;
 
 	for (i = 0; i < lcore_count; i++) {
-		if (lcore_states[ids[i]].service_active_on_lcore[id])
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(ids[i], lcore_states);
+
+		if (cs->service_active_on_lcore[id])
 			return 1;
 	}
 
@@ -464,7 +462,7 @@ rte_service_may_be_active(uint32_t id)
 int32_t
 rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)
 {
-	struct core_state *cs = &lcore_states[rte_lcore_id()];
+	struct core_state *cs = RTE_LCORE_VAR_VALUE(lcore_states);
 	struct rte_service_spec_impl *s;
 
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
@@ -486,8 +484,7 @@ service_runner_func(void *arg)
 {
 	RTE_SET_USED(arg);
 	uint8_t i;
-	const int lcore = rte_lcore_id();
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_VALUE(lcore_states);
 
 	rte_atomic_store_explicit(&cs->thread_active, 1, rte_memory_order_seq_cst);
 
@@ -533,13 +530,15 @@ service_runner_func(void *arg)
 int32_t
 rte_service_lcore_may_be_active(uint32_t lcore)
 {
-	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+	if (lcore >= RTE_MAX_LCORE || !cs->is_service_core)
 		return -EINVAL;
 
 	/* Load thread_active using ACQUIRE to avoid instructions dependent on
 	 * the result being re-ordered before this load completes.
 	 */
-	return rte_atomic_load_explicit(&lcore_states[lcore].thread_active,
+	return rte_atomic_load_explicit(&cs->thread_active,
 			       rte_memory_order_acquire);
 }
 
@@ -547,9 +546,12 @@ int32_t
 rte_service_lcore_count(void)
 {
 	int32_t count = 0;
-	uint32_t i;
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		count += lcore_states[i].is_service_core;
+
+	unsigned int lcore_id;
+	struct core_state *cs;
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, cs, lcore_states)
+		count += cs->is_service_core;
+
 	return count;
 }
 
@@ -566,7 +567,8 @@ rte_service_lcore_list(uint32_t array[], uint32_t n)
 	uint32_t i;
 	uint32_t idx = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		struct core_state *cs = &lcore_states[i];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
 		if (cs->is_service_core) {
 			array[idx] = i;
 			idx++;
@@ -582,7 +584,7 @@ rte_service_lcore_count_services(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -634,30 +636,31 @@ rte_service_start_with_defaults(void)
 static int32_t
 service_update(uint32_t sid, uint32_t lcore, uint32_t *set, uint32_t *enabled)
 {
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	/* validate ID, or return error value */
 	if (!service_valid(sid) || lcore >= RTE_MAX_LCORE ||
-			!lcore_states[lcore].is_service_core)
+			!cs->is_service_core)
 		return -EINVAL;
 
 	uint64_t sid_mask = UINT64_C(1) << sid;
 	if (set) {
-		uint64_t lcore_mapped = lcore_states[lcore].service_mask &
-			sid_mask;
+		uint64_t lcore_mapped = cs->service_mask & sid_mask;
 
 		if (*set && !lcore_mapped) {
-			lcore_states[lcore].service_mask |= sid_mask;
+			cs->service_mask |= sid_mask;
 			rte_atomic_fetch_add_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 		if (!*set && lcore_mapped) {
-			lcore_states[lcore].service_mask &= ~(sid_mask);
+			cs->service_mask &= ~(sid_mask);
 			rte_atomic_fetch_sub_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 	}
 
 	if (enabled)
-		*enabled = !!(lcore_states[lcore].service_mask & (sid_mask));
+		*enabled = !!(cs->service_mask & (sid_mask));
 
 	return 0;
 }
@@ -685,13 +688,14 @@ set_lcore_state(uint32_t lcore, int32_t state)
 {
 	/* mark core state in hugepage backed config */
 	struct rte_config *cfg = rte_eal_get_configuration();
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	cfg->lcore_role[lcore] = state;
 
 	/* mark state in process local lcore_config */
 	lcore_config[lcore].core_role = state;
 
 	/* update per-lcore optimized state tracking */
-	lcore_states[lcore].is_service_core = (state == ROLE_SERVICE);
+	cs->is_service_core = (state == ROLE_SERVICE);
 
 	rte_eal_trace_service_lcore_state_change(lcore, state);
 }
@@ -702,14 +706,16 @@ rte_service_lcore_reset_all(void)
 	/* loop over cores, reset all to mask 0 */
 	uint32_t i;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		if (lcore_states[i].is_service_core) {
-			lcore_states[i].service_mask = 0;
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
+		if (cs->is_service_core) {
+			cs->service_mask = 0;
 			set_lcore_state(i, ROLE_RTE);
 			/* runstate act as guard variable Use
 			 * store-release memory order here to synchronize
 			 * with load-acquire in runstate read functions.
 			 */
-			rte_atomic_store_explicit(&lcore_states[i].runstate,
+			rte_atomic_store_explicit(&cs->runstate,
 				RUNSTATE_STOPPED, rte_memory_order_release);
 		}
 	}
@@ -725,17 +731,19 @@ rte_service_lcore_add(uint32_t lcore)
 {
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
-	if (lcore_states[lcore].is_service_core)
+
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+	if (cs->is_service_core)
 		return -EALREADY;
 
 	set_lcore_state(lcore, ROLE_SERVICE);
 
 	/* ensure that after adding a core the mask and state are defaults */
-	lcore_states[lcore].service_mask = 0;
+	cs->service_mask = 0;
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	return rte_eal_wait_lcore(lcore);
@@ -747,7 +755,7 @@ rte_service_lcore_del(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -771,7 +779,7 @@ rte_service_lcore_start(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -801,6 +809,8 @@ rte_service_lcore_start(uint32_t lcore)
 int32_t
 rte_service_lcore_stop(uint32_t lcore)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
@@ -808,12 +818,11 @@ rte_service_lcore_stop(uint32_t lcore)
 	 * memory order here to synchronize with store-release
 	 * in runstate update functions.
 	 */
-	if (rte_atomic_load_explicit(&lcore_states[lcore].runstate, rte_memory_order_acquire) ==
+	if (rte_atomic_load_explicit(&cs->runstate, rte_memory_order_acquire) ==
 			RUNSTATE_STOPPED)
 		return -EALREADY;
 
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
 	uint64_t service_mask = cs->service_mask;
 
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
@@ -834,7 +843,7 @@ rte_service_lcore_stop(uint32_t lcore)
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	rte_eal_trace_service_lcore_stop(lcore);
@@ -845,7 +854,7 @@ rte_service_lcore_stop(uint32_t lcore)
 static uint64_t
 lcore_attr_get_loops(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->loops, rte_memory_order_relaxed);
 }
@@ -853,7 +862,7 @@ lcore_attr_get_loops(unsigned int lcore)
 static uint64_t
 lcore_attr_get_cycles(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->cycles, rte_memory_order_relaxed);
 }
@@ -861,7 +870,7 @@ lcore_attr_get_cycles(unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].calls,
 		rte_memory_order_relaxed);
@@ -870,7 +879,7 @@ lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_cycles(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].cycles,
 		rte_memory_order_relaxed);
@@ -886,7 +895,10 @@ attr_get(uint32_t id, lcore_attr_get_fun lcore_attr_get)
 	uint64_t sum = 0;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		if (lcore_states[lcore].is_service_core)
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+		if (cs->is_service_core)
 			sum += lcore_attr_get(id, lcore);
 	}
 
@@ -930,12 +942,11 @@ int32_t
 rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 			   uint64_t *attr_value)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE || !attr_value)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -960,7 +971,8 @@ rte_service_attr_reset_all(uint32_t id)
 		return -EINVAL;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		struct core_state *cs = &lcore_states[lcore];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 		cs->service_stats[id] = (struct service_stats) {};
 	}
@@ -971,12 +983,11 @@ rte_service_attr_reset_all(uint32_t id)
 int32_t
 rte_service_lcore_attr_reset_all(uint32_t lcore)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -1011,7 +1022,7 @@ static void
 service_dump_calls_per_lcore(FILE *f, uint32_t lcore)
 {
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	fprintf(f, "%02d\t", lcore);
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v6 7/7] eal: keep per-lcore power intrinsics state in lcore variable
  2024-09-18  8:00                                           ` [PATCH v6 0/7] Lcore variables Mattias Rönnblom
                                                               ` (5 preceding siblings ...)
  2024-09-18  8:00                                             ` [PATCH v6 6/7] service: " Mattias Rönnblom
@ 2024-09-18  8:00                                             ` Mattias Rönnblom
  6 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  8:00 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom, Konstantin Ananyev

Keep per-lcore power intrinsics state in an lcore variable to reduce
cache working set size and to avoid false sharing caused by CPU
next-line prefetching.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
---
 lib/eal/x86/rte_power_intrinsics.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index 6d9b64240c..f4ba2c8ecb 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -6,6 +6,7 @@
 
 #include <rte_common.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_rtm.h>
 #include <rte_spinlock.h>
 
@@ -14,10 +15,14 @@
 /*
  * Per-lcore structure holding current status of C0.2 sleeps.
  */
-static alignas(RTE_CACHE_LINE_SIZE) struct power_wait_status {
+struct power_wait_status {
 	rte_spinlock_t lock;
 	volatile void *monitor_addr; /**< NULL if not currently sleeping */
-} wait_status[RTE_MAX_LCORE];
+};
+
+RTE_LCORE_VAR_HANDLE(struct power_wait_status, wait_status);
+
+RTE_LCORE_VAR_INIT(wait_status);
 
 /*
  * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
@@ -172,7 +177,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 	if (pmc->fn == NULL)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/* update sleep address */
 	rte_spinlock_lock(&s->lock);
@@ -264,7 +269,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 	if (lcore_id >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/*
 	 * There is a race condition between sleep, wakeup and locking, but we
@@ -303,8 +308,8 @@ int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
 {
-	const unsigned int lcore_id = rte_lcore_id();
-	struct power_wait_status *s = &wait_status[lcore_id];
+	struct power_wait_status *s = RTE_LCORE_VAR_VALUE(wait_status);
+
 	uint32_t i, rc;
 
 	/* check if supported */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v6 1/7] eal: add static per-lcore memory allocation facility
  2024-09-18  8:00                                             ` [PATCH v6 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-09-18  8:24                                               ` Konstantin Ananyev
  2024-09-18  8:25                                                 ` Mattias Rönnblom
  2024-09-18  8:26                                               ` [PATCH v7 0/7] Lcore variables Mattias Rönnblom
  1 sibling, 1 reply; 185+ messages in thread
From: Konstantin Ananyev @ 2024-09-18  8:24 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob

> +/**
> + * Iterate over each lcore id's value for an lcore variable.
> + *
> + * @param lcore_id
> + *   An <code>unsigned int</code> variable successively set to the
> + *   lcore id of every valid lcore id (up to @c RTE_MAX_LCORE).
> + * @param value
> + *   A pointer variable successively set to point to lcore variable
> + *   value instance of the current lcore id being processed.
> + * @param handle
> + *   The lcore variable handle.
> + */
> +#define RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, value, handle)		\
> +	for (lcore_id =	(((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
> +	     lcore_id < RTE_MAX_LCORE;					\
> +	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))
> +

I think we need a '()' around references to lcore_id:
 for ((lcore_id) = (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
	     (lcore_id) < RTE_MAX_LCORE;					\
	     (lcore_id)++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))
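
For illustration, one hazard the added parentheses guard against:
should a caller pass a dereferenced pointer as the loop variable, the
unparenthesized "lcore_id++" expands to "*p++" (which advances the
pointer), whereas "(lcore_id)++" expands to "(*p)++" (which increments
the lcore id, as intended). A hypothetical sketch ('some_handle' is a
made-up handle name):

 unsigned int id;
 unsigned int *p = &id;
 int *value;

 /* correct with the parenthesized macro; subtly broken without */
 RTE_LCORE_VAR_FOREACH_VALUE(*p, value, some_handle) {
         /* ... use 'value' ... */
 }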

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v6 1/7] eal: add static per-lcore memory allocation facility
  2024-09-18  8:24                                               ` Konstantin Ananyev
@ 2024-09-18  8:25                                                 ` Mattias Rönnblom
  0 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  8:25 UTC (permalink / raw)
  To: Konstantin Ananyev, Mattias Rönnblom, dev
  Cc: Morten Brørup, Stephen Hemminger, Konstantin Ananyev,
	David Marchand, Jerin Jacob

On 2024-09-18 10:24, Konstantin Ananyev wrote:
>> +/**
>> + * Iterate over each lcore id's value for an lcore variable.
>> + *
>> + * @param lcore_id
>> + *   An <code>unsigned int</code> variable successively set to the
>> + *   lcore id of every valid lcore id (up to @c RTE_MAX_LCORE).
>> + * @param value
>> + *   A pointer variable successively set to point to lcore variable
>> + *   value instance of the current lcore id being processed.
>> + * @param handle
>> + *   The lcore variable handle.
>> + */
>> +#define RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, value, handle)		\
>> +	for (lcore_id =	(((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
>> +	     lcore_id < RTE_MAX_LCORE;					\
>> +	     lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))
>> +
> 
> I think we need a '()' around references to lcore_id:
>   for ((lcore_id) = (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
> 	     (lcore_id) < RTE_MAX_LCORE;					\
> 	     (lcore_id)++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))

Yes, of course. Thanks.

^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v6 2/7] eal: add lcore variable functional tests
  2024-09-18  8:00                                             ` [PATCH v6 2/7] eal: add lcore variable functional tests Mattias Rönnblom
@ 2024-09-18  8:25                                               ` Konstantin Ananyev
  0 siblings, 0 replies; 185+ messages in thread
From: Konstantin Ananyev @ 2024-09-18  8:25 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob



> -----Original Message-----
> From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Sent: Wednesday, September 18, 2024 9:01 AM
> To: dev@dpdk.org
> Cc: hofors@lysator.liu.se; Morten Brørup <mb@smartsharesystems.com>; Stephen Hemminger <stephen@networkplumber.org>;
> Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>; David Marchand <david.marchand@redhat.com>; Jerin Jacob
> <jerinj@marvell.com>; Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Subject: [PATCH v6 2/7] eal: add lcore variable functional tests
> 
> Add functional test suite to exercise the <rte_lcore_var.h> API.
> 
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> 
> --

Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

> 2.34.1
> 


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v7 0/7] Lcore variables
  2024-09-18  8:00                                             ` [PATCH v6 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-09-18  8:24                                               ` Konstantin Ananyev
@ 2024-09-18  8:26                                               ` Mattias Rönnblom
  2024-09-18  8:26                                                 ` [PATCH v7 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
                                                                   ` (7 more replies)
  1 sibling, 8 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  8:26 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

This patch set introduces a new API <rte_lcore_var.h> for static
per-lcore id data allocation.

Please refer to the <rte_lcore_var.h> API documentation for both a
rationale for this new API, and a comparison to the alternatives
available.

The adoption of this API would affect many different DPDK modules, but
the author updated only a few, mostly to serve as examples in this
series, and to iron out some, but surely not all, wrinkles in the API.

The question of how best to allocate static per-lcore memory has come
up several times on the dev mailing list, for example in the thread on
the "random: use per lcore state" RFC by Stephen Hemminger.

Lcore variables are surely not the answer to all your per-lcore-data
needs, since they only allow for more-or-less static allocation. In the
author's opinion, they do, however, provide a reasonably simple, clean,
and seemingly very performant solution to a real problem.
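
As a taste of the API, a minimal usage sketch (the struct and the
names below are made up for illustration; see the first patch for the
authoritative documentation and a fuller example):

 struct foo_lcore_state {
         long counter;
 };

 static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, foo_states);

 RTE_LCORE_VAR_INIT(foo_states);

 static void
 foo_count(void)
 {
         /* access the calling thread's own instance */
         struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(foo_states);

         state->counter++;
 }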

Mattias Rönnblom (7):
  eal: add static per-lcore memory allocation facility
  eal: add lcore variable functional tests
  eal: add lcore variable performance test
  random: keep PRNG state in lcore variable
  power: keep per-lcore state in lcore variable
  service: keep per-lcore state in lcore variable
  eal: keep per-lcore power intrinsics state in lcore variable

 MAINTAINERS                                   |   6 +
 app/test/meson.build                          |   2 +
 app/test/test_lcore_var.c                     | 436 ++++++++++++++++++
 app/test/test_lcore_var_perf.c                | 257 +++++++++++
 config/rte_config.h                           |   1 +
 doc/api/doxy-api-index.md                     |   1 +
 .../prog_guide/env_abstraction_layer.rst      |  45 +-
 doc/guides/rel_notes/release_24_11.rst        |  14 +
 lib/eal/common/eal_common_lcore_var.c         |  79 ++++
 lib/eal/common/meson.build                    |   1 +
 lib/eal/common/rte_random.c                   |  28 +-
 lib/eal/common/rte_service.c                  | 117 ++---
 lib/eal/include/meson.build                   |   1 +
 lib/eal/include/rte_lcore_var.h               | 390 ++++++++++++++++
 lib/eal/version.map                           |   2 +
 lib/eal/x86/rte_power_intrinsics.c            |  17 +-
 lib/power/rte_power_pmd_mgmt.c                |  35 +-
 17 files changed, 1339 insertions(+), 93 deletions(-)
 create mode 100644 app/test/test_lcore_var.c
 create mode 100644 app/test/test_lcore_var_perf.c
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v7 1/7] eal: add static per-lcore memory allocation facility
  2024-09-18  8:26                                               ` [PATCH v7 0/7] Lcore variables Mattias Rönnblom
@ 2024-09-18  8:26                                                 ` Mattias Rönnblom
  2024-09-18  9:23                                                   ` Konstantin Ananyev
  2024-09-18  8:26                                                 ` [PATCH v7 2/7] eal: add lcore variable functional tests Mattias Rönnblom
                                                                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  8:26 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Introduce DPDK per-lcore id variables, or lcore variables for short.

An lcore variable has one value for every current and future lcore
id-equipped thread.

The primary <rte_lcore_var.h> use case is for statically allocating
small, frequently-accessed data structures, for which one instance
should exist for each lcore.

Lcore variables are similar to thread-local storage (TLS, e.g., C11
_Thread_local), but decouple the values' lifetime from that of the
threads.

Lcore variables are also similar, in terms of functionality, to the
FreeBSD kernel's DPCPU_*() family of macros and the associated
build-time machinery. DPCPU uses linker scripts, which effectively
prevents the reuse of its otherwise seemingly viable approach.

The currently-prevailing way to solve the same problem as lcore
variables is to keep a module's per-lcore data as an
RTE_MAX_LCORE-sized array of cache-aligned, RTE_CACHE_GUARDed structs.
The benefit of lcore variables over this approach is that data related
to the same lcore is now kept close (spatially, in memory), rather
than data used by the same module, which in turn avoids excessive use
of padding, polluting caches with unused data.
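
Conceptually, the handle returned for an lcore variable is a pointer
carrying the variable's offset into a set of per-lcore id buffers, so
looking up the value for a given lcore id is plain address
arithmetic. A sketch of the mechanism (mirroring
rte_lcore_var_lcore_ptr() further down in this patch; 'value_ptr' is
an illustrative name only):

 static inline void *
 value_ptr(void *handle, unsigned int lcore_id)
 {
         /* the values of one variable are spaced RTE_MAX_LCORE_VAR
          * bytes apart, one slot per lcore id */
         return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
 }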

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

PATCH v7:
 * Add () to the FOREACH lcore id macro parameter, to allow arbitrary
   expression, not just a simple variable name, being passed.
   (Konstantin Ananyev)

PATCH v6:
 * Have API user provide the loop variable in the FOREACH macro, to
   avoid subtle bugs where the loop variable name clashes with some
   other user-defined variable. (Konstantin Ananyev)

PATCH v5:
 * Update EAL programming guide.

PATCH v2:
 * Add Windows support. (Morten Brørup)
 * Fix lcore variables API index reference. (Morten Brørup)
 * Various improvements of the API documentation. (Morten Brørup)
 * Elimination of unused symbol in version.map. (Morten Brørup)

PATCH:
 * Update MAINTAINERS and release notes.
 * Stop covering included files in extern "C" {}.

RFC v6:
 * Include <stdlib.h> to get aligned_alloc().
 * Tweak documentation (grammar).
 * Provide API-level guarantees that lcore variable values take on an
   initial value of zero.
 * Fix misplaced __rte_cache_aligned in the API doc example.

RFC v5:
 * In Doxygen, consistently use @<cmd> (and not \<cmd>).
 * The RTE_LCORE_VAR_GET() and SET() convenience access macros
   covered an uncommon use case, where the lcore value is of a
   primitive type, rather than a struct, and are thus eliminated
   from the API. (Morten Brørup)
 * In the wake of the GET()/SET() removal, rename RTE_LCORE_VAR_PTR()
   to RTE_LCORE_VAR_VALUE().
 * The underscores are removed from __rte_lcore_var_lcore_ptr() to
   signal that this function is a part of the public API.
 * Macro arguments are documented.

RFC v4:
 * Replace large static array with libc heap-allocated memory. One
   implication of this change is there no longer exists a fixed upper
   bound for the total amount of memory used by lcore variables.
   RTE_MAX_LCORE_VAR has changed meaning, and now represent the
   maximum size of any individual lcore variable value.
 * Fix issues in example. (Morten Brørup)
 * Improve access macro type checking. (Morten Brørup)
 * Refer to the lcore variable handle as "handle" and not "name" in
   various macros.
 * Document lack of thread safety in rte_lcore_var_alloc().
 * Provide API-level assurance that the lcore variable handle is
   always non-NULL, to allow applications to use NULL to mean
   "not yet allocated".
 * Note zero-sized allocations are not allowed.
 * Give API-level guarantee the lcore variable values are zeroed.

RFC v3:
 * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
 * Update example to reflect FOREACH macro name change (in RFC v2).

RFC v2:
 * Use alignof to derive alignment requirements. (Morten Brørup)
 * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
   *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
 * Allow user-specified alignment, but limit max to cache line size.
---
 MAINTAINERS                                   |   6 +
 config/rte_config.h                           |   1 +
 doc/api/doxy-api-index.md                     |   1 +
 .../prog_guide/env_abstraction_layer.rst      |  45 +-
 doc/guides/rel_notes/release_24_11.rst        |  14 +
 lib/eal/common/eal_common_lcore_var.c         |  79 ++++
 lib/eal/common/meson.build                    |   1 +
 lib/eal/include/meson.build                   |   1 +
 lib/eal/include/rte_lcore_var.h               | 390 ++++++++++++++++++
 lib/eal/version.map                           |   2 +
 10 files changed, 534 insertions(+), 6 deletions(-)
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h

diff --git a/MAINTAINERS b/MAINTAINERS
index c5a703b5c0..362d9a3f28 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -282,6 +282,12 @@ F: lib/eal/include/rte_random.h
 F: lib/eal/common/rte_random.c
 F: app/test/test_rand_perf.c
 
+Lcore Variables
+M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
+F: lib/eal/include/rte_lcore_var.h
+F: lib/eal/common/eal_common_lcore_var.c
+F: app/test/test_lcore_var.c
+
 ARM v7
 M: Wathsala Vithanage <wathsala.vithanage@arm.com>
 F: config/arm/
diff --git a/config/rte_config.h b/config/rte_config.h
index dd7bb0d35b..311692e498 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -41,6 +41,7 @@
 /* EAL defines */
 #define RTE_CACHE_GUARD_LINES 1
 #define RTE_MAX_HEAPS 32
+#define RTE_MAX_LCORE_VAR 1048576
 #define RTE_MAX_MEMSEG_LISTS 128
 #define RTE_MAX_MEMSEG_PER_LIST 8192
 #define RTE_MAX_MEM_MB_PER_LIST 32768
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index f9f0300126..ed577f14ee 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -99,6 +99,7 @@ The public API headers are grouped by topics:
   [interrupts](@ref rte_interrupts.h),
   [launch](@ref rte_launch.h),
   [lcore](@ref rte_lcore.h),
+  [lcore variables](@ref rte_lcore_var.h),
   [per-lcore](@ref rte_per_lcore.h),
   [service cores](@ref rte_service.h),
   [keepalive](@ref rte_keepalive.h),
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 9559c12a98..12b49672a6 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -433,12 +433,45 @@ with them once they're registered.
 Per-lcore and Shared Variables
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. note::
-
-    lcore refers to a logical execution unit of the processor, sometimes called a hardware *thread*.
-
-Shared variables are the default behavior.
-Per-lcore variables are implemented using *Thread Local Storage* (TLS) to provide per-thread local storage.
+By default, static variables, blocks allocated on the DPDK heap, and
+other types of memory are shared by all DPDK threads.
+
+An application, a DPDK library or a PMD may opt to keep per-thread
+state.
+
+Per-thread data may be maintained using either *lcore variables*
+(``rte_lcore_var.h``), *thread-local storage (TLS)*
+(``rte_per_lcore.h``), or a static array of ``RTE_MAX_LCORE``
+elements, indexed by ``rte_lcore_id()``. These methods allow
+per-lcore data to be a largely module-internal affair, not
+directly visible in the module's API. Another possibility is to deal
+explicitly with per-thread aspects in the API (e.g., the ports of the
+Eventdev API).
+
+Lcore variables are suitable for small objects statically allocated
+at the time of module or application initialization. An lcore
+variable takes on one value for each lcore id-equipped thread (i.e.,
+for EAL threads and registered non-EAL threads, in total
+``RTE_MAX_LCORE`` instances). The lifetime of lcore variables is
+detached from that of the owning threads, and they may thus be
+initialized prior to the owners having been created.
+
+Variables with thread-local storage are allocated at the time of
+thread creation, and exist until the thread terminates, for every
+thread in the process. Only very small objects should be allocated in
+TLS, since large TLS objects significantly slow down thread creation
+and may needlessly increase the memory footprint of applications that
+make extensive use of unregistered threads.
+
+A common but now largely obsolete DPDK pattern is to use a static
+array sized according to the maximum number of lcore id-equipped
+threads (i.e., with ``RTE_MAX_LCORE`` elements). To avoid *false
+sharing*, each element must both be cache-aligned and include a
+``RTE_CACHE_GUARD``. Such extensive use of padding causes internal
+fragmentation (i.e., unused space) and lowers cache hit rates.
+
+For more discussions on per-lcore state, see the ``rte_lcore_var.h``
+API documentation.
 
 Logs
 ~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 0ff70d9057..a3884f7491 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -55,6 +55,20 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added EAL per-lcore static memory allocation facility.**
+
+    Added EAL API <rte_lcore_var.h> for statically allocating small,
+    frequently-accessed data structures, for which one instance should
+    exist for each EAL thread and registered non-EAL thread.
+
+    With lcore variables, data is organized spatially on a per-lcore id
+    basis, rather than per library or PMD, avoiding the need for cache
+    aligning (or RTE_CACHE_GUARDing) data structures, which in turn
+    reduces CPU cache internal fragmentation, improving performance.
+
+    Lcore variables are similar to thread-local storage (TLS, e.g.,
+    C11 _Thread_local), but decouple the values' lifetime from that
+    of the threads.
 
 Removed Items
 -------------
diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
new file mode 100644
index 0000000000..6b7690795e
--- /dev/null
+++ b/lib/eal/common/eal_common_lcore_var.c
@@ -0,0 +1,79 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdlib.h>
+
+#ifdef RTE_EXEC_ENV_WINDOWS
+#include <malloc.h>
+#endif
+
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_log.h>
+
+#include <rte_lcore_var.h>
+
+#include "eal_private.h"
+
+#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
+
+static void *lcore_buffer;
+static size_t offset = RTE_MAX_LCORE_VAR;
+
+static void *
+lcore_var_alloc(size_t size, size_t align)
+{
+	void *handle;
+	unsigned int lcore_id;
+	void *value;
+
+	offset = RTE_ALIGN_CEIL(offset, align);
+
+	if (offset + size > RTE_MAX_LCORE_VAR) {
+#ifdef RTE_EXEC_ENV_WINDOWS
+		lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
+					       RTE_CACHE_LINE_SIZE);
+#else
+		lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
+					     LCORE_BUFFER_SIZE);
+#endif
+		RTE_VERIFY(lcore_buffer != NULL);
+
+		offset = 0;
+	}
+
+	handle = RTE_PTR_ADD(lcore_buffer, offset);
+
+	offset += size;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, value, handle)
+		memset(value, 0, size);
+
+	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
+		"%"PRIuPTR"-byte alignment", size, align);
+
+	return handle;
+}
+
+void *
+rte_lcore_var_alloc(size_t size, size_t align)
+{
+	/* Having the per-lcore buffer size aligned on cache lines,
+	 * as well as having the base pointer aligned on a cache
+	 * line boundary, assures that aligned offsets also translate
+	 * to aligned pointers across all values.
+	 */
+	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
+	RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
+	RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
+
+	/* '0' means asking for worst-case alignment requirements */
+	if (align == 0)
+		align = alignof(max_align_t);
+
+	RTE_ASSERT(rte_is_power_of_2(align));
+
+	return lcore_var_alloc(size, align);
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 22a626ba6f..d41403680b 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -18,6 +18,7 @@ sources += files(
         'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
+        'eal_common_lcore_var.c',
         'eal_common_mcfg.c',
         'eal_common_memalloc.c',
         'eal_common_memory.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index e94b056d46..9449253e23 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -27,6 +27,7 @@ headers += files(
         'rte_keepalive.h',
         'rte_launch.h',
         'rte_lcore.h',
+        'rte_lcore_var.h',
         'rte_lock_annotations.h',
         'rte_malloc.h',
         'rte_mcslock.h',
diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
new file mode 100644
index 0000000000..894100d1e4
--- /dev/null
+++ b/lib/eal/include/rte_lcore_var.h
@@ -0,0 +1,390 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#ifndef _RTE_LCORE_VAR_H_
+#define _RTE_LCORE_VAR_H_
+
+/**
+ * @file
+ *
+ * RTE Lcore variables
+ *
+ * This API provides a mechanism to create and access per-lcore id
+ * variables in a space- and cycle-efficient manner.
+ *
+ * A per-lcore id variable (or lcore variable for short) has one value
+ * for each EAL thread and registered non-EAL thread. There is one
+ * instance for each current and future lcore id-equipped thread, with
+ * a total of RTE_MAX_LCORE instances. The value of an lcore variable
+ * for a particular lcore id is independent of other values (for
+ * other lcore ids) within the same lcore variable.
+ *
+ * In order to access the values of an lcore variable, a handle is
+ * used. The type of the handle is a pointer to the value's type
+ * (e.g., for an @c uint32_t lcore variable, the handle is a
+ * <code>uint32_t *</code>). The handle type is used to inform the
+ * access macros of the type of the values. A handle may be passed
+ * between modules and threads just like any pointer, but its value
+ * must be treated as an opaque identifier. An allocated handle
+ * never has the value NULL.
+ *
+ * @b Creation
+ *
+ * An lcore variable is created in two steps:
+ *  1. Define an lcore variable handle by using @ref RTE_LCORE_VAR_HANDLE.
+ *  2. Allocate lcore variable storage and initialize the handle with
+ *     a unique identifier by @ref RTE_LCORE_VAR_ALLOC or
+ *     @ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the time of
+ *     module initialization, but may be done at any time.
+ *
+ * An lcore variable is not tied to the owning thread's lifetime. It's
+ * available for use by any thread immediately after having been
+ * allocated, and continues to be available throughout the lifetime of
+ * the EAL.
+ *
+ * Lcore variables cannot and need not be freed.
+ *
+ * @b Access
+ *
+ * The value of any lcore variable for any lcore id may be accessed
+ * from any thread (including unregistered threads), but it should
+ * only be *frequently* read from or written to by the owner.
+ *
+ * Values of the same lcore variable but owned by two different lcore
+ * ids may be frequently read or written by the owners without risking
+ * false sharing.
+ *
+ * An appropriate synchronization mechanism (e.g., atomic loads and
+ * stores) should be employed to assure there are no data races between
+ * the owning thread and any non-owner threads accessing the same
+ * lcore variable instance.
+ *
+ * The value of the lcore variable for a particular lcore id is
+ * accessed using @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * A common pattern is for an EAL thread or a registered non-EAL
+ * thread to access its own lcore variable value. For this purpose, a
+ * short-hand exists in the form of @ref RTE_LCORE_VAR_VALUE.
+ *
+ * Although the handle (as defined by @ref RTE_LCORE_VAR_HANDLE) is a
+ * pointer with the same type as the value, it may not be directly
+ * dereferenced and must be treated as an opaque identifier.
+ *
+ * Lcore variable handles and value pointers may be freely passed
+ * between different threads.
+ *
+ * @b Storage
+ *
+ * An lcore variable's values may be of a primitive type like @c int,
+ * but would more typically be a @c struct.
+ *
+ * The lcore variable handle introduces a per-variable (not
+ * per-value/per-lcore id) overhead of @c sizeof(void *) bytes, so
+ * there are some memory footprint gains to be made by organizing all
+ * per-lcore id data for a particular module as one lcore variable
+ * (e.g., as a struct).
+ *
+ * An application may choose to define an lcore variable handle, which
+ * it then never goes on to allocate.
+ *
+ * The size of an lcore variable's value must be less than the DPDK
+ * build-time constant @c RTE_MAX_LCORE_VAR.
+ *
+ * The lcore variable values are stored in a series of lcore buffers, which
+ * are allocated from the libc heap. Heap allocation failures are
+ * treated as fatal.
+ *
+ * Lcore variables should generally *not* be @ref __rte_cache_aligned
+ * and need *not* include a @ref RTE_CACHE_GUARD field, since the use
+ * of these constructs is designed to avoid false sharing. In the
+ * case of an lcore variable instance, the thread most recently
+ * accessing nearby data structures should almost always be the lcore
+ * variable's owner. Adding padding will increase the effective memory
+ * working set size, potentially reducing performance.
+ *
+ * Lcore variable values take on an initial value of zero.
+ *
+ * @b Example
+ *
+ * Below is an example of the use of an lcore variable:
+ *
+ * @code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ * };
+ *
+ * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
+ *
+ * long foo_get_a_plus_b(void)
+ * {
+ *         struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(lcore_states);
+ *
+ *         return state->a + state->b;
+ * }
+ *
+ * RTE_INIT(rte_foo_init)
+ * {
+ *         RTE_LCORE_VAR_ALLOC(lcore_states);
+ *
+ *         unsigned int lcore_id;
+ *         struct foo_lcore_state *state;
+ *         RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, state, lcore_states) {
+ *                 (initialize 'state')
+ *         }
+ *
+ *         (other initialization)
+ * }
+ * @endcode
+ *
+ *
+ * @b Alternatives
+ *
+ * Lcore variables are designed to replace a pattern exemplified below:
+ * @code{.c}
+ * struct __rte_cache_aligned foo_lcore_state {
+ *         int a;
+ *         long b;
+ *         RTE_CACHE_GUARD;
+ * };
+ *
+ * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
+ * @endcode
+ *
+ * This scheme is simple and effective, but has one drawback: the data
+ * is organized so that objects related to all lcores for a particular
+ * module are kept close in memory. At a bare minimum, this requires
+ * sizing data structures (e.g., using `__rte_cache_aligned`) to an
+ * even number of cache lines to avoid false sharing. With CPU
+ * hardware prefetching and memory loads resulting from speculative
+ * execution (functions which seemingly are getting more eager faster
+ * than they are getting more intelligent), one or more "guard" cache
+ * lines may be required to separate one lcore's data from another's.
+ *
+ * Lcore variables have the upside of working with, not against, the
+ * CPU's assumptions and for example next-line prefetchers may well
+ * work the way their designers intended (i.e., to the benefit, not
+ * detriment, of system performance).
+ *
+ * Another alternative to @ref rte_lcore_var.h is the @ref
+ * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
+ * e.g., GCC __thread or C11 _Thread_local). The main differences
+ * between by using the various forms of TLS (e.g., @ref
+ * RTE_DEFINE_PER_LCORE or _Thread_local) and the use of lcore
+ * variables are:
+ *
+ *   * The existence and non-existence of a thread-local variable
+ *     instance follow that of the owning thread. The data cannot be
+ *     accessed before the thread has been created, nor after it has
+ *     exited. As a result, thread-local variables must be initialized in
+ *     a "lazy" manner (e.g., at the point of thread creation). Lcore
+ *     variables may be accessed immediately after having been
+ *     allocated (which may be prior to any thread beyond the main
+ *     thread running).
+ *   * A thread-local variable is duplicated across all threads in the
+ *     process, including unregistered non-EAL threads (i.e.,
+ *     "regular" threads). For DPDK applications heavily relying on
+ *     multi-threading (in conjunction with DPDK's "one thread per core"
+ *     pattern), either by having many concurrent threads or
+ *     creating/destroying threads at a high rate, an excessive use of
+ *     thread-local variables may cause inefficiencies (e.g.,
+ *     increased thread creation overhead due to thread-local storage
+ *     initialization or increased total RAM footprint usage). Lcore
+ *     variables *only* exist for threads with an lcore id.
+ *   * Whether data in thread-local storage may be shared between threads
+ *     (i.e., whether a pointer to a thread-local variable can be passed
+ *     to and successfully dereferenced by a non-owning thread) depends on
+ *     the details of the TLS implementation. With GCC __thread and
+ *     GCC _Thread_local, such data sharing is supported. In the C11
+ *     standard, the result of accessing another thread's
+ *     _Thread_local object is implementation-defined. Lcore variable
+ *     instances may be accessed reliably by any thread.
+ */
+
+#include <stddef.h>
+#include <stdalign.h>
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_lcore.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Given the lcore variable type, produces the type of the lcore
+ * variable handle.
+ */
+#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
+	type *
+
+/**
+ * Define an lcore variable handle.
+ *
+ * This macro defines a variable which is used as a handle to access
+ * the various instances of a per-lcore id variable.
+ *
+ * The aim with this macro is to make clear at the point of
+ * declaration that this is an lcore handle, rather than a regular
+ * pointer.
+ *
+ * Add @b static as a prefix in case the lcore variable is only to be
+ * accessed from a particular translation unit.
+ */
+#define RTE_LCORE_VAR_HANDLE(type, name)	\
+	RTE_LCORE_VAR_HANDLE_TYPE(type) name
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, align)	\
+	handle = rte_lcore_var_alloc(size, align)
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle,
+ * with values aligned for any type of object.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE(handle, size)	\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, 0)
+
+/**
+ * Allocate space for an lcore variable of the size and alignment requirements
+ * suggested by the handle pointer type, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC(handle)					\
+	RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, sizeof(*(handle)),	\
+				       alignof(typeof(*(handle))))
+
+/**
+ * Allocate an explicitly-sized, explicitly-aligned lcore variable by
+ * means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align)		\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align);	\
+	}
+
+/**
+ * Allocate an explicitly-sized lcore variable by means of a @ref
+ * RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE(name, size)		\
+	RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
+
+/**
+ * Allocate an lcore variable by means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT(name)					\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC(name);				\
+	}
+
+/**
+ * Get void pointer to lcore variable instance with the specified
+ * lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
+ */
+static inline void *
+rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
+{
+	return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
+}
+
+/**
+ * Get pointer to lcore variable instance with the specified lcore id.
+ *
+ * @param lcore_id
+ *   The lcore id specifying which of the @c RTE_MAX_LCORE value
+ *   instances should be accessed. The lcore id need not be valid
+ *   (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ *   is also not valid (and thus should not be dereferenced).
+ * @param handle
+ *   The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle)			\
+	((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
+
+/**
+ * Get pointer to lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_VALUE(handle) \
+	RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
+
+/**
+ * Iterate over each lcore id's value for an lcore variable.
+ *
+ * @param lcore_id
+ *   An <code>unsigned int</code> variable successively set to each
+ *   valid lcore id (up to @c RTE_MAX_LCORE).
+ * @param value
+ *   A pointer variable successively set to point to the lcore
+ *   variable value instance of the lcore id currently being processed.
+ * @param handle
+ *   The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, value, handle)		\
+	for ((lcore_id) =						\
+		     (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
+	     (lcore_id) < RTE_MAX_LCORE;				\
+	     (lcore_id)++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, \
+							       handle))
+
+/**
+ * Allocate space in the per-lcore id buffers for an lcore variable.
+ *
+ * The pointer returned is only an opaque identifier of the variable. To
+ * get an actual pointer to a particular instance of the variable use
+ * @ref RTE_LCORE_VAR_VALUE or @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * The lcore variable values' memory is set to zero.
+ *
+ * The allocation is always successful, barring a fatal exhaustion of
+ * the per-lcore id buffer space.
+ *
+ * rte_lcore_var_alloc() is not multi-thread safe.
+ *
+ * @param size
+ *   The size (in bytes) of the variable's per-lcore id value. Must be > 0.
+ * @param align
+ *   If 0, the values will be suitably aligned for any kind of type
+ *   (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
+ *   on a multiple of *align*, which must be a power of 2 and equal or
+ *   less than @c RTE_CACHE_LINE_SIZE.
+ * @return
+ *   The variable's handle, stored in a void pointer value. The value
+ *   is always non-NULL.
+ */
+__rte_experimental
+void *
+rte_lcore_var_alloc(size_t size, size_t align);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LCORE_VAR_H_ */
diff --git a/lib/eal/version.map b/lib/eal/version.map
index e3ff412683..0c80bf7331 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -396,6 +396,8 @@ EXPERIMENTAL {
 
 	# added in 24.03
 	rte_vfio_get_device_info; # WINDOWS_NO_EXPORT
+
+	rte_lcore_var_alloc;
 };
 
 INTERNAL {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v7 2/7] eal: add lcore variable functional tests
  2024-09-18  8:26                                               ` [PATCH v7 0/7] Lcore variables Mattias Rönnblom
  2024-09-18  8:26                                                 ` [PATCH v7 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-09-18  8:26                                                 ` Mattias Rönnblom
  2024-09-18  8:26                                                 ` [PATCH v7 3/7] eal: add lcore variable performance test Mattias Rönnblom
                                                                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  8:26 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Add functional test suite to exercise the <rte_lcore_var.h> API.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

--

PATCH v6:
 * Update FOREACH invocations to match new API.

RFC v5:
 * Adapt tests to reflect the removal of the GET() and SET() macros.

RFC v4:
 * Check all lcore id's values for all variables in the many variables
   test case.
 * Introduce test case for max-sized lcore variables.

RFC v2:
 * Improve alignment-related test coverage.
---
 app/test/meson.build      |   1 +
 app/test/test_lcore_var.c | 436 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 437 insertions(+)
 create mode 100644 app/test/test_lcore_var.c

diff --git a/app/test/meson.build b/app/test/meson.build
index e29258e6ec..48279522f0 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -103,6 +103,7 @@ source_file_deps = {
     'test_ipsec_sad.c': ['ipsec'],
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
+    'test_lcore_var.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var.c b/app/test/test_lcore_var.c
new file mode 100644
index 0000000000..2a1f258548
--- /dev/null
+++ b/app/test/test_lcore_var.c
@@ -0,0 +1,436 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_launch.h>
+#include <rte_lcore_var.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+#define MIN_LCORES 2
+
+RTE_LCORE_VAR_HANDLE(int, test_int);
+RTE_LCORE_VAR_HANDLE(char, test_char);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized);
+RTE_LCORE_VAR_HANDLE(short, test_short);
+RTE_LCORE_VAR_HANDLE(long, test_long_sized_aligned);
+
+struct int_checker_state {
+	int old_value;
+	int new_value;
+	bool success;
+};
+
+static void
+rand_blk(void *blk, size_t size)
+{
+	size_t i;
+
+	for (i = 0; i < size; i++)
+		((unsigned char *)blk)[i] = (unsigned char)rte_rand();
+}
+
+static bool
+is_ptr_aligned(const void *ptr, size_t align)
+{
+	return ptr != NULL ? (uintptr_t)ptr % align == 0 : false;
+}
+
+static int
+check_int(void *arg)
+{
+	struct int_checker_state *state = arg;
+
+	int *ptr = RTE_LCORE_VAR_VALUE(test_int);
+
+	bool naturally_aligned = is_ptr_aligned(ptr, sizeof(int));
+
+	bool equal = *(RTE_LCORE_VAR_VALUE(test_int)) == state->old_value;
+
+	state->success = equal && naturally_aligned;
+
+	*ptr = state->new_value;
+
+	return 0;
+}
+
+RTE_LCORE_VAR_INIT(test_int);
+RTE_LCORE_VAR_INIT(test_char);
+RTE_LCORE_VAR_INIT_SIZE(test_long_sized, 32);
+RTE_LCORE_VAR_INIT(test_short);
+RTE_LCORE_VAR_INIT_SIZE_ALIGN(test_long_sized_aligned, sizeof(long),
+			      RTE_CACHE_LINE_SIZE);
+
+static int
+test_int_lvar(void)
+{
+	unsigned int lcore_id;
+
+	struct int_checker_state states[RTE_MAX_LCORE] = {};
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+
+		state->old_value = (int)rte_rand();
+		state->new_value = (int)rte_rand();
+
+		*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int) =
+			state->old_value;
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_int, &states[lcore_id], lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct int_checker_state *state = &states[lcore_id];
+		int value;
+
+		TEST_ASSERT(state->success, "Unexpected value "
+			    "encountered on lcore %d", lcore_id);
+
+		value = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_int);
+		TEST_ASSERT_EQUAL(state->new_value, value,
+				  "Lcore %d failed to update int", lcore_id);
+	}
+
+	/* take the opportunity to test the foreach macro */
+	int *v;
+	unsigned int i = 0;
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, v, test_int) {
+		TEST_ASSERT_EQUAL(i, lcore_id, "Encountered lcore id %d "
+				  "while expecting %d during iteration",
+				  lcore_id, i);
+		TEST_ASSERT_EQUAL(states[lcore_id].new_value, *v,
+				  "Unexpected value on lcore %d during "
+				  "iteration", lcore_id);
+		i++;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_sized_alignment(void)
+{
+	unsigned int lcore_id;
+	long *v;
+
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, v, test_long_sized) {
+		TEST_ASSERT(is_ptr_aligned(v, alignof(long)),
+			    "Type-derived alignment failed");
+	}
+
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, v, test_long_sized_aligned) {
+		TEST_ASSERT(is_ptr_aligned(v, RTE_CACHE_LINE_SIZE),
+			    "Explicit alignment failed");
+	}
+
+	return TEST_SUCCESS;
+}
+
+/* private, larger, struct */
+#define TEST_STRUCT_DATA_SIZE 1234
+
+struct test_struct {
+	uint8_t data[TEST_STRUCT_DATA_SIZE];
+};
+
+static RTE_LCORE_VAR_HANDLE(char, before_struct);
+static RTE_LCORE_VAR_HANDLE(struct test_struct, test_struct);
+static RTE_LCORE_VAR_HANDLE(char, after_struct);
+
+struct struct_checker_state {
+	struct test_struct old_value;
+	struct test_struct new_value;
+	bool success;
+};
+
+static int check_struct(void *arg)
+{
+	struct struct_checker_state *state = arg;
+
+	struct test_struct *lcore_struct = RTE_LCORE_VAR_VALUE(test_struct);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_struct, alignof(struct test_struct));
+
+	bool equal = memcmp(lcore_struct->data, state->old_value.data,
+			    TEST_STRUCT_DATA_SIZE) == 0;
+
+	state->success = equal && properly_aligned;
+
+	memcpy(lcore_struct->data, state->new_value.data,
+	       TEST_STRUCT_DATA_SIZE);
+
+	return 0;
+}
+
+static int
+test_struct_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_struct);
+	RTE_LCORE_VAR_ALLOC(test_struct);
+	RTE_LCORE_VAR_ALLOC(after_struct);
+
+	struct struct_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+
+		rand_blk(state->old_value.data, TEST_STRUCT_DATA_SIZE);
+		rand_blk(state->new_value.data, TEST_STRUCT_DATA_SIZE);
+
+		memcpy(RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct)->data,
+		       state->old_value.data, TEST_STRUCT_DATA_SIZE);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_struct, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct struct_checker_state *state = &states[lcore_id];
+		struct test_struct *lstruct =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_struct);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = memcmp(lstruct->data, state->new_value.data,
+				    TEST_STRUCT_DATA_SIZE) == 0;
+
+		TEST_ASSERT(equal, "Lcore %d failed to update struct",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_struct);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_struct);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "struct was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "struct was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define TEST_ARRAY_SIZE 99
+
+typedef uint16_t test_array_t[TEST_ARRAY_SIZE];
+
+static void test_array_init_rand(test_array_t a)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		a[i] = (uint16_t)rte_rand();
+}
+
+static bool test_array_equal(test_array_t a, test_array_t b)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++) {
+		if (a[i] != b[i])
+			return false;
+	}
+	return true;
+}
+
+static void test_array_copy(test_array_t dst, const test_array_t src)
+{
+	size_t i;
+	for (i = 0; i < TEST_ARRAY_SIZE; i++)
+		dst[i] = src[i];
+}
+
+static RTE_LCORE_VAR_HANDLE(char, before_array);
+static RTE_LCORE_VAR_HANDLE(test_array_t, test_array);
+static RTE_LCORE_VAR_HANDLE(char, after_array);
+
+struct array_checker_state {
+	test_array_t old_value;
+	test_array_t new_value;
+	bool success;
+};
+
+static int check_array(void *arg)
+{
+	struct array_checker_state *state = arg;
+
+	test_array_t *lcore_array = RTE_LCORE_VAR_VALUE(test_array);
+
+	bool properly_aligned =
+		is_ptr_aligned(lcore_array, alignof(test_array_t));
+
+	bool equal = test_array_equal(*lcore_array, state->old_value);
+
+	state->success = equal && properly_aligned;
+
+	test_array_copy(*lcore_array, state->new_value);
+
+	return 0;
+}
+
+static int
+test_array_lvar(void)
+{
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC(before_array);
+	RTE_LCORE_VAR_ALLOC(test_array);
+	RTE_LCORE_VAR_ALLOC(after_array);
+
+	struct array_checker_state states[RTE_MAX_LCORE];
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+
+		test_array_init_rand(state->new_value);
+		test_array_init_rand(state->old_value);
+
+		test_array_copy(*RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+							   test_array),
+				state->old_value);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id)
+		rte_eal_remote_launch(check_array, &states[lcore_id],
+				      lcore_id);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		struct array_checker_state *state = &states[lcore_id];
+		test_array_t *larray =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, test_array);
+
+		TEST_ASSERT(state->success, "Unexpected value encountered on "
+			    "lcore %d", lcore_id);
+
+		bool equal = test_array_equal(*larray, state->new_value);
+
+		TEST_ASSERT(equal, "Lcore %d failed to update array",
+			    lcore_id);
+	}
+
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		char before =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, before_array);
+		char after =
+			*RTE_LCORE_VAR_LCORE_VALUE(lcore_id, after_array);
+
+		TEST_ASSERT_EQUAL(before, 0, "Lcore variable before test "
+				  "array was modified on lcore %d", lcore_id);
+		TEST_ASSERT_EQUAL(after, 0, "Lcore variable after test "
+				  "array was modified on lcore %d", lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+#define MANY_LVARS (2 * RTE_MAX_LCORE_VAR / sizeof(uint32_t))
+
+static int
+test_many_lvars(void)
+{
+	uint32_t **handlers = malloc(sizeof(uint32_t *) * MANY_LVARS);
+	unsigned int i;
+
+	TEST_ASSERT(handlers != NULL, "Unable to allocate memory");
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		RTE_LCORE_VAR_ALLOC(handlers[i]);
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t *v =
+				RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handlers[i]);
+			*v = (uint32_t)(i * lcore_id);
+		}
+	}
+
+	for (i = 0; i < MANY_LVARS; i++) {
+		unsigned int lcore_id;
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			uint32_t v = *RTE_LCORE_VAR_LCORE_VALUE(lcore_id,
+								handlers[i]);
+			TEST_ASSERT_EQUAL((uint32_t)(i * lcore_id), v,
+					  "Unexpected lcore variable value on "
+					  "lcore %d", lcore_id);
+		}
+	}
+
+	free(handlers);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_large_lvar(void)
+{
+	RTE_LCORE_VAR_HANDLE(unsigned char, large);
+	unsigned int lcore_id;
+
+	RTE_LCORE_VAR_ALLOC_SIZE(large, RTE_MAX_LCORE_VAR);
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+
+		memset(ptr, (unsigned char)lcore_id, RTE_MAX_LCORE_VAR);
+	}
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		unsigned char *ptr = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, large);
+		size_t i;
+
+		for (i = 0; i < RTE_MAX_LCORE_VAR; i++)
+			TEST_ASSERT_EQUAL(ptr[i], (unsigned char)lcore_id,
+					  "Large lcore variable value is "
+					  "corrupted on lcore %d.",
+					  lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_int_lvar),
+		TEST_CASE(test_sized_alignment),
+		TEST_CASE(test_struct_lvar),
+		TEST_CASE(test_array_lvar),
+		TEST_CASE(test_many_lvars),
+		TEST_CASE(test_large_lvar),
+		TEST_CASES_END()
+	},
+};
+
+static int test_lcore_var(void)
+{
+	if (rte_lcore_count() < MIN_LCORES) {
+		printf("Not enough cores for lcore_var_autotest; expecting at "
+		       "least %d.\n", MIN_LCORES);
+		return TEST_SKIPPED;
+	}
+
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
+REGISTER_FAST_TEST(lcore_var_autotest, true, false, test_lcore_var);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v7 3/7] eal: add lcore variable performance test
  2024-09-18  8:26                                               ` [PATCH v7 0/7] Lcore variables Mattias Rönnblom
  2024-09-18  8:26                                                 ` [PATCH v7 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
  2024-09-18  8:26                                                 ` [PATCH v7 2/7] eal: add lcore variable functional tests Mattias Rönnblom
@ 2024-09-18  8:26                                                 ` Mattias Rönnblom
  2024-09-18  8:26                                                 ` [PATCH v7 4/7] random: keep PRNG state in lcore variable Mattias Rönnblom
                                                                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  8:26 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom

Add basic micro benchmark for lcore variables, in an attempt to assure
that the overhead isn't significantly greater than alternative
approaches, in scenarios where the benefits aren't expected to show up
(i.e., when plenty of cache is available compared to the working set
size of the per-lcore data).

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>

--

PATCH v6:
 * Use floating point math when calculating per-update latency.
   (Morten Brørup)

PATCH v5:
 * Add variant of thread-local storage with initialization performed
   at the time of thread creation to the benchmark scenarios. (Morten
   Brørup)

PATCH v4:
 * Rework the tests to be a little less unrealistic. Instead of a
   single dummy module using a single variable, use a number of
   variables/modules. In this way, differences in cache effects may
   show up.
 * Add RTE_CACHE_GUARD to better mimic the static array pattern.
   (Morten Brørup)
 * Show latencies as TSC cycles. (Morten Brørup)
---
 app/test/meson.build           |   1 +
 app/test/test_lcore_var_perf.c | 257 +++++++++++++++++++++++++++++++++
 2 files changed, 258 insertions(+)
 create mode 100644 app/test/test_lcore_var_perf.c

diff --git a/app/test/meson.build b/app/test/meson.build
index 48279522f0..d4e0c59900 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -104,6 +104,7 @@ source_file_deps = {
     'test_kvargs.c': ['kvargs'],
     'test_latencystats.c': ['ethdev', 'latencystats', 'metrics'] + sample_packet_forward_deps,
     'test_lcore_var.c': [],
+    'test_lcore_var_perf.c': [],
     'test_lcores.c': [],
     'test_link_bonding.c': ['ethdev', 'net_bond',
         'net'] + packet_burst_generator_deps + virtual_pmd_deps,
diff --git a/app/test/test_lcore_var_perf.c b/app/test/test_lcore_var_perf.c
new file mode 100644
index 0000000000..2680bfb6f7
--- /dev/null
+++ b/app/test/test_lcore_var_perf.c
@@ -0,0 +1,257 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#define MAX_MODS 1024
+
+#include <stdio.h>
+
+#include <rte_bitops.h>
+#include <rte_cycles.h>
+#include <rte_lcore_var.h>
+#include <rte_per_lcore.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+struct mod_lcore_state {
+	uint64_t a;
+	uint64_t b;
+	uint64_t sum;
+};
+
+static void
+mod_init(struct mod_lcore_state *state)
+{
+	state->a = rte_rand();
+	state->b = rte_rand();
+	state->sum = 0;
+}
+
+static __rte_always_inline void
+mod_update(volatile struct mod_lcore_state *state)
+{
+	state->sum += state->a * state->b;
+}
+
+struct __rte_cache_aligned mod_lcore_state_aligned {
+	struct mod_lcore_state mod_state;
+
+	RTE_CACHE_GUARD;
+};
+
+static struct mod_lcore_state_aligned
+sarray_lcore_state[MAX_MODS][RTE_MAX_LCORE];
+
+static void
+sarray_init(void)
+{
+	unsigned int lcore_id = rte_lcore_id();
+	int mod;
+
+	for (mod = 0; mod < MAX_MODS; mod++) {
+		struct mod_lcore_state *mod_state =
+			&sarray_lcore_state[mod][lcore_id].mod_state;
+
+		mod_init(mod_state);
+	}
+}
+
+static __rte_noinline void
+sarray_update(unsigned int mod)
+{
+	unsigned int lcore_id = rte_lcore_id();
+	struct mod_lcore_state *mod_state =
+		&sarray_lcore_state[mod][lcore_id].mod_state;
+
+	mod_update(mod_state);
+}
+
+struct mod_lcore_state_lazy {
+	struct mod_lcore_state mod_state;
+	bool initialized;
+};
+
+/*
+ * Note: it's usually a bad idea to have this much thread-local storage
+ * allocated in a real application, since it will incur a cost on
+ * thread creation and non-lcore thread memory usage.
+ */
+static RTE_DEFINE_PER_LCORE(struct mod_lcore_state_lazy,
+			    tls_lcore_state)[MAX_MODS];
+
+static inline void
+tls_init(struct mod_lcore_state_lazy *state)
+{
+	mod_init(&state->mod_state);
+
+	state->initialized = true;
+}
+
+static __rte_noinline void
+tls_lazy_update(unsigned int mod)
+{
+	struct mod_lcore_state_lazy *state =
+		&RTE_PER_LCORE(tls_lcore_state[mod]);
+
+	/* With thread-local storage, initialization must usually be lazy */
+	if (!state->initialized)
+		tls_init(state);
+
+	mod_update(&state->mod_state);
+}
+
+static __rte_noinline void
+tls_update(unsigned int mod)
+{
+	struct mod_lcore_state_lazy *state =
+		&RTE_PER_LCORE(tls_lcore_state[mod]);
+
+	mod_update(&state->mod_state);
+}
+
+RTE_LCORE_VAR_HANDLE(struct mod_lcore_state, lvar_lcore_state)[MAX_MODS];
+
+static void
+lvar_init(void)
+{
+	unsigned int mod;
+
+	for (mod = 0; mod < MAX_MODS; mod++) {
+		RTE_LCORE_VAR_ALLOC(lvar_lcore_state[mod]);
+
+		struct mod_lcore_state *state =
+			RTE_LCORE_VAR_VALUE(lvar_lcore_state[mod]);
+
+		mod_init(state);
+	}
+}
+
+static __rte_noinline void
+lvar_update(unsigned int mod)
+{
+	struct mod_lcore_state *state =
+		RTE_LCORE_VAR_VALUE(lvar_lcore_state[mod]);
+
+	mod_update(state);
+}
+
+static void
+shuffle(unsigned int *elems, size_t len)
+{
+	size_t i;
+
+	for (i = len - 1; i > 0; i--) {
+		unsigned int other = rte_rand_max(i + 1);
+
+		unsigned int tmp = elems[other];
+		elems[other] = elems[i];
+		elems[i] = tmp;
+	}
+}
+
+#define ITERATIONS UINT64_C(10000000)
+
+static inline double
+benchmark_access(const unsigned int *mods, unsigned int num_mods,
+		 void (*init_fun)(void), void (*update_fun)(unsigned int))
+{
+	unsigned int i;
+	double start;
+	double end;
+	double latency;
+	unsigned int num_mods_mask = num_mods - 1;
+
+	RTE_VERIFY(rte_is_power_of_2(num_mods));
+
+	if (init_fun != NULL)
+		init_fun();
+
+	/* Warm up cache and make sure TLS variables are initialized */
+	for (i = 0; i < num_mods; i++)
+		update_fun(i);
+
+	start = rte_rdtsc();
+
+	for (i = 0; i < ITERATIONS; i++)
+		update_fun(mods[i & num_mods_mask]);
+
+	end = rte_rdtsc();
+
+	latency = (end - start) / (double)ITERATIONS;
+
+	return latency;
+}
+
+static void
+test_lcore_var_access_n(unsigned int num_mods)
+{
+	double sarray_latency;
+	double tls_latency;
+	double lazy_tls_latency;
+	double lvar_latency;
+	unsigned int mods[num_mods];
+	unsigned int i;
+
+	for (i = 0; i < num_mods; i++)
+		mods[i] = i;
+
+	shuffle(mods, num_mods);
+
+	sarray_latency =
+		benchmark_access(mods, num_mods, sarray_init, sarray_update);
+
+	tls_latency =
+		benchmark_access(mods, num_mods, NULL, tls_update);
+
+	lazy_tls_latency =
+		benchmark_access(mods, num_mods, NULL, tls_lazy_update);
+
+	lvar_latency =
+		benchmark_access(mods, num_mods, lvar_init, lvar_update);
+
+	printf("%17u %8.1f %14.1f %15.1f %10.1f\n", num_mods, sarray_latency,
+	       tls_latency, lazy_tls_latency, lvar_latency);
+}
+
+/*
+ * The potential performance benefit of lcore variables compared to
+ * the use of statically sized, lcore id-indexed arrays is not
+ * shorter latencies in a scenario with low cache pressure, but rather
+ * fewer cache misses in a real-world scenario, with extensive cache
+ * usage. These tests are a crude simulation of such, using <N> dummy
+ * modules, each with a small, per-lcore state. Note however that
+ * these tests have very little non-lcore/thread-local state, which is
+ * unrealistic.
+ */
+
+static int
+test_lcore_var_access(void)
+{
+	unsigned int num_mods = 1;
+
+	printf("- Latencies [TSC cycles/update] -\n");
+	printf("Number of           Static   Thread-local    Thread-local      Lcore\n");
+	printf("Modules/Variables    Array        Storage  Storage (Lazy)  Variables\n");
+
+	for (num_mods = 1; num_mods <= MAX_MODS; num_mods *= 2)
+		test_lcore_var_access_n(num_mods);
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite lcore_var_testsuite = {
+	.suite_name = "lcore variable perf autotest",
+	.unit_test_cases = {
+		TEST_CASE(test_lcore_var_access),
+		TEST_CASES_END()
+	},
+};
+
+static int
+test_lcore_var_perf(void)
+{
+	return unit_test_suite_runner(&lcore_var_testsuite);
+}
+
+REGISTER_PERF_TEST(lcore_var_perf_autotest, test_lcore_var_perf);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v7 4/7] random: keep PRNG state in lcore variable
  2024-09-18  8:26                                               ` [PATCH v7 0/7] Lcore variables Mattias Rönnblom
                                                                   ` (2 preceding siblings ...)
  2024-09-18  8:26                                                 ` [PATCH v7 3/7] eal: add lcore variable performance test Mattias Rönnblom
@ 2024-09-18  8:26                                                 ` Mattias Rönnblom
  2024-09-18  8:26                                                 ` [PATCH v7 5/7] power: keep per-lcore " Mattias Rönnblom
                                                                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  8:26 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom, Konstantin Ananyev

Replace keeping PRNG state in a RTE_MAX_LCORE-sized static array of
cache-aligned and RTE_CACHE_GUARDed struct instances with keeping the
same state in a more cache-friendly lcore variable.
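
For readers unfamiliar with the new API, the change boils down to the
following pattern -- a minimal sketch of <rte_lcore_var.h> usage, not
the actual patch code:

	/* declare a handle; one value per lcore id */
	RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);

	RTE_INIT(rte_rand_init)
	{
		RTE_LCORE_VAR_ALLOC(rand_state);
	}

	/* from an lcore id-equipped thread: that lcore's own instance */
	struct rte_rand_state *state = RTE_LCORE_VAR_VALUE(rand_state);

	/* from any thread: a particular lcore's instance */
	struct rte_rand_state *other =
		RTE_LCORE_VAR_LCORE_VALUE(lcore_id, rand_state);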

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

--

RFC v3:
 * Remove cache alignment on unregistered threads' rte_rand_state.
   (Morten Brørup)
---
 lib/eal/common/rte_random.c | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index 90e91b3c4f..a8d00308dd 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -11,6 +11,7 @@
 #include <rte_branch_prediction.h>
 #include <rte_cycles.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_random.h>
 
 struct __rte_cache_aligned rte_rand_state {
@@ -19,14 +20,12 @@ struct __rte_cache_aligned rte_rand_state {
 	uint64_t z3;
 	uint64_t z4;
 	uint64_t z5;
-	RTE_CACHE_GUARD;
 };
 
-/* One instance each for every lcore id-equipped thread, and one
- * additional instance to be shared by all others threads (i.e., all
- * unregistered non-EAL threads).
- */
-static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];
+RTE_LCORE_VAR_HANDLE(struct rte_rand_state, rand_state);
+
+/* instance to be shared by all unregistered non-EAL threads */
+static struct rte_rand_state unregistered_rand_state;
 
 static uint32_t
 __rte_rand_lcg32(uint32_t *seed)
@@ -85,8 +84,14 @@ rte_srand(uint64_t seed)
 	unsigned int lcore_id;
 
 	/* add lcore_id to seed to avoid having the same sequence */
-	for (lcore_id = 0; lcore_id < RTE_DIM(rand_states); lcore_id++)
-		__rte_srand_lfsr258(seed + lcore_id, &rand_states[lcore_id]);
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		struct rte_rand_state *lcore_state =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore_id, rand_state);
+
+		__rte_srand_lfsr258(seed + lcore_id, lcore_state);
+	}
+
+	__rte_srand_lfsr258(seed + lcore_id, &unregistered_rand_state);
 }
 
 static __rte_always_inline uint64_t
@@ -124,11 +129,10 @@ struct rte_rand_state *__rte_rand_get_state(void)
 
 	idx = rte_lcore_id();
 
-	/* last instance reserved for unregistered non-EAL threads */
 	if (unlikely(idx == LCORE_ID_ANY))
-		idx = RTE_MAX_LCORE;
+		return &unregistered_rand_state;
 
-	return &rand_states[idx];
+	return RTE_LCORE_VAR_VALUE(rand_state);
 }
 
 uint64_t
@@ -228,6 +232,8 @@ RTE_INIT(rte_rand_init)
 {
 	uint64_t seed;
 
+	RTE_LCORE_VAR_ALLOC(rand_state);
+
 	seed = __rte_random_initial_seed();
 
 	rte_srand(seed);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v7 5/7] power: keep per-lcore state in lcore variable
  2024-09-18  8:26                                               ` [PATCH v7 0/7] Lcore variables Mattias Rönnblom
                                                                   ` (3 preceding siblings ...)
  2024-09-18  8:26                                                 ` [PATCH v7 4/7] random: keep PRNG state in lcore variable Mattias Rönnblom
@ 2024-09-18  8:26                                                 ` Mattias Rönnblom
  2024-09-18  8:26                                                 ` [PATCH v7 6/7] service: " Mattias Rönnblom
                                                                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  8:26 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom, Konstantin Ananyev

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.
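
Iteration over all lcores' values also moves to the new FOREACH API
(see the changelog below); in sketch form, the initialization pattern
becomes:

	unsigned int lcore_id;
	struct pmd_core_cfg *lcore_cfg;

	RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, lcore_cfg, lcore_cfgs)
		TAILQ_INIT(&lcore_cfg->head);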

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

--

PATCH v6:
 * Update FOREACH invocation to match new API.

RFC v3:
 * Replace for loop with FOREACH macro.
---
 lib/power/rte_power_pmd_mgmt.c | 35 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index b1c18a5f56..a981db4b39 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -5,6 +5,7 @@
 #include <stdlib.h>
 
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_cycles.h>
 #include <rte_cpuflags.h>
 #include <rte_malloc.h>
@@ -69,7 +70,7 @@ struct __rte_cache_aligned pmd_core_cfg {
 	uint64_t sleep_target;
 	/**< Prevent a queue from triggering sleep multiple times */
 };
-static struct pmd_core_cfg lcore_cfgs[RTE_MAX_LCORE];
+static RTE_LCORE_VAR_HANDLE(struct pmd_core_cfg, lcore_cfgs);
 
 static inline bool
 queue_equal(const union queue *l, const union queue *r)
@@ -252,12 +253,11 @@ clb_multiwait(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	/* early exit */
 	if (likely(!empty))
@@ -317,13 +317,12 @@ clb_pause(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	struct queue_list_entry *queue_conf = arg;
 	struct pmd_core_cfg *lcore_conf;
 	const bool empty = nb_rx == 0;
 	uint32_t pause_duration = rte_power_pmd_mgmt_get_pause_duration();
 
-	lcore_conf = &lcore_cfgs[lcore];
+	lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 
 	if (likely(!empty))
 		/* early exit */
@@ -358,9 +357,8 @@ clb_scale_freq(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
 		struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
 		uint16_t max_pkts __rte_unused, void *arg)
 {
-	const unsigned int lcore = rte_lcore_id();
 	const bool empty = nb_rx == 0;
-	struct pmd_core_cfg *lcore_conf = &lcore_cfgs[lcore];
+	struct pmd_core_cfg *lcore_conf = RTE_LCORE_VAR_VALUE(lcore_cfgs);
 	struct queue_list_entry *queue_conf = arg;
 
 	if (likely(!empty)) {
@@ -518,7 +516,7 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 		goto end;
 	}
 
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -619,7 +617,7 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 	}
 
 	/* no need to check queue id as wrong queue id would not be enabled */
-	lcore_cfg = &lcore_cfgs[lcore_id];
+	lcore_cfg = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, lcore_cfgs);
 
 	/* check if other queues are stopped as well */
 	ret = cfg_queues_stopped(lcore_cfg);
@@ -769,21 +767,22 @@ rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore)
 }
 
 RTE_INIT(rte_power_ethdev_pmgmt_init) {
-	size_t i;
-	int j;
+	unsigned int lcore_id;
+	struct pmd_core_cfg *lcore_cfg;
+	int i;
+
+	RTE_LCORE_VAR_ALLOC(lcore_cfgs);
 
 	/* initialize all tailqs */
-	for (i = 0; i < RTE_DIM(lcore_cfgs); i++) {
-		struct pmd_core_cfg *cfg = &lcore_cfgs[i];
-		TAILQ_INIT(&cfg->head);
-	}
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, lcore_cfg, lcore_cfgs)
+		TAILQ_INIT(&lcore_cfg->head);
 
 	/* initialize config defaults */
 	emptypoll_max = 512;
 	pause_duration = 1;
 	/* scaling defaults out of range to ensure not used unless set by user or app */
-	for (j = 0; j < RTE_MAX_LCORE; j++) {
-		scale_freq_min[j] = 0;
-		scale_freq_max[j] = UINT32_MAX;
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		scale_freq_min[i] = 0;
+		scale_freq_max[i] = UINT32_MAX;
 	}
 }
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v7 6/7] service: keep per-lcore state in lcore variable
  2024-09-18  8:26                                               ` [PATCH v7 0/7] Lcore variables Mattias Rönnblom
                                                                   ` (4 preceding siblings ...)
  2024-09-18  8:26                                                 ` [PATCH v7 5/7] power: keep per-lcore " Mattias Rönnblom
@ 2024-09-18  8:26                                                 ` Mattias Rönnblom
  2024-09-18  8:26                                                 ` [PATCH v7 7/7] eal: keep per-lcore power intrinsics " Mattias Rönnblom
  2024-09-18  9:30                                                 ` [PATCH v7 0/7] Lcore variables fengchengwen
  7 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  8:26 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom, Konstantin Ananyev

Replace static array of cache-aligned structs with an lcore variable,
to slightly benefit code simplicity and performance.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

--

PATCH v7:
 * Update to match new FOREACH API.

RFC v6:
 * Remove a now-redundant lcore variable value memset().

RFC v5:
 * Fix lcore value pointer bug introduced by RFC v4.

RFC v4:
 * Remove strange-looking lcore value lookup potentially containing
   invalid lcore id. (Morten Brørup)
 * Replace misplaced tab with space. (Morten Brørup)
---
 lib/eal/common/rte_service.c | 117 +++++++++++++++++++----------------
 1 file changed, 65 insertions(+), 52 deletions(-)

diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index 56379930b6..59c4f77966 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -11,6 +11,7 @@
 
 #include <eal_trace_internal.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
@@ -75,7 +76,7 @@ struct __rte_cache_aligned core_state {
 
 static uint32_t rte_service_count;
 static struct rte_service_spec_impl *rte_services;
-static struct core_state *lcore_states;
+static RTE_LCORE_VAR_HANDLE(struct core_state, lcore_states);
 static uint32_t rte_service_library_initialized;
 
 int32_t
@@ -101,12 +102,8 @@ rte_service_init(void)
 		goto fail_mem;
 	}
 
-	lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
-			sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
-	if (!lcore_states) {
-		EAL_LOG(ERR, "error allocating core states array");
-		goto fail_mem;
-	}
+	if (lcore_states == NULL)
+		RTE_LCORE_VAR_ALLOC(lcore_states);
 
 	int i;
 	struct rte_config *cfg = rte_eal_get_configuration();
@@ -122,7 +119,6 @@ rte_service_init(void)
 	return 0;
 fail_mem:
 	rte_free(rte_services);
-	rte_free(lcore_states);
 	return -ENOMEM;
 }
 
@@ -136,7 +132,6 @@ rte_service_finalize(void)
 	rte_eal_mp_wait_lcore();
 
 	rte_free(rte_services);
-	rte_free(lcore_states);
 
 	rte_service_library_initialized = 0;
 }
@@ -286,7 +281,6 @@ rte_service_component_register(const struct rte_service_spec *spec,
 int32_t
 rte_service_component_unregister(uint32_t id)
 {
-	uint32_t i;
 	struct rte_service_spec_impl *s;
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
 
@@ -294,9 +288,11 @@ rte_service_component_unregister(uint32_t id)
 
 	s->internal_flags &= ~(SERVICE_F_REGISTERED);
 
+	unsigned int lcore_id;
+	struct core_state *cs;
 	/* clear the run-bit in all cores */
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		lcore_states[i].service_mask &= ~(UINT64_C(1) << id);
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, cs, lcore_states)
+		cs->service_mask &= ~(UINT64_C(1) << id);
 
 	memset(&rte_services[id], 0, sizeof(struct rte_service_spec_impl));
 
@@ -454,7 +450,10 @@ rte_service_may_be_active(uint32_t id)
 		return -EINVAL;
 
 	for (i = 0; i < lcore_count; i++) {
-		if (lcore_states[ids[i]].service_active_on_lcore[id])
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(ids[i], lcore_states);
+
+		if (cs->service_active_on_lcore[id])
 			return 1;
 	}
 
@@ -464,7 +463,7 @@ rte_service_may_be_active(uint32_t id)
 int32_t
 rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)
 {
-	struct core_state *cs = &lcore_states[rte_lcore_id()];
+	struct core_state *cs =	RTE_LCORE_VAR_VALUE(lcore_states);
 	struct rte_service_spec_impl *s;
 
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
@@ -486,8 +485,7 @@ service_runner_func(void *arg)
 {
 	RTE_SET_USED(arg);
 	uint8_t i;
-	const int lcore = rte_lcore_id();
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_VALUE(lcore_states);
 
 	rte_atomic_store_explicit(&cs->thread_active, 1, rte_memory_order_seq_cst);
 
@@ -533,13 +531,15 @@ service_runner_func(void *arg)
 int32_t
 rte_service_lcore_may_be_active(uint32_t lcore)
 {
-	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+	if (lcore >= RTE_MAX_LCORE || !cs->is_service_core)
 		return -EINVAL;
 
 	/* Load thread_active using ACQUIRE to avoid instructions dependent on
 	 * the result being re-ordered before this load completes.
 	 */
-	return rte_atomic_load_explicit(&lcore_states[lcore].thread_active,
+	return rte_atomic_load_explicit(&cs->thread_active,
 			       rte_memory_order_acquire);
 }
 
@@ -547,9 +547,12 @@ int32_t
 rte_service_lcore_count(void)
 {
 	int32_t count = 0;
-	uint32_t i;
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		count += lcore_states[i].is_service_core;
+
+	unsigned int lcore_id;
+	struct core_state *cs;
+	RTE_LCORE_VAR_FOREACH_VALUE(lcore_id, cs, lcore_states)
+		count += cs->is_service_core;
+
 	return count;
 }
 
@@ -566,7 +569,8 @@ rte_service_lcore_list(uint32_t array[], uint32_t n)
 	uint32_t i;
 	uint32_t idx = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		struct core_state *cs = &lcore_states[i];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
 		if (cs->is_service_core) {
 			array[idx] = i;
 			idx++;
@@ -582,7 +586,7 @@ rte_service_lcore_count_services(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -634,30 +638,31 @@ rte_service_start_with_defaults(void)
 static int32_t
 service_update(uint32_t sid, uint32_t lcore, uint32_t *set, uint32_t *enabled)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	/* validate ID, or return error value */
 	if (!service_valid(sid) || lcore >= RTE_MAX_LCORE ||
-			!lcore_states[lcore].is_service_core)
+			!cs->is_service_core)
 		return -EINVAL;
 
 	uint64_t sid_mask = UINT64_C(1) << sid;
 	if (set) {
-		uint64_t lcore_mapped = lcore_states[lcore].service_mask &
-			sid_mask;
+		uint64_t lcore_mapped = cs->service_mask & sid_mask;
 
 		if (*set && !lcore_mapped) {
-			lcore_states[lcore].service_mask |= sid_mask;
+			cs->service_mask |= sid_mask;
 			rte_atomic_fetch_add_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 		if (!*set && lcore_mapped) {
-			lcore_states[lcore].service_mask &= ~(sid_mask);
+			cs->service_mask &= ~(sid_mask);
 			rte_atomic_fetch_sub_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 	}
 
 	if (enabled)
-		*enabled = !!(lcore_states[lcore].service_mask & (sid_mask));
+		*enabled = !!(cs->service_mask & (sid_mask));
 
 	return 0;
 }
@@ -685,13 +690,14 @@ set_lcore_state(uint32_t lcore, int32_t state)
 {
 	/* mark core state in hugepage backed config */
 	struct rte_config *cfg = rte_eal_get_configuration();
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	cfg->lcore_role[lcore] = state;
 
 	/* mark state in process local lcore_config */
 	lcore_config[lcore].core_role = state;
 
 	/* update per-lcore optimized state tracking */
-	lcore_states[lcore].is_service_core = (state == ROLE_SERVICE);
+	cs->is_service_core = (state == ROLE_SERVICE);
 
 	rte_eal_trace_service_lcore_state_change(lcore, state);
 }
@@ -702,14 +708,16 @@ rte_service_lcore_reset_all(void)
 	/* loop over cores, reset all to mask 0 */
 	uint32_t i;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		if (lcore_states[i].is_service_core) {
-			lcore_states[i].service_mask = 0;
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(i, lcore_states);
+		if (cs->is_service_core) {
+			cs->service_mask = 0;
 			set_lcore_state(i, ROLE_RTE);
 			/* runstate act as guard variable Use
 			 * store-release memory order here to synchronize
 			 * with load-acquire in runstate read functions.
 			 */
-			rte_atomic_store_explicit(&lcore_states[i].runstate,
+			rte_atomic_store_explicit(&cs->runstate,
 				RUNSTATE_STOPPED, rte_memory_order_release);
 		}
 	}
@@ -725,17 +733,19 @@ rte_service_lcore_add(uint32_t lcore)
 {
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
-	if (lcore_states[lcore].is_service_core)
+
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+	if (cs->is_service_core)
 		return -EALREADY;
 
 	set_lcore_state(lcore, ROLE_SERVICE);
 
 	/* ensure that after adding a core the mask and state are defaults */
-	lcore_states[lcore].service_mask = 0;
+	cs->service_mask = 0;
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	return rte_eal_wait_lcore(lcore);
@@ -747,7 +757,7 @@ rte_service_lcore_del(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -771,7 +781,7 @@ rte_service_lcore_start(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -801,6 +811,8 @@ rte_service_lcore_start(uint32_t lcore)
 int32_t
 rte_service_lcore_stop(uint32_t lcore)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
@@ -808,12 +820,11 @@ rte_service_lcore_stop(uint32_t lcore)
 	 * memory order here to synchronize with store-release
 	 * in runstate update functions.
 	 */
-	if (rte_atomic_load_explicit(&lcore_states[lcore].runstate, rte_memory_order_acquire) ==
+	if (rte_atomic_load_explicit(&cs->runstate, rte_memory_order_acquire) ==
 			RUNSTATE_STOPPED)
 		return -EALREADY;
 
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
 	uint64_t service_mask = cs->service_mask;
 
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
@@ -834,7 +845,7 @@ rte_service_lcore_stop(uint32_t lcore)
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	rte_eal_trace_service_lcore_stop(lcore);
@@ -845,7 +856,7 @@ rte_service_lcore_stop(uint32_t lcore)
 static uint64_t
 lcore_attr_get_loops(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->loops, rte_memory_order_relaxed);
 }
@@ -853,7 +864,7 @@ lcore_attr_get_loops(unsigned int lcore)
 static uint64_t
 lcore_attr_get_cycles(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->cycles, rte_memory_order_relaxed);
 }
@@ -861,7 +872,7 @@ lcore_attr_get_cycles(unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].calls,
 		rte_memory_order_relaxed);
@@ -870,7 +881,7 @@ lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_cycles(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].cycles,
 		rte_memory_order_relaxed);
@@ -886,7 +897,10 @@ attr_get(uint32_t id, lcore_attr_get_fun lcore_attr_get)
 	uint64_t sum = 0;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		if (lcore_states[lcore].is_service_core)
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
+
+		if (cs->is_service_core)
 			sum += lcore_attr_get(id, lcore);
 	}
 
@@ -930,12 +944,11 @@ int32_t
 rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 			   uint64_t *attr_value)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE || !attr_value)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -960,7 +973,8 @@ rte_service_attr_reset_all(uint32_t id)
 		return -EINVAL;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		struct core_state *cs = &lcore_states[lcore];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 		cs->service_stats[id] = (struct service_stats) {};
 	}
@@ -971,12 +985,11 @@ rte_service_attr_reset_all(uint32_t id)
 int32_t
 rte_service_lcore_attr_reset_all(uint32_t lcore)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -1011,7 +1024,7 @@ static void
 service_dump_calls_per_lcore(FILE *f, uint32_t lcore)
 {
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE_VALUE(lcore, lcore_states);
 
 	fprintf(f, "%02d\t", lcore);
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* [PATCH v7 7/7] eal: keep per-lcore power intrinsics state in lcore variable
  2024-09-18  8:26                                               ` [PATCH v7 0/7] Lcore variables Mattias Rönnblom
                                                                   ` (5 preceding siblings ...)
  2024-09-18  8:26                                                 ` [PATCH v7 6/7] service: " Mattias Rönnblom
@ 2024-09-18  8:26                                                 ` Mattias Rönnblom
  2024-09-18  9:30                                                 ` [PATCH v7 0/7] Lcore variables fengchengwen
  7 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-18  8:26 UTC (permalink / raw)
  To: dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob,
	Mattias Rönnblom, Konstantin Ananyev

Keep per-lcore power intrinsics state in a lcore variable to reduce
cache working set size and avoid any CPU next-line-prefetching causing
false sharing.
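
Unlike the preceding patches, this module allocates its lcore variable
via the static initialization shorthand rather than an explicit
RTE_INIT constructor. In sketch form (assuming RTE_LCORE_VAR_INIT()
expands to a constructor performing the allocation):

	struct power_wait_status {
		rte_spinlock_t lock;
		volatile void *monitor_addr;
	};

	RTE_LCORE_VAR_HANDLE(struct power_wait_status, wait_status);

	/* allocated at startup, before main() */
	RTE_LCORE_VAR_INIT(wait_status);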

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
---
 lib/eal/x86/rte_power_intrinsics.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index 6d9b64240c..f4ba2c8ecb 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -6,6 +6,7 @@
 
 #include <rte_common.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_rtm.h>
 #include <rte_spinlock.h>
 
@@ -14,10 +15,14 @@
 /*
  * Per-lcore structure holding current status of C0.2 sleeps.
  */
-static alignas(RTE_CACHE_LINE_SIZE) struct power_wait_status {
+struct power_wait_status {
 	rte_spinlock_t lock;
 	volatile void *monitor_addr; /**< NULL if not currently sleeping */
-} wait_status[RTE_MAX_LCORE];
+};
+
+RTE_LCORE_VAR_HANDLE(struct power_wait_status, wait_status);
+
+RTE_LCORE_VAR_INIT(wait_status);
 
 /*
  * This function uses UMONITOR/UMWAIT instructions and will enter C0.2 state.
@@ -172,7 +177,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 	if (pmc->fn == NULL)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/* update sleep address */
 	rte_spinlock_lock(&s->lock);
@@ -264,7 +269,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 	if (lcore_id >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	s = &wait_status[lcore_id];
+	s = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, wait_status);
 
 	/*
 	 * There is a race condition between sleep, wakeup and locking, but we
@@ -303,8 +308,8 @@ int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
 {
-	const unsigned int lcore_id = rte_lcore_id();
-	struct power_wait_status *s = &wait_status[lcore_id];
+	struct power_wait_status *s = RTE_LCORE_VAR_VALUE(wait_status);
+
 	uint32_t i, rc;
 
 	/* check if supported */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 185+ messages in thread

* RE: [PATCH v7 1/7] eal: add static per-lcore memory allocation facility
  2024-09-18  8:26                                                 ` [PATCH v7 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
@ 2024-09-18  9:23                                                   ` Konstantin Ananyev
  0 siblings, 0 replies; 185+ messages in thread
From: Konstantin Ananyev @ 2024-09-18  9:23 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob



> Introduce DPDK per-lcore id variables, or lcore variables for short.
> 
> An lcore variable has one value for every current and future lcore
> id-equipped thread.
> 
> The primary <rte_lcore_var.h> use case is for statically allocating
> small, frequently-accessed data structures, for which one instance
> should exist for each lcore.
> 
> Lcore variables are similar to thread-local storage (TLS, e.g., C11
> _Thread_local), but decouple the values' lifetime from that of the
> threads.
> 
> Lcore variables are also similar, in terms of functionality, to the
> FreeBSD kernel's DPCPU_*() family of macros and the associated
> build-time machinery. DPCPU uses linker scripts, which effectively
> prevents the reuse of its otherwise seemingly viable approach.
> 
> The currently-prevailing way to solve the same problem as lcore
> variables is to keep a module's per-lcore data as an RTE_MAX_LCORE-sized
> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> lcore variables over this approach is that data related to the same
> lcore is now close (spatially, in memory), rather than data used by
> the same module, which in turn avoids excessive use of padding,
> polluting caches with unused data.
> 
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> 
> --

Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>

> 2.34.1
> 


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v7 0/7] Lcore variables
  2024-09-18  8:26                                               ` [PATCH v7 0/7] Lcore variables Mattias Rönnblom
                                                                   ` (6 preceding siblings ...)
  2024-09-18  8:26                                                 ` [PATCH v7 7/7] eal: keep per-lcore power intrinsics " Mattias Rönnblom
@ 2024-09-18  9:30                                                 ` fengchengwen
  7 siblings, 0 replies; 185+ messages in thread
From: fengchengwen @ 2024-09-18  9:30 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: hofors, Morten Brørup, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Jerin Jacob

Series-acked-by: Chengwen Feng <fengchengwen@huawei.com>

On 2024/9/18 16:26, Mattias Rönnblom wrote:
> This patch set introduces a new API <rte_lcore_var.h> for static
> per-lcore id data allocation.
> 
> Please refer to the <rte_lcore_var.h> API documentation for both a
> rationale for this new API, and a comparison to the alternatives
> available.
> 
> The adoption of this API would affect many different DPDK modules, but
> the author updated only a few, mostly to serve as examples in this
> RFC, and to iron out some, but surely not all, wrinkles in the API.
> 
> The question on how to best allocate static per-lcore memory has been
> up several times on the dev mailing list, for example in the thread on
> "random: use per lcore state" RFC by Stephen Hemminger.
> 
> Lcore variables are surely not the answer to all your per-lcore-data
> needs, since it only allows for more-or-less static allocation. In the
> author's opinion, it does however provide a reasonably simple and
> clean and seemingly very much performant solution to a real problem.
> 
> Mattias Rönnblom (7):
>   eal: add static per-lcore memory allocation facility
>   eal: add lcore variable functional tests
>   eal: add lcore variable performance test
>   random: keep PRNG state in lcore variable
>   power: keep per-lcore state in lcore variable
>   service: keep per-lcore state in lcore variable
>   eal: keep per-lcore power intrinsics state in lcore variable
> 
>  MAINTAINERS                                   |   6 +
>  app/test/meson.build                          |   2 +
>  app/test/test_lcore_var.c                     | 436 ++++++++++++++++++
>  app/test/test_lcore_var_perf.c                | 257 +++++++++++
>  config/rte_config.h                           |   1 +
>  doc/api/doxy-api-index.md                     |   1 +
>  .../prog_guide/env_abstraction_layer.rst      |  45 +-
>  doc/guides/rel_notes/release_24_11.rst        |  14 +
>  lib/eal/common/eal_common_lcore_var.c         |  79 ++++
>  lib/eal/common/meson.build                    |   1 +
>  lib/eal/common/rte_random.c                   |  28 +-
>  lib/eal/common/rte_service.c                  | 117 ++---
>  lib/eal/include/meson.build                   |   1 +
>  lib/eal/include/rte_lcore_var.h               | 390 ++++++++++++++++
>  lib/eal/version.map                           |   2 +
>  lib/eal/x86/rte_power_intrinsics.c            |  17 +-
>  lib/power/rte_power_pmd_mgmt.c                |  35 +-
>  17 files changed, 1339 insertions(+), 93 deletions(-)
>  create mode 100644 app/test/test_lcore_var.c
>  create mode 100644 app/test/test_lcore_var_perf.c
>  create mode 100644 lib/eal/common/eal_common_lcore_var.c
>  create mode 100644 lib/eal/include/rte_lcore_var.h
> 


^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v3 3/7] eal: add lcore variable performance test
  2024-09-16 10:50                                             ` Mattias Rönnblom
@ 2024-09-18 10:04                                               ` Jerin Jacob
  0 siblings, 0 replies; 185+ messages in thread
From: Jerin Jacob @ 2024-09-18 10:04 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: Mattias Rönnblom, dev, Morten Brørup,
	Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Jerin Jacob

On Mon, Sep 16, 2024 at 4:20 PM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>
> On 2024-09-13 13:23, Jerin Jacob wrote:
> > On Fri, Sep 13, 2024 at 12:17 PM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
> >>
> >> On 2024-09-12 17:11, Jerin Jacob wrote:
> >>> On Thu, Sep 12, 2024 at 6:50 PM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
> >>>>
> >>>> On 2024-09-12 15:09, Jerin Jacob wrote:
> >>>>> On Thu, Sep 12, 2024 at 2:34 PM Mattias Rönnblom
> >>>>> <mattias.ronnblom@ericsson.com> wrote:
> >>>>>>
> >>>>>> Add basic micro benchmark for lcore variables, in an attempt to assure
> >>>>>> that the overhead isn't significantly greater than alternative
> >>>>>> approaches, in scenarios where the benefits aren't expected to show up
> >>>>>> (i.e., when plenty of cache is available compared to the working set
> >>>>>> size of the per-lcore data).
> >>>>>>
> >>>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >>>>>> ---
> >>>>>>     app/test/meson.build           |   1 +
> >>>>>>     app/test/test_lcore_var_perf.c | 160 +++++++++++++++++++++++++++++++++
> >>>>>>     2 files changed, 161 insertions(+)
> >>>>>>     create mode 100644 app/test/test_lcore_var_perf.c
> >>>>>
> >>>>>
> >>>>>> +static double
> >>>>>> +benchmark_access_method(void (*init_fun)(void), void (*update_fun)(void))
> >>>>>> +{
> >>>>>> +       uint64_t i;
> >>>>>> +       uint64_t start;
> >>>>>> +       uint64_t end;
> >>>>>> +       double latency;
> >>>>>> +
> >>>>>> +       init_fun();
> >>>>>> +
> >>>>>> +       start = rte_get_timer_cycles();
> >>>>>> +
> >>>>>> +       for (i = 0; i < ITERATIONS; i++)
> >>>>>> +               update_fun();
> >>>>>> +
> >>>>>> +       end = rte_get_timer_cycles();
> >>>>>
> >>>>> Use precise variant. rte_rdtsc_precise() or so to be accurate
> >>>>
> >>>> With 1e7 iterations, do you need rte_rdtsc_precise()? I suspect not.
> >>>
> >>> I was thinking in another way: with 1e7 iterations, the additional
> >>> barrier on precise will be amortized, and we get more _deterministic_
> >>> behavior, esp. in case we print cycles and if we need to catch
> >>> regressions.
> >>
> >> If you time a section of code which spends ~40000000 cycles, it doesn't
> >> matter if you add or remove a few cycles at the beginning and the end.
> >>
> >> The rte_rdtsc_precise() is both better (more precise in the sense of
> >> more serialization), and worse (because it's more costly, and thus more
> >> intrusive).
> >
> > We can calibrate the overhead to remove the cost.
> >
> What you are primarily interested in is the impact on (instruction)
> throughput, not the latency of the sequence of instructions that must be
> retired in order to load the lcore variable values, when you switch from
> (say) lcore id-indexed static arrays to lcore variables in your module.
>
> Usually, there is no reason to make a distinction between latency and
> throughput in this context, but as you zoom into very short snippets of
> code being executed, the difference becomes relevant. For example,
> adding a div instruction won't necessarily add 12 cc to your program's
> execution time on a Zen 4, even though that is its latency. Rather, the
> effects may, depending on data dependencies and what other instructions
> are executed in parallel, be much smaller.
>
> So, one could argue the ILP you get with the loop is a feature, not a bug.
>
> With or without per-iteration latency measurements, these benchmarks are
> not very useful at best, and misleading at worst. I will rework them to
> include more than a single module/lcore variable, which I think would be
> somewhat of an improvement.

OK. The module parameter will remove the compiler optimization and be
more accurate. I was doing manual loop unrolling[1] in a trace test
case (for small inline functions).
Either way it's fine. Thanks for the rework.

[1]
https://github.com/DPDK/dpdk/blob/main/app/test/test_trace_perf.c#L30


>
> Even better would be to have some real domain logic, instead of just a dummy
> multiplication.
>
> >>
> >> You can use rte_rdtsc_precise(), rte_rdtsc(), or gettimeofday(). It
> >> doesn't matter.
> >
> > Yes. In this setup, it is pretty inaccurate PER iteration. Please
> > refer to the below patch to see the difference.
> >
> > Patch 1: Make nanoseconds to cycles per iteration
> > ------------------------------------------------------------------
> >
> > diff --git a/app/test/test_lcore_var_perf.c b/app/test/test_lcore_var_perf.c
> > index ea1d7ba90b52..b8d25400f593 100644
> > --- a/app/test/test_lcore_var_perf.c
> > +++ b/app/test/test_lcore_var_perf.c
> > @@ -110,7 +110,7 @@ benchmark_access_method(void (*init_fun)(void),
> > void (*update_fun)(void))
> >
> >          end = rte_get_timer_cycles();
> >
> > -       latency = ((end - start) / (double)rte_get_timer_hz()) / ITERATIONS;
> > +       latency = ((end - start)) / ITERATIONS;
> >
> >          return latency;
> >   }
> > @@ -137,8 +137,7 @@ test_lcore_var_access(void)
> >
> > -       printf("Latencies [ns/update]\n");
> > +       printf("Latencies [cycles/update]\n");
> >          printf("Thread-local storage  Static array  Lcore variables\n");
> > -       printf("%20.1f %13.1f %16.1f\n", tls_latency * 1e9,
> > -              sarray_latency * 1e9, lvar_latency * 1e9);
> > +       printf("%20.1f %13.1f %16.1f\n", tls_latency, sarray_latency,
> > lvar_latency);
> >
> >          return TEST_SUCCESS;
> >   }
> >
> >
> > Patch 2: Change to precise with calibration
> > -----------------------------------------------------------
> >
> > diff --git a/app/test/test_lcore_var_perf.c b/app/test/test_lcore_var_perf.c
> > index ea1d7ba90b52..8142ecd56241 100644
> > --- a/app/test/test_lcore_var_perf.c
> > +++ b/app/test/test_lcore_var_perf.c
> > @@ -96,23 +96,28 @@ lvar_update(void)
> >   static double
> >   benchmark_access_method(void (*init_fun)(void), void (*update_fun)(void))
> >   {
> > -       uint64_t i;
> > +       double tsc_latency;
> > +       double latency;
> >          uint64_t start;
> >          uint64_t end;
> > -       double latency;
> > +       uint64_t i;
> >
> > -       init_fun();
> > +       /* calculate rte_rdtsc_precise overhead */
> > +       start = rte_rdtsc_precise();
> > +       end = rte_rdtsc_precise();
> > +       tsc_latency = (end - start);
> >
> > -       start = rte_get_timer_cycles();
> > +       init_fun();
> >
> > -       for (i = 0; i < ITERATIONS; i++)
> > +       latency = 0;
> > +       for (i = 0; i < ITERATIONS; i++) {
> > +               start = rte_rdtsc_precise();
> >                  update_fun();
> > +               end = rte_rdtsc_precise();
> > +               latency += (end - start) - tsc_latency;
> > +       }
> >
> > -       end = rte_get_timer_cycles();
> > -
> > -       latency = ((end - start) / (double)rte_get_timer_hz()) / ITERATIONS;
> > -
> > -       return latency;
> > +       return latency / (double)ITERATIONS;
> >   }
> >
> >   static int
> > @@ -135,10 +140,9 @@ test_lcore_var_access(void)
> >          sarray_latency = benchmark_access_method(sarray_init, sarray_update);
> >          lvar_latency = benchmark_access_method(lvar_init, lvar_update);
> >
> > -       printf("Latencies [ns/update]\n");
> > +       printf("Latencies [cycles/update]\n");
> >          printf("Thread-local storage  Static array  Lcore variables\n");
> > -       printf("%20.1f %13.1f %16.1f\n", tls_latency * 1e9,
> > -              sarray_latency * 1e9, lvar_latency * 1e9);
> > +       printf("%20.1f %13.1f %16.1f\n", tls_latency, sarray_latency,
> > lvar_latency);
> >
> >          return TEST_SUCCESS;
> >   }
> >
> > ARM N2 core with patch 1(aka current scheme)
> > -----------------------------------
> >
> >   + ------------------------------------------------------- +
> >   + Test Suite : lcore variable perf autotest
> >   + ------------------------------------------------------- +
> > Latencies [cycles/update]
> > Thread-local storage  Static array  Lcore variables
> >                   7.0           7.0              7.0
> >
> >
> > ARM N2 core with patch 2
> > -----------------------------------
> >
> >   + ------------------------------------------------------- +
> >   + Test Suite : lcore variable perf autotest
> >   + ------------------------------------------------------- +
> > Latencies [cycles/update]
> > Thread-local storage  Static array  Lcore variables
> >                  11.4          15.5             15.5
> >
> > x86 i9 core with patch 1(aka current scheme)
> > ------------------------------------------------------------
> >
> >   + ------------------------------------------------------- +
> >   + Test Suite : lcore variable perf autotest
> >   + ------------------------------------------------------- +
> > Latencies [ns/update]
> > Thread-local storage  Static array  Lcore variables
> >                   5.0           6.0              6.0
> >
> > x86 i9 core with patch 2
> > --------------------------------
> >   + ------------------------------------------------------- +
> >   + Test Suite : lcore variable perf autotest
> >   + ------------------------------------------------------- +
> > Latencies [cycles/update]
> > Thread-local storage  Static array  Lcore variables
> >                   5.3          10.6             11.7
> >
> >
> >
> >
> >
> >>
> >>> Furthermore, you may consider replacing rte_random() in fast path to
> >>> running number or so if it is not deterministic in cycle computation.
> >>
> >> rte_rand() is not used in the fast path. I don't understand what you
> >
> > I missed that. Ignore this comment.
> >
> >> mean by "running number".

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v2 1/6] eal: add static per-lcore memory allocation facility
  2024-09-12 15:22                                     ` Jerin Jacob
@ 2024-09-18 10:11                                       ` Jerin Jacob
  2024-09-19 19:31                                         ` Mattias Rönnblom
  0 siblings, 1 reply; 185+ messages in thread
From: Jerin Jacob @ 2024-09-18 10:11 UTC (permalink / raw)
  To: Morten Brørup
  Cc: Mattias Rönnblom, dev, Chengwen Feng, Mattias Rönnblom,
	Stephen Hemminger, Konstantin Ananyev, David Marchand,
	Anatoly Burakov

On Thu, Sep 12, 2024 at 8:52 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> On Thu, Sep 12, 2024 at 7:11 PM Morten Brørup <mb@smartsharesystems.com> wrote:
> >
> > > From: Jerin Jacob [mailto:jerinjacobk@gmail.com]
> > > Sent: Thursday, 12 September 2024 15.17
> > >
> > > On Thu, Sep 12, 2024 at 2:40 PM Morten Brørup <mb@smartsharesystems.com>
> > > wrote:
> > > >
> > > > > +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
> > > >
> > > > Considering hugepages...
> > > >
> > > > Lcore variables may be allocated before DPDK's memory allocator
> > > (rte_malloc()) is ready, so rte_malloc() cannot be used for lcore variables.
> > > >
> > > > And lcore variables are not usable (shared) for DPDK multi-process, so the
> > > lcore_buffer could be allocated through the O/S APIs as anonymous hugepages,
> > > instead of using rte_malloc().
> > > >
> > > > The alternative, using rte_malloc(), would disallow allocating lcore
> > > variables before DPDK's memory allocator has been initialized, which I think
> > > is too late.
> > >
> > > I thought it is not. A lot of the subsystems are initialized after the
> > > memory subsystem is initialized.
> > > [1] example given in documentation. I thought RTE_INIT needs to be
> > > replaced if the subsystem is called after memory is initialized
> > > (which is the case for most of the libraries).
> >
> > The list of RTE_INIT functions is called before main(). It is not very useful.
> >
> > Yes, it would be good to replace (or supplement) RTE_INIT_PRIO by something similar, which calls the list of "INIT" functions at the appropriate time during EAL initialization.
> >
> > DPDK should then use this "INIT" list for all its initialization, so the init function of new features (such as this, and trace) can be inserted at the correct location in the list.
> >
> > > Trace library had a similar situation. It is managed like [2]
> >
> > Yes, if we insist on using rte_malloc() for lcore variables, the alternative is to prohibit establishing lcore variables in functions called through RTE_INIT.
>
> I was not insisting on using ONLY rte_malloc(). Since rte_malloc() can
> be called before rte_eal_init() (it will return NULL), the alloc routine
> can first check if rte_malloc() is available and, if not, switch over to
> glibc.


@Mattias Rönnblom This comment is not addressed in v7. Could you check?

^ permalink raw reply	[flat|nested] 185+ messages in thread

* Re: [PATCH v2 1/6] eal: add static per-lcore memory allocation facility
  2024-09-18 10:11                                       ` Jerin Jacob
@ 2024-09-19 19:31                                         ` Mattias Rönnblom
  0 siblings, 0 replies; 185+ messages in thread
From: Mattias Rönnblom @ 2024-09-19 19:31 UTC (permalink / raw)
  To: Jerin Jacob, Morten Brørup
  Cc: Mattias Rönnblom, dev, Chengwen Feng, Stephen Hemminger,
	Konstantin Ananyev, David Marchand, Anatoly Burakov

On 2024-09-18 12:11, Jerin Jacob wrote:
> On Thu, Sep 12, 2024 at 8:52 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>>
>> On Thu, Sep 12, 2024 at 7:11 PM Morten Brørup <mb@smartsharesystems.com> wrote:
>>>
>>>> From: Jerin Jacob [mailto:jerinjacobk@gmail.com]
>>>> Sent: Thursday, 12 September 2024 15.17
>>>>
>>>> On Thu, Sep 12, 2024 at 2:40 PM Morten Brørup <mb@smartsharesystems.com>
>>>> wrote:
>>>>>
>>>>>> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
>>>>>
>>>>> Considering hugepages...
>>>>>
>>>>> Lcore variables may be allocated before DPDK's memory allocator
>>>>> (rte_malloc()) is ready, so rte_malloc() cannot be used for lcore variables.
>>>>>
>>>>> And lcore variables are not usable (shared) for DPDK multi-process, so the
>>>>> lcore_buffer could be allocated through the O/S APIs as anonymous hugepages,
>>>>> instead of using rte_malloc().
>>>>>
>>>>> The alternative, using rte_malloc(), would disallow allocating lcore
>>>>> variables before DPDK's memory allocator has been initialized, which I think
>>>>> is too late.
>>>>
>>>> I thought it is not. A lot of the subsystems are initialized after the
>>>> memory subsystem is initialized.
>>>> [1] is the example given in the documentation. I thought RTE_INIT needs to
>>>> be replaced if the subsystem is initialized after memory is initialized
>>>> (which is the case for most of the libraries).
>>>
>>> The list of RTE_INIT functions is called before main(). It is not very useful.
>>>
>>> Yes, it would be good to replace (or supplement) RTE_INIT_PRIO by something similar, which calls the list of "INIT" functions at the appropriate time during EAL initialization.
>>>
>>> DPDK should then use this "INIT" list for all its initialization, so the init function of new features (such as this, and trace) can be inserted at the correct location in the list.
>>>
>>>> Trace library had a similar situation. It is managed like [2]
>>>
>>> Yes, if we insist on using rte_malloc() for lcore variables, the alternative is to prohibit establishing lcore variables in functions called through RTE_INIT.
>>
>> I was not insisting on using ONLY rte_malloc(). Since rte_malloc() can
>> be called before rte_eal_init() (it will return NULL), the alloc routine
>> can first check whether rte_malloc() is available and, if not, switch
>> over to the glibc allocator.
> 
> 
> @Mattias Rönnblom This comment is not addressed in v7. Could you check?

Calling rte_malloc() and depending on it returning NULL if it's too 
early in the initialization process sounds a little fragile, but maybe 
it's fine.

One issue with lcore-variables-in-huge-pages that I've failed to mention
this time around the topic is being discussed is that it would increase
memory usage by something like RTE_MAX_LCORE * 0.5 MB (or, more probably,
a little more).
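
To put a number on that: with, say, RTE_MAX_LCORE = 128, it amounts to
128 * 0.5 MB = 64 MB of huge page memory committed up front, regardless of
how many lcore ids are actually in use.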

In the huge pages case, you can't rely on demand paging to avoid 
bringing in unused pages.

That said, I suspect some very latency-sensitive apps lock all pages in 
memory, and thus lose out on this OS feature.
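
Such locking is typically done with mlockall(); as a minimal illustration
of the OS mechanism involved (not something this patch set itself does):

#include <stdio.h>
#include <sys/mman.h>

/* Pin all current and future pages of the process in RAM; untouched
 * pages are then paid for up front instead of on first access. */
static void
lock_all_memory(void)
{
	if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
		perror("mlockall");
}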

I suggest we just leave the first incarnation of lcore variables in 
normal pages.

Thanks for the reminder.

^ permalink raw reply	[flat|nested] 185+ messages in thread

end of thread, other threads:[~2024-09-19 19:31 UTC | newest]

Thread overview: 185+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-02-08 18:16 [RFC 0/5] Lcore variables Mattias Rönnblom
2024-02-08 18:16 ` [RFC 1/5] eal: add static per-lcore memory allocation facility Mattias Rönnblom
2024-02-09  8:25   ` Morten Brørup
2024-02-09 11:46     ` Mattias Rönnblom
2024-02-09 13:04       ` Morten Brørup
2024-02-19  7:49         ` Mattias Rönnblom
2024-02-19 11:10           ` Morten Brørup
2024-02-19 14:31             ` Mattias Rönnblom
2024-02-19 15:04               ` Morten Brørup
2024-02-19  9:40   ` [RFC v2 0/5] Lcore variables Mattias Rönnblom
2024-02-19  9:40     ` [RFC v2 1/5] eal: add static per-lcore memory allocation facility Mattias Rönnblom
2024-02-20  8:49       ` [RFC v3 0/6] Lcore variables Mattias Rönnblom
2024-02-20  8:49         ` [RFC v3 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
2024-02-20  9:11           ` Bruce Richardson
2024-02-20 10:47             ` Mattias Rönnblom
2024-02-20 11:39               ` Bruce Richardson
2024-02-20 13:37                 ` Morten Brørup
2024-02-20 16:26                 ` Mattias Rönnblom
2024-02-21  9:43           ` Jerin Jacob
2024-02-21 10:31             ` Morten Brørup
2024-02-21 14:26             ` Mattias Rönnblom
2024-02-22  9:22           ` Morten Brørup
2024-02-23 10:12             ` Mattias Rönnblom
2024-02-25 15:03           ` [RFC v4 0/6] Lcore variables Mattias Rönnblom
2024-02-25 15:03             ` [RFC v4 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
2024-02-27  9:58               ` Morten Brørup
2024-02-27 13:44                 ` Mattias Rönnblom
2024-02-27 15:05                   ` Morten Brørup
2024-02-27 16:27                     ` Mattias Rönnblom
2024-02-27 16:51                       ` Morten Brørup
2024-02-28 10:09               ` [RFC v5 0/6] Lcore variables Mattias Rönnblom
2024-02-28 10:09                 ` [RFC v5 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
2024-03-19 12:52                   ` Konstantin Ananyev
2024-03-20 10:24                     ` Mattias Rönnblom
2024-03-20 14:18                       ` Konstantin Ananyev
2024-05-06  8:27                   ` [RFC v6 0/6] Lcore variables Mattias Rönnblom
2024-05-06  8:27                     ` [RFC v6 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
2024-09-10  7:03                       ` [PATCH 0/6] Lcore variables Mattias Rönnblom
2024-09-10  7:03                         ` [PATCH 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
2024-09-10  9:32                           ` Morten Brørup
2024-09-10 10:44                             ` Mattias Rönnblom
2024-09-10 13:07                               ` Morten Brørup
2024-09-10 15:55                               ` Stephen Hemminger
2024-09-11 10:32                           ` Morten Brørup
2024-09-11 15:05                             ` Mattias Rönnblom
2024-09-11 15:07                               ` Morten Brørup
2024-09-11 17:04                           ` [PATCH v2 0/6] Lcore variables Mattias Rönnblom
2024-09-11 17:04                             ` [PATCH v2 1/6] eal: add static per-lcore memory allocation facility Mattias Rönnblom
2024-09-12  2:33                               ` fengchengwen
2024-09-12  5:35                                 ` Mattias Rönnblom
2024-09-12  7:05                                   ` fengchengwen
2024-09-12  7:28                                   ` Jerin Jacob
2024-09-12  8:44                               ` [PATCH v3 0/7] Lcore variables Mattias Rönnblom
2024-09-12  8:44                                 ` [PATCH v3 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
2024-09-16 10:52                                   ` [PATCH v4 0/7] Lcore variables Mattias Rönnblom
2024-09-16 10:52                                     ` [PATCH v4 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
2024-09-16 14:02                                       ` Konstantin Ananyev
2024-09-16 17:39                                         ` Morten Brørup
2024-09-16 23:19                                           ` Konstantin Ananyev
2024-09-17  7:12                                             ` Morten Brørup
2024-09-17  8:09                                               ` Konstantin Ananyev
2024-09-17 14:28                                         ` Mattias Rönnblom
2024-09-17 16:11                                           ` Konstantin Ananyev
2024-09-18  7:00                                             ` Mattias Rönnblom
2024-09-17 16:29                                           ` Konstantin Ananyev
2024-09-18  7:50                                             ` Mattias Rönnblom
2024-09-17 14:32                                       ` [PATCH v5 0/7] Lcore variables Mattias Rönnblom
2024-09-17 14:32                                         ` [PATCH v5 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
2024-09-18  8:00                                           ` [PATCH v6 0/7] Lcore variables Mattias Rönnblom
2024-09-18  8:00                                             ` [PATCH v6 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
2024-09-18  8:24                                               ` Konstantin Ananyev
2024-09-18  8:25                                                 ` Mattias Rönnblom
2024-09-18  8:26                                               ` [PATCH v7 0/7] Lcore variables Mattias Rönnblom
2024-09-18  8:26                                                 ` [PATCH v7 1/7] eal: add static per-lcore memory allocation facility Mattias Rönnblom
2024-09-18  9:23                                                   ` Konstantin Ananyev
2024-09-18  8:26                                                 ` [PATCH v7 2/7] eal: add lcore variable functional tests Mattias Rönnblom
2024-09-18  8:26                                                 ` [PATCH v7 3/7] eal: add lcore variable performance test Mattias Rönnblom
2024-09-18  8:26                                                 ` [PATCH v7 4/7] random: keep PRNG state in lcore variable Mattias Rönnblom
2024-09-18  8:26                                                 ` [PATCH v7 5/7] power: keep per-lcore " Mattias Rönnblom
2024-09-18  8:26                                                 ` [PATCH v7 6/7] service: " Mattias Rönnblom
2024-09-18  8:26                                                 ` [PATCH v7 7/7] eal: keep per-lcore power intrinsics " Mattias Rönnblom
2024-09-18  9:30                                                 ` [PATCH v7 0/7] Lcore variables fengchengwen
2024-09-18  8:00                                             ` [PATCH v6 2/7] eal: add lcore variable functional tests Mattias Rönnblom
2024-09-18  8:25                                               ` Konstantin Ananyev
2024-09-18  8:00                                             ` [PATCH v6 3/7] eal: add lcore variable performance test Mattias Rönnblom
2024-09-18  8:00                                             ` [PATCH v6 4/7] random: keep PRNG state in lcore variable Mattias Rönnblom
2024-09-18  8:00                                             ` [PATCH v6 5/7] power: keep per-lcore " Mattias Rönnblom
2024-09-18  8:00                                             ` [PATCH v6 6/7] service: " Mattias Rönnblom
2024-09-18  8:00                                             ` [PATCH v6 7/7] eal: keep per-lcore power intrinsics " Mattias Rönnblom
2024-09-17 14:32                                         ` [PATCH v5 2/7] eal: add lcore variable functional tests Mattias Rönnblom
2024-09-17 14:32                                         ` [PATCH v5 3/7] eal: add lcore variable performance test Mattias Rönnblom
2024-09-17 15:40                                           ` Morten Brørup
2024-09-18  6:05                                             ` Mattias Rönnblom
2024-09-17 14:32                                         ` [PATCH v5 4/7] random: keep PRNG state in lcore variable Mattias Rönnblom
2024-09-17 14:32                                         ` [PATCH v5 5/7] power: keep per-lcore " Mattias Rönnblom
2024-09-17 14:32                                         ` [PATCH v5 6/7] service: " Mattias Rönnblom
2024-09-17 14:32                                         ` [PATCH v5 7/7] eal: keep per-lcore power intrinsics " Mattias Rönnblom
2024-09-16 10:52                                     ` [PATCH v4 2/7] eal: add lcore variable functional tests Mattias Rönnblom
2024-09-16 10:52                                     ` [PATCH v4 3/7] eal: add lcore variable performance test Mattias Rönnblom
2024-09-16 11:13                                       ` Mattias Rönnblom
2024-09-16 11:54                                         ` Morten Brørup
2024-09-16 16:12                                           ` Mattias Rönnblom
2024-09-16 17:19                                             ` Morten Brørup
2024-09-16 10:52                                     ` [PATCH v4 4/7] random: keep PRNG state in lcore variable Mattias Rönnblom
2024-09-16 16:11                                       ` Konstantin Ananyev
2024-09-16 10:52                                     ` [PATCH v4 5/7] power: keep per-lcore " Mattias Rönnblom
2024-09-16 16:12                                       ` Konstantin Ananyev
2024-09-16 10:52                                     ` [PATCH v4 6/7] service: " Mattias Rönnblom
2024-09-16 16:13                                       ` Konstantin Ananyev
2024-09-16 10:52                                     ` [PATCH v4 7/7] eal: keep per-lcore power intrinsics " Mattias Rönnblom
2024-09-16 16:14                                       ` Konstantin Ananyev
2024-09-12  8:44                                 ` [PATCH v3 2/7] eal: add lcore variable functional tests Mattias Rönnblom
2024-09-12  8:44                                 ` [PATCH v3 3/7] eal: add lcore variable performance test Mattias Rönnblom
2024-09-12  9:39                                   ` Morten Brørup
2024-09-12 13:01                                     ` Mattias Rönnblom
2024-09-12 13:09                                   ` Jerin Jacob
2024-09-12 13:20                                     ` Mattias Rönnblom
2024-09-12 15:11                                       ` Jerin Jacob
2024-09-13  6:47                                         ` Mattias Rönnblom
2024-09-13 11:23                                           ` Jerin Jacob
2024-09-13 14:40                                             ` Morten Brørup
2024-09-16  8:12                                               ` Jerin Jacob
2024-09-16  9:51                                                 ` Morten Brørup
2024-09-16 10:50                                             ` Mattias Rönnblom
2024-09-18 10:04                                               ` Jerin Jacob
2024-09-12  8:44                                 ` [PATCH v3 4/7] random: keep PRNG state in lcore variable Mattias Rönnblom
2024-09-12  8:44                                 ` [PATCH v3 5/7] power: keep per-lcore " Mattias Rönnblom
2024-09-12  8:44                                 ` [PATCH v3 6/7] service: " Mattias Rönnblom
2024-09-12  8:44                                 ` [PATCH v3 7/7] eal: keep per-lcore power intrinsics " Mattias Rönnblom
2024-09-12  9:10                               ` [PATCH v2 1/6] eal: add static per-lcore memory allocation facility Morten Brørup
2024-09-12 13:16                                 ` Jerin Jacob
2024-09-12 13:41                                   ` Morten Brørup
2024-09-12 15:22                                     ` Jerin Jacob
2024-09-18 10:11                                       ` Jerin Jacob
2024-09-19 19:31                                         ` Mattias Rönnblom
2024-09-11 17:04                             ` [PATCH v2 2/6] eal: add lcore variable test suite Mattias Rönnblom
2024-09-12  7:35                               ` Jerin Jacob
2024-09-12  8:56                                 ` Mattias Rönnblom
2024-09-11 17:04                             ` [PATCH v2 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
2024-09-11 17:04                             ` [PATCH v2 4/6] power: keep per-lcore " Mattias Rönnblom
2024-09-11 17:04                             ` [PATCH v2 5/6] service: " Mattias Rönnblom
2024-09-11 17:04                             ` [PATCH v2 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
2024-09-10  7:03                         ` [PATCH 2/6] eal: add lcore variable test suite Mattias Rönnblom
2024-09-10  7:03                         ` [PATCH 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
2024-09-10  7:03                         ` [PATCH 4/6] power: keep per-lcore " Mattias Rönnblom
2024-09-10  7:03                         ` [PATCH 5/6] service: " Mattias Rönnblom
2024-09-10  7:03                         ` [PATCH 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
2024-05-06  8:27                     ` [RFC v6 2/6] eal: add lcore variable test suite Mattias Rönnblom
2024-05-06  8:27                     ` [RFC v6 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
2024-05-06  8:27                     ` [RFC v6 4/6] power: keep per-lcore " Mattias Rönnblom
2024-05-06  8:27                     ` [RFC v6 5/6] service: " Mattias Rönnblom
2024-05-06  8:27                     ` [RFC v6 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
2024-09-02 14:42                     ` [RFC v6 0/6] Lcore variables Morten Brørup
2024-09-10  6:41                       ` Mattias Rönnblom
2024-09-10 15:41                         ` Stephen Hemminger
2024-02-28 10:09                 ` [RFC v5 2/6] eal: add lcore variable test suite Mattias Rönnblom
2024-02-28 10:09                 ` [RFC v5 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
2024-02-28 10:09                 ` [RFC v5 4/6] power: keep per-lcore " Mattias Rönnblom
2024-02-28 10:09                 ` [RFC v5 5/6] service: " Mattias Rönnblom
2024-02-28 10:09                 ` [RFC v5 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
2024-02-25 15:03             ` [RFC v4 2/6] eal: add lcore variable test suite Mattias Rönnblom
2024-02-25 15:03             ` [RFC v4 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
2024-02-25 15:03             ` [RFC v4 4/6] power: keep per-lcore " Mattias Rönnblom
2024-02-25 15:03             ` [RFC v4 5/6] service: " Mattias Rönnblom
2024-02-25 16:28               ` Mattias Rönnblom
2024-02-25 15:03             ` [RFC v4 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
2024-02-20  8:49         ` [RFC v3 2/6] eal: add lcore variable test suite Mattias Rönnblom
2024-02-20  8:49         ` [RFC v3 3/6] random: keep PRNG state in lcore variable Mattias Rönnblom
2024-02-20 15:31           ` Morten Brørup
2024-02-20  8:49         ` [RFC v3 4/6] power: keep per-lcore " Mattias Rönnblom
2024-02-20  8:49         ` [RFC v3 5/6] service: " Mattias Rönnblom
2024-02-22  9:42           ` Morten Brørup
2024-02-23 10:19             ` Mattias Rönnblom
2024-02-20  8:49         ` [RFC v3 6/6] eal: keep per-lcore power intrinsics " Mattias Rönnblom
2024-02-19  9:40     ` [RFC v2 2/5] eal: add lcore variable test suite Mattias Rönnblom
2024-02-19  9:40     ` [RFC v2 3/5] random: keep PRNG state in lcore variable Mattias Rönnblom
2024-02-19 11:22       ` Morten Brørup
2024-02-19 14:04         ` Mattias Rönnblom
2024-02-19 15:10           ` Morten Brørup
2024-02-19  9:40     ` [RFC v2 4/5] power: keep per-lcore " Mattias Rönnblom
2024-02-19  9:40     ` [RFC v2 5/5] service: " Mattias Rönnblom
2024-02-08 18:16 ` [RFC 2/5] eal: add lcore variable test suite Mattias Rönnblom
2024-02-08 18:16 ` [RFC 3/5] random: keep PRNG state in lcore variable Mattias Rönnblom
2024-02-08 18:16 ` [RFC 4/5] power: keep per-lcore " Mattias Rönnblom
2024-02-08 18:16 ` [RFC 5/5] service: " Mattias Rönnblom

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).