From: Paul Szczepanek <paul.szczepanek@arm.com>
To: dev@dpdk.org
Cc: bruce.richardson@intel.com,
Paul Szczepanek <paul.szczepanek@arm.com>,
Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>,
Kamalakshitha Aligeri <kamalakshitha.aligeri@arm.com>,
Nathan Brown <nathan.brown@arm.com>
Subject: [PATCH v9 2/5] ptr_compress: add pointer compression library
Date: Mon, 11 Mar 2024 14:47:03 +0000 [thread overview]
Message-ID: <20240311144706.204831-3-paul.szczepanek@arm.com> (raw)
In-Reply-To: <20240311144706.204831-1-paul.szczepanek@arm.com>
Add a new utility header for compressing pointers. The provided
functions can store pointers as 32-bit or 16-bit offsets.
The compression takes advantage of the fact that pointers are
usually located in a limited memory region (like a mempool).
We can compress them by converting them to offsets from a base
memory address. Offsets can be stored in fewer bytes (dictated
by the memory region size and the alignment of the pointer).
For example, an 8-byte-aligned pointer which is part of a 32 GB
memory pool can be stored in 4 bytes.
Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Signed-off-by: Paul Szczepanek <paul.szczepanek@arm.com>
Signed-off-by: Kamalakshitha Aligeri <kamalakshitha.aligeri@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Nathan Brown <nathan.brown@arm.com>
---
MAINTAINERS | 4 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/rel_notes/release_24_03.rst | 6 +
lib/meson.build | 1 +
lib/ptr_compress/meson.build | 4 +
lib/ptr_compress/rte_ptr_compress.h | 266 +++++++++++++++++++++++++
7 files changed, 283 insertions(+)
create mode 100644 lib/ptr_compress/meson.build
create mode 100644 lib/ptr_compress/rte_ptr_compress.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 4755a68274..6f703b1b13 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1685,6 +1685,10 @@ M: Chenbo Xia <chenbox@nvidia.com>
M: Gaetan Rivet <grive@u256.net>
F: lib/pci/
+Pointer Compression
+M: Paul Szczepanek <paul.szczepanek@arm.com>
+F: lib/ptr_compress/
+
Power management
M: Anatoly Burakov <anatoly.burakov@intel.com>
M: David Hunt <david.hunt@intel.com>
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 8c1eb8fafa..f9283154f8 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -222,6 +222,7 @@ The public API headers are grouped by topics:
[config file](@ref rte_cfgfile.h),
[key/value args](@ref rte_kvargs.h),
[argument parsing](@ref rte_argparse.h),
+ [ptr_compress](@ref rte_ptr_compress.h),
[string](@ref rte_string_fns.h),
[thread](@ref rte_thread.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 27afec8b3b..a8823c046f 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -71,6 +71,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/pipeline \
@TOPDIR@/lib/port \
@TOPDIR@/lib/power \
+ @TOPDIR@/lib/ptr_compress \
@TOPDIR@/lib/rawdev \
@TOPDIR@/lib/rcu \
@TOPDIR@/lib/regexdev \
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 932688ca4d..b82b8c5c0b 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -176,6 +176,12 @@ New Features
* Added power-saving during polling within the ``rte_event_dequeue_burst()`` API.
* Added support for DMA adapter.
+* **Introduced pointer compression library.**
+
+ The library provides functions to compress and decompress arrays of pointers,
+ which can improve application performance under certain conditions.
+ A performance test was added to help users evaluate performance on their setup.
+
Removed Items
-------------
diff --git a/lib/meson.build b/lib/meson.build
index e4e31f7ecf..fe43d137d7 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -14,6 +14,7 @@ libraries = [
'argparse',
'telemetry', # basic info querying
'eal', # everything depends on eal
+ 'ptr_compress',
'ring',
'rcu', # rcu depends on ring
'mempool',
diff --git a/lib/ptr_compress/meson.build b/lib/ptr_compress/meson.build
new file mode 100644
index 0000000000..e92706a45f
--- /dev/null
+++ b/lib/ptr_compress/meson.build
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 Arm Limited
+
+headers = files('rte_ptr_compress.h')
diff --git a/lib/ptr_compress/rte_ptr_compress.h b/lib/ptr_compress/rte_ptr_compress.h
new file mode 100644
index 0000000000..97c084003d
--- /dev/null
+++ b/lib/ptr_compress/rte_ptr_compress.h
@@ -0,0 +1,266 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Arm Limited
+ */
+
+#ifndef RTE_PTR_COMPRESS_H
+#define RTE_PTR_COMPRESS_H
+
+/**
+ * @file
+ * Pointer compression and decompression functions.
+ *
+ * When passing arrays full of pointers between threads, the memory containing
+ * the pointers is copied multiple times, which is especially costly between
+ * cores. These functions allow us to compress the pointers instead.
+ *
+ * Compression takes advantage of the fact that pointers are usually located in
+ * a limited memory region (like a mempool). We compress them by converting them
+ * to offsets from a base memory address. Offsets can be stored in fewer bytes.
+ *
+ * The compression functions come in two varieties: 32-bit and 16-bit.
+ *
+ * To determine how many bits are needed to compress the pointer, calculate
+ * the biggest offset possible (highest value pointer - base pointer)
+ * and shift the value right according to alignment (shift by exponent of the
+ * power of 2 of alignment: aligned by 4 - shift by 2, aligned by 8 - shift by
+ * 3, etc.). The resulting value must fit in either 32 or 16 bits.
+ *
+ * For usage example and further explanation please see "Pointer Compression" in
+ * doc/guides/prog_guide/ptr_compress_lib.rst
+ */
+
+#include <stdint.h>
+#include <inttypes.h>
+
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_vect.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Compress pointers into 32-bit offsets from base pointer.
+ *
+ * @note It is the programmer's responsibility to ensure the resulting offsets fit
+ * into 32 bits. Alignment of the structures pointed to by the pointers allows
+ * us to drop bits from the offsets. This is controlled by the bit_shift
+ * parameter. This means that if structures are aligned by 8 bytes they must be
+ * within 32GB of the base pointer. If there is no such alignment guarantee they
+ * must be within 4GB.
+ *
+ * @param ptr_base
+ * A pointer used to calculate offsets of pointers in src_table.
+ * @param src_table
+ * A pointer to an array of pointers.
+ * @param dest_table
+ * A pointer to an array of compressed pointers returned by this function.
+ * @param n
+ * The number of objects to compress, must be strictly positive.
+ * @param bit_shift
+ * Byte alignment of memory pointed to by the pointers allows for
+ * bits to be dropped from the offset and hence widen the memory region that
+ * can be covered. This controls how many bits are right shifted.
+ **/
+static __rte_always_inline void
+rte_ptr_compress_32(void *ptr_base, void **src_table,
+ uint32_t *dest_table, size_t n, uint8_t bit_shift)
+{
+ unsigned int i = 0;
+#if defined RTE_HAS_SVE_ACLE && !defined RTE_ARCH_ARMv8_AARCH32
+ svuint64_t v_ptr_table;
+ svbool_t pg = svwhilelt_b64(i, n);
+ do {
+ v_ptr_table = svld1_u64(pg, (uint64_t *)src_table + i);
+ v_ptr_table = svsub_x(pg, v_ptr_table, (uint64_t)ptr_base);
+ v_ptr_table = svlsr_x(pg, v_ptr_table, bit_shift);
+ svst1w(pg, &dest_table[i], v_ptr_table);
+ i += svcntd();
+ pg = svwhilelt_b64(i, n);
+ } while (i < n);
+#elif defined __ARM_NEON && !defined RTE_ARCH_ARMv8_AARCH32
+ uint64_t ptr_diff;
+ uint64x2_t v_ptr_table;
+ /* right shift is done by left shifting by negative int */
+ int64x2_t v_shift = vdupq_n_s64(-bit_shift);
+ uint64x2_t v_ptr_base = vdupq_n_u64((uint64_t)ptr_base);
+ for (; i < (n & ~0x1); i += 2) {
+ v_ptr_table = vld1q_u64((const uint64_t *)src_table + i);
+ v_ptr_table = vsubq_u64(v_ptr_table, v_ptr_base);
+ v_ptr_table = vshlq_u64(v_ptr_table, v_shift);
+ vst1_u32(dest_table + i, vqmovn_u64(v_ptr_table));
+ }
+ /* process leftover single item when n is odd */
+ if (unlikely(n & 0x1)) {
+ ptr_diff = RTE_PTR_DIFF(src_table[i], ptr_base);
+ dest_table[i] = (uint32_t) (ptr_diff >> bit_shift);
+ }
+#else
+ uintptr_t ptr_diff;
+ for (; i < n; i++) {
+ ptr_diff = RTE_PTR_DIFF(src_table[i], ptr_base);
+ ptr_diff = ptr_diff >> bit_shift;
+ RTE_ASSERT(ptr_diff <= UINT32_MAX);
+ dest_table[i] = (uint32_t) ptr_diff;
+ }
+#endif
+}
+
+/**
+ * Decompress pointers from 32-bit offsets from base pointer.
+ *
+ * @param ptr_base
+ * A pointer which was used to calculate offsets in src_table.
+ * @param src_table
+ * A pointer to an array of compressed pointers.
+ * @param dest_table
+ * A pointer to an array of decompressed pointers returned by this function.
+ * @param n
+ * The number of objects to decompress, must be strictly positive.
+ * @param bit_shift
+ * Byte alignment of memory pointed to by the pointers allows for
+ * bits to be dropped from the offset and hence widen the memory region that
+ * can be covered. This controls how many bits are left shifted when pointers
+ * are recovered from the offsets.
+ **/
+static __rte_always_inline void
+rte_ptr_decompress_32(void *ptr_base, uint32_t *src_table,
+ void **dest_table, size_t n, uint8_t bit_shift)
+{
+ unsigned int i = 0;
+#if defined RTE_HAS_SVE_ACLE && !defined RTE_ARCH_ARMv8_AARCH32
+ svuint64_t v_ptr_table;
+ svbool_t pg = svwhilelt_b64(i, n);
+ do {
+ v_ptr_table = svld1uw_u64(pg, &src_table[i]);
+ v_ptr_table = svlsl_x(pg, v_ptr_table, bit_shift);
+ v_ptr_table = svadd_x(pg, v_ptr_table, (uint64_t)ptr_base);
+ svst1(pg, (uint64_t *)dest_table + i, v_ptr_table);
+ i += svcntd();
+ pg = svwhilelt_b64(i, n);
+ } while (i < n);
+#elif defined __ARM_NEON && !defined RTE_ARCH_ARMv8_AARCH32
+ uint64_t ptr_diff;
+ uint64x2_t v_ptr_table;
+ int64x2_t v_shift = vdupq_n_s64(bit_shift);
+ uint64x2_t v_ptr_base = vdupq_n_u64((uint64_t)ptr_base);
+ for (; i < (n & ~0x1); i += 2) {
+ v_ptr_table = vmovl_u32(vld1_u32(src_table + i));
+ v_ptr_table = vshlq_u64(v_ptr_table, v_shift);
+ v_ptr_table = vaddq_u64(v_ptr_table, v_ptr_base);
+ vst1q_u64((uint64_t *)dest_table + i, v_ptr_table);
+ }
+ /* process leftover single item when n is odd */
+ if (unlikely(n & 0x1)) {
+ ptr_diff = ((uint64_t) src_table[i]) << bit_shift;
+ dest_table[i] = RTE_PTR_ADD(ptr_base, ptr_diff);
+ }
+#else
+ uintptr_t ptr_diff;
+ for (; i < n; i++) {
+ ptr_diff = ((uintptr_t) src_table[i]) << bit_shift;
+ dest_table[i] = RTE_PTR_ADD(ptr_base, ptr_diff);
+ }
+#endif
+}
+
+/**
+ * Compress pointers into 16-bit offsets from base pointer.
+ *
+ * @note It is the programmer's responsibility to ensure the resulting offsets fit
+ * into 16 bits. Alignment of the structures pointed to by the pointers allows
+ * us to drop bits from the offsets. This is controlled by the bit_shift
+ * parameter. This means that if structures are aligned by 8 bytes they must be
+ * within 256KB of the base pointer. If there is no such alignment guarantee
+ * they must be within 64KB.
+ *
+ * @param ptr_base
+ * A pointer used to calculate offsets of pointers in src_table.
+ * @param src_table
+ * A pointer to an array of pointers.
+ * @param dest_table
+ * A pointer to an array of compressed pointers returned by this function.
+ * @param n
+ * The number of objects to compress, must be strictly positive.
+ * @param bit_shift
+ * Byte alignment of memory pointed to by the pointers allows for
+ * bits to be dropped from the offset and hence widen the memory region that
+ * can be covered. This controls how many bits are right shifted.
+ **/
+static __rte_always_inline void
+rte_ptr_compress_16(void *ptr_base, void **src_table,
+ uint16_t *dest_table, size_t n, uint8_t bit_shift)
+{
+
+ unsigned int i = 0;
+#if defined RTE_HAS_SVE_ACLE && !defined RTE_ARCH_ARMv8_AARCH32
+ svuint64_t v_ptr_table;
+ svbool_t pg = svwhilelt_b64(i, n);
+ do {
+ v_ptr_table = svld1_u64(pg, (uint64_t *)src_table + i);
+ v_ptr_table = svsub_x(pg, v_ptr_table, (uint64_t)ptr_base);
+ v_ptr_table = svlsr_x(pg, v_ptr_table, bit_shift);
+ svst1h(pg, &dest_table[i], v_ptr_table);
+ i += svcntd();
+ pg = svwhilelt_b64(i, n);
+ } while (i < n);
+#else
+ uintptr_t ptr_diff;
+ for (; i < n; i++) {
+ ptr_diff = RTE_PTR_DIFF(src_table[i], ptr_base);
+ ptr_diff = ptr_diff >> bit_shift;
+ RTE_ASSERT(ptr_diff <= UINT16_MAX);
+ dest_table[i] = (uint16_t) ptr_diff;
+ }
+#endif
+}
+
+/**
+ * Decompress pointers from 16-bit offsets from base pointer.
+ *
+ * @param ptr_base
+ * A pointer which was used to calculate offsets in src_table.
+ * @param src_table
+ * A pointer to an array of compressed pointers.
+ * @param dest_table
+ * A pointer to an array of decompressed pointers returned by this function.
+ * @param n
+ * The number of objects to decompress, must be strictly positive.
+ * @param bit_shift
+ * Byte alignment of memory pointed to by the pointers allows for
+ * bits to be dropped from the offset and hence widen the memory region that
+ * can be covered. This controls how many bits are left shifted when pointers
+ * are recovered from the offsets.
+ **/
+static __rte_always_inline void
+rte_ptr_decompress_16(void *ptr_base, uint16_t *src_table,
+ void **dest_table, size_t n, uint8_t bit_shift)
+{
+ unsigned int i = 0;
+#if defined RTE_HAS_SVE_ACLE && !defined RTE_ARCH_ARMv8_AARCH32
+ svuint64_t v_ptr_table;
+ svbool_t pg = svwhilelt_b64(i, n);
+ do {
+ v_ptr_table = svld1uh_u64(pg, &src_table[i]);
+ v_ptr_table = svlsl_x(pg, v_ptr_table, bit_shift);
+ v_ptr_table = svadd_x(pg, v_ptr_table, (uint64_t)ptr_base);
+ svst1(pg, (uint64_t *)dest_table + i, v_ptr_table);
+ i += svcntd();
+ pg = svwhilelt_b64(i, n);
+ } while (i < n);
+#else
+ uintptr_t ptr_diff;
+ for (; i < n; i++) {
+ ptr_diff = ((uintptr_t) src_table[i]) << bit_shift;
+ dest_table[i] = RTE_PTR_ADD(ptr_base, ptr_diff);
+ }
+#endif
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* RTE_PTR_COMPRESS_H */
--
2.25.1