From: Dmitry Kozlyuk <dkozlyuk@oss.nvidia.com>
To: <dev@dpdk.org>
Cc: Matan Azrad <matan@oss.nvidia.com>,
Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
Olivier Matz <olivier.matz@6wind.com>,
"Ray Kinsella" <mdr@ashroe.eu>
Subject: [dpdk-dev] [PATCH v8 1/4] mempool: add event callbacks
Date: Mon, 18 Oct 2021 17:40:55 +0300
Message-ID: <20211018144059.3303406-2-dkozlyuk@nvidia.com>
In-Reply-To: <20211018144059.3303406-1-dkozlyuk@nvidia.com>

Data path performance can benefit if the PMD knows in advance, before
the first mbuf is sent to it, which memory it will need to handle.
It is impractical, however, to consider all allocated memory for this
purpose. Most often mbuf memory comes from mempools that can come and
go. A PMD can enumerate existing mempools at device start, but it also
needs to track the creation and destruction of mempools after
forwarding starts but before an mbuf from a new mempool is sent to the
device.

Add an API to register callbacks for mempool life cycle events:
* rte_mempool_event_callback_register()
* rte_mempool_event_callback_unregister()
Currently tracked events are:
* RTE_MEMPOOL_EVENT_READY (after populating a mempool)
* RTE_MEMPOOL_EVENT_DESTROY (before freeing a mempool)

Provide a unit test for the new API.

The new API is internal because it is primarily demanded by PMDs,
which may need to deal with any mempool and do not control mempool
creation; an application, on the other hand, knows which mempools it
creates and does not care about the internal mempools that PMDs might
create.
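
A minimal usage sketch from a PMD's point of view (the my_pmd_* names
and the private-data handling are hypothetical; only the callback
typedef, the register/unregister calls and the event values come from
this patch):

    static void
    my_pmd_mempool_event_cb(enum rte_mempool_event event,
                            struct rte_mempool *mp, void *user_data)
    {
            struct my_pmd_priv *priv = user_data; /* hypothetical */

            if (event == RTE_MEMPOOL_EVENT_READY)
                    my_pmd_mempool_register(priv, mp);   /* hypothetical */
            else if (event == RTE_MEMPOOL_EVENT_DESTROY)
                    my_pmd_mempool_unregister(priv, mp); /* hypothetical */
    }

    /* On device start: enumerate existing mempools (e.g. with
     * rte_mempool_walk()) and register for future events.
     */
    ret = rte_mempool_event_callback_register(my_pmd_mempool_event_cb,
                                              priv);
    if (ret != 0 && rte_errno != EEXIST)
            return -rte_errno;

    /* On device stop/close: */
    rte_mempool_event_callback_unregister(my_pmd_mempool_event_cb, priv);

The callback runs in the process that performs the triggering action,
so a PMD registered in multiple processes must be prepared for calls
from any of them.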
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
app/test/test_mempool.c | 248 ++++++++++++++++++++++++++++++++++++++
lib/mempool/rte_mempool.c | 124 +++++++++++++++++++
lib/mempool/rte_mempool.h | 62 ++++++++++
lib/mempool/version.map | 8 ++
4 files changed, 442 insertions(+)
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 66bc8d86b7..c39c83256e 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -14,6 +14,7 @@
#include <rte_common.h>
#include <rte_log.h>
#include <rte_debug.h>
+#include <rte_errno.h>
#include <rte_memory.h>
#include <rte_launch.h>
#include <rte_cycles.h>
@@ -489,6 +490,245 @@ test_mp_mem_init(struct rte_mempool *mp,
data->ret = 0;
}
+struct test_mempool_events_data {
+ struct rte_mempool *mp;
+ enum rte_mempool_event event;
+ bool invoked;
+};
+
+static void
+test_mempool_events_cb(enum rte_mempool_event event,
+ struct rte_mempool *mp, void *user_data)
+{
+ struct test_mempool_events_data *data = user_data;
+
+ data->mp = mp;
+ data->event = event;
+ data->invoked = true;
+}
+
+static int
+test_mempool_events(int (*populate)(struct rte_mempool *mp))
+{
+#pragma push_macro("RTE_TEST_TRACE_FAILURE")
+#undef RTE_TEST_TRACE_FAILURE
+#define RTE_TEST_TRACE_FAILURE(...) do { goto fail; } while (0)
+
+ static const size_t CB_NUM = 3;
+ static const size_t MP_NUM = 2;
+
+ struct test_mempool_events_data data[CB_NUM];
+ struct rte_mempool *mp[MP_NUM], *freed;
+ char name[RTE_MEMPOOL_NAMESIZE];
+ size_t i, j;
+ int ret;
+
+ memset(mp, 0, sizeof(mp));
+ for (i = 0; i < CB_NUM; i++) {
+ ret = rte_mempool_event_callback_register
+ (test_mempool_events_cb, &data[i]);
+ RTE_TEST_ASSERT_EQUAL(ret, 0, "Failed to register the callback %zu: %s",
+ i, rte_strerror(rte_errno));
+ }
+ ret = rte_mempool_event_callback_unregister(test_mempool_events_cb, mp);
+ RTE_TEST_ASSERT_NOT_EQUAL(ret, 0, "Unregistered a non-registered callback");
+ /* NULL argument has no special meaning in this API. */
+ ret = rte_mempool_event_callback_unregister(test_mempool_events_cb,
+ NULL);
+ RTE_TEST_ASSERT_NOT_EQUAL(ret, 0, "Unregistered a non-registered callback with NULL argument");
+
+ /* Create mempool 0 that will be observed by all callbacks. */
+ memset(&data, 0, sizeof(data));
+ strcpy(name, "empty0");
+ mp[0] = rte_mempool_create_empty(name, MEMPOOL_SIZE,
+ MEMPOOL_ELT_SIZE, 0, 0,
+ SOCKET_ID_ANY, 0);
+ RTE_TEST_ASSERT_NOT_NULL(mp[0], "Cannot create mempool %s: %s",
+ name, rte_strerror(rte_errno));
+ for (j = 0; j < CB_NUM; j++)
+ RTE_TEST_ASSERT_EQUAL(data[j].invoked, false,
+ "Callback %zu invoked on %s mempool creation",
+ j, name);
+
+ rte_mempool_set_ops_byname(mp[0], rte_mbuf_best_mempool_ops(), NULL);
+ ret = populate(mp[0]);
+ RTE_TEST_ASSERT_EQUAL(ret, (int)mp[0]->size, "Failed to populate mempool %s: %s",
+ name, rte_strerror(rte_errno));
+ for (j = 0; j < CB_NUM; j++) {
+ RTE_TEST_ASSERT_EQUAL(data[j].invoked, true,
+ "Callback %zu not invoked on mempool %s population",
+ j, name);
+ RTE_TEST_ASSERT_EQUAL(data[j].event,
+ RTE_MEMPOOL_EVENT_READY,
+ "Wrong callback invoked, expected READY");
+ RTE_TEST_ASSERT_EQUAL(data[j].mp, mp[0],
+ "Callback %zu invoked for a wrong mempool instead of %s",
+ j, name);
+ }
+
+ /* Check that unregistered callback 0 observes no events. */
+ ret = rte_mempool_event_callback_unregister(test_mempool_events_cb,
+ &data[0]);
+ RTE_TEST_ASSERT_EQUAL(ret, 0, "Failed to unregister callback 0: %s",
+ rte_strerror(rte_errno));
+ memset(&data, 0, sizeof(data));
+ strcpy(name, "empty1");
+ mp[1] = rte_mempool_create_empty(name, MEMPOOL_SIZE,
+ MEMPOOL_ELT_SIZE, 0, 0,
+ SOCKET_ID_ANY, 0);
+ RTE_TEST_ASSERT_NOT_NULL(mp[1], "Cannot create mempool %s: %s",
+ name, rte_strerror(rte_errno));
+ rte_mempool_set_ops_byname(mp[1], rte_mbuf_best_mempool_ops(), NULL);
+ ret = populate(mp[1]);
+ RTE_TEST_ASSERT_EQUAL(ret, (int)mp[1]->size, "Failed to populate mempool %s: %s",
+ name, rte_strerror(rte_errno));
+ RTE_TEST_ASSERT_EQUAL(data[0].invoked, false,
+ "Unregistered callback 0 invoked on %s mempool populaton",
+ name);
+
+ for (i = 0; i < MP_NUM; i++) {
+ memset(&data, 0, sizeof(data));
+ sprintf(name, "empty%zu", i);
+ rte_mempool_free(mp[i]);
+ /*
+ * Save pointer to check that it was passed to the callback,
+ * but put NULL into the array in case cleanup is called early.
+ */
+ freed = mp[i];
+ mp[i] = NULL;
+ for (j = 1; j < CB_NUM; j++) {
+ RTE_TEST_ASSERT_EQUAL(data[j].invoked, true,
+ "Callback %zu not invoked on mempool %s destruction",
+ j, name);
+ RTE_TEST_ASSERT_EQUAL(data[j].event,
+ RTE_MEMPOOL_EVENT_DESTROY,
+ "Wrong callback invoked, expected DESTROY");
+ RTE_TEST_ASSERT_EQUAL(data[j].mp, freed,
+ "Callback %zu invoked for a wrong mempool instead of %s",
+ j, name);
+ }
+ RTE_TEST_ASSERT_EQUAL(data[0].invoked, false,
+ "Unregistered callback 0 invoked on %s mempool destruction",
+ name);
+ }
+
+ for (j = 1; j < CB_NUM; j++) {
+ ret = rte_mempool_event_callback_unregister
+ (test_mempool_events_cb, &data[j]);
+ RTE_TEST_ASSERT_EQUAL(ret, 0, "Failed to unregister the callback %zu: %s",
+ j, rte_strerror(rte_errno));
+ }
+ return TEST_SUCCESS;
+
+fail:
+ for (j = 0; j < CB_NUM; j++)
+ rte_mempool_event_callback_unregister
+ (test_mempool_events_cb, &data[j]);
+ for (i = 0; i < MP_NUM; i++)
+ rte_mempool_free(mp[i]);
+ return TEST_FAILED;
+
+#pragma pop_macro("RTE_TEST_TRACE_FAILURE")
+}
+
+struct test_mempool_events_safety_data {
+ bool invoked;
+ int (*api_func)(rte_mempool_event_callback *func, void *user_data);
+ rte_mempool_event_callback *cb_func;
+ void *cb_user_data;
+ int ret;
+};
+
+static void
+test_mempool_events_safety_cb(enum rte_mempool_event event,
+ struct rte_mempool *mp, void *user_data)
+{
+ struct test_mempool_events_safety_data *data = user_data;
+
+ RTE_SET_USED(event);
+ RTE_SET_USED(mp);
+ data->invoked = true;
+ data->ret = data->api_func(data->cb_func, data->cb_user_data);
+}
+
+static int
+test_mempool_events_safety(void)
+{
+#pragma push_macro("RTE_TEST_TRACE_FAILURE")
+#undef RTE_TEST_TRACE_FAILURE
+#define RTE_TEST_TRACE_FAILURE(...) do { \
+ ret = TEST_FAILED; \
+ goto exit; \
+ } while (0)
+
+ struct test_mempool_events_data data;
+ struct test_mempool_events_safety_data sdata[2];
+ struct rte_mempool *mp;
+ size_t i;
+ int ret;
+
+ /* removes itself */
+ sdata[0].api_func = rte_mempool_event_callback_unregister;
+ sdata[0].cb_func = test_mempool_events_safety_cb;
+ sdata[0].cb_user_data = &sdata[0];
+ sdata[0].ret = -1;
+ rte_mempool_event_callback_register(test_mempool_events_safety_cb,
+ &sdata[0]);
+ /* inserts a callback after itself */
+ sdata[1].api_func = rte_mempool_event_callback_register;
+ sdata[1].cb_func = test_mempool_events_cb;
+ sdata[1].cb_user_data = &data;
+ sdata[1].ret = -1;
+ rte_mempool_event_callback_register(test_mempool_events_safety_cb,
+ &sdata[1]);
+
+ mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+ MEMPOOL_ELT_SIZE, 0, 0,
+ SOCKET_ID_ANY, 0);
+ RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+ rte_strerror(rte_errno));
+ memset(&data, 0, sizeof(data));
+ ret = rte_mempool_populate_default(mp);
+ RTE_TEST_ASSERT_EQUAL(ret, (int)mp->size, "Failed to populate mempool: %s",
+ rte_strerror(rte_errno));
+
+ RTE_TEST_ASSERT_EQUAL(sdata[0].ret, 0, "Callback failed to unregister itself: %s",
+ rte_strerror(rte_errno));
+ RTE_TEST_ASSERT_EQUAL(sdata[1].ret, 0, "Failed to insert a new callback: %s",
+ rte_strerror(rte_errno));
+ RTE_TEST_ASSERT_EQUAL(data.invoked, false,
+ "Inserted callback is invoked on mempool population");
+
+ memset(&data, 0, sizeof(data));
+ sdata[0].invoked = false;
+ rte_mempool_free(mp);
+ mp = NULL;
+ RTE_TEST_ASSERT_EQUAL(sdata[0].invoked, false,
+ "Callback that unregistered itself was called");
+ RTE_TEST_ASSERT_EQUAL(sdata[1].ret, -EEXIST,
+ "New callback inserted twice");
+ RTE_TEST_ASSERT_EQUAL(data.invoked, true,
+ "Inserted callback is not invoked on mempool destruction");
+
+ rte_mempool_event_callback_unregister(test_mempool_events_cb, &data);
+ for (i = 0; i < RTE_DIM(sdata); i++)
+ rte_mempool_event_callback_unregister
+ (test_mempool_events_safety_cb, &sdata[i]);
+ ret = TEST_SUCCESS;
+
+exit:
+ /* cleanup, don't care which callbacks are already removed */
+ rte_mempool_event_callback_unregister(test_mempool_events_cb, &data);
+ for (i = 0; i < RTE_DIM(sdata); i++)
+ rte_mempool_event_callback_unregister
+ (test_mempool_events_safety_cb, &sdata[i]);
+ /* in case of failure before the planned destruction */
+ rte_mempool_free(mp);
+ return ret;
+
+#pragma pop_macro("RTE_TEST_TRACE_FAILURE")
+}
+
static int
test_mempool(void)
{
@@ -666,6 +906,14 @@ test_mempool(void)
if (test_mempool_basic(default_pool, 1) < 0)
GOTO_ERR(ret, err);
+ /* test mempool event callbacks */
+ if (test_mempool_events(rte_mempool_populate_default) < 0)
+ GOTO_ERR(ret, err);
+ if (test_mempool_events(rte_mempool_populate_anon) < 0)
+ GOTO_ERR(ret, err);
+ if (test_mempool_events_safety() < 0)
+ GOTO_ERR(ret, err);
+
rte_mempool_list_dump(stdout);
ret = 0;
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 607419ccaf..8810d08ab5 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -42,6 +42,18 @@ static struct rte_tailq_elem rte_mempool_tailq = {
};
EAL_REGISTER_TAILQ(rte_mempool_tailq)
+TAILQ_HEAD(mempool_callback_list, rte_tailq_entry);
+
+static struct rte_tailq_elem callback_tailq = {
+ .name = "RTE_MEMPOOL_CALLBACK",
+};
+EAL_REGISTER_TAILQ(callback_tailq)
+
+/* Invoke all registered mempool event callbacks. */
+static void
+mempool_event_callback_invoke(enum rte_mempool_event event,
+ struct rte_mempool *mp);
+
#define CACHE_FLUSHTHRESH_MULTIPLIER 1.5
#define CALC_CACHE_FLUSHTHRESH(c) \
((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))
@@ -360,6 +372,10 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
mp->nb_mem_chunks++;
+ /* Report the mempool as ready only when fully populated. */
+ if (mp->populated_size >= mp->size)
+ mempool_event_callback_invoke(RTE_MEMPOOL_EVENT_READY, mp);
+
rte_mempool_trace_populate_iova(mp, vaddr, iova, len, free_cb, opaque);
return i;
@@ -722,6 +738,7 @@ rte_mempool_free(struct rte_mempool *mp)
}
rte_mcfg_tailq_write_unlock();
+ mempool_event_callback_invoke(RTE_MEMPOOL_EVENT_DESTROY, mp);
rte_mempool_trace_free(mp);
rte_mempool_free_memchunks(mp);
rte_mempool_ops_free(mp);
@@ -1356,3 +1373,110 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *),
rte_mcfg_mempool_read_unlock();
}
+
+struct mempool_callback_data {
+ rte_mempool_event_callback *func;
+ void *user_data;
+};
+
+static void
+mempool_event_callback_invoke(enum rte_mempool_event event,
+ struct rte_mempool *mp)
+{
+ struct mempool_callback_list *list;
+ struct rte_tailq_entry *te;
+ void *tmp_te;
+
+ rte_mcfg_tailq_read_lock();
+ list = RTE_TAILQ_CAST(callback_tailq.head, mempool_callback_list);
+ RTE_TAILQ_FOREACH_SAFE(te, list, next, tmp_te) {
+ struct mempool_callback_data *cb = te->data;
+ rte_mcfg_tailq_read_unlock();
+ cb->func(event, mp, cb->user_data);
+ rte_mcfg_tailq_read_lock();
+ }
+ rte_mcfg_tailq_read_unlock();
+}
+
+int
+rte_mempool_event_callback_register(rte_mempool_event_callback *func,
+ void *user_data)
+{
+ struct mempool_callback_list *list;
+ struct rte_tailq_entry *te = NULL;
+ struct mempool_callback_data *cb;
+ void *tmp_te;
+ int ret;
+
+ if (func == NULL) {
+ rte_errno = EINVAL;
+ return -rte_errno;
+ }
+
+ rte_mcfg_tailq_write_lock();
+ list = RTE_TAILQ_CAST(callback_tailq.head, mempool_callback_list);
+ RTE_TAILQ_FOREACH_SAFE(te, list, next, tmp_te) {
+ cb = te->data;
+ if (cb->func == func && cb->user_data == user_data) {
+ ret = -EEXIST;
+ goto exit;
+ }
+ }
+
+ te = rte_zmalloc("mempool_cb_tail_entry", sizeof(*te), 0);
+ if (te == NULL) {
+ RTE_LOG(ERR, MEMPOOL,
+ "Cannot allocate event callback tailq entry!\n");
+ ret = -ENOMEM;
+ goto exit;
+ }
+
+ cb = rte_malloc("mempool_cb_data", sizeof(*cb), 0);
+ if (cb == NULL) {
+ RTE_LOG(ERR, MEMPOOL,
+ "Cannot allocate event callback!\n");
+ rte_free(te);
+ ret = -ENOMEM;
+ goto exit;
+ }
+
+ cb->func = func;
+ cb->user_data = user_data;
+ te->data = cb;
+ TAILQ_INSERT_TAIL(list, te, next);
+ ret = 0;
+
+exit:
+ rte_mcfg_tailq_write_unlock();
+ rte_errno = -ret;
+ return ret;
+}
+
+int
+rte_mempool_event_callback_unregister(rte_mempool_event_callback *func,
+ void *user_data)
+{
+ struct mempool_callback_list *list;
+ struct rte_tailq_entry *te = NULL;
+ struct mempool_callback_data *cb;
+ int ret = -ENOENT;
+
+ rte_mcfg_tailq_write_lock();
+ list = RTE_TAILQ_CAST(callback_tailq.head, mempool_callback_list);
+ TAILQ_FOREACH(te, list, next) {
+ cb = te->data;
+ if (cb->func == func && cb->user_data == user_data) {
+ TAILQ_REMOVE(list, te, next);
+ ret = 0;
+ break;
+ }
+ }
+ rte_mcfg_tailq_write_unlock();
+
+ if (ret == 0) {
+ rte_free(te);
+ rte_free(cb);
+ }
+ rte_errno = -ret;
+ return ret;
+}
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 88bcbc51ef..5799d4a705 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -1769,6 +1769,68 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *arg),
int
rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz);
+/**
+ * Mempool event type.
+ * @internal
+ */
+enum rte_mempool_event {
+ /** Occurs after a mempool is fully populated. */
+ RTE_MEMPOOL_EVENT_READY = 0,
+ /** Occurs before the destruction of a mempool begins. */
+ RTE_MEMPOOL_EVENT_DESTROY = 1,
+};
+
+/**
+ * @internal
+ * Mempool event callback.
+ *
+ * rte_mempool_event_callback_register() may be called from within the callback,
+ * but the callbacks registered this way will not be invoked for the same event.
+ * rte_mempool_event_callback_unregister() may only be safely called
+ * to remove the running callback.
+ */
+typedef void (rte_mempool_event_callback)(
+ enum rte_mempool_event event,
+ struct rte_mempool *mp,
+ void *user_data);
+
+/**
+ * @internal
+ * Register a callback function to be invoked on mempool life cycle events.
+ * The function will be invoked in the process
+ * that performs an action which triggers the callback.
+ *
+ * @param func
+ * Callback function.
+ * @param user_data
+ * User data.
+ *
+ * @return
+ * 0 on success, negative on failure and rte_errno is set.
+ */
+__rte_internal
+int
+rte_mempool_event_callback_register(rte_mempool_event_callback *func,
+ void *user_data);
+
+/**
+ * @internal
+ * Unregister a callback added with rte_mempool_event_callback_register().
+ * @p func and @p user_data must exactly match registration parameters.
+ *
+ * @param func
+ * Callback function.
+ * @param user_data
+ * User data.
+ *
+ * @return
+ * 0 on success, negative on failure and rte_errno is set.
+ */
+__rte_internal
+int
+rte_mempool_event_callback_unregister(rte_mempool_event_callback *func,
+ void *user_data);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/mempool/version.map b/lib/mempool/version.map
index 9f77da6fff..1b7d7c5456 100644
--- a/lib/mempool/version.map
+++ b/lib/mempool/version.map
@@ -64,3 +64,11 @@ EXPERIMENTAL {
__rte_mempool_trace_ops_free;
__rte_mempool_trace_set_ops_byname;
};
+
+INTERNAL {
+ global:
+
+ # added in 21.11
+ rte_mempool_event_callback_register;
+ rte_mempool_event_callback_unregister;
+};
--
2.25.1