* [dpdk-dev] [PATCH v3 0/3] mempool: user-owned mempool caches
@ 2016-06-16 11:02 Lazaros Koromilas
2016-06-16 11:02 ` [dpdk-dev] [PATCH v3 1/3] mempool: deprecate specific get/put functions Lazaros Koromilas
From: Lazaros Koromilas @ 2016-06-16 11:02 UTC (permalink / raw)
To: dev; +Cc: Olivier Matz, Konstantin Ananyev, David Hunt
Updated version of the user-owned cache patchset. It applies on top of
the latest external mempool manager patches from David Hunt [1].
[1] http://dpdk.org/ml/archives/dev/2016-June/041479.html
v3 changes:
* Deprecate specific mempool API calls instead of removing them.
* Split deprecation into a separate commit to limit noise.
* Fix cache flush by setting cache->len = 0 and make it inline.
* Remove cache->size == 0 checks and ensure size != 0 at creation.
* Fix tests to check if cache creation succeeded.
* Fix tests to free allocated resources on error.
The mempool cache is only available to EAL threads as a per-lcore
resource. Change this so that the user can create and provide their own
cache on mempool get and put operations. This works with non-EAL threads
too.
Also, deprecate the explicit {mp,sp}_put and {mc,sc}_get calls and
re-route them through the new generic calls. Minor cleanup to pass the
mempool bit flags instead of using specific is_mp and is_mc. The old
cache-oblivious API calls use the per-lcore default local cache. The
mempool and mempool_perf tests are also updated to handle the
user-owned cache case.
Introduced API calls (a short usage sketch follows the lists below):
rte_mempool_cache_create(size, socket_id)
rte_mempool_cache_free(cache)
rte_mempool_cache_flush(cache, mp)
rte_mempool_default_cache(mp, lcore_id)
rte_mempool_generic_put(mp, obj_table, n, cache, flags)
rte_mempool_generic_get(mp, obj_table, n, cache, flags)
Deprecated API calls:
rte_mempool_mp_put_bulk(mp, obj_table, n)
rte_mempool_sp_put_bulk(mp, obj_table, n)
rte_mempool_mp_put(mp, obj)
rte_mempool_sp_put(mp, obj)
rte_mempool_mc_get_bulk(mp, obj_table, n)
rte_mempool_sc_get_bulk(mp, obj_table, n)
rte_mempool_mc_get(mp, obj_p)
rte_mempool_sc_get(mp, obj_p)
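For illustration, a minimal sketch of the resulting API as used from a
non-EAL thread (assuming an existing mempool "mp"; error handling
abbreviated):

    struct rte_mempool_cache *cache;
    void *objs[32];

    cache = rte_mempool_cache_create(32, SOCKET_ID_ANY);
    if (cache == NULL)
        rte_exit(EXIT_FAILURE, "cannot create mempool cache\n");

    /* flags == 0 selects the multi-producer/multi-consumer path */
    if (rte_mempool_generic_get(mp, objs, 32, cache, 0) == 0)
        rte_mempool_generic_put(mp, objs, 32, cache, 0);

    /* return any objects still held by the cache, then free it */
    rte_mempool_cache_flush(cache, mp);
    rte_mempool_cache_free(cache);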
Lazaros Koromilas (3):
mempool: deprecate specific get/put functions
mempool: use bit flags instead of is_mp and is_mc
mempool: allow for user-owned mempool caches
app/test/test_mempool.c | 104 +++++++++++-----
app/test/test_mempool_perf.c | 70 +++++++++--
lib/librte_mempool/rte_mempool.c | 66 +++++++++-
lib/librte_mempool/rte_mempool.h | 256 +++++++++++++++++++++++++++++----------
4 files changed, 385 insertions(+), 111 deletions(-)
--
1.9.1
* [dpdk-dev] [PATCH v3 1/3] mempool: deprecate specific get/put functions
2016-06-16 11:02 [dpdk-dev] [PATCH v3 0/3] mempool: user-owned mempool caches Lazaros Koromilas
@ 2016-06-16 11:02 ` Lazaros Koromilas
2016-06-16 11:02 ` [dpdk-dev] [PATCH v3 2/3] mempool: use bit flags instead of is_mp and is_mc Lazaros Koromilas
From: Lazaros Koromilas @ 2016-06-16 11:02 UTC (permalink / raw)
To: dev; +Cc: Olivier Matz, Konstantin Ananyev, David Hunt
This commit introduces the API calls:
rte_mempool_generic_put(mp, obj_table, n, is_mp)
rte_mempool_generic_get(mp, obj_table, n, is_mc)
Deprecates the API calls:
rte_mempool_mp_put_bulk(mp, obj_table, n)
rte_mempool_sp_put_bulk(mp, obj_table, n)
rte_mempool_mp_put(mp, obj)
rte_mempool_sp_put(mp, obj)
rte_mempool_mc_get_bulk(mp, obj_table, n)
rte_mempool_sc_get_bulk(mp, obj_table, n)
rte_mempool_mc_get(mp, obj_p)
rte_mempool_sc_get(mp, obj_p)
We also check cookies in one place now.
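For callers, the migration at this stage of the series looks as follows
(a sketch; mp, objs and n are assumed to exist, and is_mp/is_mc are
plain 0/1 integers):

    /* before (now deprecated) */
    rte_mempool_mp_put_bulk(mp, objs, n);
    ret = rte_mempool_mc_get_bulk(mp, objs, n);

    /* after */
    rte_mempool_generic_put(mp, objs, n, 1);        /* is_mp == 1 */
    ret = rte_mempool_generic_get(mp, objs, n, 1);  /* is_mc == 1 */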
Signed-off-by: Lazaros Koromilas <l@nofutznetworks.com>
---
app/test/test_mempool.c | 10 ++--
lib/librte_mempool/rte_mempool.h | 115 +++++++++++++++++++++++++++------------
2 files changed, 85 insertions(+), 40 deletions(-)
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index bcf379b..10d706f 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -338,7 +338,7 @@ static int test_mempool_single_producer(void)
printf("obj not owned by this mempool\n");
RET_ERR();
}
- rte_mempool_sp_put(mp_spsc, obj);
+ rte_mempool_put(mp_spsc, obj);
rte_spinlock_lock(&scsp_spinlock);
scsp_obj_table[i] = NULL;
rte_spinlock_unlock(&scsp_spinlock);
@@ -371,7 +371,7 @@ static int test_mempool_single_consumer(void)
rte_spinlock_unlock(&scsp_spinlock);
if (i >= MAX_KEEP)
continue;
- if (rte_mempool_sc_get(mp_spsc, &obj) < 0)
+ if (rte_mempool_get(mp_spsc, &obj) < 0)
break;
rte_spinlock_lock(&scsp_spinlock);
scsp_obj_table[i] = obj;
@@ -477,13 +477,13 @@ test_mempool_basic_ex(struct rte_mempool *mp)
}
for (i = 0; i < MEMPOOL_SIZE; i ++) {
- if (rte_mempool_mc_get(mp, &obj[i]) < 0) {
+ if (rte_mempool_get(mp, &obj[i]) < 0) {
printf("test_mp_basic_ex fail to get object for [%u]\n",
i);
goto fail_mp_basic_ex;
}
}
- if (rte_mempool_mc_get(mp, &err_obj) == 0) {
+ if (rte_mempool_get(mp, &err_obj) == 0) {
printf("test_mempool_basic_ex get an impossible obj\n");
goto fail_mp_basic_ex;
}
@@ -494,7 +494,7 @@ test_mempool_basic_ex(struct rte_mempool *mp)
}
for (i = 0; i < MEMPOOL_SIZE; i++)
- rte_mempool_mp_put(mp, obj[i]);
+ rte_mempool_put(mp, obj[i]);
if (rte_mempool_full(mp) != 1) {
printf("test_mempool_basic_ex the mempool should be full\n");
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 92deb42..7446843 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -953,8 +953,8 @@ void rte_mempool_dump(FILE *f, struct rte_mempool *mp);
* Mono-producer (0) or multi-producers (1).
*/
static inline void __attribute__((always_inline))
-__mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
- unsigned n, int is_mp)
+__mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
+ unsigned n, int is_mp)
{
struct rte_mempool_cache *cache;
uint32_t index;
@@ -1012,7 +1012,7 @@ ring_enqueue:
/**
- * Put several objects back in the mempool (multi-producers safe).
+ * Put several objects back in the mempool.
*
* @param mp
* A pointer to the mempool structure.
@@ -1020,16 +1020,37 @@ ring_enqueue:
* A pointer to a table of void * pointers (objects).
* @param n
* The number of objects to add in the mempool from the obj_table.
+ * @param is_mp
+ * Mono-producer (0) or multi-producers (1).
*/
static inline void __attribute__((always_inline))
+rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
+ unsigned n, int is_mp)
+{
+ __mempool_check_cookies(mp, obj_table, n, 0);
+ __mempool_generic_put(mp, obj_table, n, is_mp);
+}
+
+/**
+ * @deprecated
+ * Put several objects back in the mempool (multi-producers safe).
+ *
+ * @param mp
+ * A pointer to the mempool structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to add in the mempool from the obj_table.
+ */
+__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- __mempool_check_cookies(mp, obj_table, n, 0);
- __mempool_put_bulk(mp, obj_table, n, 1);
+ rte_mempool_generic_put(mp, obj_table, n, 1);
}
/**
+ * @deprecated
* Put several objects back in the mempool (NOT multi-producers safe).
*
* @param mp
@@ -1039,12 +1060,11 @@ rte_mempool_mp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
* @param n
* The number of objects to add in the mempool from obj_table.
*/
-static inline void
+__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- __mempool_check_cookies(mp, obj_table, n, 0);
- __mempool_put_bulk(mp, obj_table, n, 0);
+ rte_mempool_generic_put(mp, obj_table, n, 0);
}
/**
@@ -1065,11 +1085,12 @@ static inline void __attribute__((always_inline))
rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- __mempool_check_cookies(mp, obj_table, n, 0);
- __mempool_put_bulk(mp, obj_table, n, !(mp->flags & MEMPOOL_F_SP_PUT));
+ rte_mempool_generic_put(mp, obj_table, n,
+ !(mp->flags & MEMPOOL_F_SP_PUT));
}
/**
+ * @deprecated
* Put one object in the mempool (multi-producers safe).
*
* @param mp
@@ -1077,13 +1098,14 @@ rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
* @param obj
* A pointer to the object to be added.
*/
-static inline void __attribute__((always_inline))
+__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_mp_put_bulk(mp, &obj, 1);
+ rte_mempool_generic_put(mp, &obj, 1, 1);
}
/**
+ * @deprecated
* Put one object back in the mempool (NOT multi-producers safe).
*
* @param mp
@@ -1091,10 +1113,10 @@ rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
* @param obj
* A pointer to the object to be added.
*/
-static inline void __attribute__((always_inline))
+__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_sp_put_bulk(mp, &obj, 1);
+ rte_mempool_generic_put(mp, &obj, 1, 0);
}
/**
@@ -1130,8 +1152,8 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
* - <0: Error; code of ring dequeue function.
*/
static inline int __attribute__((always_inline))
-__mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
- unsigned n, int is_mc)
+__mempool_generic_get(struct rte_mempool *mp, void **obj_table,
+ unsigned n, int is_mc)
{
int ret;
struct rte_mempool_cache *cache;
@@ -1193,7 +1215,7 @@ ring_dequeue:
}
/**
- * Get several objects from the mempool (multi-consumers safe).
+ * Get several objects from the mempool.
*
* If cache is enabled, objects will be retrieved first from cache,
* subsequently from the common pool. Note that it can return -ENOENT when
@@ -1206,21 +1228,50 @@ ring_dequeue:
* A pointer to a table of void * pointers (objects) that will be filled.
* @param n
* The number of objects to get from mempool to obj_table.
+ * @param is_mc
+ * Mono-consumer (0) or multi-consumers (1).
* @return
* - 0: Success; objects taken.
* - -ENOENT: Not enough entries in the mempool; no object is retrieved.
*/
static inline int __attribute__((always_inline))
-rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
+ int is_mc)
{
int ret;
- ret = __mempool_get_bulk(mp, obj_table, n, 1);
+ ret = __mempool_generic_get(mp, obj_table, n, is_mc);
if (ret == 0)
__mempool_check_cookies(mp, obj_table, n, 1);
return ret;
}
/**
+ * @deprecated
+ * Get several objects from the mempool (multi-consumers safe).
+ *
+ * If cache is enabled, objects will be retrieved first from cache,
+ * subsequently from the common pool. Note that it can return -ENOENT when
+ * the local cache and common pool are empty, even if cache from other
+ * lcores are full.
+ *
+ * @param mp
+ * A pointer to the mempool structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ * The number of objects to get from mempool to obj_table.
+ * @return
+ * - 0: Success; objects taken.
+ * - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ */
+__rte_deprecated static inline int __attribute__((always_inline))
+rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+ return rte_mempool_generic_get(mp, obj_table, n, 1);
+}
+
+/**
+ * @deprecated
* Get several objects from the mempool (NOT multi-consumers safe).
*
* If cache is enabled, objects will be retrieved first from cache,
@@ -1239,14 +1290,10 @@ rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
* - -ENOENT: Not enough entries in the mempool; no object is
* retrieved.
*/
-static inline int __attribute__((always_inline))
+__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- int ret;
- ret = __mempool_get_bulk(mp, obj_table, n, 0);
- if (ret == 0)
- __mempool_check_cookies(mp, obj_table, n, 1);
- return ret;
+ return rte_mempool_generic_get(mp, obj_table, n, 0);
}
/**
@@ -1274,15 +1321,12 @@ rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
static inline int __attribute__((always_inline))
rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- int ret;
- ret = __mempool_get_bulk(mp, obj_table, n,
- !(mp->flags & MEMPOOL_F_SC_GET));
- if (ret == 0)
- __mempool_check_cookies(mp, obj_table, n, 1);
- return ret;
+ return rte_mempool_generic_get(mp, obj_table, n,
+ !(mp->flags & MEMPOOL_F_SC_GET));
}
/**
+ * @deprecated
* Get one object from the mempool (multi-consumers safe).
*
* If cache is enabled, objects will be retrieved first from cache,
@@ -1298,13 +1342,14 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
* - 0: Success; objects taken.
* - -ENOENT: Not enough entries in the mempool; no object is retrieved.
*/
-static inline int __attribute__((always_inline))
+__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_mc_get_bulk(mp, obj_p, 1);
+ return rte_mempool_generic_get(mp, obj_p, 1, 1);
}
/**
+ * @deprecated
* Get one object from the mempool (NOT multi-consumers safe).
*
* If cache is enabled, objects will be retrieved first from cache,
@@ -1320,10 +1365,10 @@ rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
* - 0: Success; objects taken.
* - -ENOENT: Not enough entries in the mempool; no object is retrieved.
*/
-static inline int __attribute__((always_inline))
+__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_sc_get_bulk(mp, obj_p, 1);
+ return rte_mempool_generic_get(mp, obj_p, 1, 0);
}
/**
--
1.9.1
* [dpdk-dev] [PATCH v3 2/3] mempool: use bit flags instead of is_mp and is_mc
2016-06-16 11:02 [dpdk-dev] [PATCH v3 0/3] mempool: user-owned mempool caches Lazaros Koromilas
2016-06-16 11:02 ` [dpdk-dev] [PATCH v3 1/3] mempool: deprecate specific get/put functions Lazaros Koromilas
@ 2016-06-16 11:02 ` Lazaros Koromilas
2016-06-17 10:36 ` Olivier Matz
2016-06-16 11:02 ` [dpdk-dev] [PATCH v3 3/3] mempool: allow for user-owned mempool caches Lazaros Koromilas
From: Lazaros Koromilas @ 2016-06-16 11:02 UTC (permalink / raw)
To: dev; +Cc: Olivier Matz, Konstantin Ananyev, David Hunt
Pass the same flags as in rte_mempool_create(). This changes the API calls:
rte_mempool_generic_put(mp, obj_table, n, flags)
rte_mempool_generic_get(mp, obj_table, n, flags)
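For example (a sketch; mp, objs and n are assumed to exist):

    /* multi-producer/multi-consumer pool: no SP/SC flag set */
    rte_mempool_generic_put(mp, objs, n, 0);
    rte_mempool_generic_get(mp, objs, n, 0);

    /* single-producer/single-consumer pool */
    rte_mempool_generic_put(mp, objs, n, MEMPOOL_F_SP_PUT);
    rte_mempool_generic_get(mp, objs, n, MEMPOOL_F_SC_GET);

    /* or simply forward the pool's own creation flags, as
     * rte_mempool_put_bulk() and rte_mempool_get_bulk() now do */
    rte_mempool_generic_put(mp, objs, n, mp->flags);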
Signed-off-by: Lazaros Koromilas <l@nofutznetworks.com>
---
lib/librte_mempool/rte_mempool.h | 58 +++++++++++++++++++++-------------------
1 file changed, 30 insertions(+), 28 deletions(-)
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 7446843..191edba 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -949,12 +949,13 @@ void rte_mempool_dump(FILE *f, struct rte_mempool *mp);
* @param n
* The number of objects to store back in the mempool, must be strictly
* positive.
- * @param is_mp
- * Mono-producer (0) or multi-producers (1).
+ * @param flags
+ * The flags used for the mempool creation.
+ * Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
*/
static inline void __attribute__((always_inline))
__mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
- unsigned n, int is_mp)
+ unsigned n, int flags)
{
struct rte_mempool_cache *cache;
uint32_t index;
@@ -967,7 +968,7 @@ __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
__MEMPOOL_STAT_ADD(mp, put, n);
/* cache is not enabled or single producer or non-EAL thread */
- if (unlikely(cache_size == 0 || is_mp == 0 ||
+ if (unlikely(cache_size == 0 || flags & MEMPOOL_F_SP_PUT ||
lcore_id >= RTE_MAX_LCORE))
goto ring_enqueue;
@@ -1020,15 +1021,16 @@ ring_enqueue:
* A pointer to a table of void * pointers (objects).
* @param n
* The number of objects to add in the mempool from the obj_table.
- * @param is_mp
- * Mono-producer (0) or multi-producers (1).
+ * @param flags
+ * The flags used for the mempool creation.
+ * Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
*/
static inline void __attribute__((always_inline))
rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
- unsigned n, int is_mp)
+ unsigned n, int flags)
{
__mempool_check_cookies(mp, obj_table, n, 0);
- __mempool_generic_put(mp, obj_table, n, is_mp);
+ __mempool_generic_put(mp, obj_table, n, flags);
}
/**
@@ -1046,7 +1048,7 @@ __rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n, 1);
+ rte_mempool_generic_put(mp, obj_table, n, 0);
}
/**
@@ -1064,7 +1066,7 @@ __rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n, 0);
+ rte_mempool_generic_put(mp, obj_table, n, MEMPOOL_F_SP_PUT);
}
/**
@@ -1085,8 +1087,7 @@ static inline void __attribute__((always_inline))
rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n,
- !(mp->flags & MEMPOOL_F_SP_PUT));
+ rte_mempool_generic_put(mp, obj_table, n, mp->flags);
}
/**
@@ -1101,7 +1102,7 @@ rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_generic_put(mp, &obj, 1, 1);
+ rte_mempool_generic_put(mp, &obj, 1, 0);
}
/**
@@ -1116,7 +1117,7 @@ rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_generic_put(mp, &obj, 1, 0);
+ rte_mempool_generic_put(mp, &obj, 1, MEMPOOL_F_SP_PUT);
}
/**
@@ -1145,15 +1146,16 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
* A pointer to a table of void * pointers (objects).
* @param n
* The number of objects to get, must be strictly positive.
- * @param is_mc
- * Mono-consumer (0) or multi-consumers (1).
+ * @param flags
+ * The flags used for the mempool creation.
+ * Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
* @return
* - >=0: Success; number of objects supplied.
* - <0: Error; code of ring dequeue function.
*/
static inline int __attribute__((always_inline))
__mempool_generic_get(struct rte_mempool *mp, void **obj_table,
- unsigned n, int is_mc)
+ unsigned n, int flags)
{
int ret;
struct rte_mempool_cache *cache;
@@ -1163,7 +1165,7 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
uint32_t cache_size = mp->cache_size;
/* cache is not enabled or single consumer */
- if (unlikely(cache_size == 0 || is_mc == 0 ||
+ if (unlikely(cache_size == 0 || flags & MEMPOOL_F_SC_GET ||
n >= cache_size || lcore_id >= RTE_MAX_LCORE))
goto ring_dequeue;
@@ -1228,18 +1230,19 @@ ring_dequeue:
* A pointer to a table of void * pointers (objects) that will be filled.
* @param n
* The number of objects to get from mempool to obj_table.
- * @param is_mc
- * Mono-consumer (0) or multi-consumers (1).
+ * @param flags
+ * The flags used for the mempool creation.
+ * Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
* @return
* - 0: Success; objects taken.
* - -ENOENT: Not enough entries in the mempool; no object is retrieved.
*/
static inline int __attribute__((always_inline))
rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
- int is_mc)
+ int flags)
{
int ret;
- ret = __mempool_generic_get(mp, obj_table, n, is_mc);
+ ret = __mempool_generic_get(mp, obj_table, n, flags);
if (ret == 0)
__mempool_check_cookies(mp, obj_table, n, 1);
return ret;
@@ -1267,7 +1270,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n, 1);
+ return rte_mempool_generic_get(mp, obj_table, n, 0);
}
/**
@@ -1293,7 +1296,7 @@ rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n, 0);
+ return rte_mempool_generic_get(mp, obj_table, n, MEMPOOL_F_SC_GET);
}
/**
@@ -1321,8 +1324,7 @@ rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
static inline int __attribute__((always_inline))
rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n,
- !(mp->flags & MEMPOOL_F_SC_GET));
+ return rte_mempool_generic_get(mp, obj_table, n, mp->flags);
}
/**
@@ -1345,7 +1347,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_generic_get(mp, obj_p, 1, 1);
+ return rte_mempool_generic_get(mp, obj_p, 1, 0);
}
/**
@@ -1368,7 +1370,7 @@ rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_generic_get(mp, obj_p, 1, 0);
+ return rte_mempool_generic_get(mp, obj_p, 1, MEMPOOL_F_SC_GET);
}
/**
--
1.9.1
* [dpdk-dev] [PATCH v3 3/3] mempool: allow for user-owned mempool caches
2016-06-16 11:02 [dpdk-dev] [PATCH v3 0/3] mempool: user-owned mempool caches Lazaros Koromilas
2016-06-16 11:02 ` [dpdk-dev] [PATCH v3 1/3] mempool: deprecate specific get/put functions Lazaros Koromilas
2016-06-16 11:02 ` [dpdk-dev] [PATCH v3 2/3] mempool: use bit flags instead of is_mp and is_mc Lazaros Koromilas
@ 2016-06-16 11:02 ` Lazaros Koromilas
2016-06-17 10:37 ` Olivier Matz
2016-06-17 10:36 ` [dpdk-dev] [PATCH v3 0/3] mempool: " Olivier Matz
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 " Olivier Matz
From: Lazaros Koromilas @ 2016-06-16 11:02 UTC (permalink / raw)
To: dev; +Cc: Olivier Matz, Konstantin Ananyev, David Hunt
The mempool cache is only available to EAL threads as a per-lcore
resource. Change this so that the user can create and provide their own
cache on mempool get and put operations. This works with non-EAL threads
too. This commit introduces the new API calls:
rte_mempool_cache_create(size, socket_id)
rte_mempool_cache_free(cache)
rte_mempool_cache_flush(cache, mp)
rte_mempool_default_cache(mp, lcore_id)
Changes the API calls:
rte_mempool_generic_put(mp, obj_table, n, cache, flags)
rte_mempool_generic_get(mp, obj_table, n, cache, flags)
The cache-oblivious API calls use the per-lcore default local cache.
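For EAL threads, a sketch of the cache-aware path through the default
cache (mp, objs and n are assumed to exist):

    struct rte_mempool_cache *cache;

    /* may return NULL when the pool was created with cache_size == 0;
     * the generic calls then fall back to the ring */
    cache = rte_mempool_default_cache(mp, rte_lcore_id());
    if (rte_mempool_generic_get(mp, objs, n, cache, 0) == 0)
        rte_mempool_generic_put(mp, objs, n, cache, 0);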
Signed-off-by: Lazaros Koromilas <l@nofutznetworks.com>
---
app/test/test_mempool.c | 94 ++++++++++++++++------
app/test/test_mempool_perf.c | 70 ++++++++++++++---
lib/librte_mempool/rte_mempool.c | 66 +++++++++++++++-
lib/librte_mempool/rte_mempool.h | 163 ++++++++++++++++++++++++++++-----------
4 files changed, 310 insertions(+), 83 deletions(-)
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 10d706f..723cd39 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -79,6 +79,9 @@
printf("test failed at %s():%d\n", __func__, __LINE__); \
return -1; \
} while (0)
+#define LOG_ERR() do { \
+ printf("test failed at %s():%d\n", __func__, __LINE__); \
+ } while (0)
static rte_atomic32_t synchro;
@@ -191,7 +194,7 @@ my_obj_init(struct rte_mempool *mp, __attribute__((unused)) void *arg,
/* basic tests (done on one core) */
static int
-test_mempool_basic(struct rte_mempool *mp)
+test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
{
uint32_t *objnum;
void **objtable;
@@ -199,47 +202,79 @@ test_mempool_basic(struct rte_mempool *mp)
char *obj_data;
int ret = 0;
unsigned i, j;
+ int offset;
+ struct rte_mempool_cache *cache;
+
+ if (use_external_cache) {
+ /* Create a user-owned mempool cache. */
+ cache = rte_mempool_cache_create(RTE_MEMPOOL_CACHE_MAX_SIZE,
+ SOCKET_ID_ANY);
+ if (cache == NULL)
+ RET_ERR();
+ } else {
+ /* May be NULL if cache is disabled. */
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ }
/* dump the mempool status */
rte_mempool_dump(stdout, mp);
printf("get an object\n");
- if (rte_mempool_get(mp, &obj) < 0)
- RET_ERR();
+ if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0) {
+ LOG_ERR();
+ ret = -1;
+ goto out;
+ }
rte_mempool_dump(stdout, mp);
/* tests that improve coverage */
printf("get object count\n");
- if (rte_mempool_count(mp) != MEMPOOL_SIZE - 1)
- RET_ERR();
+ /* We have to count the extra caches, one in this case. */
+ offset = use_external_cache ? 1 * cache->len : 0;
+ if (rte_mempool_count(mp) + offset != MEMPOOL_SIZE - 1) {
+ LOG_ERR();
+ ret = -1;
+ goto out;
+ }
printf("get private data\n");
if (rte_mempool_get_priv(mp) != (char *)mp +
- MEMPOOL_HEADER_SIZE(mp, mp->cache_size))
- RET_ERR();
+ MEMPOOL_HEADER_SIZE(mp, mp->cache_size)) {
+ LOG_ERR();
+ ret = -1;
+ goto out;
+ }
#ifndef RTE_EXEC_ENV_BSDAPP /* rte_mem_virt2phy() not supported on bsd */
printf("get physical address of an object\n");
- if (rte_mempool_virt2phy(mp, obj) != rte_mem_virt2phy(obj))
- RET_ERR();
+ if (rte_mempool_virt2phy(mp, obj) != rte_mem_virt2phy(obj)) {
+ LOG_ERR();
+ ret = -1;
+ goto out;
+ }
#endif
printf("put the object back\n");
- rte_mempool_put(mp, obj);
+ rte_mempool_generic_put(mp, &obj, 1, cache, 0);
rte_mempool_dump(stdout, mp);
printf("get 2 objects\n");
- if (rte_mempool_get(mp, &obj) < 0)
- RET_ERR();
- if (rte_mempool_get(mp, &obj2) < 0) {
- rte_mempool_put(mp, obj);
- RET_ERR();
+ if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0) {
+ LOG_ERR();
+ ret = -1;
+ goto out;
+ }
+ if (rte_mempool_generic_get(mp, &obj2, 1, cache, 0) < 0) {
+ rte_mempool_generic_put(mp, &obj, 1, cache, 0);
+ LOG_ERR();
+ ret = -1;
+ goto out;
}
rte_mempool_dump(stdout, mp);
printf("put the objects back\n");
- rte_mempool_put(mp, obj);
- rte_mempool_put(mp, obj2);
+ rte_mempool_generic_put(mp, &obj, 1, cache, 0);
+ rte_mempool_generic_put(mp, &obj2, 1, cache, 0);
rte_mempool_dump(stdout, mp);
/*
@@ -247,11 +282,14 @@ test_mempool_basic(struct rte_mempool *mp)
* on other cores may not be empty.
*/
objtable = malloc(MEMPOOL_SIZE * sizeof(void *));
- if (objtable == NULL)
- RET_ERR();
+ if (objtable == NULL) {
+ LOG_ERR();
+ ret = -1;
+ goto out;
+ }
for (i = 0; i < MEMPOOL_SIZE; i++) {
- if (rte_mempool_get(mp, &objtable[i]) < 0)
+ if (rte_mempool_generic_get(mp, &objtable[i], 1, cache, 0) < 0)
break;
}
@@ -273,13 +311,19 @@ test_mempool_basic(struct rte_mempool *mp)
ret = -1;
}
- rte_mempool_put(mp, objtable[i]);
+ rte_mempool_generic_put(mp, &objtable[i], 1, cache, 0);
}
free(objtable);
if (ret == -1)
printf("objects were modified!\n");
+out:
+ if (use_external_cache) {
+ rte_mempool_cache_flush(cache, mp);
+ rte_mempool_cache_free(cache);
+ }
+
return ret;
}
@@ -631,11 +675,15 @@ test_mempool(void)
rte_mempool_list_dump(stdout);
/* basic tests without cache */
- if (test_mempool_basic(mp_nocache) < 0)
+ if (test_mempool_basic(mp_nocache, 0) < 0)
goto err;
/* basic tests with cache */
- if (test_mempool_basic(mp_cache) < 0)
+ if (test_mempool_basic(mp_cache, 0) < 0)
+ goto err;
+
+ /* basic tests with user-owned cache */
+ if (test_mempool_basic(mp_nocache, 1) < 0)
goto err;
/* more basic tests without cache */
diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index c5f8455..cb03cc6 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -78,6 +78,9 @@
* - One core without cache
* - Two cores without cache
* - Max. cores without cache
+ * - One core with user-owned cache
+ * - Two cores with user-owned cache
+ * - Max. cores with user-owned cache
*
* - Bulk size (*n_get_bulk*, *n_put_bulk*)
*
@@ -98,6 +101,8 @@
static struct rte_mempool *mp;
static struct rte_mempool *mp_cache, *mp_nocache;
+static int use_external_cache;
+static unsigned external_cache_size = RTE_MEMPOOL_CACHE_MAX_SIZE;
static rte_atomic32_t synchro;
@@ -134,15 +139,31 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
void *obj_table[MAX_KEEP];
unsigned i, idx;
unsigned lcore_id = rte_lcore_id();
- int ret;
+ int ret = 0;
uint64_t start_cycles, end_cycles;
uint64_t time_diff = 0, hz = rte_get_timer_hz();
+ struct rte_mempool_cache *cache;
+
+ if (use_external_cache) {
+ /* Create a user-owned mempool cache. */
+ cache = rte_mempool_cache_create(external_cache_size,
+ SOCKET_ID_ANY);
+ if (cache == NULL)
+ return -1;
+ } else {
+ /* May be NULL if cache is disabled. */
+ cache = rte_mempool_default_cache(mp, lcore_id);
+ }
/* n_get_bulk and n_put_bulk must be divisors of n_keep */
- if (((n_keep / n_get_bulk) * n_get_bulk) != n_keep)
- return -1;
- if (((n_keep / n_put_bulk) * n_put_bulk) != n_keep)
- return -1;
+ if (((n_keep / n_get_bulk) * n_get_bulk) != n_keep) {
+ ret = -1;
+ goto out;
+ }
+ if (((n_keep / n_put_bulk) * n_put_bulk) != n_keep) {
+ ret = -1;
+ goto out;
+ }
stats[lcore_id].enq_count = 0;
@@ -157,12 +178,14 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
/* get n_keep objects by bulk of n_bulk */
idx = 0;
while (idx < n_keep) {
- ret = rte_mempool_get_bulk(mp, &obj_table[idx],
- n_get_bulk);
+ ret = rte_mempool_generic_get(mp, &obj_table[idx],
+ n_get_bulk,
+ cache, 0);
if (unlikely(ret < 0)) {
rte_mempool_dump(stdout, mp);
/* in this case, objects are lost... */
- return -1;
+ ret = -1;
+ goto out;
}
idx += n_get_bulk;
}
@@ -170,8 +193,9 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
/* put the objects back */
idx = 0;
while (idx < n_keep) {
- rte_mempool_put_bulk(mp, &obj_table[idx],
- n_put_bulk);
+ rte_mempool_generic_put(mp, &obj_table[idx],
+ n_put_bulk,
+ cache, 0);
idx += n_put_bulk;
}
}
@@ -180,7 +204,13 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
stats[lcore_id].enq_count += N;
}
- return 0;
+out:
+ if (use_external_cache) {
+ rte_mempool_cache_flush(cache, mp);
+ rte_mempool_cache_free(cache);
+ }
+
+ return ret;
}
/* launch all the per-lcore test, and display the result */
@@ -199,7 +229,9 @@ launch_cores(unsigned cores)
printf("mempool_autotest cache=%u cores=%u n_get_bulk=%u "
"n_put_bulk=%u n_keep=%u ",
- (unsigned) mp->cache_size, cores, n_get_bulk, n_put_bulk, n_keep);
+ use_external_cache ?
+ external_cache_size : (unsigned) mp->cache_size,
+ cores, n_get_bulk, n_put_bulk, n_keep);
if (rte_mempool_count(mp) != MEMPOOL_SIZE) {
printf("mempool is not full\n");
@@ -323,6 +355,20 @@ test_mempool_perf(void)
if (do_one_mempool_test(rte_lcore_count()) < 0)
return -1;
+ /* performance test with 1, 2 and max cores */
+ printf("start performance test (with user-owned cache)\n");
+ mp = mp_nocache;
+ use_external_cache = 1;
+
+ if (do_one_mempool_test(1) < 0)
+ return -1;
+
+ if (do_one_mempool_test(2) < 0)
+ return -1;
+
+ if (do_one_mempool_test(rte_lcore_count()) < 0)
+ return -1;
+
rte_mempool_list_dump(stdout);
return 0;
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 2776479..b04cab7 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -673,6 +673,53 @@ rte_mempool_free(struct rte_mempool *mp)
rte_memzone_free(mp->mz);
}
+static void
+mempool_cache_init(struct rte_mempool_cache *cache, uint32_t size)
+{
+ cache->size = size;
+ cache->flushthresh = CALC_CACHE_FLUSHTHRESH(size);
+ cache->len = 0;
+}
+
+/*
+ * Create and initialize a cache for objects that are retrieved from and
+ * returned to an underlying mempool. This structure is identical to the
+ * local_cache[lcore_id] pointed to by the mempool structure.
+ */
+struct rte_mempool_cache *
+rte_mempool_cache_create(uint32_t size, int socket_id)
+{
+ struct rte_mempool_cache *cache;
+
+ if (size == 0 || size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ cache = rte_zmalloc_socket("MEMPOOL_CACHE", sizeof(*cache),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (cache == NULL) {
+ RTE_LOG(ERR, MEMPOOL, "Cannot allocate mempool cache.\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ mempool_cache_init(cache, size);
+
+ return cache;
+}
+
+/*
+ * Free a cache. It's the responsibility of the user to make sure that any
+ * remaining objects in the cache are flushed to the corresponding
+ * mempool.
+ */
+void
+rte_mempool_cache_free(struct rte_mempool_cache *cache)
+{
+ rte_free(cache);
+}
+
/* create an empty mempool */
struct rte_mempool *
rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
@@ -687,6 +734,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
size_t mempool_size;
int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
struct rte_mempool_objsz objsz;
+ unsigned lcore_id;
int ret;
/* compilation-time checks */
@@ -767,8 +815,8 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
mp->elt_size = objsz.elt_size;
mp->header_size = objsz.header_size;
mp->trailer_size = objsz.trailer_size;
+ /* Size of default caches, zero means disabled. */
mp->cache_size = cache_size;
- mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
mp->private_data_size = private_data_size;
STAILQ_INIT(&mp->elt_list);
STAILQ_INIT(&mp->mem_list);
@@ -780,6 +828,13 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
mp->local_cache = (struct rte_mempool_cache *)
RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
+ /* Init all default caches. */
+ if (cache_size != 0) {
+ for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
+ mempool_cache_init(&mp->local_cache[lcore_id],
+ cache_size);
+ }
+
te->data = mp;
rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
@@ -935,7 +990,7 @@ rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
unsigned count = 0;
unsigned cache_count;
- fprintf(f, " cache infos:\n");
+ fprintf(f, " internal cache infos:\n");
fprintf(f, " cache_size=%"PRIu32"\n", mp->cache_size);
if (mp->cache_size == 0)
@@ -943,7 +998,8 @@ rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
cache_count = mp->local_cache[lcore_id].len;
- fprintf(f, " cache_count[%u]=%u\n", lcore_id, cache_count);
+ fprintf(f, " cache_count[%u]=%"PRIu32"\n",
+ lcore_id, cache_count);
count += cache_count;
}
fprintf(f, " total_cache_count=%u\n", count);
@@ -1062,7 +1118,9 @@ mempool_audit_cache(const struct rte_mempool *mp)
return;
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
- if (mp->local_cache[lcore_id].len > mp->cache_flushthresh) {
+ const struct rte_mempool_cache *cache;
+ cache = &mp->local_cache[lcore_id];
+ if (cache->len > cache->flushthresh) {
RTE_LOG(CRIT, MEMPOOL, "badness on cache[%u]\n",
lcore_id);
rte_panic("MEMPOOL: invalid cache len\n");
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 191edba..c9dd415 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -101,7 +101,9 @@ struct rte_mempool_debug_stats {
* A structure that stores a per-core object cache.
*/
struct rte_mempool_cache {
- unsigned len; /**< Cache len */
+ uint32_t size; /**< Size of the cache */
+ uint32_t flushthresh; /**< Threshold before we flush excess elements */
+ uint32_t len; /**< Current cache count */
/*
* Cache is allocated to this size to allow it to overflow in certain
* cases to avoid needless emptying of cache.
@@ -212,9 +214,8 @@ struct rte_mempool {
int flags; /**< Flags of the mempool. */
int socket_id; /**< Socket id passed at create. */
uint32_t size; /**< Max size of the mempool. */
- uint32_t cache_size; /**< Size of per-lcore local cache. */
- uint32_t cache_flushthresh;
- /**< Threshold before we flush excess elements. */
+ uint32_t cache_size;
+ /**< Size of per-lcore default local cache. */
uint32_t elt_size; /**< Size of an element. */
uint32_t header_size; /**< Size of header (before elt). */
@@ -941,6 +942,70 @@ uint32_t rte_mempool_mem_iter(struct rte_mempool *mp,
void rte_mempool_dump(FILE *f, struct rte_mempool *mp);
/**
+ * Create a user-owned mempool cache.
+ *
+ * This can be used by non-EAL threads to enable caching when they
+ * interact with a mempool.
+ *
+ * @param size
+ * The size of the mempool cache. See rte_mempool_create()'s cache_size
+ * parameter description for more information. The same limits and
+ * considerations apply here too.
+ * @param socket_id
+ * The socket identifier in the case of NUMA. The value can be
+ * SOCKET_ID_ANY if there is no NUMA constraint for the reserved zone.
+ */
+struct rte_mempool_cache *
+rte_mempool_cache_create(uint32_t size, int socket_id);
+
+/**
+ * Free a user-owned mempool cache.
+ *
+ * @param cache
+ * A pointer to the mempool cache.
+ */
+void
+rte_mempool_cache_free(struct rte_mempool_cache *cache);
+
+/**
+ * Flush a user-owned mempool cache to the specified mempool.
+ *
+ * @param cache
+ * A pointer to the mempool cache.
+ * @param mp
+ * A pointer to the mempool.
+ */
+static inline void __attribute__((always_inline))
+rte_mempool_cache_flush(struct rte_mempool_cache *cache,
+ struct rte_mempool *mp)
+{
+ rte_mempool_ops_enqueue_bulk(mp, cache->objs, cache->len);
+ cache->len = 0;
+}
+
+/**
+ * Get a pointer to the per-lcore default mempool cache.
+ *
+ * @param mp
+ * A pointer to the mempool structure.
+ * @param lcore_id
+ * The logical core id.
+ * @return
+ * A pointer to the mempool cache or NULL if disabled or non-EAL thread.
+ */
+static inline struct rte_mempool_cache * __attribute__((always_inline))
+rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
+{
+ if (mp->cache_size == 0)
+ return NULL;
+
+ if (lcore_id >= RTE_MAX_LCORE)
+ return NULL;
+
+ return &mp->local_cache[lcore_id];
+}
+
+/**
* @internal Put several objects back in the mempool; used internally.
* @param mp
* A pointer to the mempool structure.
@@ -949,34 +1014,30 @@ void rte_mempool_dump(FILE *f, struct rte_mempool *mp);
* @param n
* The number of objects to store back in the mempool, must be strictly
* positive.
+ * @param cache
+ * A pointer to a mempool cache structure. May be NULL if not needed.
* @param flags
* The flags used for the mempool creation.
* Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
*/
static inline void __attribute__((always_inline))
__mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
- unsigned n, int flags)
+ unsigned n, struct rte_mempool_cache *cache, int flags)
{
- struct rte_mempool_cache *cache;
uint32_t index;
void **cache_objs;
- unsigned lcore_id = rte_lcore_id();
- uint32_t cache_size = mp->cache_size;
- uint32_t flushthresh = mp->cache_flushthresh;
/* increment stat now, adding in mempool always success */
__MEMPOOL_STAT_ADD(mp, put, n);
- /* cache is not enabled or single producer or non-EAL thread */
- if (unlikely(cache_size == 0 || flags & MEMPOOL_F_SP_PUT ||
- lcore_id >= RTE_MAX_LCORE))
+ /* No cache provided or single producer */
+ if (unlikely(cache == NULL || flags & MEMPOOL_F_SP_PUT))
goto ring_enqueue;
/* Go straight to ring if put would overflow mem allocated for cache */
if (unlikely(n > RTE_MEMPOOL_CACHE_MAX_SIZE))
goto ring_enqueue;
- cache = &mp->local_cache[lcore_id];
cache_objs = &cache->objs[cache->len];
/*
@@ -992,10 +1053,10 @@ __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
cache->len += n;
- if (cache->len >= flushthresh) {
- rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache_size],
- cache->len - cache_size);
- cache->len = cache_size;
+ if (cache->len >= cache->flushthresh) {
+ rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
+ cache->len - cache->size);
+ cache->len = cache->size;
}
return;
@@ -1021,16 +1082,18 @@ ring_enqueue:
* A pointer to a table of void * pointers (objects).
* @param n
* The number of objects to add in the mempool from the obj_table.
+ * @param cache
+ * A pointer to a mempool cache structure. May be NULL if not needed.
* @param flags
* The flags used for the mempool creation.
* Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
*/
static inline void __attribute__((always_inline))
rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
- unsigned n, int flags)
+ unsigned n, struct rte_mempool_cache *cache, int flags)
{
__mempool_check_cookies(mp, obj_table, n, 0);
- __mempool_generic_put(mp, obj_table, n, flags);
+ __mempool_generic_put(mp, obj_table, n, cache, flags);
}
/**
@@ -1048,7 +1111,9 @@ __rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n, 0);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ rte_mempool_generic_put(mp, obj_table, n, cache, 0);
}
/**
@@ -1066,7 +1131,7 @@ __rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n, MEMPOOL_F_SP_PUT);
+ rte_mempool_generic_put(mp, obj_table, n, NULL, MEMPOOL_F_SP_PUT);
}
/**
@@ -1087,7 +1152,9 @@ static inline void __attribute__((always_inline))
rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n, mp->flags);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ rte_mempool_generic_put(mp, obj_table, n, cache, mp->flags);
}
/**
@@ -1102,7 +1169,9 @@ rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_generic_put(mp, &obj, 1, 0);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ rte_mempool_generic_put(mp, &obj, 1, cache, 0);
}
/**
@@ -1117,7 +1186,7 @@ rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_generic_put(mp, &obj, 1, MEMPOOL_F_SP_PUT);
+ rte_mempool_generic_put(mp, &obj, 1, NULL, MEMPOOL_F_SP_PUT);
}
/**
@@ -1146,6 +1215,8 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
* A pointer to a table of void * pointers (objects).
* @param n
* The number of objects to get, must be strictly positive.
+ * @param cache
+ * A pointer to a mempool cache structure. May be NULL if not needed.
* @param flags
* The flags used for the mempool creation.
* Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
@@ -1155,27 +1226,23 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
*/
static inline int __attribute__((always_inline))
__mempool_generic_get(struct rte_mempool *mp, void **obj_table,
- unsigned n, int flags)
+ unsigned n, struct rte_mempool_cache *cache, int flags)
{
int ret;
- struct rte_mempool_cache *cache;
uint32_t index, len;
void **cache_objs;
- unsigned lcore_id = rte_lcore_id();
- uint32_t cache_size = mp->cache_size;
- /* cache is not enabled or single consumer */
- if (unlikely(cache_size == 0 || flags & MEMPOOL_F_SC_GET ||
- n >= cache_size || lcore_id >= RTE_MAX_LCORE))
+ /* No cache provided or single consumer */
+ if (unlikely(cache == NULL || flags & MEMPOOL_F_SC_GET ||
+ n >= cache->size))
goto ring_dequeue;
- cache = &mp->local_cache[lcore_id];
cache_objs = cache->objs;
/* Can this be satisfied from the cache? */
if (cache->len < n) {
/* No. Backfill the cache first, and then fill from it */
- uint32_t req = n + (cache_size - cache->len);
+ uint32_t req = n + (cache->size - cache->len);
/* How many do we require i.e. number to fill the cache + the request */
ret = rte_mempool_ops_dequeue_bulk(mp,
@@ -1230,6 +1297,8 @@ ring_dequeue:
* A pointer to a table of void * pointers (objects) that will be filled.
* @param n
* The number of objects to get from mempool to obj_table.
+ * @param cache
+ * A pointer to a mempool cache structure. May be NULL if not needed.
* @param flags
* The flags used for the mempool creation.
* Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
@@ -1239,10 +1308,10 @@ ring_dequeue:
*/
static inline int __attribute__((always_inline))
rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
- int flags)
+ struct rte_mempool_cache *cache, int flags)
{
int ret;
- ret = __mempool_generic_get(mp, obj_table, n, flags);
+ ret = __mempool_generic_get(mp, obj_table, n, cache, flags);
if (ret == 0)
__mempool_check_cookies(mp, obj_table, n, 1);
return ret;
@@ -1270,7 +1339,9 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n, 0);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ return rte_mempool_generic_get(mp, obj_table, n, cache, 0);
}
/**
@@ -1296,7 +1367,7 @@ rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n, MEMPOOL_F_SC_GET);
+ return rte_mempool_generic_get(mp, obj_table, n, NULL, MEMPOOL_F_SC_GET);
}
/**
@@ -1324,7 +1395,9 @@ rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
static inline int __attribute__((always_inline))
rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n, mp->flags);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ return rte_mempool_generic_get(mp, obj_table, n, cache, mp->flags);
}
/**
@@ -1347,7 +1420,9 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_generic_get(mp, obj_p, 1, 0);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ return rte_mempool_generic_get(mp, obj_p, 1, cache, 0);
}
/**
@@ -1370,7 +1445,7 @@ rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_generic_get(mp, obj_p, 1, MEMPOOL_F_SC_GET);
+ return rte_mempool_generic_get(mp, obj_p, 1, NULL, MEMPOOL_F_SC_GET);
}
/**
@@ -1404,7 +1479,7 @@ rte_mempool_get(struct rte_mempool *mp, void **obj_p)
*
* When cache is enabled, this function has to browse the length of
* all lcores, so it should not be used in a data path, but only for
- * debug purposes.
+ * debug purposes. User-owned mempool caches are not accounted for.
*
* @param mp
* A pointer to the mempool structure.
@@ -1423,7 +1498,7 @@ unsigned rte_mempool_count(const struct rte_mempool *mp);
*
* When cache is enabled, this function has to browse the length of
* all lcores, so it should not be used in a data path, but only for
- * debug purposes.
+ * debug purposes. User-owned mempool caches are not accounted for.
*
* @param mp
* A pointer to the mempool structure.
@@ -1441,7 +1516,7 @@ rte_mempool_free_count(const struct rte_mempool *mp)
*
* When cache is enabled, this function has to browse the length of all
* lcores, so it should not be used in a data path, but only for debug
- * purposes.
+ * purposes. User-owned mempool caches are not accounted for.
*
* @param mp
* A pointer to the mempool structure.
@@ -1460,7 +1535,7 @@ rte_mempool_full(const struct rte_mempool *mp)
*
* When cache is enabled, this function has to browse the length of all
* lcores, so it should not be used in a data path, but only for debug
- * purposes.
+ * purposes. User-owned mempool caches are not accounted for.
*
* @param mp
* A pointer to the mempool structure.
--
1.9.1
* Re: [dpdk-dev] [PATCH v3 0/3] mempool: user-owned mempool caches
2016-06-16 11:02 [dpdk-dev] [PATCH v3 0/3] mempool: user-owned mempool caches Lazaros Koromilas
2016-06-16 11:02 ` [dpdk-dev] [PATCH v3 3/3] mempool: allow for user-owned mempool caches Lazaros Koromilas
@ 2016-06-17 10:36 ` Olivier Matz
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 " Olivier Matz
From: Olivier Matz @ 2016-06-17 10:36 UTC (permalink / raw)
To: Lazaros Koromilas, dev; +Cc: Konstantin Ananyev, David Hunt
Hi Lazaros,
On 06/16/2016 01:02 PM, Lazaros Koromilas wrote:
> Updated version of the user-owned cache patchset. It applies on top of
> the latest external mempool manager patches from David Hunt [1].
>
> [1] http://dpdk.org/ml/archives/dev/2016-June/041479.html
>
> v3 changes:
>
> * Deprecate specific mempool API calls instead of removing them.
> * Split deprecation into a separate commit to limit noise.
> * Fix cache flush by setting cache->len = 0 and make it inline.
> * Remove cache->size == 0 checks and ensure size != 0 at creation.
> * Fix tests to check if cache creation succeeded.
> * Fix tests to free allocated resources on error.
Thanks for the update. The patchset looks good to me.
I have some minor comments for patch 2/3 and 3/3.
One more thing: would you mind adding some words in
doc/guides/prog_guide/mempool_lib.rst?
Thanks,
Olivier
* Re: [dpdk-dev] [PATCH v3 2/3] mempool: use bit flags instead of is_mp and is_mc
2016-06-16 11:02 ` [dpdk-dev] [PATCH v3 2/3] mempool: use bit flags instead of is_mp and is_mc Lazaros Koromilas
@ 2016-06-17 10:36 ` Olivier Matz
From: Olivier Matz @ 2016-06-17 10:36 UTC (permalink / raw)
To: Lazaros Koromilas, dev; +Cc: Konstantin Ananyev, David Hunt
On 06/16/2016 01:02 PM, Lazaros Koromilas wrote:
> Re: [PATCH v3 2/3] mempool: use bit flags instead of is_mp and is_mc
There is a script to check the format of titles. Underscores are now
forbidden, because they often reference function or variable names,
which is not ideal in titles.
$ ./scripts/check-git-log.sh
Wrong headline format:
mempool: use bit flags instead of is_mp and is_mc
I suggest something like:
mempool: use bit flags to set multi consumers or producers
* Re: [dpdk-dev] [PATCH v3 3/3] mempool: allow for user-owned mempool caches
2016-06-16 11:02 ` [dpdk-dev] [PATCH v3 3/3] mempool: allow for user-owned mempool caches Lazaros Koromilas
@ 2016-06-17 10:37 ` Olivier Matz
2016-06-18 16:15 ` Lazaros Koromilas
From: Olivier Matz @ 2016-06-17 10:37 UTC (permalink / raw)
To: Lazaros Koromilas, dev; +Cc: Konstantin Ananyev, David Hunt
On 06/16/2016 01:02 PM, Lazaros Koromilas wrote:
> The mempool cache is only available to EAL threads as a per-lcore
> resource. Change this so that the user can create and provide their own
> cache on mempool get and put operations. This works with non-EAL threads
> too. This commit introduces the new API calls:
>
> rte_mempool_cache_create(size, socket_id)
> rte_mempool_cache_free(cache)
> rte_mempool_cache_flush(cache, mp)
> rte_mempool_default_cache(mp, lcore_id)
These new functions should be added in the .map file, else it will
break compilation with shared_lib=y.
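Something along these lines in
lib/librte_mempool/rte_mempool_version.map should do (a sketch; the
version node names are assumptions, and the inline
rte_mempool_cache_flush()/rte_mempool_default_cache() need no entry):

    DPDK_16.07 {
        global:

        rte_mempool_cache_create;
        rte_mempool_cache_free;

    } DPDK_16.04;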
> Changes the API calls:
>
> rte_mempool_generic_put(mp, obj_table, n, cache, flags)
> rte_mempool_generic_get(mp, obj_table, n, cache, flags)
>
> The cache-oblivious API calls use the per-lcore default local cache.
>
> Signed-off-by: Lazaros Koromilas <l@nofutznetworks.com>
> ---
> app/test/test_mempool.c | 94 ++++++++++++++++------
> app/test/test_mempool_perf.c | 70 ++++++++++++++---
> lib/librte_mempool/rte_mempool.c | 66 +++++++++++++++-
> lib/librte_mempool/rte_mempool.h | 163 ++++++++++++++++++++++++++++-----------
> 4 files changed, 310 insertions(+), 83 deletions(-)
>
> diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
> index 10d706f..723cd39 100644
> --- a/app/test/test_mempool.c
> +++ b/app/test/test_mempool.c
> @@ -79,6 +79,9 @@
> printf("test failed at %s():%d\n", __func__, __LINE__); \
> return -1; \
> } while (0)
> +#define LOG_ERR() do { \
> + printf("test failed at %s():%d\n", __func__, __LINE__); \
> + } while (0)
>
I see that the usage of this macro is always like this:
LOG_ERR();
ret = -1;
goto out;
What do you think of having:
#define LOG_ERR() do { \
printf("test failed at %s():%d\n", __func__, __LINE__); \
} while (0)
#define RET_ERR() do { LOG_ERR(); return -1; } while (0)
#define GOTO_ERR() do { LOG_ERR(); ret = -1; goto out; } while (0)
Then use GOTO_ERR() when appropriate. It would also factorize
the printf.
* Re: [dpdk-dev] [PATCH v3 3/3] mempool: allow for user-owned mempool caches
2016-06-17 10:37 ` Olivier Matz
@ 2016-06-18 16:15 ` Lazaros Koromilas
2016-06-20 7:36 ` Olivier Matz
From: Lazaros Koromilas @ 2016-06-18 16:15 UTC (permalink / raw)
To: Olivier Matz; +Cc: dev, Konstantin Ananyev, David Hunt
On Fri, Jun 17, 2016 at 11:37 AM, Olivier Matz <olivier.matz@6wind.com> wrote:
>
>
> On 06/16/2016 01:02 PM, Lazaros Koromilas wrote:
>> The mempool cache is only available to EAL threads as a per-lcore
>> resource. Change this so that the user can create and provide their own
>> cache on mempool get and put operations. This works with non-EAL threads
>> too. This commit introduces the new API calls:
>>
>> rte_mempool_cache_create(size, socket_id)
>> rte_mempool_cache_free(cache)
>> rte_mempool_cache_flush(cache, mp)
>> rte_mempool_default_cache(mp, lcore_id)
>
> These new functions should be added in the .map file, else it will
> break compilation with shared_lib=y.
Oops, thanks!
>> Changes the API calls:
>>
>> rte_mempool_generic_put(mp, obj_table, n, cache, flags)
>> rte_mempool_generic_get(mp, obj_table, n, cache, flags)
>>
>> The cache-oblivious API calls use the per-lcore default local cache.
>>
>> Signed-off-by: Lazaros Koromilas <l@nofutznetworks.com>
>> ---
>> app/test/test_mempool.c | 94 ++++++++++++++++------
>> app/test/test_mempool_perf.c | 70 ++++++++++++++---
>> lib/librte_mempool/rte_mempool.c | 66 +++++++++++++++-
>> lib/librte_mempool/rte_mempool.h | 163 ++++++++++++++++++++++++++++-----------
>> 4 files changed, 310 insertions(+), 83 deletions(-)
>>
>> diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
>> index 10d706f..723cd39 100644
>> --- a/app/test/test_mempool.c
>> +++ b/app/test/test_mempool.c
>> @@ -79,6 +79,9 @@
>> printf("test failed at %s():%d\n", __func__, __LINE__); \
>> return -1; \
>> } while (0)
>> +#define LOG_ERR() do { \
>> + printf("test failed at %s():%d\n", __func__, __LINE__); \
>> + } while (0)
>>
>
> I see that the usage of this macro is always like this:
>
> LOG_ERR();
> ret = -1;
> goto out;
>
> What do you think of having:
>
> #define LOG_ERR() do { \
> printf("test failed at %s():%d\n", __func__, __LINE__); \
> } while (0)
> #define RET_ERR() do { LOG_ERR(); return -1; } while (0)
> #define GOTO_ERR() do { LOG_ERR(); ret = -1; goto out; } while (0)
>
> Then use GOTO_ERR() when appropriate. It would also factorize
> the printf.
The downside of GOTO_ERR() is that it assumes a variable and a label
name. And you may need to have multiple labels 'out0', 'out1', etc. for
the error path. How about:
#define GOTO_ERR(ret, out) do { LOG_ERR(); ret = -1; goto out; } while (0)
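For illustration, a hypothetical caller would then read:

    static int
    test_something(struct rte_mempool *mp)
    {
        int ret = 0;
        void *obj;

        if (rte_mempool_get(mp, &obj) < 0)
            GOTO_ERR(ret, out);

        rte_mempool_put(mp, obj);
    out:
        return ret;
    }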
Lazaros.
* Re: [dpdk-dev] [PATCH v3 3/3] mempool: allow for user-owned mempool caches
2016-06-18 16:15 ` Lazaros Koromilas
@ 2016-06-20 7:36 ` Olivier Matz
From: Olivier Matz @ 2016-06-20 7:36 UTC (permalink / raw)
To: Lazaros Koromilas; +Cc: dev, Konstantin Ananyev, David Hunt
Hi,
On 06/18/2016 06:15 PM, Lazaros Koromilas wrote:
>> What do you think of having:
>>
>> #define LOG_ERR() do { \
>> printf("test failed at %s():%d\n", __func__, __LINE__); \
>> } while (0)
>> #define RET_ERR() do { LOG_ERR(); return -1; } while (0)
>> #define GOTO_ERR() do { LOG_ERR(); ret = -1; goto out; } while (0)
>>
>> Then use GOTO_ERR() when appropriate. It would also factorize
>> the printf.
>
> The downside of GOTO_ERR() is that it assumes a variable and a label
> name. And you may need to have multiple labels 'out0', 'out1', etc. for
> the error path. How about:
>
> #define GOTO_ERR(ret, out) do { LOG_ERR(); ret = -1; goto out; } while (0)
Yep, looks better indeed.
Thanks,
Olivier
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v4 0/3] mempool: user-owned mempool caches
2016-06-16 11:02 [dpdk-dev] [PATCH v3 0/3] mempool: user-owned mempool caches Lazaros Koromilas
` (3 preceding siblings ...)
2016-06-17 10:36 ` [dpdk-dev] [PATCH v3 0/3] mempool: " Olivier Matz
@ 2016-06-27 15:50 ` Olivier Matz
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 1/3] mempool: deprecate specific get/put functions Olivier Matz
` (4 more replies)
4 siblings, 5 replies; 21+ messages in thread
From: Olivier Matz @ 2016-06-27 15:50 UTC (permalink / raw)
To: dev; +Cc: l
Updated version of the user-owned cache patchset. It applies on top of
the latest external mempool manager patches from David Hunt [1].
[1] http://dpdk.org/ml/archives/dev/2016-June/041479.html
v4 changes:
* Fix compilation with shared libraries
* Add a GOTO_ERR() macro to factorize code in test_mempool.c
* Change title of patch 2 to conform to check-git-log.sh
v3 changes:
* Deprecate specific mempool API calls instead of removing them.
* Split deprecation into a separate commit to limit noise.
* Fix cache flush by setting cache->len = 0 and make it inline.
* Remove cache->size == 0 checks and ensure size != 0 at creation.
* Fix tests to check if cache creation succeeded.
* Fix tests to free allocated resources on error.
The mempool cache is only available to EAL threads as a per-lcore
resource. Change this so that the user can create and provide their own
cache on mempool get and put operations. This works with non-EAL threads
too.
Also, deprecate the explicit {mp,sp}_put and {mc,sc}_get calls and
re-route them through the new generic calls. Minor cleanup to pass the
mempool bit flags instead of using specific is_mp and is_mc. The old
cache-oblivious API calls use the per-lcore default local cache. The
mempool and mempool_perf tests are also updated to handle the
user-owned cache case.
Introduced API calls:
rte_mempool_cache_create(size, socket_id)
rte_mempool_cache_free(cache)
rte_mempool_cache_flush(cache, mp)
rte_mempool_default_cache(mp, lcore_id)
rte_mempool_generic_put(mp, obj_table, n, cache, flags)
rte_mempool_generic_get(mp, obj_table, n, cache, flags)
Deprecated API calls:
rte_mempool_mp_put_bulk(mp, obj_table, n)
rte_mempool_sp_put_bulk(mp, obj_table, n)
rte_mempool_mp_put(mp, obj)
rte_mempool_sp_put(mp, obj)
rte_mempool_mc_get_bulk(mp, obj_table, n)
rte_mempool_sc_get_bulk(mp, obj_table, n)
rte_mempool_mc_get(mp, obj_p)
rte_mempool_sc_get(mp, obj_p)
Lazaros Koromilas (3):
mempool: deprecate specific get/put functions
mempool: use bit flags to set multi consumers or producers
mempool: allow for user-owned mempool caches
app/test/test_mempool.c | 85 +++++++---
app/test/test_mempool_perf.c | 70 ++++++--
lib/librte_mempool/rte_mempool.c | 66 +++++++-
lib/librte_mempool/rte_mempool.h | 256 +++++++++++++++++++++--------
lib/librte_mempool/rte_mempool_version.map | 4 +
5 files changed, 371 insertions(+), 110 deletions(-)
--
2.8.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v4 1/3] mempool: deprecate specific get/put functions
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 " Olivier Matz
@ 2016-06-27 15:50 ` Olivier Matz
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 2/3] mempool: use bit flags to set multi consumers or producers Olivier Matz
` (3 subsequent siblings)
4 siblings, 0 replies; 21+ messages in thread
From: Olivier Matz @ 2016-06-27 15:50 UTC (permalink / raw)
To: dev; +Cc: l
From: Lazaros Koromilas <l@nofutznetworks.com>
This commit introduces the API calls:
rte_mempool_generic_put(mp, obj_table, n, is_mp)
rte_mempool_generic_get(mp, obj_table, n, is_mc)
Deprecates the API calls:
rte_mempool_mp_put_bulk(mp, obj_table, n)
rte_mempool_sp_put_bulk(mp, obj_table, n)
rte_mempool_mp_put(mp, obj)
rte_mempool_sp_put(mp, obj)
rte_mempool_mc_get_bulk(mp, obj_table, n)
rte_mempool_sc_get_bulk(mp, obj_table, n)
rte_mempool_mc_get(mp, obj_p)
rte_mempool_sc_get(mp, obj_p)
We also check cookies in one place now.
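For illustration (a hypothetical caller, not part of the patch), the
generic calls at this stage of the series take an explicit boolean:

	void *obj;

	/* multi-consumer get, multi-producer put */
	if (rte_mempool_generic_get(mp, &obj, 1, 1) == 0)
		rte_mempool_generic_put(mp, &obj, 1, 1);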
Signed-off-by: Lazaros Koromilas <l@nofutznetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/test_mempool.c | 10 ++--
lib/librte_mempool/rte_mempool.h | 115 +++++++++++++++++++++++++++------------
2 files changed, 85 insertions(+), 40 deletions(-)
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 31582d8..55c2cbc 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -338,7 +338,7 @@ static int test_mempool_single_producer(void)
printf("obj not owned by this mempool\n");
RET_ERR();
}
- rte_mempool_sp_put(mp_spsc, obj);
+ rte_mempool_put(mp_spsc, obj);
rte_spinlock_lock(&scsp_spinlock);
scsp_obj_table[i] = NULL;
rte_spinlock_unlock(&scsp_spinlock);
@@ -371,7 +371,7 @@ static int test_mempool_single_consumer(void)
rte_spinlock_unlock(&scsp_spinlock);
if (i >= MAX_KEEP)
continue;
- if (rte_mempool_sc_get(mp_spsc, &obj) < 0)
+ if (rte_mempool_get(mp_spsc, &obj) < 0)
break;
rte_spinlock_lock(&scsp_spinlock);
scsp_obj_table[i] = obj;
@@ -477,13 +477,13 @@ test_mempool_basic_ex(struct rte_mempool *mp)
}
for (i = 0; i < MEMPOOL_SIZE; i ++) {
- if (rte_mempool_mc_get(mp, &obj[i]) < 0) {
+ if (rte_mempool_get(mp, &obj[i]) < 0) {
printf("test_mp_basic_ex fail to get object for [%u]\n",
i);
goto fail_mp_basic_ex;
}
}
- if (rte_mempool_mc_get(mp, &err_obj) == 0) {
+ if (rte_mempool_get(mp, &err_obj) == 0) {
printf("test_mempool_basic_ex get an impossible obj\n");
goto fail_mp_basic_ex;
}
@@ -494,7 +494,7 @@ test_mempool_basic_ex(struct rte_mempool *mp)
}
for (i = 0; i < MEMPOOL_SIZE; i++)
- rte_mempool_mp_put(mp, obj[i]);
+ rte_mempool_put(mp, obj[i]);
if (rte_mempool_full(mp) != 1) {
printf("test_mempool_basic_ex the mempool should be full\n");
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 0a1777c..a48f46d 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -957,8 +957,8 @@ void rte_mempool_dump(FILE *f, struct rte_mempool *mp);
* Mono-producer (0) or multi-producers (1).
*/
static inline void __attribute__((always_inline))
-__mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
- unsigned n, int is_mp)
+__mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
+ unsigned n, int is_mp)
{
struct rte_mempool_cache *cache;
uint32_t index;
@@ -1016,7 +1016,7 @@ ring_enqueue:
/**
- * Put several objects back in the mempool (multi-producers safe).
+ * Put several objects back in the mempool.
*
* @param mp
* A pointer to the mempool structure.
@@ -1024,16 +1024,37 @@ ring_enqueue:
* A pointer to a table of void * pointers (objects).
* @param n
* The number of objects to add in the mempool from the obj_table.
+ * @param is_mp
+ * Mono-producer (0) or multi-producers (1).
*/
static inline void __attribute__((always_inline))
+rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
+ unsigned n, int is_mp)
+{
+ __mempool_check_cookies(mp, obj_table, n, 0);
+ __mempool_generic_put(mp, obj_table, n, is_mp);
+}
+
+/**
+ * @deprecated
+ * Put several objects back in the mempool (multi-producers safe).
+ *
+ * @param mp
+ * A pointer to the mempool structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to add in the mempool from the obj_table.
+ */
+__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- __mempool_check_cookies(mp, obj_table, n, 0);
- __mempool_put_bulk(mp, obj_table, n, 1);
+ rte_mempool_generic_put(mp, obj_table, n, 1);
}
/**
+ * @deprecated
* Put several objects back in the mempool (NOT multi-producers safe).
*
* @param mp
@@ -1043,12 +1064,11 @@ rte_mempool_mp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
* @param n
* The number of objects to add in the mempool from obj_table.
*/
-static inline void
+__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- __mempool_check_cookies(mp, obj_table, n, 0);
- __mempool_put_bulk(mp, obj_table, n, 0);
+ rte_mempool_generic_put(mp, obj_table, n, 0);
}
/**
@@ -1069,11 +1089,12 @@ static inline void __attribute__((always_inline))
rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- __mempool_check_cookies(mp, obj_table, n, 0);
- __mempool_put_bulk(mp, obj_table, n, !(mp->flags & MEMPOOL_F_SP_PUT));
+ rte_mempool_generic_put(mp, obj_table, n,
+ !(mp->flags & MEMPOOL_F_SP_PUT));
}
/**
+ * @deprecated
* Put one object in the mempool (multi-producers safe).
*
* @param mp
@@ -1081,13 +1102,14 @@ rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
* @param obj
* A pointer to the object to be added.
*/
-static inline void __attribute__((always_inline))
+__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_mp_put_bulk(mp, &obj, 1);
+ rte_mempool_generic_put(mp, &obj, 1, 1);
}
/**
+ * @deprecated
* Put one object back in the mempool (NOT multi-producers safe).
*
* @param mp
@@ -1095,10 +1117,10 @@ rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
* @param obj
* A pointer to the object to be added.
*/
-static inline void __attribute__((always_inline))
+__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_sp_put_bulk(mp, &obj, 1);
+ rte_mempool_generic_put(mp, &obj, 1, 0);
}
/**
@@ -1134,8 +1156,8 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
* - <0: Error; code of ring dequeue function.
*/
static inline int __attribute__((always_inline))
-__mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
- unsigned n, int is_mc)
+__mempool_generic_get(struct rte_mempool *mp, void **obj_table,
+ unsigned n, int is_mc)
{
int ret;
struct rte_mempool_cache *cache;
@@ -1197,7 +1219,7 @@ ring_dequeue:
}
/**
- * Get several objects from the mempool (multi-consumers safe).
+ * Get several objects from the mempool.
*
* If cache is enabled, objects will be retrieved first from cache,
* subsequently from the common pool. Note that it can return -ENOENT when
@@ -1210,21 +1232,50 @@ ring_dequeue:
* A pointer to a table of void * pointers (objects) that will be filled.
* @param n
* The number of objects to get from mempool to obj_table.
+ * @param is_mc
+ * Mono-consumer (0) or multi-consumers (1).
* @return
* - 0: Success; objects taken.
* - -ENOENT: Not enough entries in the mempool; no object is retrieved.
*/
static inline int __attribute__((always_inline))
-rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
+ int is_mc)
{
int ret;
- ret = __mempool_get_bulk(mp, obj_table, n, 1);
+ ret = __mempool_generic_get(mp, obj_table, n, is_mc);
if (ret == 0)
__mempool_check_cookies(mp, obj_table, n, 1);
return ret;
}
/**
+ * @deprecated
+ * Get several objects from the mempool (multi-consumers safe).
+ *
+ * If cache is enabled, objects will be retrieved first from cache,
+ * subsequently from the common pool. Note that it can return -ENOENT when
+ * the local cache and common pool are empty, even if cache from other
+ * lcores are full.
+ *
+ * @param mp
+ * A pointer to the mempool structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ * The number of objects to get from mempool to obj_table.
+ * @return
+ * - 0: Success; objects taken.
+ * - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ */
+__rte_deprecated static inline int __attribute__((always_inline))
+rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+ return rte_mempool_generic_get(mp, obj_table, n, 1);
+}
+
+/**
+ * @deprecated
* Get several objects from the mempool (NOT multi-consumers safe).
*
* If cache is enabled, objects will be retrieved first from cache,
@@ -1243,14 +1294,10 @@ rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
* - -ENOENT: Not enough entries in the mempool; no object is
* retrieved.
*/
-static inline int __attribute__((always_inline))
+__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- int ret;
- ret = __mempool_get_bulk(mp, obj_table, n, 0);
- if (ret == 0)
- __mempool_check_cookies(mp, obj_table, n, 1);
- return ret;
+ return rte_mempool_generic_get(mp, obj_table, n, 0);
}
/**
@@ -1278,15 +1325,12 @@ rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
static inline int __attribute__((always_inline))
rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- int ret;
- ret = __mempool_get_bulk(mp, obj_table, n,
- !(mp->flags & MEMPOOL_F_SC_GET));
- if (ret == 0)
- __mempool_check_cookies(mp, obj_table, n, 1);
- return ret;
+ return rte_mempool_generic_get(mp, obj_table, n,
+ !(mp->flags & MEMPOOL_F_SC_GET));
}
/**
+ * @deprecated
* Get one object from the mempool (multi-consumers safe).
*
* If cache is enabled, objects will be retrieved first from cache,
@@ -1302,13 +1346,14 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
* - 0: Success; objects taken.
* - -ENOENT: Not enough entries in the mempool; no object is retrieved.
*/
-static inline int __attribute__((always_inline))
+__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_mc_get_bulk(mp, obj_p, 1);
+ return rte_mempool_generic_get(mp, obj_p, 1, 1);
}
/**
+ * @deprecated
* Get one object from the mempool (NOT multi-consumers safe).
*
* If cache is enabled, objects will be retrieved first from cache,
@@ -1324,10 +1369,10 @@ rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
* - 0: Success; objects taken.
* - -ENOENT: Not enough entries in the mempool; no object is retrieved.
*/
-static inline int __attribute__((always_inline))
+__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_sc_get_bulk(mp, obj_p, 1);
+ return rte_mempool_generic_get(mp, obj_p, 1, 0);
}
/**
--
2.8.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v4 2/3] mempool: use bit flags to set multi consumers or producers
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 " Olivier Matz
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 1/3] mempool: deprecate specific get/put functions Olivier Matz
@ 2016-06-27 15:50 ` Olivier Matz
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 3/3] mempool: allow for user-owned mempool caches Olivier Matz
` (2 subsequent siblings)
4 siblings, 0 replies; 21+ messages in thread
From: Olivier Matz @ 2016-06-27 15:50 UTC (permalink / raw)
To: dev; +Cc: l
From: Lazaros Koromilas <l@nofutznetworks.com>
Pass the same flags as in rte_mempool_create(). Changes API calls:
rte_mempool_generic_put(mp, obj_table, n, flags)
rte_mempool_generic_get(mp, obj_table, n, flags)
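For illustration (hypothetical call sites, not part of the patch), a
caller now passes the pool's creation flags straight through:

	void *obj;

	/* works for any pool: SP/SC behaviour is derived from mp->flags */
	if (rte_mempool_generic_get(mp, &obj, 1, mp->flags) == 0)
		rte_mempool_generic_put(mp, &obj, 1, mp->flags);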
Signed-off-by: Lazaros Koromilas <l@nofutznetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
lib/librte_mempool/rte_mempool.h | 58 +++++++++++++++++++++-------------------
1 file changed, 30 insertions(+), 28 deletions(-)
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index a48f46d..971b1ba 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -953,12 +953,13 @@ void rte_mempool_dump(FILE *f, struct rte_mempool *mp);
* @param n
* The number of objects to store back in the mempool, must be strictly
* positive.
- * @param is_mp
- * Mono-producer (0) or multi-producers (1).
+ * @param flags
+ * The flags used for the mempool creation.
+ * Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
*/
static inline void __attribute__((always_inline))
__mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
- unsigned n, int is_mp)
+ unsigned n, int flags)
{
struct rte_mempool_cache *cache;
uint32_t index;
@@ -971,7 +972,7 @@ __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
__MEMPOOL_STAT_ADD(mp, put, n);
/* cache is not enabled or single producer or non-EAL thread */
- if (unlikely(cache_size == 0 || is_mp == 0 ||
+ if (unlikely(cache_size == 0 || flags & MEMPOOL_F_SP_PUT ||
lcore_id >= RTE_MAX_LCORE))
goto ring_enqueue;
@@ -1024,15 +1025,16 @@ ring_enqueue:
* A pointer to a table of void * pointers (objects).
* @param n
* The number of objects to add in the mempool from the obj_table.
- * @param is_mp
- * Mono-producer (0) or multi-producers (1).
+ * @param flags
+ * The flags used for the mempool creation.
+ * Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
*/
static inline void __attribute__((always_inline))
rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
- unsigned n, int is_mp)
+ unsigned n, int flags)
{
__mempool_check_cookies(mp, obj_table, n, 0);
- __mempool_generic_put(mp, obj_table, n, is_mp);
+ __mempool_generic_put(mp, obj_table, n, flags);
}
/**
@@ -1050,7 +1052,7 @@ __rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n, 1);
+ rte_mempool_generic_put(mp, obj_table, n, 0);
}
/**
@@ -1068,7 +1070,7 @@ __rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n, 0);
+ rte_mempool_generic_put(mp, obj_table, n, MEMPOOL_F_SP_PUT);
}
/**
@@ -1089,8 +1091,7 @@ static inline void __attribute__((always_inline))
rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n,
- !(mp->flags & MEMPOOL_F_SP_PUT));
+ rte_mempool_generic_put(mp, obj_table, n, mp->flags);
}
/**
@@ -1105,7 +1106,7 @@ rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_generic_put(mp, &obj, 1, 1);
+ rte_mempool_generic_put(mp, &obj, 1, 0);
}
/**
@@ -1120,7 +1121,7 @@ rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_generic_put(mp, &obj, 1, 0);
+ rte_mempool_generic_put(mp, &obj, 1, MEMPOOL_F_SP_PUT);
}
/**
@@ -1149,15 +1150,16 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
* A pointer to a table of void * pointers (objects).
* @param n
* The number of objects to get, must be strictly positive.
- * @param is_mc
- * Mono-consumer (0) or multi-consumers (1).
+ * @param flags
+ * The flags used for the mempool creation.
+ * Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
* @return
* - >=0: Success; number of objects supplied.
* - <0: Error; code of ring dequeue function.
*/
static inline int __attribute__((always_inline))
__mempool_generic_get(struct rte_mempool *mp, void **obj_table,
- unsigned n, int is_mc)
+ unsigned n, int flags)
{
int ret;
struct rte_mempool_cache *cache;
@@ -1167,7 +1169,7 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
uint32_t cache_size = mp->cache_size;
/* cache is not enabled or single consumer */
- if (unlikely(cache_size == 0 || is_mc == 0 ||
+ if (unlikely(cache_size == 0 || flags & MEMPOOL_F_SC_GET ||
n >= cache_size || lcore_id >= RTE_MAX_LCORE))
goto ring_dequeue;
@@ -1232,18 +1234,19 @@ ring_dequeue:
* A pointer to a table of void * pointers (objects) that will be filled.
* @param n
* The number of objects to get from mempool to obj_table.
- * @param is_mc
- * Mono-consumer (0) or multi-consumers (1).
+ * @param flags
+ * The flags used for the mempool creation.
+ * Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
* @return
* - 0: Success; objects taken.
* - -ENOENT: Not enough entries in the mempool; no object is retrieved.
*/
static inline int __attribute__((always_inline))
rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
- int is_mc)
+ int flags)
{
int ret;
- ret = __mempool_generic_get(mp, obj_table, n, is_mc);
+ ret = __mempool_generic_get(mp, obj_table, n, flags);
if (ret == 0)
__mempool_check_cookies(mp, obj_table, n, 1);
return ret;
@@ -1271,7 +1274,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n, 1);
+ return rte_mempool_generic_get(mp, obj_table, n, 0);
}
/**
@@ -1297,7 +1300,7 @@ rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n, 0);
+ return rte_mempool_generic_get(mp, obj_table, n, MEMPOOL_F_SC_GET);
}
/**
@@ -1325,8 +1328,7 @@ rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
static inline int __attribute__((always_inline))
rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n,
- !(mp->flags & MEMPOOL_F_SC_GET));
+ return rte_mempool_generic_get(mp, obj_table, n, mp->flags);
}
/**
@@ -1349,7 +1351,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_generic_get(mp, obj_p, 1, 1);
+ return rte_mempool_generic_get(mp, obj_p, 1, 0);
}
/**
@@ -1372,7 +1374,7 @@ rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_generic_get(mp, obj_p, 1, 0);
+ return rte_mempool_generic_get(mp, obj_p, 1, MEMPOOL_F_SC_GET);
}
/**
--
2.8.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v4 3/3] mempool: allow for user-owned mempool caches
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 " Olivier Matz
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 1/3] mempool: deprecate specific get/put functions Olivier Matz
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 2/3] mempool: use bit flags to set multi consumers or producers Olivier Matz
@ 2016-06-27 15:50 ` Olivier Matz
2016-06-28 17:20 ` Lazaros Koromilas
2016-06-27 15:52 ` [dpdk-dev] [PATCH v4 0/3] mempool: " Olivier MATZ
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 " Lazaros Koromilas
4 siblings, 1 reply; 21+ messages in thread
From: Olivier Matz @ 2016-06-27 15:50 UTC (permalink / raw)
To: dev; +Cc: l
From: Lazaros Koromilas <l@nofutznetworks.com>
The mempool cache is only available to EAL threads as a per-lcore
resource. Change this so that the user can create and provide their own
cache on mempool get and put operations. This works with non-EAL threads
too. This commit introduces the new API calls:
rte_mempool_cache_create(size, socket_id)
rte_mempool_cache_free(cache)
rte_mempool_cache_flush(cache, mp)
rte_mempool_default_cache(mp, lcore_id)
Changes the API calls:
rte_mempool_generic_put(mp, obj_table, n, cache, flags)
rte_mempool_generic_get(mp, obj_table, n, cache, flags)
The cache-oblivious API calls use the per-lcore default local cache.
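For illustration (a hypothetical non-EAL thread, not part of the patch):

	struct rte_mempool_cache *cache;
	void *obj;

	cache = rte_mempool_cache_create(32, SOCKET_ID_ANY);
	if (cache == NULL)
		return;
	if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) == 0)
		rte_mempool_generic_put(mp, &obj, 1, cache, 0);
	/* return cached objects to the pool before freeing the cache */
	rte_mempool_cache_flush(cache, mp);
	rte_mempool_cache_free(cache);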
Signed-off-by: Lazaros Koromilas <l@nofutznetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/test_mempool.c | 75 +++++++++----
app/test/test_mempool_perf.c | 70 ++++++++++---
lib/librte_mempool/rte_mempool.c | 66 +++++++++++-
lib/librte_mempool/rte_mempool.h | 163 +++++++++++++++++++++--------
lib/librte_mempool/rte_mempool_version.map | 4 +
5 files changed, 296 insertions(+), 82 deletions(-)
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 55c2cbc..5b3c754 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -75,10 +75,18 @@
#define MAX_KEEP 16
#define MEMPOOL_SIZE ((rte_lcore_count()*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE))-1)
-#define RET_ERR() do { \
+#define LOG_ERR() do { \
printf("test failed at %s():%d\n", __func__, __LINE__); \
+ } while (0)
+#define RET_ERR() do { \
+ LOG_ERR(); \
return -1; \
} while (0)
+#define GOTO_ERR(err, label) do { \
+ LOG_ERR(); \
+ ret = err; \
+ goto label; \
+ } while (0)
static rte_atomic32_t synchro;
@@ -191,7 +199,7 @@ my_obj_init(struct rte_mempool *mp, __attribute__((unused)) void *arg,
/* basic tests (done on one core) */
static int
-test_mempool_basic(struct rte_mempool *mp)
+test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
{
uint32_t *objnum;
void **objtable;
@@ -199,47 +207,60 @@ test_mempool_basic(struct rte_mempool *mp)
char *obj_data;
int ret = 0;
unsigned i, j;
+ int offset;
+ struct rte_mempool_cache *cache;
+
+ if (use_external_cache) {
+ /* Create a user-owned mempool cache. */
+ cache = rte_mempool_cache_create(RTE_MEMPOOL_CACHE_MAX_SIZE,
+ SOCKET_ID_ANY);
+ if (cache == NULL)
+ RET_ERR();
+ } else {
+ /* May be NULL if cache is disabled. */
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ }
/* dump the mempool status */
rte_mempool_dump(stdout, mp);
printf("get an object\n");
- if (rte_mempool_get(mp, &obj) < 0)
- RET_ERR();
+ if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0)
+ GOTO_ERR(-1, out);
rte_mempool_dump(stdout, mp);
/* tests that improve coverage */
printf("get object count\n");
- if (rte_mempool_count(mp) != MEMPOOL_SIZE - 1)
- RET_ERR();
+ /* We have to count the extra caches, one in this case. */
+ offset = use_external_cache ? 1 * cache->len : 0;
+ if (rte_mempool_count(mp) + offset != MEMPOOL_SIZE - 1)
+ GOTO_ERR(-1, out);
printf("get private data\n");
if (rte_mempool_get_priv(mp) != (char *)mp +
MEMPOOL_HEADER_SIZE(mp, mp->cache_size))
- RET_ERR();
+ GOTO_ERR(-1, out);
#ifndef RTE_EXEC_ENV_BSDAPP /* rte_mem_virt2phy() not supported on bsd */
printf("get physical address of an object\n");
if (rte_mempool_virt2phy(mp, obj) != rte_mem_virt2phy(obj))
- RET_ERR();
+ GOTO_ERR(-1, out);
#endif
printf("put the object back\n");
- rte_mempool_put(mp, obj);
+ rte_mempool_generic_put(mp, &obj, 1, cache, 0);
rte_mempool_dump(stdout, mp);
printf("get 2 objects\n");
- if (rte_mempool_get(mp, &obj) < 0)
- RET_ERR();
- if (rte_mempool_get(mp, &obj2) < 0) {
- rte_mempool_put(mp, obj);
- RET_ERR();
- }
+ if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0)
+ GOTO_ERR(-1, out);
+ if (rte_mempool_generic_get(mp, &obj2, 1, cache, 0) < 0)
+ GOTO_ERR(-1, out);
rte_mempool_dump(stdout, mp);
printf("put the objects back\n");
- rte_mempool_put(mp, obj);
- rte_mempool_put(mp, obj2);
+ rte_mempool_generic_put(mp, &obj, 1, cache, 0);
+ rte_mempool_generic_put(mp, &obj2, 1, cache, 0);
rte_mempool_dump(stdout, mp);
/*
@@ -248,10 +269,10 @@ test_mempool_basic(struct rte_mempool *mp)
*/
objtable = malloc(MEMPOOL_SIZE * sizeof(void *));
if (objtable == NULL)
- RET_ERR();
+ GOTO_ERR(-1, out);
for (i = 0; i < MEMPOOL_SIZE; i++) {
- if (rte_mempool_get(mp, &objtable[i]) < 0)
+ if (rte_mempool_generic_get(mp, &objtable[i], 1, cache, 0) < 0)
break;
}
@@ -273,13 +294,19 @@ test_mempool_basic(struct rte_mempool *mp)
ret = -1;
}
- rte_mempool_put(mp, objtable[i]);
+ rte_mempool_generic_put(mp, &objtable[i], 1, cache, 0);
}
free(objtable);
if (ret == -1)
printf("objects were modified!\n");
+out:
+ if (use_external_cache) {
+ rte_mempool_cache_flush(cache, mp);
+ rte_mempool_cache_free(cache);
+ }
+
return ret;
}
@@ -631,11 +658,15 @@ test_mempool(void)
rte_mempool_list_dump(stdout);
/* basic tests without cache */
- if (test_mempool_basic(mp_nocache) < 0)
+ if (test_mempool_basic(mp_nocache, 0) < 0)
goto err;
/* basic tests with cache */
- if (test_mempool_basic(mp_cache) < 0)
+ if (test_mempool_basic(mp_cache, 0) < 0)
+ goto err;
+
+ /* basic tests with user-owned cache */
+ if (test_mempool_basic(mp_nocache, 1) < 0)
goto err;
/* more basic tests without cache */
diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index c5f8455..cb03cc6 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -78,6 +78,9 @@
* - One core without cache
* - Two cores without cache
* - Max. cores without cache
+ * - One core with user-owned cache
+ * - Two cores with user-owned cache
+ * - Max. cores with user-owned cache
*
* - Bulk size (*n_get_bulk*, *n_put_bulk*)
*
@@ -98,6 +101,8 @@
static struct rte_mempool *mp;
static struct rte_mempool *mp_cache, *mp_nocache;
+static int use_external_cache;
+static unsigned external_cache_size = RTE_MEMPOOL_CACHE_MAX_SIZE;
static rte_atomic32_t synchro;
@@ -134,15 +139,31 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
void *obj_table[MAX_KEEP];
unsigned i, idx;
unsigned lcore_id = rte_lcore_id();
- int ret;
+ int ret = 0;
uint64_t start_cycles, end_cycles;
uint64_t time_diff = 0, hz = rte_get_timer_hz();
+ struct rte_mempool_cache *cache;
+
+ if (use_external_cache) {
+ /* Create a user-owned mempool cache. */
+ cache = rte_mempool_cache_create(external_cache_size,
+ SOCKET_ID_ANY);
+ if (cache == NULL)
+ return -1;
+ } else {
+ /* May be NULL if cache is disabled. */
+ cache = rte_mempool_default_cache(mp, lcore_id);
+ }
/* n_get_bulk and n_put_bulk must be divisors of n_keep */
- if (((n_keep / n_get_bulk) * n_get_bulk) != n_keep)
- return -1;
- if (((n_keep / n_put_bulk) * n_put_bulk) != n_keep)
- return -1;
+ if (((n_keep / n_get_bulk) * n_get_bulk) != n_keep) {
+ ret = -1;
+ goto out;
+ }
+ if (((n_keep / n_put_bulk) * n_put_bulk) != n_keep) {
+ ret = -1;
+ goto out;
+ }
stats[lcore_id].enq_count = 0;
@@ -157,12 +178,14 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
/* get n_keep objects by bulk of n_bulk */
idx = 0;
while (idx < n_keep) {
- ret = rte_mempool_get_bulk(mp, &obj_table[idx],
- n_get_bulk);
+ ret = rte_mempool_generic_get(mp, &obj_table[idx],
+ n_get_bulk,
+ cache, 0);
if (unlikely(ret < 0)) {
rte_mempool_dump(stdout, mp);
/* in this case, objects are lost... */
- return -1;
+ ret = -1;
+ goto out;
}
idx += n_get_bulk;
}
@@ -170,8 +193,9 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
/* put the objects back */
idx = 0;
while (idx < n_keep) {
- rte_mempool_put_bulk(mp, &obj_table[idx],
- n_put_bulk);
+ rte_mempool_generic_put(mp, &obj_table[idx],
+ n_put_bulk,
+ cache, 0);
idx += n_put_bulk;
}
}
@@ -180,7 +204,13 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
stats[lcore_id].enq_count += N;
}
- return 0;
+out:
+ if (use_external_cache) {
+ rte_mempool_cache_flush(cache, mp);
+ rte_mempool_cache_free(cache);
+ }
+
+ return ret;
}
/* launch all the per-lcore test, and display the result */
@@ -199,7 +229,9 @@ launch_cores(unsigned cores)
printf("mempool_autotest cache=%u cores=%u n_get_bulk=%u "
"n_put_bulk=%u n_keep=%u ",
- (unsigned) mp->cache_size, cores, n_get_bulk, n_put_bulk, n_keep);
+ use_external_cache ?
+ external_cache_size : (unsigned) mp->cache_size,
+ cores, n_get_bulk, n_put_bulk, n_keep);
if (rte_mempool_count(mp) != MEMPOOL_SIZE) {
printf("mempool is not full\n");
@@ -323,6 +355,20 @@ test_mempool_perf(void)
if (do_one_mempool_test(rte_lcore_count()) < 0)
return -1;
+ /* performance test with 1, 2 and max cores */
+ printf("start performance test (with user-owned cache)\n");
+ mp = mp_nocache;
+ use_external_cache = 1;
+
+ if (do_one_mempool_test(1) < 0)
+ return -1;
+
+ if (do_one_mempool_test(2) < 0)
+ return -1;
+
+ if (do_one_mempool_test(rte_lcore_count()) < 0)
+ return -1;
+
rte_mempool_list_dump(stdout);
return 0;
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index e6a83d0..4f159fc 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -674,6 +674,53 @@ rte_mempool_free(struct rte_mempool *mp)
rte_memzone_free(mp->mz);
}
+static void
+mempool_cache_init(struct rte_mempool_cache *cache, uint32_t size)
+{
+ cache->size = size;
+ cache->flushthresh = CALC_CACHE_FLUSHTHRESH(size);
+ cache->len = 0;
+}
+
+/*
+ * Create and initialize a cache for objects that are retrieved from and
+ * returned to an underlying mempool. This structure is identical to the
+ * local_cache[lcore_id] pointed to by the mempool structure.
+ */
+struct rte_mempool_cache *
+rte_mempool_cache_create(uint32_t size, int socket_id)
+{
+ struct rte_mempool_cache *cache;
+
+ if (size == 0 || size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ cache = rte_zmalloc_socket("MEMPOOL_CACHE", sizeof(*cache),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (cache == NULL) {
+ RTE_LOG(ERR, MEMPOOL, "Cannot allocate mempool cache.\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ mempool_cache_init(cache, size);
+
+ return cache;
+}
+
+/*
+ * Free a cache. It's the responsibility of the user to make sure that any
+ * remaining objects in the cache are flushed to the corresponding
+ * mempool.
+ */
+void
+rte_mempool_cache_free(struct rte_mempool_cache *cache)
+{
+ rte_free(cache);
+}
+
/* create an empty mempool */
struct rte_mempool *
rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
@@ -688,6 +735,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
size_t mempool_size;
int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
struct rte_mempool_objsz objsz;
+ unsigned lcore_id;
int ret;
/* compilation-time checks */
@@ -768,8 +816,8 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
mp->elt_size = objsz.elt_size;
mp->header_size = objsz.header_size;
mp->trailer_size = objsz.trailer_size;
+ /* Size of default caches, zero means disabled. */
mp->cache_size = cache_size;
- mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
mp->private_data_size = private_data_size;
STAILQ_INIT(&mp->elt_list);
STAILQ_INIT(&mp->mem_list);
@@ -781,6 +829,13 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
mp->local_cache = (struct rte_mempool_cache *)
RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
+ /* Init all default caches. */
+ if (cache_size != 0) {
+ for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
+ mempool_cache_init(&mp->local_cache[lcore_id],
+ cache_size);
+ }
+
te->data = mp;
rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
@@ -936,7 +991,7 @@ rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
unsigned count = 0;
unsigned cache_count;
- fprintf(f, " cache infos:\n");
+ fprintf(f, " internal cache infos:\n");
fprintf(f, " cache_size=%"PRIu32"\n", mp->cache_size);
if (mp->cache_size == 0)
@@ -944,7 +999,8 @@ rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
cache_count = mp->local_cache[lcore_id].len;
- fprintf(f, " cache_count[%u]=%u\n", lcore_id, cache_count);
+ fprintf(f, " cache_count[%u]=%"PRIu32"\n",
+ lcore_id, cache_count);
count += cache_count;
}
fprintf(f, " total_cache_count=%u\n", count);
@@ -1063,7 +1119,9 @@ mempool_audit_cache(const struct rte_mempool *mp)
return;
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
- if (mp->local_cache[lcore_id].len > mp->cache_flushthresh) {
+ const struct rte_mempool_cache *cache;
+ cache = &mp->local_cache[lcore_id];
+ if (cache->len > cache->flushthresh) {
RTE_LOG(CRIT, MEMPOOL, "badness on cache[%u]\n",
lcore_id);
rte_panic("MEMPOOL: invalid cache len\n");
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 971b1ba..a8724d7 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -101,7 +101,9 @@ struct rte_mempool_debug_stats {
* A structure that stores a per-core object cache.
*/
struct rte_mempool_cache {
- unsigned len; /**< Cache len */
+ uint32_t size; /**< Size of the cache */
+ uint32_t flushthresh; /**< Threshold before we flush excess elements */
+ uint32_t len; /**< Current cache count */
/*
* Cache is allocated to this size to allow it to overflow in certain
* cases to avoid needless emptying of cache.
@@ -213,9 +215,8 @@ struct rte_mempool {
int flags; /**< Flags of the mempool. */
int socket_id; /**< Socket id passed at create. */
uint32_t size; /**< Max size of the mempool. */
- uint32_t cache_size; /**< Size of per-lcore local cache. */
- uint32_t cache_flushthresh;
- /**< Threshold before we flush excess elements. */
+ uint32_t cache_size;
+ /**< Size of per-lcore default local cache. */
uint32_t elt_size; /**< Size of an element. */
uint32_t header_size; /**< Size of header (before elt). */
@@ -945,6 +946,70 @@ uint32_t rte_mempool_mem_iter(struct rte_mempool *mp,
void rte_mempool_dump(FILE *f, struct rte_mempool *mp);
/**
+ * Create a user-owned mempool cache.
+ *
+ * This can be used by non-EAL threads to enable caching when they
+ * interact with a mempool.
+ *
+ * @param size
+ * The size of the mempool cache. See rte_mempool_create()'s cache_size
+ * parameter description for more information. The same limits and
+ * considerations apply here too.
+ * @param socket_id
+ * The socket identifier in the case of NUMA. The value can be
+ * SOCKET_ID_ANY if there is no NUMA constraint for the reserved zone.
+ */
+struct rte_mempool_cache *
+rte_mempool_cache_create(uint32_t size, int socket_id);
+
+/**
+ * Free a user-owned mempool cache.
+ *
+ * @param cache
+ * A pointer to the mempool cache.
+ */
+void
+rte_mempool_cache_free(struct rte_mempool_cache *cache);
+
+/**
+ * Flush a user-owned mempool cache to the specified mempool.
+ *
+ * @param cache
+ * A pointer to the mempool cache.
+ * @param mp
+ * A pointer to the mempool.
+ */
+static inline void __attribute__((always_inline))
+rte_mempool_cache_flush(struct rte_mempool_cache *cache,
+ struct rte_mempool *mp)
+{
+ rte_mempool_ops_enqueue_bulk(mp, cache->objs, cache->len);
+ cache->len = 0;
+}
+
+/**
+ * Get a pointer to the per-lcore default mempool cache.
+ *
+ * @param mp
+ * A pointer to the mempool structure.
+ * @param lcore_id
+ * The logical core id.
+ * @return
+ * A pointer to the mempool cache or NULL if disabled or non-EAL thread.
+ */
+static inline struct rte_mempool_cache * __attribute__((always_inline))
+rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
+{
+ if (mp->cache_size == 0)
+ return NULL;
+
+ if (lcore_id >= RTE_MAX_LCORE)
+ return NULL;
+
+ return &mp->local_cache[lcore_id];
+}
+
+/**
* @internal Put several objects back in the mempool; used internally.
* @param mp
* A pointer to the mempool structure.
@@ -953,34 +1018,30 @@ void rte_mempool_dump(FILE *f, struct rte_mempool *mp);
* @param n
* The number of objects to store back in the mempool, must be strictly
* positive.
+ * @param cache
+ * A pointer to a mempool cache structure. May be NULL if not needed.
* @param flags
* The flags used for the mempool creation.
* Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
*/
static inline void __attribute__((always_inline))
__mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
- unsigned n, int flags)
+ unsigned n, struct rte_mempool_cache *cache, int flags)
{
- struct rte_mempool_cache *cache;
uint32_t index;
void **cache_objs;
- unsigned lcore_id = rte_lcore_id();
- uint32_t cache_size = mp->cache_size;
- uint32_t flushthresh = mp->cache_flushthresh;
/* increment stat now, adding in mempool always success */
__MEMPOOL_STAT_ADD(mp, put, n);
- /* cache is not enabled or single producer or non-EAL thread */
- if (unlikely(cache_size == 0 || flags & MEMPOOL_F_SP_PUT ||
- lcore_id >= RTE_MAX_LCORE))
+ /* No cache provided or single producer */
+ if (unlikely(cache == NULL || flags & MEMPOOL_F_SP_PUT))
goto ring_enqueue;
/* Go straight to ring if put would overflow mem allocated for cache */
if (unlikely(n > RTE_MEMPOOL_CACHE_MAX_SIZE))
goto ring_enqueue;
- cache = &mp->local_cache[lcore_id];
cache_objs = &cache->objs[cache->len];
/*
@@ -996,10 +1057,10 @@ __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
cache->len += n;
- if (cache->len >= flushthresh) {
- rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache_size],
- cache->len - cache_size);
- cache->len = cache_size;
+ if (cache->len >= cache->flushthresh) {
+ rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
+ cache->len - cache->size);
+ cache->len = cache->size;
}
return;
@@ -1025,16 +1086,18 @@ ring_enqueue:
* A pointer to a table of void * pointers (objects).
* @param n
* The number of objects to add in the mempool from the obj_table.
+ * @param cache
+ * A pointer to a mempool cache structure. May be NULL if not needed.
* @param flags
* The flags used for the mempool creation.
* Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
*/
static inline void __attribute__((always_inline))
rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
- unsigned n, int flags)
+ unsigned n, struct rte_mempool_cache *cache, int flags)
{
__mempool_check_cookies(mp, obj_table, n, 0);
- __mempool_generic_put(mp, obj_table, n, flags);
+ __mempool_generic_put(mp, obj_table, n, cache, flags);
}
/**
@@ -1052,7 +1115,9 @@ __rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n, 0);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ rte_mempool_generic_put(mp, obj_table, n, cache, 0);
}
/**
@@ -1070,7 +1135,7 @@ __rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n, MEMPOOL_F_SP_PUT);
+ rte_mempool_generic_put(mp, obj_table, n, NULL, MEMPOOL_F_SP_PUT);
}
/**
@@ -1091,7 +1156,9 @@ static inline void __attribute__((always_inline))
rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n, mp->flags);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ rte_mempool_generic_put(mp, obj_table, n, cache, mp->flags);
}
/**
@@ -1106,7 +1173,9 @@ rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_generic_put(mp, &obj, 1, 0);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ rte_mempool_generic_put(mp, &obj, 1, cache, 0);
}
/**
@@ -1121,7 +1190,7 @@ rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_generic_put(mp, &obj, 1, MEMPOOL_F_SP_PUT);
+ rte_mempool_generic_put(mp, &obj, 1, NULL, MEMPOOL_F_SP_PUT);
}
/**
@@ -1150,6 +1219,8 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
* A pointer to a table of void * pointers (objects).
* @param n
* The number of objects to get, must be strictly positive.
+ * @param cache
+ * A pointer to a mempool cache structure. May be NULL if not needed.
* @param flags
* The flags used for the mempool creation.
* Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
@@ -1159,27 +1230,23 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
*/
static inline int __attribute__((always_inline))
__mempool_generic_get(struct rte_mempool *mp, void **obj_table,
- unsigned n, int flags)
+ unsigned n, struct rte_mempool_cache *cache, int flags)
{
int ret;
- struct rte_mempool_cache *cache;
uint32_t index, len;
void **cache_objs;
- unsigned lcore_id = rte_lcore_id();
- uint32_t cache_size = mp->cache_size;
- /* cache is not enabled or single consumer */
- if (unlikely(cache_size == 0 || flags & MEMPOOL_F_SC_GET ||
- n >= cache_size || lcore_id >= RTE_MAX_LCORE))
+ /* No cache provided or single consumer */
+ if (unlikely(cache == NULL || flags & MEMPOOL_F_SC_GET ||
+ n >= cache->size))
goto ring_dequeue;
- cache = &mp->local_cache[lcore_id];
cache_objs = cache->objs;
/* Can this be satisfied from the cache? */
if (cache->len < n) {
/* No. Backfill the cache first, and then fill from it */
- uint32_t req = n + (cache_size - cache->len);
+ uint32_t req = n + (cache->size - cache->len);
/* How many do we require i.e. number to fill the cache + the request */
ret = rte_mempool_ops_dequeue_bulk(mp,
@@ -1234,6 +1301,8 @@ ring_dequeue:
* A pointer to a table of void * pointers (objects) that will be filled.
* @param n
* The number of objects to get from mempool to obj_table.
+ * @param cache
+ * A pointer to a mempool cache structure. May be NULL if not needed.
* @param flags
* The flags used for the mempool creation.
* Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
@@ -1243,10 +1312,10 @@ ring_dequeue:
*/
static inline int __attribute__((always_inline))
rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
- int flags)
+ struct rte_mempool_cache *cache, int flags)
{
int ret;
- ret = __mempool_generic_get(mp, obj_table, n, flags);
+ ret = __mempool_generic_get(mp, obj_table, n, cache, flags);
if (ret == 0)
__mempool_check_cookies(mp, obj_table, n, 1);
return ret;
@@ -1274,7 +1343,9 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n, 0);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ return rte_mempool_generic_get(mp, obj_table, n, cache, 0);
}
/**
@@ -1300,7 +1371,7 @@ rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n, MEMPOOL_F_SC_GET);
+ return rte_mempool_generic_get(mp, obj_table, n, NULL, MEMPOOL_F_SC_GET);
}
/**
@@ -1328,7 +1399,9 @@ rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
static inline int __attribute__((always_inline))
rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n, mp->flags);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ return rte_mempool_generic_get(mp, obj_table, n, cache, mp->flags);
}
/**
@@ -1351,7 +1424,9 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_generic_get(mp, obj_p, 1, 0);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ return rte_mempool_generic_get(mp, obj_p, 1, cache, 0);
}
/**
@@ -1374,7 +1449,7 @@ rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_generic_get(mp, obj_p, 1, MEMPOOL_F_SC_GET);
+ return rte_mempool_generic_get(mp, obj_p, 1, NULL, MEMPOOL_F_SC_GET);
}
/**
@@ -1408,7 +1483,7 @@ rte_mempool_get(struct rte_mempool *mp, void **obj_p)
*
* When cache is enabled, this function has to browse the length of
* all lcores, so it should not be used in a data path, but only for
- * debug purposes.
+ * debug purposes. User-owned mempool caches are not accounted for.
*
* @param mp
* A pointer to the mempool structure.
@@ -1427,7 +1502,7 @@ unsigned rte_mempool_count(const struct rte_mempool *mp);
*
* When cache is enabled, this function has to browse the length of
* all lcores, so it should not be used in a data path, but only for
- * debug purposes.
+ * debug purposes. User-owned mempool caches are not accounted for.
*
* @param mp
* A pointer to the mempool structure.
@@ -1445,7 +1520,7 @@ rte_mempool_free_count(const struct rte_mempool *mp)
*
* When cache is enabled, this function has to browse the length of all
* lcores, so it should not be used in a data path, but only for debug
- * purposes.
+ * purposes. User-owned mempool caches are not accounted for.
*
* @param mp
* A pointer to the mempool structure.
@@ -1464,7 +1539,7 @@ rte_mempool_full(const struct rte_mempool *mp)
*
* When cache is enabled, this function has to browse the length of all
* lcores, so it should not be used in a data path, but only for debug
- * purposes.
+ * purposes. User-owned mempool caches are not accounted for.
*
* @param mp
* A pointer to the mempool structure.
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 9bcbf17..8e8eb6d 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -19,8 +19,12 @@ DPDK_2.0 {
DPDK_16.07 {
global:
+ rte_mempool_cache_create;
+ rte_mempool_cache_free;
+ rte_mempool_cache_flush;
rte_mempool_check_cookies;
rte_mempool_create_empty;
+ rte_mempool_default_cache;
rte_mempool_free;
rte_mempool_mem_iter;
rte_mempool_obj_iter;
--
2.8.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [PATCH v4 0/3] mempool: user-owned mempool caches
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 " Olivier Matz
` (2 preceding siblings ...)
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 3/3] mempool: allow for user-owned mempool caches Olivier Matz
@ 2016-06-27 15:52 ` Olivier MATZ
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 " Lazaros Koromilas
4 siblings, 0 replies; 21+ messages in thread
From: Olivier MATZ @ 2016-06-27 15:52 UTC (permalink / raw)
To: l; +Cc: dev
Hi Lazaros,
On 06/27/2016 05:50 PM, Olivier Matz wrote:
> Updated version of the user-owned cache patchset. It applies on top of
> the latest external mempool manager patches from David Hunt [1].
>
> [1] http://dpdk.org/ml/archives/dev/2016-June/041479.html
>
> v4 changes:
>
> * Fix compilation with shared libraries
> * Add a GOTO_ERR() macro to factorize code in test_mempool.c
> * Change title of patch 2 to conform to check-git-log.sh
As rc1 is approaching, I submitted a v4 with some minor fixes.
Feel free to comment.
Regards,
Olivier
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/3] mempool: allow for user-owned mempool caches
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 3/3] mempool: allow for user-owned mempool caches Olivier Matz
@ 2016-06-28 17:20 ` Lazaros Koromilas
0 siblings, 0 replies; 21+ messages in thread
From: Lazaros Koromilas @ 2016-06-28 17:20 UTC (permalink / raw)
To: Olivier Matz; +Cc: dev
Hi Olivier, thanks for fixing those; just one comment below.
On Mon, Jun 27, 2016 at 4:50 PM, Olivier Matz <olivier.matz@6wind.com> wrote:
> From: Lazaros Koromilas <l@nofutznetworks.com>
>
> The mempool cache is only available to EAL threads as a per-lcore
> resource. Change this so that the user can create and provide their own
> cache on mempool get and put operations. This works with non-EAL threads
> too. This commit introduces the new API calls:
>
> rte_mempool_cache_create(size, socket_id)
> rte_mempool_cache_free(cache)
> rte_mempool_cache_flush(cache, mp)
> rte_mempool_default_cache(mp, lcore_id)
>
> Changes the API calls:
>
> rte_mempool_generic_put(mp, obj_table, n, cache, flags)
> rte_mempool_generic_get(mp, obj_table, n, cache, flags)
>
> The cache-oblivious API calls use the per-lcore default local cache.
>
> Signed-off-by: Lazaros Koromilas <l@nofutznetworks.com>
> Acked-by: Olivier Matz <olivier.matz@6wind.com>
> ---
> app/test/test_mempool.c | 75 +++++++++----
> app/test/test_mempool_perf.c | 70 ++++++++++---
> lib/librte_mempool/rte_mempool.c | 66 +++++++++++-
> lib/librte_mempool/rte_mempool.h | 163 +++++++++++++++++++++--------
> lib/librte_mempool/rte_mempool_version.map | 4 +
> 5 files changed, 296 insertions(+), 82 deletions(-)
>
> diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
> index 55c2cbc..5b3c754 100644
> --- a/app/test/test_mempool.c
> +++ b/app/test/test_mempool.c
> @@ -75,10 +75,18 @@
> #define MAX_KEEP 16
> #define MEMPOOL_SIZE ((rte_lcore_count()*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE))-1)
>
> -#define RET_ERR() do { \
> +#define LOG_ERR() do { \
> printf("test failed at %s():%d\n", __func__, __LINE__); \
> + } while (0)
> +#define RET_ERR() do { \
> + LOG_ERR(); \
> return -1; \
> } while (0)
> +#define GOTO_ERR(err, label) do { \
> + LOG_ERR(); \
> + ret = err; \
> + goto label; \
> + } while (0)
Here, GOTO_ERR() still assumes a variable named ret in the function and
takes the error value as an argument, while RET_ERR() always returns -1.
I'd change it to always use -1 as well:
#define GOTO_ERR(retvar, label) do { LOG_ERR(); retvar = -1; goto label; } while (0)
Should I do it like that and also quickly add the documentation in a v5?
Thanks,
Lazaros.
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v5 0/3] mempool: user-owned mempool caches
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 " Olivier Matz
` (3 preceding siblings ...)
2016-06-27 15:52 ` [dpdk-dev] [PATCH v4 0/3] mempool: " Olivier MATZ
@ 2016-06-28 23:47 ` Lazaros Koromilas
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 1/3] mempool: deprecate specific get and put functions Lazaros Koromilas
` (3 more replies)
4 siblings, 4 replies; 21+ messages in thread
From: Lazaros Koromilas @ 2016-06-28 23:47 UTC (permalink / raw)
To: dev; +Cc: Olivier Matz
Updated version of the user-owned cache patchset. It applies on top of
the latest v15 external mempool manager patches from David Hunt [1].
[1] http://dpdk.org/ml/archives/dev/2016-June/042004.html
v5 changes:
* Rework error path macros in tests.
* Update documentation.
* Style fixes.
v4 changes:
* Fix compilation with shared libraries
* Add a GOTO_ERR() macro to factorize code in test_mempool.c
* Change title of patch 2 to conform to check-git-log.sh
v3 changes:
* Deprecate specific mempool API calls instead of removing them.
* Split deprecation into a separate commit to limit noise.
* Fix cache flush by setting cache->len = 0 and make it inline.
* Remove cache->size == 0 checks and ensure size != 0 at creation.
* Fix tests to check if cache creation succeeded.
* Fix tests to free allocated resources on error.
The mempool cache is only available to EAL threads as a per-lcore
resource. Change this so that the user can create and provide their own
cache on mempool get and put operations. This works with non-EAL threads
too.
Also, deprecate the explicit {mp,sp}_put and {mc,sc}_get calls and
re-route them through the new generic calls. Minor cleanup to pass the
mempool bit flags instead of using specific is_mp and is_mc. The old
cache-oblivious API calls use the per-lcore default local cache. The
mempool and mempool_perf tests are also updated to handle the
user-owned cache case.
Introduced API calls:
rte_mempool_cache_create(size, socket_id)
rte_mempool_cache_free(cache)
rte_mempool_cache_flush(cache, mp)
rte_mempool_default_cache(mp, lcore_id)
rte_mempool_generic_put(mp, obj_table, n, cache, flags)
rte_mempool_generic_get(mp, obj_table, n, cache, flags)
Deprecated API calls:
rte_mempool_mp_put_bulk(mp, obj_table, n)
rte_mempool_sp_put_bulk(mp, obj_table, n)
rte_mempool_mp_put(mp, obj)
rte_mempool_sp_put(mp, obj)
rte_mempool_mc_get_bulk(mp, obj_table, n)
rte_mempool_sc_get_bulk(mp, obj_table, n)
rte_mempool_mc_get(mp, obj_p)
rte_mempool_sc_get(mp, obj_p)
Lazaros Koromilas (3):
mempool: deprecate specific get and put functions
mempool: use bit flags to set multi consumers and producers
mempool: allow for user-owned mempool caches
app/test/test_mempool.c | 83 +++++---
app/test/test_mempool_perf.c | 73 ++++++-
doc/guides/prog_guide/env_abstraction_layer.rst | 4 +-
doc/guides/prog_guide/mempool_lib.rst | 6 +-
lib/librte_mempool/rte_mempool.c | 66 +++++-
lib/librte_mempool/rte_mempool.h | 257 ++++++++++++++++++------
lib/librte_mempool/rte_mempool_version.map | 6 +
7 files changed, 385 insertions(+), 110 deletions(-)
--
1.9.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v5 1/3] mempool: deprecate specific get and put functions
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 " Lazaros Koromilas
@ 2016-06-28 23:47 ` Lazaros Koromilas
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 2/3] mempool: use bit flags to set multi consumers and producers Lazaros Koromilas
` (2 subsequent siblings)
3 siblings, 0 replies; 21+ messages in thread
From: Lazaros Koromilas @ 2016-06-28 23:47 UTC (permalink / raw)
To: dev; +Cc: Olivier Matz
This commit introduces the API calls:
rte_mempool_generic_put(mp, obj_table, n, is_mp)
rte_mempool_generic_get(mp, obj_table, n, is_mc)
Deprecates the API calls:
rte_mempool_mp_put_bulk(mp, obj_table, n)
rte_mempool_sp_put_bulk(mp, obj_table, n)
rte_mempool_mp_put(mp, obj)
rte_mempool_sp_put(mp, obj)
rte_mempool_mc_get_bulk(mp, obj_table, n)
rte_mempool_sc_get_bulk(mp, obj_table, n)
rte_mempool_mc_get(mp, obj_p)
rte_mempool_sc_get(mp, obj_p)
We also check cookies in one place now.
Signed-off-by: Lazaros Koromilas <l@nofutznetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
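[Editor's note: a migration sketch for this intermediate API, where
is_mp/is_mc are still plain ints; later patches in the series replace
them with bit flags and a cache pointer. migrate_example() is a
hypothetical helper.]
#include <rte_mempool.h>

static void
migrate_example(struct rte_mempool *mp, void **obj_table, unsigned n)
{
	/* was: rte_mempool_mc_get_bulk(mp, obj_table, n); */
	if (rte_mempool_generic_get(mp, obj_table, n, 1) < 0)	/* is_mc = 1 */
		return;

	/* was: rte_mempool_mp_put_bulk(mp, obj_table, n); */
	rte_mempool_generic_put(mp, obj_table, n, 1);		/* is_mp = 1 */
}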
app/test/test_mempool.c | 10 +--
lib/librte_mempool/rte_mempool.h | 115 ++++++++++++++++++++---------
lib/librte_mempool/rte_mempool_version.map | 2 +
3 files changed, 87 insertions(+), 40 deletions(-)
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 31582d8..55c2cbc 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -338,7 +338,7 @@ static int test_mempool_single_producer(void)
printf("obj not owned by this mempool\n");
RET_ERR();
}
- rte_mempool_sp_put(mp_spsc, obj);
+ rte_mempool_put(mp_spsc, obj);
rte_spinlock_lock(&scsp_spinlock);
scsp_obj_table[i] = NULL;
rte_spinlock_unlock(&scsp_spinlock);
@@ -371,7 +371,7 @@ static int test_mempool_single_consumer(void)
rte_spinlock_unlock(&scsp_spinlock);
if (i >= MAX_KEEP)
continue;
- if (rte_mempool_sc_get(mp_spsc, &obj) < 0)
+ if (rte_mempool_get(mp_spsc, &obj) < 0)
break;
rte_spinlock_lock(&scsp_spinlock);
scsp_obj_table[i] = obj;
@@ -477,13 +477,13 @@ test_mempool_basic_ex(struct rte_mempool *mp)
}
for (i = 0; i < MEMPOOL_SIZE; i ++) {
- if (rte_mempool_mc_get(mp, &obj[i]) < 0) {
+ if (rte_mempool_get(mp, &obj[i]) < 0) {
printf("test_mp_basic_ex fail to get object for [%u]\n",
i);
goto fail_mp_basic_ex;
}
}
- if (rte_mempool_mc_get(mp, &err_obj) == 0) {
+ if (rte_mempool_get(mp, &err_obj) == 0) {
printf("test_mempool_basic_ex get an impossible obj\n");
goto fail_mp_basic_ex;
}
@@ -494,7 +494,7 @@ test_mempool_basic_ex(struct rte_mempool *mp)
}
for (i = 0; i < MEMPOOL_SIZE; i++)
- rte_mempool_mp_put(mp, obj[i]);
+ rte_mempool_put(mp, obj[i]);
if (rte_mempool_full(mp) != 1) {
printf("test_mempool_basic_ex the mempool should be full\n");
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 0a1777c..a48f46d 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -957,8 +957,8 @@ void rte_mempool_dump(FILE *f, struct rte_mempool *mp);
* Mono-producer (0) or multi-producers (1).
*/
static inline void __attribute__((always_inline))
-__mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
- unsigned n, int is_mp)
+__mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
+ unsigned n, int is_mp)
{
struct rte_mempool_cache *cache;
uint32_t index;
@@ -1016,7 +1016,7 @@ ring_enqueue:
/**
- * Put several objects back in the mempool (multi-producers safe).
+ * Put several objects back in the mempool.
*
* @param mp
* A pointer to the mempool structure.
@@ -1024,16 +1024,37 @@ ring_enqueue:
* A pointer to a table of void * pointers (objects).
* @param n
* The number of objects to add in the mempool from the obj_table.
+ * @param is_mp
+ * Mono-producer (0) or multi-producers (1).
*/
static inline void __attribute__((always_inline))
+rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
+ unsigned n, int is_mp)
+{
+ __mempool_check_cookies(mp, obj_table, n, 0);
+ __mempool_generic_put(mp, obj_table, n, is_mp);
+}
+
+/**
+ * @deprecated
+ * Put several objects back in the mempool (multi-producers safe).
+ *
+ * @param mp
+ * A pointer to the mempool structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to add in the mempool from the obj_table.
+ */
+__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- __mempool_check_cookies(mp, obj_table, n, 0);
- __mempool_put_bulk(mp, obj_table, n, 1);
+ rte_mempool_generic_put(mp, obj_table, n, 1);
}
/**
+ * @deprecated
* Put several objects back in the mempool (NOT multi-producers safe).
*
* @param mp
@@ -1043,12 +1064,11 @@ rte_mempool_mp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
* @param n
* The number of objects to add in the mempool from obj_table.
*/
-static inline void
+__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- __mempool_check_cookies(mp, obj_table, n, 0);
- __mempool_put_bulk(mp, obj_table, n, 0);
+ rte_mempool_generic_put(mp, obj_table, n, 0);
}
/**
@@ -1069,11 +1089,12 @@ static inline void __attribute__((always_inline))
rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- __mempool_check_cookies(mp, obj_table, n, 0);
- __mempool_put_bulk(mp, obj_table, n, !(mp->flags & MEMPOOL_F_SP_PUT));
+ rte_mempool_generic_put(mp, obj_table, n,
+ !(mp->flags & MEMPOOL_F_SP_PUT));
}
/**
+ * @deprecated
* Put one object in the mempool (multi-producers safe).
*
* @param mp
@@ -1081,13 +1102,14 @@ rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
* @param obj
* A pointer to the object to be added.
*/
-static inline void __attribute__((always_inline))
+__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_mp_put_bulk(mp, &obj, 1);
+ rte_mempool_generic_put(mp, &obj, 1, 1);
}
/**
+ * @deprecated
* Put one object back in the mempool (NOT multi-producers safe).
*
* @param mp
@@ -1095,10 +1117,10 @@ rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
* @param obj
* A pointer to the object to be added.
*/
-static inline void __attribute__((always_inline))
+__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_sp_put_bulk(mp, &obj, 1);
+ rte_mempool_generic_put(mp, &obj, 1, 0);
}
/**
@@ -1134,8 +1156,8 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
* - <0: Error; code of ring dequeue function.
*/
static inline int __attribute__((always_inline))
-__mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
- unsigned n, int is_mc)
+__mempool_generic_get(struct rte_mempool *mp, void **obj_table,
+ unsigned n, int is_mc)
{
int ret;
struct rte_mempool_cache *cache;
@@ -1197,7 +1219,7 @@ ring_dequeue:
}
/**
- * Get several objects from the mempool (multi-consumers safe).
+ * Get several objects from the mempool.
*
* If cache is enabled, objects will be retrieved first from cache,
* subsequently from the common pool. Note that it can return -ENOENT when
@@ -1210,21 +1232,50 @@ ring_dequeue:
* A pointer to a table of void * pointers (objects) that will be filled.
* @param n
* The number of objects to get from mempool to obj_table.
+ * @param is_mc
+ * Mono-consumer (0) or multi-consumers (1).
* @return
* - 0: Success; objects taken.
* - -ENOENT: Not enough entries in the mempool; no object is retrieved.
*/
static inline int __attribute__((always_inline))
-rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
+ int is_mc)
{
int ret;
- ret = __mempool_get_bulk(mp, obj_table, n, 1);
+ ret = __mempool_generic_get(mp, obj_table, n, is_mc);
if (ret == 0)
__mempool_check_cookies(mp, obj_table, n, 1);
return ret;
}
/**
+ * @deprecated
+ * Get several objects from the mempool (multi-consumers safe).
+ *
+ * If cache is enabled, objects will be retrieved first from cache,
+ * subsequently from the common pool. Note that it can return -ENOENT when
+ * the local cache and common pool are empty, even if cache from other
+ * lcores are full.
+ *
+ * @param mp
+ * A pointer to the mempool structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ * The number of objects to get from mempool to obj_table.
+ * @return
+ * - 0: Success; objects taken.
+ * - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ */
+__rte_deprecated static inline int __attribute__((always_inline))
+rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+ return rte_mempool_generic_get(mp, obj_table, n, 1);
+}
+
+/**
+ * @deprecated
* Get several objects from the mempool (NOT multi-consumers safe).
*
* If cache is enabled, objects will be retrieved first from cache,
@@ -1243,14 +1294,10 @@ rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
* - -ENOENT: Not enough entries in the mempool; no object is
* retrieved.
*/
-static inline int __attribute__((always_inline))
+__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- int ret;
- ret = __mempool_get_bulk(mp, obj_table, n, 0);
- if (ret == 0)
- __mempool_check_cookies(mp, obj_table, n, 1);
- return ret;
+ return rte_mempool_generic_get(mp, obj_table, n, 0);
}
/**
@@ -1278,15 +1325,12 @@ rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
static inline int __attribute__((always_inline))
rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- int ret;
- ret = __mempool_get_bulk(mp, obj_table, n,
- !(mp->flags & MEMPOOL_F_SC_GET));
- if (ret == 0)
- __mempool_check_cookies(mp, obj_table, n, 1);
- return ret;
+ return rte_mempool_generic_get(mp, obj_table, n,
+ !(mp->flags & MEMPOOL_F_SC_GET));
}
/**
+ * @deprecated
* Get one object from the mempool (multi-consumers safe).
*
* If cache is enabled, objects will be retrieved first from cache,
@@ -1302,13 +1346,14 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
* - 0: Success; objects taken.
* - -ENOENT: Not enough entries in the mempool; no object is retrieved.
*/
-static inline int __attribute__((always_inline))
+__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_mc_get_bulk(mp, obj_p, 1);
+ return rte_mempool_generic_get(mp, obj_p, 1, 1);
}
/**
+ * @deprecated
* Get one object from the mempool (NOT multi-consumers safe).
*
* If cache is enabled, objects will be retrieved first from cache,
@@ -1324,10 +1369,10 @@ rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
* - 0: Success; objects taken.
* - -ENOENT: Not enough entries in the mempool; no object is retrieved.
*/
-static inline int __attribute__((always_inline))
+__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_sc_get_bulk(mp, obj_p, 1);
+ return rte_mempool_generic_get(mp, obj_p, 1, 0);
}
/**
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 9bcbf17..6d4fc4a 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -22,6 +22,8 @@ DPDK_16.07 {
rte_mempool_check_cookies;
rte_mempool_create_empty;
rte_mempool_free;
+ rte_mempool_generic_get;
+ rte_mempool_generic_put;
rte_mempool_mem_iter;
rte_mempool_obj_iter;
rte_mempool_ops_table;
--
1.9.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v5 2/3] mempool: use bit flags to set multi consumers and producers
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 " Lazaros Koromilas
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 1/3] mempool: deprecate specific get and put functions Lazaros Koromilas
@ 2016-06-28 23:47 ` Lazaros Koromilas
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 3/3] mempool: allow for user-owned mempool caches Lazaros Koromilas
2016-06-30 9:29 ` [dpdk-dev] [PATCH v5 0/3] mempool: " Thomas Monjalon
3 siblings, 0 replies; 21+ messages in thread
From: Lazaros Koromilas @ 2016-06-28 23:47 UTC (permalink / raw)
To: dev; +Cc: Olivier Matz
Pass the same flags as in rte_mempool_create(). This changes the API calls:
rte_mempool_generic_put(mp, obj_table, n, flags)
rte_mempool_generic_get(mp, obj_table, n, flags)
Signed-off-by: Lazaros Koromilas <l@nofutznetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
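[Editor's note: a sketch of the call sites after this change;
flags_example() is hypothetical. Passing mp->flags reproduces the
behavior chosen at pool creation, while 0 forces the
multi-producer/consumer paths.]
#include <rte_mempool.h>

static void
flags_example(struct rte_mempool *mp, void **obj_table, unsigned n)
{
	/* single-consumer get, single-producer put, regardless of mp->flags */
	if (rte_mempool_generic_get(mp, obj_table, n, MEMPOOL_F_SC_GET) < 0)
		return;
	rte_mempool_generic_put(mp, obj_table, n, MEMPOOL_F_SP_PUT);
}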
lib/librte_mempool/rte_mempool.h | 58 +++++++++++++++++++++-------------------
1 file changed, 30 insertions(+), 28 deletions(-)
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index a48f46d..971b1ba 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -953,12 +953,13 @@ void rte_mempool_dump(FILE *f, struct rte_mempool *mp);
* @param n
* The number of objects to store back in the mempool, must be strictly
* positive.
- * @param is_mp
- * Mono-producer (0) or multi-producers (1).
+ * @param flags
+ * The flags used for the mempool creation.
+ * Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
*/
static inline void __attribute__((always_inline))
__mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
- unsigned n, int is_mp)
+ unsigned n, int flags)
{
struct rte_mempool_cache *cache;
uint32_t index;
@@ -971,7 +972,7 @@ __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
__MEMPOOL_STAT_ADD(mp, put, n);
/* cache is not enabled or single producer or non-EAL thread */
- if (unlikely(cache_size == 0 || is_mp == 0 ||
+ if (unlikely(cache_size == 0 || flags & MEMPOOL_F_SP_PUT ||
lcore_id >= RTE_MAX_LCORE))
goto ring_enqueue;
@@ -1024,15 +1025,16 @@ ring_enqueue:
* A pointer to a table of void * pointers (objects).
* @param n
* The number of objects to add in the mempool from the obj_table.
- * @param is_mp
- * Mono-producer (0) or multi-producers (1).
+ * @param flags
+ * The flags used for the mempool creation.
+ * Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
*/
static inline void __attribute__((always_inline))
rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
- unsigned n, int is_mp)
+ unsigned n, int flags)
{
__mempool_check_cookies(mp, obj_table, n, 0);
- __mempool_generic_put(mp, obj_table, n, is_mp);
+ __mempool_generic_put(mp, obj_table, n, flags);
}
/**
@@ -1050,7 +1052,7 @@ __rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n, 1);
+ rte_mempool_generic_put(mp, obj_table, n, 0);
}
/**
@@ -1068,7 +1070,7 @@ __rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n, 0);
+ rte_mempool_generic_put(mp, obj_table, n, MEMPOOL_F_SP_PUT);
}
/**
@@ -1089,8 +1091,7 @@ static inline void __attribute__((always_inline))
rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n,
- !(mp->flags & MEMPOOL_F_SP_PUT));
+ rte_mempool_generic_put(mp, obj_table, n, mp->flags);
}
/**
@@ -1105,7 +1106,7 @@ rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_generic_put(mp, &obj, 1, 1);
+ rte_mempool_generic_put(mp, &obj, 1, 0);
}
/**
@@ -1120,7 +1121,7 @@ rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_generic_put(mp, &obj, 1, 0);
+ rte_mempool_generic_put(mp, &obj, 1, MEMPOOL_F_SP_PUT);
}
/**
@@ -1149,15 +1150,16 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
* A pointer to a table of void * pointers (objects).
* @param n
* The number of objects to get, must be strictly positive.
- * @param is_mc
- * Mono-consumer (0) or multi-consumers (1).
+ * @param flags
+ * The flags used for the mempool creation.
+ * Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
* @return
* - >=0: Success; number of objects supplied.
* - <0: Error; code of ring dequeue function.
*/
static inline int __attribute__((always_inline))
__mempool_generic_get(struct rte_mempool *mp, void **obj_table,
- unsigned n, int is_mc)
+ unsigned n, int flags)
{
int ret;
struct rte_mempool_cache *cache;
@@ -1167,7 +1169,7 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
uint32_t cache_size = mp->cache_size;
/* cache is not enabled or single consumer */
- if (unlikely(cache_size == 0 || is_mc == 0 ||
+ if (unlikely(cache_size == 0 || flags & MEMPOOL_F_SC_GET ||
n >= cache_size || lcore_id >= RTE_MAX_LCORE))
goto ring_dequeue;
@@ -1232,18 +1234,19 @@ ring_dequeue:
* A pointer to a table of void * pointers (objects) that will be filled.
* @param n
* The number of objects to get from mempool to obj_table.
- * @param is_mc
- * Mono-consumer (0) or multi-consumers (1).
+ * @param flags
+ * The flags used for the mempool creation.
+ * Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
* @return
* - 0: Success; objects taken.
* - -ENOENT: Not enough entries in the mempool; no object is retrieved.
*/
static inline int __attribute__((always_inline))
rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
- int is_mc)
+ int flags)
{
int ret;
- ret = __mempool_generic_get(mp, obj_table, n, is_mc);
+ ret = __mempool_generic_get(mp, obj_table, n, flags);
if (ret == 0)
__mempool_check_cookies(mp, obj_table, n, 1);
return ret;
@@ -1271,7 +1274,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n, 1);
+ return rte_mempool_generic_get(mp, obj_table, n, 0);
}
/**
@@ -1297,7 +1300,7 @@ rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n, 0);
+ return rte_mempool_generic_get(mp, obj_table, n, MEMPOOL_F_SC_GET);
}
/**
@@ -1325,8 +1328,7 @@ rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
static inline int __attribute__((always_inline))
rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n,
- !(mp->flags & MEMPOOL_F_SC_GET));
+ return rte_mempool_generic_get(mp, obj_table, n, mp->flags);
}
/**
@@ -1349,7 +1351,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_generic_get(mp, obj_p, 1, 1);
+ return rte_mempool_generic_get(mp, obj_p, 1, 0);
}
/**
@@ -1372,7 +1374,7 @@ rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_generic_get(mp, obj_p, 1, 0);
+ return rte_mempool_generic_get(mp, obj_p, 1, MEMPOOL_F_SC_GET);
}
/**
--
1.9.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v5 3/3] mempool: allow for user-owned mempool caches
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 " Lazaros Koromilas
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 1/3] mempool: deprecate specific get and put functions Lazaros Koromilas
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 2/3] mempool: use bit flags to set multi consumers and producers Lazaros Koromilas
@ 2016-06-28 23:47 ` Lazaros Koromilas
2016-06-29 12:13 ` Olivier MATZ
2016-06-30 9:29 ` [dpdk-dev] [PATCH v5 0/3] mempool: " Thomas Monjalon
3 siblings, 1 reply; 21+ messages in thread
From: Lazaros Koromilas @ 2016-06-28 23:47 UTC (permalink / raw)
To: dev; +Cc: Olivier Matz
The mempool cache is only available to EAL threads as a per-lcore
resource. Change this so that the user can create and provide their own
cache on mempool get and put operations. This works with non-EAL threads
too. This commit introduces the new API calls:
rte_mempool_cache_create(size, socket_id)
rte_mempool_cache_free(cache)
rte_mempool_cache_flush(cache, mp)
rte_mempool_default_cache(mp, lcore_id)
Changes the API calls:
rte_mempool_generic_put(mp, obj_table, n, cache, flags)
rte_mempool_generic_get(mp, obj_table, n, cache, flags)
The cache-oblivious API calls use the per-lcore default local cache.
Signed-off-by: Lazaros Koromilas <l@nofutznetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
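[Editor's note: for completeness, a sketch of how an EAL thread can
resolve its default cache and pass it explicitly, which is what the
cache-oblivious wrappers do internally; use_default_cache() is a
hypothetical helper.]
#include <rte_lcore.h>
#include <rte_mempool.h>

static int
use_default_cache(struct rte_mempool *mp)
{
	void *obj;
	struct rte_mempool_cache *cache;

	/* NULL when cache_size == 0 or when called from a non-EAL thread;
	 * the generic calls accept NULL and go straight to the ring. */
	cache = rte_mempool_default_cache(mp, rte_lcore_id());

	if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0)
		return -1;
	rte_mempool_generic_put(mp, &obj, 1, cache, 0);
	return 0;
}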
app/test/test_mempool.c | 73 ++++++++---
app/test/test_mempool_perf.c | 73 +++++++++--
doc/guides/prog_guide/env_abstraction_layer.rst | 4 +-
doc/guides/prog_guide/mempool_lib.rst | 6 +-
lib/librte_mempool/rte_mempool.c | 66 +++++++++-
lib/librte_mempool/rte_mempool.h | 164 +++++++++++++++++-------
lib/librte_mempool/rte_mempool_version.map | 4 +
7 files changed, 308 insertions(+), 82 deletions(-)
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 55c2cbc..63c61f3 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -75,10 +75,16 @@
#define MAX_KEEP 16
#define MEMPOOL_SIZE ((rte_lcore_count()*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE))-1)
+#define LOG_ERR() printf("test failed at %s():%d\n", __func__, __LINE__)
#define RET_ERR() do { \
- printf("test failed at %s():%d\n", __func__, __LINE__); \
+ LOG_ERR(); \
return -1; \
} while (0)
+#define GOTO_ERR(var, label) do { \
+ LOG_ERR(); \
+ var = -1; \
+ goto label; \
+ } while (0)
static rte_atomic32_t synchro;
@@ -191,7 +197,7 @@ my_obj_init(struct rte_mempool *mp, __attribute__((unused)) void *arg,
/* basic tests (done on one core) */
static int
-test_mempool_basic(struct rte_mempool *mp)
+test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
{
uint32_t *objnum;
void **objtable;
@@ -199,47 +205,62 @@ test_mempool_basic(struct rte_mempool *mp)
char *obj_data;
int ret = 0;
unsigned i, j;
+ int offset;
+ struct rte_mempool_cache *cache;
+
+ if (use_external_cache) {
+ /* Create a user-owned mempool cache. */
+ cache = rte_mempool_cache_create(RTE_MEMPOOL_CACHE_MAX_SIZE,
+ SOCKET_ID_ANY);
+ if (cache == NULL)
+ RET_ERR();
+ } else {
+ /* May be NULL if cache is disabled. */
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ }
/* dump the mempool status */
rte_mempool_dump(stdout, mp);
printf("get an object\n");
- if (rte_mempool_get(mp, &obj) < 0)
- RET_ERR();
+ if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0)
+ GOTO_ERR(ret, out);
rte_mempool_dump(stdout, mp);
/* tests that improve coverage */
printf("get object count\n");
- if (rte_mempool_count(mp) != MEMPOOL_SIZE - 1)
- RET_ERR();
+ /* We have to count the extra caches, one in this case. */
+ offset = use_external_cache ? 1 * cache->len : 0;
+ if (rte_mempool_count(mp) + offset != MEMPOOL_SIZE - 1)
+ GOTO_ERR(ret, out);
printf("get private data\n");
if (rte_mempool_get_priv(mp) != (char *)mp +
MEMPOOL_HEADER_SIZE(mp, mp->cache_size))
- RET_ERR();
+ GOTO_ERR(ret, out);
#ifndef RTE_EXEC_ENV_BSDAPP /* rte_mem_virt2phy() not supported on bsd */
printf("get physical address of an object\n");
if (rte_mempool_virt2phy(mp, obj) != rte_mem_virt2phy(obj))
- RET_ERR();
+ GOTO_ERR(ret, out);
#endif
printf("put the object back\n");
- rte_mempool_put(mp, obj);
+ rte_mempool_generic_put(mp, &obj, 1, cache, 0);
rte_mempool_dump(stdout, mp);
printf("get 2 objects\n");
- if (rte_mempool_get(mp, &obj) < 0)
- RET_ERR();
- if (rte_mempool_get(mp, &obj2) < 0) {
- rte_mempool_put(mp, obj);
- RET_ERR();
+ if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0)
+ GOTO_ERR(ret, out);
+ if (rte_mempool_generic_get(mp, &obj2, 1, cache, 0) < 0) {
+ rte_mempool_generic_put(mp, &obj, 1, cache, 0);
+ GOTO_ERR(ret, out);
}
rte_mempool_dump(stdout, mp);
printf("put the objects back\n");
- rte_mempool_put(mp, obj);
- rte_mempool_put(mp, obj2);
+ rte_mempool_generic_put(mp, &obj, 1, cache, 0);
+ rte_mempool_generic_put(mp, &obj2, 1, cache, 0);
rte_mempool_dump(stdout, mp);
/*
@@ -248,10 +269,10 @@ test_mempool_basic(struct rte_mempool *mp)
*/
objtable = malloc(MEMPOOL_SIZE * sizeof(void *));
if (objtable == NULL)
- RET_ERR();
+ GOTO_ERR(ret, out);
for (i = 0; i < MEMPOOL_SIZE; i++) {
- if (rte_mempool_get(mp, &objtable[i]) < 0)
+ if (rte_mempool_generic_get(mp, &objtable[i], 1, cache, 0) < 0)
break;
}
@@ -273,13 +294,19 @@ test_mempool_basic(struct rte_mempool *mp)
ret = -1;
}
- rte_mempool_put(mp, objtable[i]);
+ rte_mempool_generic_put(mp, &objtable[i], 1, cache, 0);
}
free(objtable);
if (ret == -1)
printf("objects were modified!\n");
+out:
+ if (use_external_cache) {
+ rte_mempool_cache_flush(cache, mp);
+ rte_mempool_cache_free(cache);
+ }
+
return ret;
}
@@ -631,11 +658,15 @@ test_mempool(void)
rte_mempool_list_dump(stdout);
/* basic tests without cache */
- if (test_mempool_basic(mp_nocache) < 0)
+ if (test_mempool_basic(mp_nocache, 0) < 0)
goto err;
/* basic tests with cache */
- if (test_mempool_basic(mp_cache) < 0)
+ if (test_mempool_basic(mp_cache, 0) < 0)
+ goto err;
+
+ /* basic tests with user-owned cache */
+ if (test_mempool_basic(mp_nocache, 1) < 0)
goto err;
/* more basic tests without cache */
diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index c5f8455..b80a1dd 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -78,6 +78,9 @@
* - One core without cache
* - Two cores without cache
* - Max. cores without cache
+ * - One core with user-owned cache
+ * - Two cores with user-owned cache
+ * - Max. cores with user-owned cache
*
* - Bulk size (*n_get_bulk*, *n_put_bulk*)
*
@@ -96,8 +99,21 @@
#define MAX_KEEP 128
#define MEMPOOL_SIZE ((rte_lcore_count()*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE))-1)
+#define LOG_ERR() printf("test failed at %s():%d\n", __func__, __LINE__)
+#define RET_ERR() do { \
+ LOG_ERR(); \
+ return -1; \
+ } while (0)
+#define GOTO_ERR(var, label) do { \
+ LOG_ERR(); \
+ var = -1; \
+ goto label; \
+ } while (0)
+
static struct rte_mempool *mp;
static struct rte_mempool *mp_cache, *mp_nocache;
+static int use_external_cache;
+static unsigned external_cache_size = RTE_MEMPOOL_CACHE_MAX_SIZE;
static rte_atomic32_t synchro;
@@ -134,15 +150,27 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
void *obj_table[MAX_KEEP];
unsigned i, idx;
unsigned lcore_id = rte_lcore_id();
- int ret;
+ int ret = 0;
uint64_t start_cycles, end_cycles;
uint64_t time_diff = 0, hz = rte_get_timer_hz();
+ struct rte_mempool_cache *cache;
+
+ if (use_external_cache) {
+ /* Create a user-owned mempool cache. */
+ cache = rte_mempool_cache_create(external_cache_size,
+ SOCKET_ID_ANY);
+ if (cache == NULL)
+ RET_ERR();
+ } else {
+ /* May be NULL if cache is disabled. */
+ cache = rte_mempool_default_cache(mp, lcore_id);
+ }
/* n_get_bulk and n_put_bulk must be divisors of n_keep */
if (((n_keep / n_get_bulk) * n_get_bulk) != n_keep)
- return -1;
+ GOTO_ERR(ret, out);
if (((n_keep / n_put_bulk) * n_put_bulk) != n_keep)
- return -1;
+ GOTO_ERR(ret, out);
stats[lcore_id].enq_count = 0;
@@ -157,12 +185,14 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
/* get n_keep objects by bulk of n_bulk */
idx = 0;
while (idx < n_keep) {
- ret = rte_mempool_get_bulk(mp, &obj_table[idx],
- n_get_bulk);
+ ret = rte_mempool_generic_get(mp,
+ &obj_table[idx],
+ n_get_bulk,
+ cache, 0);
if (unlikely(ret < 0)) {
rte_mempool_dump(stdout, mp);
/* in this case, objects are lost... */
- return -1;
+ GOTO_ERR(ret, out);
}
idx += n_get_bulk;
}
@@ -170,8 +200,9 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
/* put the objects back */
idx = 0;
while (idx < n_keep) {
- rte_mempool_put_bulk(mp, &obj_table[idx],
- n_put_bulk);
+ rte_mempool_generic_put(mp, &obj_table[idx],
+ n_put_bulk,
+ cache, 0);
idx += n_put_bulk;
}
}
@@ -180,7 +211,13 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
stats[lcore_id].enq_count += N;
}
- return 0;
+out:
+ if (use_external_cache) {
+ rte_mempool_cache_flush(cache, mp);
+ rte_mempool_cache_free(cache);
+ }
+
+ return ret;
}
/* launch all the per-lcore test, and display the result */
@@ -199,7 +236,9 @@ launch_cores(unsigned cores)
printf("mempool_autotest cache=%u cores=%u n_get_bulk=%u "
"n_put_bulk=%u n_keep=%u ",
- (unsigned) mp->cache_size, cores, n_get_bulk, n_put_bulk, n_keep);
+ use_external_cache ?
+ external_cache_size : (unsigned) mp->cache_size,
+ cores, n_get_bulk, n_put_bulk, n_keep);
if (rte_mempool_count(mp) != MEMPOOL_SIZE) {
printf("mempool is not full\n");
@@ -323,6 +362,20 @@ test_mempool_perf(void)
if (do_one_mempool_test(rte_lcore_count()) < 0)
return -1;
+ /* performance test with 1, 2 and max cores */
+ printf("start performance test (with user-owned cache)\n");
+ mp = mp_nocache;
+ use_external_cache = 1;
+
+ if (do_one_mempool_test(1) < 0)
+ return -1;
+
+ if (do_one_mempool_test(2) < 0)
+ return -1;
+
+ if (do_one_mempool_test(rte_lcore_count()) < 0)
+ return -1;
+
rte_mempool_list_dump(stdout);
return 0;
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 4737dc2..4b9895e 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -322,8 +322,8 @@ Known Issues
The rte_mempool uses a per-lcore cache inside the mempool.
For non-EAL pthreads, ``rte_lcore_id()`` will not return a valid number.
- So for now, when rte_mempool is used with non-EAL pthreads, the put/get operations will bypass the mempool cache and there is a performance penalty because of this bypass.
- Support for non-EAL mempool cache is currently being enabled.
+ So for now, when rte_mempool is used with non-EAL pthreads, the put/get operations will bypass the default mempool cache and there is a performance penalty because of this bypass.
+ Only user-owned external caches can be used in a non-EAL context in conjunction with ``rte_mempool_generic_put()`` and ``rte_mempool_generic_get()`` that accept an explicit cache parameter.
+
rte_ring
diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst
index 1943fc4..5946675 100644
--- a/doc/guides/prog_guide/mempool_lib.rst
+++ b/doc/guides/prog_guide/mempool_lib.rst
@@ -115,7 +115,7 @@ While this may mean a number of buffers may sit idle on some core's cache,
the speed at which a core can access its own cache for a specific memory pool without locks provides performance gains.
The cache is composed of a small, per-core table of pointers and its length (used as a stack).
-This cache can be enabled or disabled at creation of the pool.
+This internal cache can be enabled or disabled at creation of the pool.
The maximum size of the cache is static and is defined at compilation time (CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE).
@@ -127,6 +127,10 @@ The maximum size of the cache is static and is defined at compilation time (CONF
A mempool in Memory with its Associated Ring
+Alternatively to the internal default per-lcore local cache, an application can create and manage external caches through the ``rte_mempool_cache_create()``, ``rte_mempool_cache_free()`` and ``rte_mempool_cache_flush()`` calls.
+These user-owned caches can be explicitly passed to ``rte_mempool_generic_put()`` and ``rte_mempool_generic_get()``.
+The ``rte_mempool_default_cache()`` call returns the default internal cache if any.
+In contrast to the default caches, user-owned caches can be used by non-EAL threads too.
Mempool Handlers
------------------------
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index e6a83d0..4f159fc 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -674,6 +674,53 @@ rte_mempool_free(struct rte_mempool *mp)
rte_memzone_free(mp->mz);
}
+static void
+mempool_cache_init(struct rte_mempool_cache *cache, uint32_t size)
+{
+ cache->size = size;
+ cache->flushthresh = CALC_CACHE_FLUSHTHRESH(size);
+ cache->len = 0;
+}
+
+/*
+ * Create and initialize a cache for objects that are retrieved from and
+ * returned to an underlying mempool. This structure is identical to the
+ * local_cache[lcore_id] pointed to by the mempool structure.
+ */
+struct rte_mempool_cache *
+rte_mempool_cache_create(uint32_t size, int socket_id)
+{
+ struct rte_mempool_cache *cache;
+
+ if (size == 0 || size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ cache = rte_zmalloc_socket("MEMPOOL_CACHE", sizeof(*cache),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (cache == NULL) {
+ RTE_LOG(ERR, MEMPOOL, "Cannot allocate mempool cache.\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ mempool_cache_init(cache, size);
+
+ return cache;
+}
+
+/*
+ * Free a cache. It's the responsibility of the user to make sure that any
+ * remaining objects in the cache are flushed to the corresponding
+ * mempool.
+ */
+void
+rte_mempool_cache_free(struct rte_mempool_cache *cache)
+{
+ rte_free(cache);
+}
+
/* create an empty mempool */
struct rte_mempool *
rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
@@ -688,6 +735,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
size_t mempool_size;
int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
struct rte_mempool_objsz objsz;
+ unsigned lcore_id;
int ret;
/* compilation-time checks */
@@ -768,8 +816,8 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
mp->elt_size = objsz.elt_size;
mp->header_size = objsz.header_size;
mp->trailer_size = objsz.trailer_size;
+ /* Size of default caches, zero means disabled. */
mp->cache_size = cache_size;
- mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
mp->private_data_size = private_data_size;
STAILQ_INIT(&mp->elt_list);
STAILQ_INIT(&mp->mem_list);
@@ -781,6 +829,13 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
mp->local_cache = (struct rte_mempool_cache *)
RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
+ /* Init all default caches. */
+ if (cache_size != 0) {
+ for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
+ mempool_cache_init(&mp->local_cache[lcore_id],
+ cache_size);
+ }
+
te->data = mp;
rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
@@ -936,7 +991,7 @@ rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
unsigned count = 0;
unsigned cache_count;
- fprintf(f, " cache infos:\n");
+ fprintf(f, " internal cache infos:\n");
fprintf(f, " cache_size=%"PRIu32"\n", mp->cache_size);
if (mp->cache_size == 0)
@@ -944,7 +999,8 @@ rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
cache_count = mp->local_cache[lcore_id].len;
- fprintf(f, " cache_count[%u]=%u\n", lcore_id, cache_count);
+ fprintf(f, " cache_count[%u]=%"PRIu32"\n",
+ lcore_id, cache_count);
count += cache_count;
}
fprintf(f, " total_cache_count=%u\n", count);
@@ -1063,7 +1119,9 @@ mempool_audit_cache(const struct rte_mempool *mp)
return;
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
- if (mp->local_cache[lcore_id].len > mp->cache_flushthresh) {
+ const struct rte_mempool_cache *cache;
+ cache = &mp->local_cache[lcore_id];
+ if (cache->len > cache->flushthresh) {
RTE_LOG(CRIT, MEMPOOL, "badness on cache[%u]\n",
lcore_id);
rte_panic("MEMPOOL: invalid cache len\n");
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 971b1ba..1963253 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -101,7 +101,9 @@ struct rte_mempool_debug_stats {
* A structure that stores a per-core object cache.
*/
struct rte_mempool_cache {
- unsigned len; /**< Cache len */
+ uint32_t size; /**< Size of the cache */
+ uint32_t flushthresh; /**< Threshold before we flush excess elements */
+ uint32_t len; /**< Current cache count */
/*
* Cache is allocated to this size to allow it to overflow in certain
* cases to avoid needless emptying of cache.
@@ -213,9 +215,8 @@ struct rte_mempool {
int flags; /**< Flags of the mempool. */
int socket_id; /**< Socket id passed at create. */
uint32_t size; /**< Max size of the mempool. */
- uint32_t cache_size; /**< Size of per-lcore local cache. */
- uint32_t cache_flushthresh;
- /**< Threshold before we flush excess elements. */
+ uint32_t cache_size;
+ /**< Size of per-lcore default local cache. */
uint32_t elt_size; /**< Size of an element. */
uint32_t header_size; /**< Size of header (before elt). */
@@ -945,6 +946,70 @@ uint32_t rte_mempool_mem_iter(struct rte_mempool *mp,
void rte_mempool_dump(FILE *f, struct rte_mempool *mp);
/**
+ * Create a user-owned mempool cache.
+ *
+ * This can be used by non-EAL threads to enable caching when they
+ * interact with a mempool.
+ *
+ * @param size
+ * The size of the mempool cache. See rte_mempool_create()'s cache_size
+ * parameter description for more information. The same limits and
+ * considerations apply here too.
+ * @param socket_id
+ * The socket identifier in the case of NUMA. The value can be
+ * SOCKET_ID_ANY if there is no NUMA constraint for the reserved zone.
+ */
+struct rte_mempool_cache *
+rte_mempool_cache_create(uint32_t size, int socket_id);
+
+/**
+ * Free a user-owned mempool cache.
+ *
+ * @param cache
+ * A pointer to the mempool cache.
+ */
+void
+rte_mempool_cache_free(struct rte_mempool_cache *cache);
+
+/**
+ * Flush a user-owned mempool cache to the specified mempool.
+ *
+ * @param cache
+ * A pointer to the mempool cache.
+ * @param mp
+ * A pointer to the mempool.
+ */
+static inline void __attribute__((always_inline))
+rte_mempool_cache_flush(struct rte_mempool_cache *cache,
+ struct rte_mempool *mp)
+{
+ rte_mempool_ops_enqueue_bulk(mp, cache->objs, cache->len);
+ cache->len = 0;
+}
+
+/**
+ * Get a pointer to the per-lcore default mempool cache.
+ *
+ * @param mp
+ * A pointer to the mempool structure.
+ * @param lcore_id
+ * The logical core id.
+ * @return
+ * A pointer to the mempool cache or NULL if disabled or non-EAL thread.
+ */
+static inline struct rte_mempool_cache *__attribute__((always_inline))
+rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
+{
+ if (mp->cache_size == 0)
+ return NULL;
+
+ if (lcore_id >= RTE_MAX_LCORE)
+ return NULL;
+
+ return &mp->local_cache[lcore_id];
+}
+
+/**
* @internal Put several objects back in the mempool; used internally.
* @param mp
* A pointer to the mempool structure.
@@ -953,34 +1018,30 @@ void rte_mempool_dump(FILE *f, struct rte_mempool *mp);
* @param n
* The number of objects to store back in the mempool, must be strictly
* positive.
+ * @param cache
+ * A pointer to a mempool cache structure. May be NULL if not needed.
* @param flags
* The flags used for the mempool creation.
* Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
*/
static inline void __attribute__((always_inline))
__mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
- unsigned n, int flags)
+ unsigned n, struct rte_mempool_cache *cache, int flags)
{
- struct rte_mempool_cache *cache;
uint32_t index;
void **cache_objs;
- unsigned lcore_id = rte_lcore_id();
- uint32_t cache_size = mp->cache_size;
- uint32_t flushthresh = mp->cache_flushthresh;
/* increment stat now, adding in mempool always success */
__MEMPOOL_STAT_ADD(mp, put, n);
- /* cache is not enabled or single producer or non-EAL thread */
- if (unlikely(cache_size == 0 || flags & MEMPOOL_F_SP_PUT ||
- lcore_id >= RTE_MAX_LCORE))
+ /* No cache provided or single producer */
+ if (unlikely(cache == NULL || flags & MEMPOOL_F_SP_PUT))
goto ring_enqueue;
/* Go straight to ring if put would overflow mem allocated for cache */
if (unlikely(n > RTE_MEMPOOL_CACHE_MAX_SIZE))
goto ring_enqueue;
- cache = &mp->local_cache[lcore_id];
cache_objs = &cache->objs[cache->len];
/*
@@ -996,10 +1057,10 @@ __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
cache->len += n;
- if (cache->len >= flushthresh) {
- rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache_size],
- cache->len - cache_size);
- cache->len = cache_size;
+ if (cache->len >= cache->flushthresh) {
+ rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
+ cache->len - cache->size);
+ cache->len = cache->size;
}
return;
@@ -1025,16 +1086,18 @@ ring_enqueue:
* A pointer to a table of void * pointers (objects).
* @param n
* The number of objects to add in the mempool from the obj_table.
+ * @param cache
+ * A pointer to a mempool cache structure. May be NULL if not needed.
* @param flags
* The flags used for the mempool creation.
* Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
*/
static inline void __attribute__((always_inline))
rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
- unsigned n, int flags)
+ unsigned n, struct rte_mempool_cache *cache, int flags)
{
__mempool_check_cookies(mp, obj_table, n, 0);
- __mempool_generic_put(mp, obj_table, n, flags);
+ __mempool_generic_put(mp, obj_table, n, cache, flags);
}
/**
@@ -1052,7 +1115,9 @@ __rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n, 0);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ rte_mempool_generic_put(mp, obj_table, n, cache, 0);
}
/**
@@ -1070,7 +1135,7 @@ __rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n, MEMPOOL_F_SP_PUT);
+ rte_mempool_generic_put(mp, obj_table, n, NULL, MEMPOOL_F_SP_PUT);
}
/**
@@ -1091,7 +1156,9 @@ static inline void __attribute__((always_inline))
rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
unsigned n)
{
- rte_mempool_generic_put(mp, obj_table, n, mp->flags);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ rte_mempool_generic_put(mp, obj_table, n, cache, mp->flags);
}
/**
@@ -1106,7 +1173,9 @@ rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_generic_put(mp, &obj, 1, 0);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ rte_mempool_generic_put(mp, &obj, 1, cache, 0);
}
/**
@@ -1121,7 +1190,7 @@ rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
__rte_deprecated static inline void __attribute__((always_inline))
rte_mempool_sp_put(struct rte_mempool *mp, void *obj)
{
- rte_mempool_generic_put(mp, &obj, 1, MEMPOOL_F_SP_PUT);
+ rte_mempool_generic_put(mp, &obj, 1, NULL, MEMPOOL_F_SP_PUT);
}
/**
@@ -1150,6 +1219,8 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
* A pointer to a table of void * pointers (objects).
* @param n
* The number of objects to get, must be strictly positive.
+ * @param cache
+ * A pointer to a mempool cache structure. May be NULL if not needed.
* @param flags
* The flags used for the mempool creation.
* Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
@@ -1159,27 +1230,23 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
*/
static inline int __attribute__((always_inline))
__mempool_generic_get(struct rte_mempool *mp, void **obj_table,
- unsigned n, int flags)
+ unsigned n, struct rte_mempool_cache *cache, int flags)
{
int ret;
- struct rte_mempool_cache *cache;
uint32_t index, len;
void **cache_objs;
- unsigned lcore_id = rte_lcore_id();
- uint32_t cache_size = mp->cache_size;
- /* cache is not enabled or single consumer */
- if (unlikely(cache_size == 0 || flags & MEMPOOL_F_SC_GET ||
- n >= cache_size || lcore_id >= RTE_MAX_LCORE))
+ /* No cache provided or single consumer */
+ if (unlikely(cache == NULL || flags & MEMPOOL_F_SC_GET ||
+ n >= cache->size))
goto ring_dequeue;
- cache = &mp->local_cache[lcore_id];
cache_objs = cache->objs;
/* Can this be satisfied from the cache? */
if (cache->len < n) {
/* No. Backfill the cache first, and then fill from it */
- uint32_t req = n + (cache_size - cache->len);
+ uint32_t req = n + (cache->size - cache->len);
/* How many do we require i.e. number to fill the cache + the request */
ret = rte_mempool_ops_dequeue_bulk(mp,
@@ -1234,6 +1301,8 @@ ring_dequeue:
* A pointer to a table of void * pointers (objects) that will be filled.
* @param n
* The number of objects to get from mempool to obj_table.
+ * @param cache
+ * A pointer to a mempool cache structure. May be NULL if not needed.
* @param flags
* The flags used for the mempool creation.
* Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
@@ -1243,10 +1312,10 @@ ring_dequeue:
*/
static inline int __attribute__((always_inline))
rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
- int flags)
+ struct rte_mempool_cache *cache, int flags)
{
int ret;
- ret = __mempool_generic_get(mp, obj_table, n, flags);
+ ret = __mempool_generic_get(mp, obj_table, n, cache, flags);
if (ret == 0)
__mempool_check_cookies(mp, obj_table, n, 1);
return ret;
@@ -1274,7 +1343,9 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n, 0);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ return rte_mempool_generic_get(mp, obj_table, n, cache, 0);
}
/**
@@ -1300,7 +1371,8 @@ rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n, MEMPOOL_F_SC_GET);
+ return rte_mempool_generic_get(mp, obj_table, n, NULL,
+ MEMPOOL_F_SC_GET);
}
/**
@@ -1328,7 +1400,9 @@ rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
static inline int __attribute__((always_inline))
rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
{
- return rte_mempool_generic_get(mp, obj_table, n, mp->flags);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ return rte_mempool_generic_get(mp, obj_table, n, cache, mp->flags);
}
/**
@@ -1351,7 +1425,9 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_generic_get(mp, obj_p, 1, 0);
+ struct rte_mempool_cache *cache;
+ cache = rte_mempool_default_cache(mp, rte_lcore_id());
+ return rte_mempool_generic_get(mp, obj_p, 1, cache, 0);
}
/**
@@ -1374,7 +1450,7 @@ rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
__rte_deprecated static inline int __attribute__((always_inline))
rte_mempool_sc_get(struct rte_mempool *mp, void **obj_p)
{
- return rte_mempool_generic_get(mp, obj_p, 1, MEMPOOL_F_SC_GET);
+ return rte_mempool_generic_get(mp, obj_p, 1, NULL, MEMPOOL_F_SC_GET);
}
/**
@@ -1408,7 +1484,7 @@ rte_mempool_get(struct rte_mempool *mp, void **obj_p)
*
* When cache is enabled, this function has to browse the length of
* all lcores, so it should not be used in a data path, but only for
- * debug purposes.
+ * debug purposes. User-owned mempool caches are not accounted for.
*
* @param mp
* A pointer to the mempool structure.
@@ -1427,7 +1503,7 @@ unsigned rte_mempool_count(const struct rte_mempool *mp);
*
* When cache is enabled, this function has to browse the length of
* all lcores, so it should not be used in a data path, but only for
- * debug purposes.
+ * debug purposes. User-owned mempool caches are not accounted for.
*
* @param mp
* A pointer to the mempool structure.
@@ -1445,7 +1521,7 @@ rte_mempool_free_count(const struct rte_mempool *mp)
*
* When cache is enabled, this function has to browse the length of all
* lcores, so it should not be used in a data path, but only for debug
- * purposes.
+ * purposes. User-owned mempool caches are not accounted for.
*
* @param mp
* A pointer to the mempool structure.
@@ -1464,7 +1540,7 @@ rte_mempool_full(const struct rte_mempool *mp)
*
* When cache is enabled, this function has to browse the length of all
* lcores, so it should not be used in a data path, but only for debug
- * purposes.
+ * purposes. User-owned mempool caches are not accounted for.
*
* @param mp
* A pointer to the mempool structure.
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 6d4fc4a..729ea97 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -19,8 +19,12 @@ DPDK_2.0 {
DPDK_16.07 {
global:
+ rte_mempool_cache_create;
+ rte_mempool_cache_flush;
+ rte_mempool_cache_free;
rte_mempool_check_cookies;
rte_mempool_create_empty;
+ rte_mempool_default_cache;
rte_mempool_free;
rte_mempool_generic_get;
rte_mempool_generic_put;
--
1.9.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [PATCH v5 3/3] mempool: allow for user-owned mempool caches
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 3/3] mempool: allow for user-owned mempool caches Lazaros Koromilas
@ 2016-06-29 12:13 ` Olivier MATZ
0 siblings, 0 replies; 21+ messages in thread
From: Olivier MATZ @ 2016-06-29 12:13 UTC (permalink / raw)
To: Lazaros Koromilas, dev, Thomas Monjalon
Hi Lazaros,
On 06/29/2016 01:47 AM, Lazaros Koromilas wrote:
> The mempool cache is only available to EAL threads as a per-lcore
> resource. Change this so that the user can create and provide their own
> cache on mempool get and put operations. This works with non-EAL threads
> too. This commit introduces the new API calls:
>
> rte_mempool_cache_create(size, socket_id)
> rte_mempool_cache_free(cache)
> rte_mempool_cache_flush(cache, mp)
> rte_mempool_default_cache(mp, lcore_id)
>
> Changes the API calls:
>
> rte_mempool_generic_put(mp, obj_table, n, cache, flags)
> rte_mempool_generic_get(mp, obj_table, n, cache, flags)
>
> The cache-oblivious API calls use the per-lcore default local cache.
>
> Signed-off-by: Lazaros Koromilas <l@nofutznetworks.com>
> Acked-by: Olivier Matz <olivier.matz@6wind.com>
> ---
> app/test/test_mempool.c | 73 ++++++++---
> app/test/test_mempool_perf.c | 73 +++++++++--
> doc/guides/prog_guide/env_abstraction_layer.rst | 4 +-
> doc/guides/prog_guide/mempool_lib.rst | 6 +-
> lib/librte_mempool/rte_mempool.c | 66 +++++++++-
> lib/librte_mempool/rte_mempool.h | 164 +++++++++++++++++-------
> lib/librte_mempool/rte_mempool_version.map | 4 +
> 7 files changed, 308 insertions(+), 82 deletions(-)
>
Thanks Lazaros for the doc update, looks good to me.
Thomas, as discussed IRL, could you please remove the deprecation
notice and add the following note in release_16_07.rst when applying
the patches?
* **Added mempool external cache for non-EAL thread.**
Added new functions to create, free or flush a user-owned mempool
cache for non-EAL threads. Previously, cache was always disabled
on these threads.
Thanks,
Olivier
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [PATCH v5 0/3] mempool: user-owned mempool caches
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 " Lazaros Koromilas
` (2 preceding siblings ...)
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 3/3] mempool: allow for user-owned mempool caches Lazaros Koromilas
@ 2016-06-30 9:29 ` Thomas Monjalon
3 siblings, 0 replies; 21+ messages in thread
From: Thomas Monjalon @ 2016-06-30 9:29 UTC (permalink / raw)
To: Lazaros Koromilas; +Cc: dev, Olivier Matz
> Lazaros Koromilas (3):
> mempool: deprecate specific get and put functions
> mempool: use bit flags to set multi consumers and producers
> mempool: allow for user-owned mempool caches
Applied with release notes additions, thanks
^ permalink raw reply [flat|nested] 21+ messages in thread
end of thread, other threads: ~2016-06-30 9:29 UTC | newest
Thread overview: 21+ messages
2016-06-16 11:02 [dpdk-dev] [PATCH v3 0/3] mempool: user-owned mempool caches Lazaros Koromilas
2016-06-16 11:02 ` [dpdk-dev] [PATCH v3 1/3] mempool: deprecate specific get/put functions Lazaros Koromilas
2016-06-16 11:02 ` [dpdk-dev] [PATCH v3 2/3] mempool: use bit flags instead of is_mp and is_mc Lazaros Koromilas
2016-06-17 10:36 ` Olivier Matz
2016-06-16 11:02 ` [dpdk-dev] [PATCH v3 3/3] mempool: allow for user-owned mempool caches Lazaros Koromilas
2016-06-17 10:37 ` Olivier Matz
2016-06-18 16:15 ` Lazaros Koromilas
2016-06-20 7:36 ` Olivier Matz
2016-06-17 10:36 ` [dpdk-dev] [PATCH v3 0/3] mempool: " Olivier Matz
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 " Olivier Matz
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 1/3] mempool: deprecate specific get/put functions Olivier Matz
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 2/3] mempool: use bit flags to set multi consumers or producers Olivier Matz
2016-06-27 15:50 ` [dpdk-dev] [PATCH v4 3/3] mempool: allow for user-owned mempool caches Olivier Matz
2016-06-28 17:20 ` Lazaros Koromilas
2016-06-27 15:52 ` [dpdk-dev] [PATCH v4 0/3] mempool: " Olivier MATZ
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 " Lazaros Koromilas
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 1/3] mempool: deprecate specific get and put functions Lazaros Koromilas
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 2/3] mempool: use bit flags to set multi consumers and producers Lazaros Koromilas
2016-06-28 23:47 ` [dpdk-dev] [PATCH v5 3/3] mempool: allow for user-owned mempool caches Lazaros Koromilas
2016-06-29 12:13 ` Olivier MATZ
2016-06-30 9:29 ` [dpdk-dev] [PATCH v5 0/3] mempool: " Thomas Monjalon