On 22 Dec 2021, at 7:45 PM, Morten Brørup <mb@smartsharesystems.com> wrote:

From: Zhiheng Chen [mailto:chenzhiheng0227@gmail.com]
Sent: Wednesday, 22 December 2021 09.26
Compared to patch version 1, this version updates the descriptions
of underlying functions.
Some comments inline below, regarding the return value on success.
You should also update the description of the dequeue function prototype on line 451, so that no one implements an alternative dequeue operation that returns anything other than -ENOBUFS as the error value.
@Olivier, @Andrew: Do we want to impose this restriction on the API?
Otherwise, the patch should take the opposite direction and update the descriptions of the high-level functions - e.g. rte_mempool_generic_get(), rte_mempool_get_bulk(), rte_mempool_get(), etc. - to reflect that any error value <0 can be returned, originating from the underlying function.
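For illustration only (not part of the patch; the helper name grab_objects is made up): a caller written against such a relaxed contract simply tests for any negative return value instead of a specific errno, so it keeps working regardless of which error code the underlying dequeue function reports.

#include <rte_mempool.h>

/* Hypothetical helper: succeeds only if all n objects were obtained. */
static int
grab_objects(struct rte_mempool *mp, void **objs, unsigned int n)
{
	int ret = rte_mempool_get_bulk(mp, objs, n);

	if (ret < 0) {
		/* Robust whether the backend reports -ENOBUFS, -ENOENT,
		 * or any other negative error code. */
		return ret;
	}

	/* Success: exactly n objects were placed in objs. */
	return 0;
}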
In rte_mempool_ring.c, the return value of common_ring_sc_dequeue is
described with the symbol ENOBUFS, but in rte_mempool.h the return
value of rte_mempool_get is described with the symbol ENOENT. If a
DPDK user tests the return value against ENOENT, this mismatch can
cause unexpected behavior in their program, for example when the
mempool is exhausted.
Fixes: ea5dd2744b90 ("mempool: cache optimisations")
Signed-off-by: Zhiheng Chen <chenzhiheng0227@gmail.com>
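To make the failure mode above concrete, here is an illustrative sketch (not part of the patch; the helper name get_one_or_wait is invented) of an application that follows the old rte_mempool.h comment and tests for -ENOENT, and therefore mishandles exhaustion when the ring driver actually returns -ENOBUFS:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_mempool.h>

static void *
get_one_or_wait(struct rte_mempool *mp)
{
	void *obj;
	int ret = rte_mempool_get(mp, &obj);

	if (ret == 0)
		return obj;

	if (ret == -ENOENT) {
		/* "Pool temporarily empty" path, written against the old
		 * rte_mempool.h description. With a ring-backed mempool this
		 * branch is never taken, because the driver returns -ENOBUFS
		 * instead. */
		return NULL;	/* caller retries later */
	}

	/* Every other error is treated as fatal, so plain pool exhaustion
	 * (-ENOBUFS) wrongly terminates the application. */
	fprintf(stderr, "rte_mempool_get failed: %d\n", ret);
	exit(EXIT_FAILURE);
}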
---
lib/mempool/rte_mempool.h | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 1e7a3c1527..3b52bd6737 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -737,8 +737,8 @@ rte_mempool_ops_alloc(struct rte_mempool *mp);
* @param n
* Number of objects to get.
* @return
- * - 0: Success; got n objects.
- * - <0: Error; code of dequeue function.
+ * - >=0: Success; number of objects supplied.
+ * - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.

NAK regarding success: Return value 0 is correct, >=0 cannot happen.

 */
static inline int
rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
@@ -1453,7 +1453,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
* A pointer to a mempool cache structure. May be NULL if not needed.
* @return
* - >=0: Success; number of objects supplied.
This is also wrong. It should be:
-* - >=0: Success; number of objects supplied.
+* - 0: Success; got n objects.
- * - <0: Error; code of ring dequeue function.
+ * - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
*/
static __rte_always_inline int
rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
@@ -1521,7 +1521,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
* Get several objects from the mempool.
*
* If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
+ * subsequently from the common pool. Note that it can return -ENOBUFS when
* the local cache and common pool are empty, even if cache from other
* lcores are full.
*
@@ -1534,8 +1534,8 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
* @param cache
* A pointer to a mempool cache structure. May be NULL if not needed.
* @return
- * - 0: Success; objects taken.
- * - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ * - >=0: Success; number of objects supplied.
+ * - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.

NAK regarding success: Return value 0 is correct, >=0 cannot happen.

 */
static __rte_always_inline int
rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
@@ -1557,7 +1557,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
* mempool creation time (see flags).
*
* If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
+ * subsequently from the common pool. Note that it can return -ENOBUFS when
* the local cache and common pool are empty, even if cache from other
* lcores are full.
*
@@ -1568,8 +1568,8 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
* @param n
* The number of objects to get from the mempool to obj_table.
* @return
- * - 0: Success; objects taken
- * - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ * - >=0: Success; number of objects supplied.
+ * - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.

NAK regarding success: Return value 0 is correct, >=0 cannot happen.

 */
static __rte_always_inline int
rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
@@ -1588,7 +1588,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
* mempool creation (see flags).
*
* If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
+ * subsequently from the common pool. Note that it can return -ENOBUFS when
* the local cache and common pool are empty, even if cache from other
* lcores are full.
*
@@ -1598,7 +1598,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
* A pointer to a void * pointer (object) that will be filled.
* @return
* - 0: Success; objects taken.
- * - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ * - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
*/
static __rte_always_inline int
rte_mempool_get(struct rte_mempool *mp, void **obj_p)
--
2.32.0