DPDK patches and discussions
* [PATCH v1] mempool: fix some errors in html api
@ 2023-07-03  6:17 Rma Ma
  2023-08-02  2:20 ` Rma Ma
  2023-08-02  8:58 ` Morten Brørup
  0 siblings, 2 replies; 4+ messages in thread
From: Rma Ma @ 2023-07-03  6:17 UTC (permalink / raw)
  To: dpdk-dev; +Cc: Olivier Matz, Andrew Rybchenko, Rma Ma

This patch fixes some incorrect return value
descriptions in the mempool API, which carry over into the generated HTML API documentation.

Signed-off-by: Rma Ma <rma.ma@jaguarmicro.com>
---
 lib/mempool/rte_mempool.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 160975a7e7..d4d707533a 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -1610,7 +1610,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
  * Get several objects from the mempool.
  *
  * If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
+ * subsequently from the common pool. Note that it can return -ENOBUFS when
  * the local cache and common pool are empty, even if cache from other
  * lcores are full.
  *
@@ -1624,7 +1624,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
  *   A pointer to a mempool cache structure. May be NULL if not needed.
  * @return
  *   - 0: Success; objects taken.
- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
 rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
@@ -1646,7 +1646,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
  * mempool creation time (see flags).
  *
  * If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
+ * subsequently from the common pool. Note that it can return -ENOBUFS when
  * the local cache and common pool are empty, even if cache from other
  * lcores are full.
  *
@@ -1658,7 +1658,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
  *   The number of objects to get from the mempool to obj_table.
  * @return
  *   - 0: Success; objects taken
- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
 rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
@@ -1677,7 +1677,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
  * mempool creation (see flags).
  *
  * If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
+ * subsequently from the common pool. Note that it can return -ENOBUFS when
  * the local cache and common pool are empty, even if cache from other
  * lcores are full.
  *
@@ -1687,7 +1687,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
  *   A pointer to a void * pointer (object) that will be filled.
  * @return
  *   - 0: Success; objects taken.
- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
 rte_mempool_get(struct rte_mempool *mp, void **obj_p)
-- 
2.17.1



* Re: [PATCH v1] mempool: fix some errors in html api
  2023-07-03  6:17 [PATCH v1] mempool: fix some errors in html api Rma Ma
@ 2023-08-02  2:20 ` Rma Ma
  2023-08-02  8:58 ` Morten Brørup
  1 sibling, 0 replies; 4+ messages in thread
From: Rma Ma @ 2023-08-02  2:20 UTC (permalink / raw)
  To: dpdk-dev; +Cc: Olivier Matz, Andrew Rybchenko

Hi,

I know this is a simple patch with no impact on DPDK functionality.
However, the current wording can mislead application programs:
if an application tests the return value against -ENOENT, it will fail to detect the error.
It also affects the generated API documentation:
https://doc.dpdk.org/api/rte__mempool_8h.html#a0d326354d53ef5068d86a8b7d9ec2d61

I'm not sure if this needs to be fixed or not.



> Subject: [PATCH v1] mempool: fix some errors in html api
>
> This patch fix some error descriptions of
> return value in mempool api which affect in html api.
>
> Signed-off-by: Rma Ma <rma.ma@jaguarmicro.com>
> ---
>  lib/mempool/rte_mempool.h | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)

Best wishes,

Rma

________________________________
From: Rma Ma
Sent: Monday, July 3, 2023 14:18
To: dpdk-dev <dev@dpdk.org>
Cc: Olivier Matz <olivier.matz@6wind.com>; Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>; Rma Ma <rma.ma@jaguarmicro.com>
Subject: [PATCH v1] mempool: fix some errors in html api

This patch fix some error descriptions of
return value in mempool api which affect in html api.

Signed-off-by: Rma Ma <rma.ma@jaguarmicro.com>
---
 lib/mempool/rte_mempool.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 160975a7e7..d4d707533a 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -1610,7 +1610,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
  * Get several objects from the mempool.
  *
  * If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
+ * subsequently from the common pool. Note that it can return -ENOBUFS when
  * the local cache and common pool are empty, even if cache from other
  * lcores are full.
  *
@@ -1624,7 +1624,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
  *   A pointer to a mempool cache structure. May be NULL if not needed.
  * @return
  *   - 0: Success; objects taken.
- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
 rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
@@ -1646,7 +1646,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
  * mempool creation time (see flags).
  *
  * If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
+ * subsequently from the common pool. Note that it can return -ENOBUFS when
  * the local cache and common pool are empty, even if cache from other
  * lcores are full.
  *
@@ -1658,7 +1658,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
  *   The number of objects to get from the mempool to obj_table.
  * @return
  *   - 0: Success; objects taken
- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
 rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
@@ -1677,7 +1677,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
  * mempool creation (see flags).
  *
  * If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
+ * subsequently from the common pool. Note that it can return -ENOBUFS when
  * the local cache and common pool are empty, even if cache from other
  * lcores are full.
  *
@@ -1687,7 +1687,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
  *   A pointer to a void * pointer (object) that will be filled.
  * @return
  *   - 0: Success; objects taken.
- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
 rte_mempool_get(struct rte_mempool *mp, void **obj_p)
--
2.17.1



* RE: [PATCH v1] mempool: fix some errors in html api
  2023-07-03  6:17 [PATCH v1] mempool: fix some errors in html api Rma Ma
  2023-08-02  2:20 ` Rma Ma
@ 2023-08-02  8:58 ` Morten Brørup
  2023-08-02  9:46   ` Rma Ma
  1 sibling, 1 reply; 4+ messages in thread
From: Morten Brørup @ 2023-08-02  8:58 UTC (permalink / raw)
  To: Rma Ma, dpdk-dev, Olivier Matz, Andrew Rybchenko
  Cc: Ashwin Sekhar T K, Pavan Nikhilesh, Harman Kalra, Hemant Agrawal,
	Sachin Saxena

+CC various mempool driver maintainers

> From: Rma Ma [mailto:rma.ma@jaguarmicro.com]
> Sent: Monday, 3 July 2023 08.18
> 
> This patch fix some error descriptions of
> return value in mempool api which affect in html api.
> 
> Signed-off-by: Rma Ma <rma.ma@jaguarmicro.com>
> ---
>  lib/mempool/rte_mempool.h | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 160975a7e7..d4d707533a 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -1610,7 +1610,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void
> **obj_table,
>   * Get several objects from the mempool.
>   *
>   * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
>   * the local cache and common pool are empty, even if cache from other
>   * lcores are full.
>   *
> @@ -1624,7 +1624,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void
> **obj_table,
>   *   A pointer to a mempool cache structure. May be NULL if not needed.
>   * @return
>   *   - 0: Success; objects taken.
> - *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
>   */
>  static __rte_always_inline int
>  rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
> @@ -1646,7 +1646,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void
> **obj_table,
>   * mempool creation time (see flags).
>   *
>   * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
>   * the local cache and common pool are empty, even if cache from other
>   * lcores are full.
>   *
> @@ -1658,7 +1658,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void
> **obj_table,
>   *   The number of objects to get from the mempool to obj_table.
>   * @return
>   *   - 0: Success; objects taken
> - *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
>   */
>  static __rte_always_inline int
>  rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int
> n)
> @@ -1677,7 +1677,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void
> **obj_table, unsigned int n)
>   * mempool creation (see flags).
>   *
>   * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
>   * the local cache and common pool are empty, even if cache from other
>   * lcores are full.
>   *
> @@ -1687,7 +1687,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void
> **obj_table, unsigned int n)
>   *   A pointer to a void * pointer (object) that will be filled.
>   * @return
>   *   - 0: Success; objects taken.
> - *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
>   */
>  static __rte_always_inline int
>  rte_mempool_get(struct rte_mempool *mp, void **obj_p)
> --
> 2.17.1

Unfortunately, it is not that simple...

Here is the list of where mempool drivers are registered:
https://elixir.bootlin.com/dpdk/v23.07/C/ident/RTE_MEMPOOL_REGISTER_OPS

Some of the mempool drivers return -ENOBUFS, e.g. the ring driver and the stack driver:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/ring/rte_mempool_ring.c#L145
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/ring/rte_mempool_ring.c#L48
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/stack/rte_mempool_stack.c#L78
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/stack/rte_mempool_stack.c#L59

However, I found one mempool driver (Marvell cnxk) that returns -ENOENT:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/cnxk/cn10k_hwpool_ops.c#L261
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/cnxk/cn10k_hwpool_ops.c#L69

And one mempool driver (Cavium OCTEON TX) returns -ENOMEM:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/octeontx/rte_mempool_octeontx.c#L194
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/octeontx/rte_mempool_octeontx.c#L111

One mempool driver (NXP dpaa) returns -ENOBUFS, or (if requesting too many objects) -1:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa/dpaa_mempool.c#L351
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa/dpaa_mempool.c#L225
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa/dpaa_mempool.c#L257

And one mempool driver (NXP dpaa2) returns -ENOBUFS, or in rare cases -ENOENT or -1:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa2/dpaa2_hw_mempool.c#L471
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa2/dpaa2_hw_mempool.c#L373
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa2/dpaa2_hw_mempool.c#L336
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa2/dpaa2_hw_mempool.c#L342


The root cause for this misery is the general lack of documentation of callbacks in DPDK (not just in the mempool library!), which leaves the callback implementers unaware of what to return. E.g. the mempool driver enqueue and dequeue callback types have no descriptions of their return values:
https://elixir.bootlin.com/dpdk/v23.07/source/lib/mempool/rte_mempool.h#L467
https://elixir.bootlin.com/dpdk/v23.07/source/lib/mempool/rte_mempool.h#L473

So in theory, the mempool drivers are free to return any value... Initially, I didn't think any mempool drivers would return -1 and set rte_errno, but apparently they do (except the driver doesn't set rte_errno when returning -1)!

On a techboard meeting not long ago, Tyler mentioned the lack of consistency in return value conventions as a general annoyance. Now, having looked at these mempool drivers, I realize that the situation is much worse than I could have imagined in my worst nightmare!


So, how do we want to fix this:

1. Specify the allowed return values in the documentation for the mempool library callback types, and require the mempool drivers to follow the updated specification?
2. Update mempool API documentation to list the many currently possible error return values (-ENOBUFS, -ENOENT, -ENOMEM, and -EPERM (=-1))?
3. Update mempool API documentation to say something along the lines of "negative return value indicates error; rte_errno is not set if -1"?
4. Switch to a general convention of returning -1 and setting rte_errno in all DPDK APIs, starting with the mempool library? However, this would still require a documented list of possible rte_errno values set by the functions - essentially the problem would remain the same: "possible return values" vs. "possible rte_errno values".

Personally, I am in strong favor of any option that tightens the API specification and treats non-conformance as bugs.



* Re: [PATCH v1] mempool: fix some errors in html api
  2023-08-02  8:58 ` Morten Brørup
@ 2023-08-02  9:46   ` Rma Ma
  0 siblings, 0 replies; 4+ messages in thread
From: Rma Ma @ 2023-08-02  9:46 UTC (permalink / raw)
  To: Morten Brørup, dpdk-dev, Olivier Matz, Andrew Rybchenko
  Cc: Ashwin Sekhar T K, Pavan Nikhilesh, Harman Kalra, Hemant Agrawal,
	Sachin Saxena


> So, how do we want to fix this:
>
> 1. Specify the allowed return values in the documentation for the mempool
> library callback types, and require the mempool drivers to follow the updated
> specification?
> 2. Update mempool API documentation to list the many currently possible error
> return values (-ENOBUFS, -ENOENT, -ENOMEM, and -EPERM (=-1))?
> 3. Update mempool API documentation to say something along the lines of
> "negative return value indicates error; rte_errno is not set if -1"?
> 4. Switch to a general convention of returning -1 and setting rte_errno in all
> DPDK APIs, starting with the mempool library? However, this would still require a
> documented list of possible rte_errno values set by the functions - essentially
> the problem would remain the same: "possible return values" vs. "possible
> rte_errno values".
>
> Personally, I am in strong favor of any option that tightens the API specification
> and treats non-conformance as bugs.

Thanks for your detailed reply!

Couldn't agree more with your third option, which is the most general approach.
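As an illustration only, option 3 might read something like this in the Doxygen blocks (a hypothetical rewording, not a committed patch):

```c
/**
 * @return
 *   - 0: Success; objects taken.
 *   - <0: Not enough entries in the mempool; no object is retrieved.
 *     The exact error code is driver-dependent (e.g. -ENOBUFS, -ENOENT
 *     or -ENOMEM); rte_errno is not set when -1 is returned.
 */
```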



Best wishes,

Rma

________________________________
From: Morten Brørup <mb@smartsharesystems.com>
Sent: Wednesday, August 2, 2023 16:58
To: Rma Ma <rma.ma@jaguarmicro.com>; dpdk-dev <dev@dpdk.org>; Olivier Matz <olivier.matz@6wind.com>; Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Cc: Ashwin Sekhar T K <asekhar@marvell.com>; Pavan Nikhilesh <pbhagavatula@marvell.com>; Harman Kalra <hkalra@marvell.com>; Hemant Agrawal <hemant.agrawal@nxp.com>; Sachin Saxena <sachin.saxena@oss.nxp.com>
Subject: RE: [PATCH v1] mempool: fix some errors in html api

+CC various mempool driver maintainers

> From: Rma Ma [mailto:rma.ma@jaguarmicro.com]
> Sent: Monday, 3 July 2023 08.18
>
> This patch fix some error descriptions of
> return value in mempool api which affect in html api.
>
> Signed-off-by: Rma Ma <rma.ma@jaguarmicro.com>
> ---
>  lib/mempool/rte_mempool.h | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 160975a7e7..d4d707533a 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -1610,7 +1610,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void
> **obj_table,
>   * Get several objects from the mempool.
>   *
>   * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
>   * the local cache and common pool are empty, even if cache from other
>   * lcores are full.
>   *
> @@ -1624,7 +1624,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void
> **obj_table,
>   *   A pointer to a mempool cache structure. May be NULL if not needed.
>   * @return
>   *   - 0: Success; objects taken.
> - *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
>   */
>  static __rte_always_inline int
>  rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
> @@ -1646,7 +1646,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void
> **obj_table,
>   * mempool creation time (see flags).
>   *
>   * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
>   * the local cache and common pool are empty, even if cache from other
>   * lcores are full.
>   *
> @@ -1658,7 +1658,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void
> **obj_table,
>   *   The number of objects to get from the mempool to obj_table.
>   * @return
>   *   - 0: Success; objects taken
> - *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
>   */
>  static __rte_always_inline int
>  rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int
> n)
> @@ -1677,7 +1677,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void
> **obj_table, unsigned int n)
>   * mempool creation (see flags).
>   *
>   * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
>   * the local cache and common pool are empty, even if cache from other
>   * lcores are full.
>   *
> @@ -1687,7 +1687,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void
> **obj_table, unsigned int n)
>   *   A pointer to a void * pointer (object) that will be filled.
>   * @return
>   *   - 0: Success; objects taken.
> - *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
>   */
>  static __rte_always_inline int
>  rte_mempool_get(struct rte_mempool *mp, void **obj_p)
> --
> 2.17.1

Unfortunately, it is not that simple...

Here is the list of where mempool drivers are registered:
https://elixir.bootlin.com/dpdk/v23.07/C/ident/RTE_MEMPOOL_REGISTER_OPS

Some of the mempool drivers return -ENOBUFS, e.g. the ring driver and the stack driver:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/ring/rte_mempool_ring.c#L145
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/ring/rte_mempool_ring.c#L48
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/stack/rte_mempool_stack.c#L78
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/stack/rte_mempool_stack.c#L59

However, I found one mempool driver (Marvell cnxk) that returns -ENOENT:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/cnxk/cn10k_hwpool_ops.c#L261
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/cnxk/cn10k_hwpool_ops.c#L69

And one mempool driver (Cavium OCTEON TX) returns -ENOMEM:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/octeontx/rte_mempool_octeontx.c#L194
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/octeontx/rte_mempool_octeontx.c#L111

One mempool driver (NXP dpaa) returns -ENOBUFS, or (if requesting too many objects) -1:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa/dpaa_mempool.c#L351
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa/dpaa_mempool.c#L225
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa/dpaa_mempool.c#L257

And one mempool driver (NXP dpaa2) returns -ENOBUFS, or in rare cases -ENOENT or -1:
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa2/dpaa2_hw_mempool.c#L471
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa2/dpaa2_hw_mempool.c#L373
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa2/dpaa2_hw_mempool.c#L336
https://elixir.bootlin.com/dpdk/v23.07/source/drivers/mempool/dpaa2/dpaa2_hw_mempool.c#L342


The root cause for this misery is the general lack of documentation of callbacks in DPDK (not just in the mempool library!), which leaves the callback implementers unaware of what to return. E.g. the mempool driver enqueue and dequeue callback types have no descriptions of their return values:
https://elixir.bootlin.com/dpdk/v23.07/source/lib/mempool/rte_mempool.h#L467
https://elixir.bootlin.com/dpdk/v23.07/source/lib/mempool/rte_mempool.h#L473

So in theory, the mempool drivers are free to return any value... Initially, I didn't think any mempool drivers would return -1 and set rte_errno, but apparently they do (except the driver doesn't set rte_errno when returning -1)!

On a techboard meeting not long ago, Tyler mentioned the lack of consistency in return value conventions as an general annoyance. Now having looked at these mempool drivers, I realize that the situation is much worse than I would imagine in my worst nightmare!


So, how do we want to fix this:

1. Specify the allowed return values in the documentation for the mempool library callback types, and require the mempool drivers to follow the updated specification?
2. Update mempool API documentation to list the many currently possible error return values (-ENOBUFS, -ENOENT, -ENOMEM, and -EPERM (=-1))?
3. Update mempool API documentation to say something along the lines of "negative return value indicates error; rte_errno is not set if -1"?
4. Switch to a general convention of returning -1 and setting rte_errno in all DPDK APIs, starting with the mempool library? However; this would still require a documented list of possible rte_errno values set by the functions - essentially the problem would remain the same: "possible return values" vs. "possible rte_errno values".

Personally, I am in strong favor of any option that tightens the API specification and treats non-conformance as bugs.



end of thread, other threads:[~2023-08-02  9:46 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-07-03  6:17 [PATCH v1] mempool: fix some errors in html api Rma Ma
2023-08-02  2:20 ` Rma Ma
2023-08-02  8:58 ` Morten Brørup
2023-08-02  9:46   ` Rma Ma
