patches for DPDK stable branches
* [dpdk-stable] [PATCH] mempool: fix mempool virt populate with small chunks
@ 2020-01-09 13:27 Olivier Matz
  2020-01-09 13:40 ` [dpdk-stable] [dpdk-dev] " David Marchand
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Olivier Matz @ 2020-01-09 13:27 UTC (permalink / raw)
  To: dev; +Cc: Andrew Rybchenko, Anatoly Burakov, stable

To populate a mempool with a virtual area, the mempool code calls
rte_mempool_populate_iova() for each iova-contiguous area. It happens
(rarely) that this area is too small to store one object. In this case,
rte_mempool_populate_iova() returns an error, which is forwarded by
rte_mempool_populate_virt().

This case should not throw an error in
rte_mempool_populate_virt(). Instead, the area that is too small should
just be ignored.

To fix this issue, change the return value of
rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
so it can be ignored by the caller. As this would be an API change, add
a compat wrapper to keep the current API unchanged. The wrapper will be
removed for 20.11.

Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
Cc: stable@dpdk.org

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---

Is there a simple way to ensure that we won't forget to remove the
wrapper for 20.11? Anatoly suggested I use versioned symbols, but
it's not clear to me how.

 doc/guides/rel_notes/deprecation.rst |  4 ++++
 lib/librte_mempool/rte_mempool.c     | 28 +++++++++++++++++++++++-----
 lib/librte_mempool/rte_mempool.h     |  5 ++++-
 3 files changed, 31 insertions(+), 6 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index afa94b43e..b6e89d9a2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -86,3 +86,7 @@ Deprecation Notices
   to set new power environment if power environment was already initialized.
   In this case the function will return -1 unless the environment is unset first
   (using ``rte_power_unset_env``). Other function usage scenarios will not change.
+
+* mempool: starting from v20.11, rte_mempool_populate_iova() will
+  return -ENOBUFS instead of -EINVAL when there is not enough room to
+  store one object.
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 78d8eb941..bda361ce6 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -297,8 +297,8 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
  * zone. Return the number of objects added, or a negative value
  * on error.
  */
-int
-rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+static int
+__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
 	void *opaque)
 {
@@ -332,7 +332,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_MEMPOOL_ALIGN) - vaddr;
 
 	if (off > len) {
-		ret = -EINVAL;
+		ret = -ENOBUFS;
 		goto fail;
 	}
 
@@ -343,7 +343,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 
 	/* not enough room to store one object */
 	if (i == 0) {
-		ret = -EINVAL;
+		ret = -ENOBUFS;
 		goto fail;
 	}
 
@@ -356,6 +356,22 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	return ret;
 }
 
+/* Compat wrapper, to be removed when changing the API is allowed (v20.11). */
+int
+rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
+	void *opaque)
+{
+	int ret;
+
+	ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
+					opaque);
+	if (ret == -ENOBUFS)
+		ret = -EINVAL;
+
+	return ret;
+}
+
 static rte_iova_t
 get_iova(void *addr)
 {
@@ -406,8 +422,10 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 				break;
 		}
 
-		ret = rte_mempool_populate_iova(mp, addr + off, iova,
+		ret = __rte_mempool_populate_iova(mp, addr + off, iova,
 			phys_len, free_cb, opaque);
+		if (ret == -ENOBUFS)
+			continue;
 		if (ret < 0)
 			goto fail;
 		/* no need to call the free callback for next chunks */
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index f81152af9..c08bb444f 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1108,7 +1108,10 @@ rte_mempool_free(struct rte_mempool *mp);
  * @return
  *   The number of objects added on success.
  *   On error, the chunk is not added in the memory list of the
- *   mempool and a negative errno is returned.
+ *   mempool and a negative errno is returned:
+ *   (-ENOBUFS): not enough room in chunk for one object.
+ *   (-ENOSPC): mempool is already populated.
+ *   (-ENOMEM): allocation failure.
  */
 int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
-- 
2.20.1



* Re: [dpdk-stable] [dpdk-dev] [PATCH] mempool: fix mempool virt populate with small chunks
  2020-01-09 13:27 [dpdk-stable] [PATCH] mempool: fix mempool virt populate with small chunks Olivier Matz
@ 2020-01-09 13:40 ` David Marchand
  2020-01-09 13:46   ` Olivier Matz
  2020-01-09 13:52 ` [dpdk-stable] " Burakov, Anatoly
  2020-01-17 14:57 ` [dpdk-stable] [PATCH v2 0/3] " Olivier Matz
  2 siblings, 1 reply; 14+ messages in thread
From: David Marchand @ 2020-01-09 13:40 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, Andrew Rybchenko, Anatoly Burakov, dpdk stable

On Thu, Jan 9, 2020 at 2:27 PM Olivier Matz <olivier.matz@6wind.com> wrote:
>
> To populate a mempool with a virtual area, the mempool code calls
> rte_mempool_populate_iova() for each iova-contiguous area. It happens
> (rarely) that this area is too small to store one object. In this case,
> rte_mempool_populate_iova() returns an error, which is forwarded by
> rte_mempool_populate_virt().
>
> This case should not throw an error in
> rte_mempool_populate_virt(). Instead, the area that is too small should
> just be ignored.
>
> To fix this issue, change the return value of
> rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
> so it can be ignored by the caller. As this would be an API change, add
> a compat wrapper to keep the current API unchanged. The wrapper will be
> removed for 20.11.
>
> Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
> Cc: stable@dpdk.org
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
>
> Is there a simple way to ensure that we won't forget to remove the
> wrapper for 20.11? Anatoly suggested I use versioned symbols, but
> it's not clear to me how.
>
>  doc/guides/rel_notes/deprecation.rst |  4 ++++
>  lib/librte_mempool/rte_mempool.c     | 28 +++++++++++++++++++++++-----
>  lib/librte_mempool/rte_mempool.h     |  5 ++++-
>  3 files changed, 31 insertions(+), 6 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index afa94b43e..b6e89d9a2 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -86,3 +86,7 @@ Deprecation Notices
>    to set new power environment if power environment was already initialized.
>    In this case the function will return -1 unless the environment is unset first
>    (using ``rte_power_unset_env``). Other function usage scenarios will not change.
> +
> +* mempool: starting from v20.11, rte_mempool_populate_iova() will
> +  return -ENOBUFS instead of -EINVAL when there is not enough room to
> +  store one object.
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 78d8eb941..bda361ce6 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -297,8 +297,8 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
>   * zone. Return the number of objects added, or a negative value
>   * on error.
>   */
> -int
> -rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> +static int
> +__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>         rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
>         void *opaque)
>  {
> @@ -332,7 +332,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>                 off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_MEMPOOL_ALIGN) - vaddr;
>
>         if (off > len) {
> -               ret = -EINVAL;
> +               ret = -ENOBUFS;
>                 goto fail;
>         }
>
> @@ -343,7 +343,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>
>         /* not enough room to store one object */
>         if (i == 0) {
> -               ret = -EINVAL;
> +               ret = -ENOBUFS;
>                 goto fail;
>         }
>
> @@ -356,6 +356,22 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>         return ret;
>  }
>
> +/* Compat wrapper, to be removed when changing the API is allowed (v20.11). */
> +int
> +rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> +       rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> +       void *opaque)
> +{
> +       int ret;
> +
> +       ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
> +                                       opaque);
> +       if (ret == -ENOBUFS)
> +               ret = -EINVAL;
> +
> +       return ret;
> +}
> +
>  static rte_iova_t
>  get_iova(void *addr)
>  {
> @@ -406,8 +422,10 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>                                 break;
>                 }
>
> -               ret = rte_mempool_populate_iova(mp, addr + off, iova,
> +               ret = __rte_mempool_populate_iova(mp, addr + off, iova,
>                         phys_len, free_cb, opaque);
> +               if (ret == -ENOBUFS)
> +                       continue;
>                 if (ret < 0)
>                         goto fail;
>                 /* no need to call the free callback for next chunks */
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index f81152af9..c08bb444f 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -1108,7 +1108,10 @@ rte_mempool_free(struct rte_mempool *mp);
>   * @return
>   *   The number of objects added on success.
>   *   On error, the chunk is not added in the memory list of the
> - *   mempool and a negative errno is returned.
> + *   mempool and a negative errno is returned:
> + *   (-ENOBUFS): not enough room in chunk for one object.
> + *   (-ENOSPC): mempool is already populated.
> + *   (-ENOMEM): allocation failure.

You can't update the doc before this function does return -ENOBUFS.


-- 
David Marchand



* Re: [dpdk-stable] [dpdk-dev] [PATCH] mempool: fix mempool virt populate with small chunks
  2020-01-09 13:40 ` [dpdk-stable] [dpdk-dev] " David Marchand
@ 2020-01-09 13:46   ` Olivier Matz
  0 siblings, 0 replies; 14+ messages in thread
From: Olivier Matz @ 2020-01-09 13:46 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Andrew Rybchenko, Anatoly Burakov, dpdk stable

On Thu, Jan 09, 2020 at 02:40:06PM +0100, David Marchand wrote:
> On Thu, Jan 9, 2020 at 2:27 PM Olivier Matz <olivier.matz@6wind.com> wrote:
> >
> > To populate a mempool with a virtual area, the mempool code calls
> > rte_mempool_populate_iova() for each iova-contiguous area. It happens
> > (rarely) that this area is too small to store one object. In this case,
> > rte_mempool_populate_iova() returns an error, which is forwarded by
> > rte_mempool_populate_virt().
> >
> > This case should not throw an error in
> > rte_mempool_populate_virt(). Instead, the area that is too small should
> > just be ignored.
> >
> > To fix this issue, change the return value of
> > rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
> > so it can be ignored by the caller. As this would be an API change, add
> > a compat wrapper to keep the current API unchanged. The wrapper will be
> > removed for 20.11.
> >
> > Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > ---
> >
> > Is there a simple way to ensure that we won't forget to remove the
> > wrapper for 20.11? Anatoly suggested I use versioned symbols, but
> > it's not clear to me how.
> >
> >  doc/guides/rel_notes/deprecation.rst |  4 ++++
> >  lib/librte_mempool/rte_mempool.c     | 28 +++++++++++++++++++++++-----
> >  lib/librte_mempool/rte_mempool.h     |  5 ++++-
> >  3 files changed, 31 insertions(+), 6 deletions(-)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index afa94b43e..b6e89d9a2 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -86,3 +86,7 @@ Deprecation Notices
> >    to set new power environment if power environment was already initialized.
> >    In this case the function will return -1 unless the environment is unset first
> >    (using ``rte_power_unset_env``). Other function usage scenarios will not change.
> > +
> > +* mempool: starting from v20.11, rte_mempool_populate_iova() will
> > +  return -ENOBUFS instead of -EINVAL when there is not enough room to
> > +  store one object.
> > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> > index 78d8eb941..bda361ce6 100644
> > --- a/lib/librte_mempool/rte_mempool.c
> > +++ b/lib/librte_mempool/rte_mempool.c
> > @@ -297,8 +297,8 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
> >   * zone. Return the number of objects added, or a negative value
> >   * on error.
> >   */
> > -int
> > -rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> > +static int
> > +__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> >         rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> >         void *opaque)
> >  {
> > @@ -332,7 +332,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> >                 off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_MEMPOOL_ALIGN) - vaddr;
> >
> >         if (off > len) {
> > -               ret = -EINVAL;
> > +               ret = -ENOBUFS;
> >                 goto fail;
> >         }
> >
> > @@ -343,7 +343,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> >
> >         /* not enough room to store one object */
> >         if (i == 0) {
> > -               ret = -EINVAL;
> > +               ret = -ENOBUFS;
> >                 goto fail;
> >         }
> >
> > @@ -356,6 +356,22 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> >         return ret;
> >  }
> >
> > +/* Compat wrapper, to be removed when changing the API is allowed (v20.11). */
> > +int
> > +rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> > +       rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> > +       void *opaque)
> > +{
> > +       int ret;
> > +
> > +       ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
> > +                                       opaque);
> > +       if (ret == -ENOBUFS)
> > +               ret = -EINVAL;
> > +
> > +       return ret;
> > +}
> > +
> >  static rte_iova_t
> >  get_iova(void *addr)
> >  {
> > @@ -406,8 +422,10 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> >                                 break;
> >                 }
> >
> > -               ret = rte_mempool_populate_iova(mp, addr + off, iova,
> > +               ret = __rte_mempool_populate_iova(mp, addr + off, iova,
> >                         phys_len, free_cb, opaque);
> > +               if (ret == -ENOBUFS)
> > +                       continue;
> >                 if (ret < 0)
> >                         goto fail;
> >                 /* no need to call the free callback for next chunks */
> > diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> > index f81152af9..c08bb444f 100644
> > --- a/lib/librte_mempool/rte_mempool.h
> > +++ b/lib/librte_mempool/rte_mempool.h
> > @@ -1108,7 +1108,10 @@ rte_mempool_free(struct rte_mempool *mp);
> >   * @return
> >   *   The number of objects added on success.
> >   *   On error, the chunk is not added in the memory list of the
> > - *   mempool and a negative errno is returned.
> > + *   mempool and a negative errno is returned:
> > + *   (-ENOBUFS): not enough room in chunk for one object.
> > + *   (-ENOSPC): mempool is already populated.
> > + *   (-ENOMEM): allocation failure.
> 
> You can't update the doc before this function does return -ENOBUFS.

Indeed :)

It should be EINVAL instead of ENOBUFS above.
...and I should add a reminder to update the doc for 20.11


* Re: [dpdk-stable] [PATCH] mempool: fix mempool virt populate with small chunks
  2020-01-09 13:27 [dpdk-stable] [PATCH] mempool: fix mempool virt populate with small chunks Olivier Matz
  2020-01-09 13:40 ` [dpdk-stable] [dpdk-dev] " David Marchand
@ 2020-01-09 13:52 ` Burakov, Anatoly
  2020-01-09 14:23   ` Olivier Matz
  2020-01-17 14:57 ` [dpdk-stable] [PATCH v2 0/3] " Olivier Matz
  2 siblings, 1 reply; 14+ messages in thread
From: Burakov, Anatoly @ 2020-01-09 13:52 UTC (permalink / raw)
  To: Olivier Matz, dev; +Cc: Andrew Rybchenko, stable

On 09-Jan-20 1:27 PM, Olivier Matz wrote:
> To populate a mempool with a virtual area, the mempool code calls
> rte_mempool_populate_iova() for each iova-contiguous area. It happens
> (rarely) that this area is too small to store one object. In this case,
> rte_mempool_populate_iova() returns an error, which is forwarded by
> rte_mempool_populate_virt().
> 
> This case should not throw an error in
> rte_mempool_populate_virt(). Instead, the area that is too small should
> just be ignored.
> 
> To fix this issue, change the return value of
> rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
> so it can be ignored by the caller. As this would be an API change, add
> a compat wrapper to keep the current API unchanged. The wrapper will be
> removed for 20.11.
> 
> Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
> 

The approach fixes the issue on my end, so

Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>

> Is there a simple way to ensure that we won't forget to remove the
> wrapper for 20.11? Anatoly suggested I use versioned symbols, but
> it's not clear to me how.
> 

Yes, i'd like to do better than "ah shur we won't forget pinky swear".

Can't we do this with ABI versioning? E.g.

rte_populate_iova_v20() ... returns EINVAL

rte_populate_iova_v21() ... returns ENOBUFS

I'm pretty sure, even if it doesn't break, it will still be more likely 
to not be forgotten because there's almost a guarantee that someone will 
grep for symbol versioning macros across the codebase around 20.11 
timeframe.
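
Roughly, the plumbing could look like the sketch below, using the
helpers from rte_function_versioning.h. This is only a sketch, not a
tested patch: the _v20/_v21 suffixes and the "21" version node are made
up for illustration, and the new node would also need an entry in
rte_mempool_version.map.

  #include <errno.h>
  #include <rte_mempool.h>
  #include <rte_function_versioning.h>

  /* New default implementation; its definition (elided here) is the
   * current populate code, changed to return -ENOBUFS when the chunk
   * cannot hold a single object. */
  __vsym int
  rte_mempool_populate_iova_v21(struct rte_mempool *mp, char *vaddr,
          rte_iova_t iova, size_t len,
          rte_mempool_memchunk_free_cb_t *free_cb, void *opaque);
  BIND_DEFAULT_SYMBOL(rte_mempool_populate_iova, _v21, 21);

  /* Binaries linked against the old ABI keep the -EINVAL behaviour. */
  __vsym int
  rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
          rte_iova_t iova, size_t len,
          rte_mempool_memchunk_free_cb_t *free_cb, void *opaque)
  {
          int ret = rte_mempool_populate_iova_v21(mp, vaddr, iova, len,
                                                  free_cb, opaque);
          return ret == -ENOBUFS ? -EINVAL : ret;
  }
  VERSION_SYMBOL(rte_mempool_populate_iova, _v20, 20.0);

With that in place, old binaries resolve rte_mempool_populate_iova to
the _v20 wrapper while newly built applications get the _v21 behaviour,
so removing the wrapper in 20.11 becomes a visible, grep-able step.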

-- 
Thanks,
Anatoly


* Re: [dpdk-stable] [PATCH] mempool: fix mempool virt populate with small chunks
  2020-01-09 13:52 ` [dpdk-stable] " Burakov, Anatoly
@ 2020-01-09 14:23   ` Olivier Matz
  2020-01-09 14:29     ` Burakov, Anatoly
  0 siblings, 1 reply; 14+ messages in thread
From: Olivier Matz @ 2020-01-09 14:23 UTC (permalink / raw)
  To: Burakov, Anatoly; +Cc: dev, Andrew Rybchenko, stable

On Thu, Jan 09, 2020 at 01:52:41PM +0000, Burakov, Anatoly wrote:
> On 09-Jan-20 1:27 PM, Olivier Matz wrote:
> > To populate a mempool with a virtual area, the mempool code calls
> > rte_mempool_populate_iova() for each iova-contiguous area. It happens
> > (rarely) that this area is too small to store one object. In this case,
> > rte_mempool_populate_iova() returns an error, which is forwarded by
> > rte_mempool_populate_virt().
> > 
> > This case should not throw an error in
> > rte_mempool_populate_virt(). Instead, the area that is too small should
> > just be ignored.
> > 
> > To fix this issue, change the return value of
> > rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
> > so it can be ignored by the caller. As this would be an API change, add
> > a compat wrapper to keep the current API unchanged. The wrapper will be
> > removed for 20.11.
> > 
> > Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
> > Cc: stable@dpdk.org
> > 
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > ---
> > 
> 
> The approach fixes the issue on my end, so
> 
> Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>
> 
> > Is there a simple way to ensure that we won't forget to remove the
> > wrapper for 20.11? Anatoly suggested I use versioned symbols, but
> > it's not clear to me how.
> > 
> 
> Yes, i'd like to do better than "ah shur we won't forget pinky swear".
> 
> Can't we do this with ABI versioning? E.g.
> 
> rte_populate_iova_v20() ... returns EINVAL
> 
> rte_populate_iova_v21() ... returns ENOBUFS
> 
> I'm pretty sure, even if it doesn't break, it will still be more likely to
> not be forgotten because there's almost a guarantee that someone will grep
> for symbol versioning macros across the codebase around 20.11 timeframe.

Without using symbol versioning, would this be ok too?

  int
  rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
         rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
         void *opaque)
  {
         int ret;

         ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb, opaque);

  #if RTE_VERSION < RTE_VERSION_NUM(20, 11, 0, 0)
         if (ret == -ENOBUFS)
                 ret = -EINVAL;
  #endif

         return ret;
  }



> -- 
> Thanks,
> Anatoly


* Re: [dpdk-stable] [PATCH] mempool: fix mempool virt populate with small chunks
  2020-01-09 14:23   ` Olivier Matz
@ 2020-01-09 14:29     ` Burakov, Anatoly
  2020-01-09 14:58       ` [dpdk-stable] [dpdk-dev] " Bruce Richardson
  0 siblings, 1 reply; 14+ messages in thread
From: Burakov, Anatoly @ 2020-01-09 14:29 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, Andrew Rybchenko, stable

On 09-Jan-20 2:23 PM, Olivier Matz wrote:
> On Thu, Jan 09, 2020 at 01:52:41PM +0000, Burakov, Anatoly wrote:
>> On 09-Jan-20 1:27 PM, Olivier Matz wrote:
>>> To populate a mempool with a virtual area, the mempool code calls
>>> rte_mempool_populate_iova() for each iova-contiguous area. It happens
>>> (rarely) that this area is too small to store one object. In this case,
>>> rte_mempool_populate_iova() returns an error, which is forwarded by
>>> rte_mempool_populate_virt().
>>>
>>> This case should not throw an error in
>>> rte_mempool_populate_virt(). Instead, the area that is too small should
>>> just be ignored.
>>>
>>> To fix this issue, change the return value of
>>> rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
>>> so it can be ignored by the caller. As this would be an API change, add
>>> a compat wrapper to keep the current API unchanged. The wrapper will be
>>> removed for 20.11.
>>>
>>> Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
>>> ---
>>>
>>
>> The approach fixes the issue on my end, so
>>
>> Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>
>>
>>> Is there a simple way to ensure that we won't forget to remove the
>>> wrapper for 20.11? Anatoly suggested I use versioned symbols, but
>>> it's not clear to me how.
>>>
>>
>> Yes, i'd like to do better than "ah shur we won't forget pinky swear".
>>
>> Can't we do this with ABI versioning? E.g.
>>
>> rte_populate_iova_v20() ... returns EINVAL
>>
>> rte_populate_iova_v21() ... returns ENOBUFS
>>
>> I'm pretty sure, even if it doesn't break, it will still be more likely to
>> not be forgotten because there's almost a guarantee that someone will grep
>> for symbol versioning macros across the codebase around 20.11 timeframe.
> 
> Without using symbol versioning, would this be ok too?
> 
>    int
>    rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>           rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
>           void *opaque)
>    {
>           int ret;
> 
>           ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb, opaque);
> 
>    #if RTE_VERSION < RTE_VERSION_NUM(20, 11, 0, 0)
>           if (ret == -ENOBUFS)
>                   ret = -EINVAL;
>    #endif
> 
>           return ret;
>    }
> 
> 

Well it would certainly work :) it just makes it more likely that this 
will be missed.

How about we leave your patch as is, and then you submit another patch 
marked for [20.11] and mark it as Deferred straight away? This is 
probably the best method I can think of to make sure we don't forget, 
if you're so averse to symbol versioning :D

> 
>> -- 
>> Thanks,
>> Anatoly


-- 
Thanks,
Anatoly


* Re: [dpdk-stable] [dpdk-dev] [PATCH] mempool: fix mempool virt populate with small chunks
  2020-01-09 14:29     ` Burakov, Anatoly
@ 2020-01-09 14:58       ` Bruce Richardson
  0 siblings, 0 replies; 14+ messages in thread
From: Bruce Richardson @ 2020-01-09 14:58 UTC (permalink / raw)
  To: Burakov, Anatoly; +Cc: Olivier Matz, dev, Andrew Rybchenko, stable

On Thu, Jan 09, 2020 at 02:29:08PM +0000, Burakov, Anatoly wrote:
> On 09-Jan-20 2:23 PM, Olivier Matz wrote:
> > On Thu, Jan 09, 2020 at 01:52:41PM +0000, Burakov, Anatoly wrote:
> > > On 09-Jan-20 1:27 PM, Olivier Matz wrote:
> > > > To populate a mempool with a virtual area, the mempool code calls
> > > > rte_mempool_populate_iova() for each iova-contiguous area. It happens
> > > > (rarely) that this area is too small to store one object. In this case,
> > > > rte_mempool_populate_iova() returns an error, which is forwarded by
> > > > rte_mempool_populate_virt().
> > > > 
> > > > This case should not throw an error in
> > > > rte_mempool_populate_virt(). Instead, the area that is too small should
> > > > just be ignored.
> > > > 
> > > > To fix this issue, change the return value of
> > > > rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
> > > > so it can be ignored by the caller. As this would be an API change, add
> > > > a compat wrapper to keep the current API unchanged. The wrapper will be
> > > > removed for 20.11.
> > > > 
> > > > Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
> > > > Cc: stable@dpdk.org
> > > > 
> > > > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > > > ---
> > > > 
> > > 
> > > The approach fixes the issue on my end, so
> > > 
> > > Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > > 
> > > > Is there a simple way to ensure that we won't forget to remove the
> > > wrapper for 20.11? Anatoly suggested I use versioned symbols, but
> > > > it's not clear to me how.
> > > > 
> > > 
> > > Yes, i'd like to do better than "ah shur we won't forget pinky swear".
> > > 
> > > Can't we do this with ABI versioning? E.g.
> > > 
> > > rte_populate_iova_v20() ... returns EINVAL
> > > 
> > > rte_populate_iova_v21() ... returns ENOBUFS
> > > 
> > > I'm pretty sure, even if it doesn't break, it will still be more likely to
> > > not be forgotten because there's almost a guarantee that someone will grep
> > > for symbol versioning macros across the codebase around 20.11 timeframe.
> > 
> > Without using symbol versioning, would this be ok too?
> > 
> >    int
> >    rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> >           rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> >           void *opaque)
> >    {
> >           int ret;
> > 
> >           ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb, opaque);
> > 
> >    #if RTE_VERSION < RTE_VERSION_NUM(20, 11, 0, 0)
> >           if (ret == -ENOBUFS)
> >                   ret = -EINVAL;
> >    #endif
> > 
> >           return ret;
> >    }
> > 
> > 
> 
> Well it would certainly work :) it just makes it more likely that this will
> be missed.
> 
Depends on your definition of "work". The difference vs having the ABI
versioned code is that any application linked against 20.02...20.08 will
have the old behaviour and have to change to use 20.11. Using proper
versioning, any newly linked apps immediately get the new behaviour and the
compatibility is provided only for those apps linked against 19.11.

Therefore, I think the ABI compatibility approach is still better, and
should not be that bad, given that I think all that needs to be done for
v20 is to call v21 and then convert an -ENOBUFS to -EINVAL as you do here.

__mempool_populate_iova_v20(....)
{
   ret = __mempool_populate_iova_v21(...);
   return ret == -ENOBUFS ? -EINVAL : ret;
}

Regards,
/Bruce


* [dpdk-stable] [PATCH v2 0/3] mempool: fix mempool virt populate with small chunks
  2020-01-09 13:27 [dpdk-stable] [PATCH] mempool: fix mempool virt populate with small chunks Olivier Matz
  2020-01-09 13:40 ` [dpdk-stable] [dpdk-dev] " David Marchand
  2020-01-09 13:52 ` [dpdk-stable] " Burakov, Anatoly
@ 2020-01-17 14:57 ` Olivier Matz
  2020-01-17 14:57   ` [dpdk-stable] [PATCH v2 1/3] " Olivier Matz
                     ` (3 more replies)
  2 siblings, 4 replies; 14+ messages in thread
From: Olivier Matz @ 2020-01-17 14:57 UTC (permalink / raw)
  To: dev
  Cc: Zhang, AlvinX, Burakov, Anatoly, Andrew Rybchenko,
	Bruce Richardson, David Marchand, dpdk stable

rte_mempool_populate_virt() sometimes fails when it calls
rte_mempool_populate_iova() with an area which is too small to store one
object. This should not be an error.

I prepared a v2 which implements ABI compatibility through symbol
versioning, as suggested [1]. It looks a bit overkill to me, but it
was an interesting exercise.

v2 changes:

- The initial requirement is to fix an issue at mempool creation. As
  the fix will probably be backported to 19.11.x, the first patch
  in the patchset does not break API/ABI.

- Using symbol versioning helps to preserve ABI, but for the API
  the breakage has to be announced 1 release in advance, so there
  is a separate patch for this.

- There is a 3rd patch for 20.05 that makes the new API public and
  implements ABI versioning (if ok, I'll remove it from the patchset in
  the next version and send it separately)

- It appears that returning -ENOBUFS instead of -EINVAL is not ideal
  because, in theory, mempool_ops_alloc_once() could also return
  -ENOBUFS, and it would be forwarded to the caller by
  rte_mempool_populate_iova() too, and misinterpreted as "there is not
  enough room".

  Returning 0 instead of -ENOBUFS was initially suggested by Anatoly,
  and it does not suffer from this problem. It is doable if we properly
  document that the memory chunk is not added to the mempool when
  returning 0. It has an impact on populate_virt(), which has to be
  versioned too.
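
  As an illustration of what callers would rely on, a populate loop
  under this convention could look like the sketch below (a hypothetical
  helper, not part of the patchset; the chunk arrays are invented for
  the example):

    #include <errno.h>
    #include <rte_mempool.h>

    /* Populate from a list of chunks, silently skipping the chunks
     * that are too small to hold a single object. */
    static int
    populate_chunks(struct rte_mempool *mp, char **addr,
            rte_iova_t *iova, size_t *len, unsigned int n_chunks)
    {
            unsigned int i;
            int total = 0, ret;

            for (i = 0; i < n_chunks; i++) {
                    ret = rte_mempool_populate_iova(mp, addr[i], iova[i],
                            len[i], NULL, NULL);
                    if (ret == 0)
                            continue; /* too small: nothing added, no error */
                    if (ret < 0)
                            return ret; /* real error: -ENOSPC, -ENOMEM, ... */
                    total += ret; /* objects added from this chunk */
            }

            /* as in populate_virt(): fail only if nothing at all fit */
            return total == 0 ? -EINVAL : total;
    }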

There are some checkpatch warnings, but I'm not sure whether I
should solve them:
- it complains about forward declaration in .c, but without it, it does
  not compile due to additional warnings in cflags
- it complains that modified symbols should be marked as experimental

Thanks to David for helping me to test and fix the ABI part of the
patchset.

[1] http://patchwork.dpdk.org/patch/64369/

Olivier Matz (3):
  mempool: fix mempool virt populate with small chunks
  doc: announce API change for mempool IOVA populate
  mempool: return 0 if area is too small on populate

 doc/guides/rel_notes/deprecation.rst       |  5 ++
 examples/ntb/ntb_fwd.c                     |  2 +-
 lib/librte_mempool/meson.build             |  1 +
 lib/librte_mempool/rte_mempool.c           | 88 ++++++++++++++++++++--
 lib/librte_mempool/rte_mempool.h           | 14 +++-
 lib/librte_mempool/rte_mempool_version.map |  7 ++
 6 files changed, 105 insertions(+), 12 deletions(-)

-- 
2.20.1



* [dpdk-stable] [PATCH v2 1/3] mempool: fix mempool virt populate with small chunks
  2020-01-17 14:57 ` [dpdk-stable] [PATCH v2 0/3] " Olivier Matz
@ 2020-01-17 14:57   ` Olivier Matz
  2020-01-17 14:57   ` [dpdk-stable] [PATCH v2 2/3] doc: announce API change for mempool IOVA populate Olivier Matz
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 14+ messages in thread
From: Olivier Matz @ 2020-01-17 14:57 UTC (permalink / raw)
  To: dev
  Cc: Zhang, AlvinX, Burakov, Anatoly, Andrew Rybchenko,
	Bruce Richardson, David Marchand, dpdk stable

To populate a mempool with a virtual area, the mempool code calls
rte_mempool_populate_iova() for each iova-contiguous area. It happens
(rarely) that this area is too small to store one object. In this case,
rte_mempool_populate_iova() returns an error, which is forwarded by
rte_mempool_populate_virt().

This case should not throw an error in
rte_mempool_populate_virt(). Instead, the area that is too small should
just be ignored.

To fix this issue, change the return value of
rte_mempool_populate_iova() to 0 when no object can be populated,
so it can be ignored by the caller. As this would be an API/ABI change,
only do this modification internally for now.

Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
Cc: stable@dpdk.org

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Zhang Alvin <alvinx.zhang@intel.com>
---
 lib/librte_mempool/rte_mempool.c | 30 +++++++++++++++++++++++++-----
 1 file changed, 25 insertions(+), 5 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index f8d453d21..b434d9f17 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -297,8 +297,8 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
  * zone. Return the number of objects added, or a negative value
  * on error.
  */
-int
-rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+static int
+__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
 	void *opaque)
 {
@@ -332,7 +332,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_MEMPOOL_ALIGN) - vaddr;
 
 	if (off > len) {
-		ret = -EINVAL;
+		ret = 0;
 		goto fail;
 	}
 
@@ -343,7 +343,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 
 	/* not enough room to store one object */
 	if (i == 0) {
-		ret = -EINVAL;
+		ret = 0;
 		goto fail;
 	}
 
@@ -356,6 +356,21 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	return ret;
 }
 
+int
+rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
+	void *opaque)
+{
+	int ret;
+
+	ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
+					opaque);
+	if (ret == 0)
+		ret = -EINVAL;
+
+	return ret;
+}
+
 static rte_iova_t
 get_iova(void *addr)
 {
@@ -406,8 +421,10 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 				break;
 		}
 
-		ret = rte_mempool_populate_iova(mp, addr + off, iova,
+		ret = __rte_mempool_populate_iova(mp, addr + off, iova,
 			phys_len, free_cb, opaque);
+		if (ret == 0)
+			continue;
 		if (ret < 0)
 			goto fail;
 		/* no need to call the free callback for next chunks */
@@ -415,6 +432,9 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 		cnt += ret;
 	}
 
+	if (cnt == 0)
+		return -EINVAL;
+
 	return cnt;
 
  fail:
-- 
2.20.1



* [dpdk-stable] [PATCH v2 2/3] doc: announce API change for mempool IOVA populate
  2020-01-17 14:57 ` [dpdk-stable] [PATCH v2 0/3] " Olivier Matz
  2020-01-17 14:57   ` [dpdk-stable] [PATCH v2 1/3] " Olivier Matz
@ 2020-01-17 14:57   ` Olivier Matz
  2020-01-17 20:32     ` David Marchand
  2020-01-17 14:57   ` [dpdk-stable] [PATCH v2 3/3] [20.05] mempool: return 0 if area is too small on populate Olivier Matz
  2020-01-20 12:02   ` [dpdk-stable] [dpdk-dev] [PATCH v2 0/3] mempool: fix mempool virt populate with small chunks Thomas Monjalon
  3 siblings, 1 reply; 14+ messages in thread
From: Olivier Matz @ 2020-01-17 14:57 UTC (permalink / raw)
  To: dev
  Cc: Zhang, AlvinX, Burakov, Anatoly, Andrew Rybchenko,
	Bruce Richardson, David Marchand, dpdk stable

Starting from v20.05, rte_mempool_populate_iova() will return
-ENOBUFS. The ABI will be preserved through symbol versioning until
20.11.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 doc/guides/rel_notes/deprecation.rst | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index afa94b43e..e11c7f436 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -86,3 +86,8 @@ Deprecation Notices
   to set new power environment if power environment was already initialized.
   In this case the function will return -1 unless the environment is unset first
   (using ``rte_power_unset_env``). Other function usage scenarios will not change.
+
+* mempool: starting from v20.05, the API of rte_mempool_populate_iova()
+  and rte_mempool_populate_virt() will change to return 0 instead
+  of -EINVAL when there is not enough room to store one object. The ABI
+  will be preserved until 20.11.
-- 
2.20.1



* [dpdk-stable] [PATCH v2 3/3] [20.05] mempool: return 0 if area is too small on populate
  2020-01-17 14:57 ` [dpdk-stable] [PATCH v2 0/3] " Olivier Matz
  2020-01-17 14:57   ` [dpdk-stable] [PATCH v2 1/3] " Olivier Matz
  2020-01-17 14:57   ` [dpdk-stable] [PATCH v2 2/3] doc: announce API change for mempool IOVA populate Olivier Matz
@ 2020-01-17 14:57   ` Olivier Matz
  2020-01-17 20:32     ` David Marchand
  2020-01-20 12:02   ` [dpdk-stable] [dpdk-dev] [PATCH v2 0/3] mempool: fix mempool virt populate with small chunks Thomas Monjalon
  3 siblings, 1 reply; 14+ messages in thread
From: Olivier Matz @ 2020-01-17 14:57 UTC (permalink / raw)
  To: dev
  Cc: Zhang, AlvinX, Burakov, Anatoly, Andrew Rybchenko,
	Bruce Richardson, David Marchand, dpdk stable

Change rte_mempool_populate_iova() and rte_mempool_populate_virt() to
return 0 instead of -EINVAL when there is not enough room to store one
object, as it can be helpful for applications to distinguish this
specific case.

As this is an ABI change, use symbol versioning to preserve old
behavior for binary applications.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 examples/ntb/ntb_fwd.c                     |  2 +-
 lib/librte_mempool/meson.build             |  1 +
 lib/librte_mempool/rte_mempool.c           | 76 ++++++++++++++++++----
 lib/librte_mempool/rte_mempool.h           | 14 ++--
 lib/librte_mempool/rte_mempool_version.map |  7 ++
 5 files changed, 84 insertions(+), 16 deletions(-)

diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
index c914256dd..99a43688c 100644
--- a/examples/ntb/ntb_fwd.c
+++ b/examples/ntb/ntb_fwd.c
@@ -1313,7 +1313,7 @@ ntb_mbuf_pool_create(uint16_t mbuf_seg_size, uint32_t nb_mbuf,
 					mz->len - ntb_info.ntb_hdr_size,
 					ntb_mempool_mz_free,
 					(void *)(uintptr_t)mz);
-		if (ret < 0) {
+		if (ret <= 0) {
 			rte_memzone_free(mz);
 			rte_mempool_free(mp);
 			return NULL;
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index f8710b61b..4fe267cd8 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -18,3 +18,4 @@ deps += ['ring']
 
 # memseg walk is not yet part of stable API
 allow_experimental_apis = true
+use_function_versioning = true
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index b434d9f17..0ebb21eec 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -31,6 +31,7 @@
 #include <rte_string_fns.h>
 #include <rte_spinlock.h>
 #include <rte_tailq.h>
+#include <rte_function_versioning.h>
 
 #include "rte_mempool.h"
 
@@ -293,12 +294,17 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
 	return 0;
 }
 
+__vsym int
+rte_mempool_populate_iova_v20_0_1(struct rte_mempool *mp, char *vaddr,
+	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
+	void *opaque);
+
 /* Add objects in the pool, using a physically contiguous memory
  * zone. Return the number of objects added, or a negative value
  * on error.
  */
-static int
-__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+__vsym int
+rte_mempool_populate_iova_v20_0_1(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
 	void *opaque)
 {
@@ -355,21 +361,34 @@ __rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	rte_free(memhdr);
 	return ret;
 }
+BIND_DEFAULT_SYMBOL(rte_mempool_populate_iova, _v20_0_1, 20.0.1);
+MAP_STATIC_SYMBOL(
+	int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+				rte_iova_t iova, size_t len,
+				rte_mempool_memchunk_free_cb_t *free_cb,
+				void *opaque),
+	rte_mempool_populate_iova_v20_0_1);
+
+__vsym int
+rte_mempool_populate_iova_v20_0(struct rte_mempool *mp, char *vaddr,
+	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
+	void *opaque);
 
-int
-rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+__vsym int
+rte_mempool_populate_iova_v20_0(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
 	void *opaque)
 {
 	int ret;
 
-	ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
+	ret = rte_mempool_populate_iova_v20_0_1(mp, vaddr, iova, len, free_cb,
 					opaque);
 	if (ret == 0)
 		ret = -EINVAL;
 
 	return ret;
 }
+VERSION_SYMBOL(rte_mempool_populate_iova, _v20_0, 20.0);
 
 static rte_iova_t
 get_iova(void *addr)
@@ -384,11 +403,16 @@ get_iova(void *addr)
 	return ms->iova + RTE_PTR_DIFF(addr, ms->addr);
 }
 
+__vsym int
+rte_mempool_populate_virt_v20_0_1(struct rte_mempool *mp, char *addr,
+	size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
+	void *opaque);
+
 /* Populate the mempool with a virtual area. Return the number of
  * objects added, or a negative value on error.
  */
-int
-rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
+__vsym int
+rte_mempool_populate_virt_v20_0_1(struct rte_mempool *mp, char *addr,
 	size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
 	void *opaque)
 {
@@ -421,7 +445,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 				break;
 		}
 
-		ret = __rte_mempool_populate_iova(mp, addr + off, iova,
+		ret = rte_mempool_populate_iova_v20_0_1(mp, addr + off, iova,
 			phys_len, free_cb, opaque);
 		if (ret == 0)
 			continue;
@@ -432,15 +456,41 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 		cnt += ret;
 	}
 
-	if (cnt == 0)
-		return -EINVAL;
-
 	return cnt;
 
  fail:
 	rte_mempool_free_memchunks(mp);
 	return ret;
 }
+BIND_DEFAULT_SYMBOL(rte_mempool_populate_virt, _v20_0_1, 20.0.1);
+MAP_STATIC_SYMBOL(
+	int rte_mempool_populate_virt(struct rte_mempool *mp,
+				char *addr, size_t len, size_t pg_sz,
+				rte_mempool_memchunk_free_cb_t *free_cb,
+				void *opaque),
+	rte_mempool_populate_virt_v20_0_1);
+
+__vsym int
+rte_mempool_populate_virt_v20_0(struct rte_mempool *mp, char *addr,
+	size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
+	void *opaque);
+
+__vsym int
+rte_mempool_populate_virt_v20_0(struct rte_mempool *mp, char *addr,
+	size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
+	void *opaque)
+{
+	int ret;
+
+	ret = rte_mempool_populate_virt_v20_0_1(mp, addr, len, pg_sz,
+						free_cb, opaque);
+
+	if (ret == 0)
+		ret = -EINVAL;
+
+	return ret;
+}
+VERSION_SYMBOL(rte_mempool_populate_virt, _v20_0, 20.0);
 
 /* Get the minimal page size used in a mempool before populating it. */
 int
@@ -601,6 +651,8 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 				mz->len, pg_sz,
 				rte_mempool_memchunk_mz_free,
 				(void *)(uintptr_t)mz);
+		if (ret == 0) /* should not happen */
+			ret = -ENOBUFS;
 		if (ret < 0) {
 			rte_memzone_free(mz);
 			goto fail;
@@ -692,6 +744,8 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
 
 	ret = rte_mempool_populate_virt(mp, addr, size, getpagesize(),
 		rte_mempool_memchunk_anon_free, addr);
+	if (ret == 0) /* should not happen */
+		ret = -ENOBUFS;
 	if (ret < 0) {
 		rte_errno = -ret;
 		goto fail;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 0a1dc6059..5a106a432 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1106,9 +1106,12 @@ rte_mempool_free(struct rte_mempool *mp);
  * @param opaque
  *   An opaque argument passed to free_cb.
  * @return
- *   The number of objects added on success.
+ *   The number of objects added on success (strictly positive).
  *   On error, the chunk is not added in the memory list of the
- *   mempool and a negative errno is returned.
+ *   mempool the following code is returned:
+ *     (0): not enough room in chunk for one object.
+ *     (-ENOSPC): mempool is already populated.
+ *     (-ENOMEM): allocation failure.
  */
 int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
@@ -1133,9 +1136,12 @@ int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
  * @param opaque
  *   An opaque argument passed to free_cb.
  * @return
- *   The number of objects added on success.
+ *   The number of objects added on success (strictly positive).
  *   On error, the chunk is not added in the memory list of the
- *   mempool and a negative errno is returned.
+ *   mempool the following code is returned:
+ *     (0): not enough room in chunk for one object.
+ *     (-ENOSPC): mempool is already populated.
+ *     (-ENOMEM): allocation failure.
  */
 int
 rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index d002dfc46..34d977e47 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -35,6 +35,13 @@ DPDK_20.0 {
 	local: *;
 };
 
+DPDK_20.0.1 {
+	global:
+
+	rte_mempool_populate_iova;
+	rte_mempool_populate_virt;
+} DPDK_20.0;
+
 EXPERIMENTAL {
 	global:
 
-- 
2.20.1



* Re: [dpdk-stable] [PATCH v2 3/3] [20.05] mempool: return 0 if area is too small on populate
  2020-01-17 14:57   ` [dpdk-stable] [PATCH v2 3/3] [20.05] mempool: return 0 if area is too small on populate Olivier Matz
@ 2020-01-17 20:32     ` David Marchand
  0 siblings, 0 replies; 14+ messages in thread
From: David Marchand @ 2020-01-17 20:32 UTC (permalink / raw)
  To: Olivier Matz
  Cc: dev, Zhang, AlvinX, Burakov, Anatoly, Andrew Rybchenko,
	Bruce Richardson, dpdk stable

On Fri, Jan 17, 2020 at 3:58 PM Olivier Matz <olivier.matz@6wind.com> wrote:
>
> Change rte_mempool_populate_iova() and rte_mempool_populate_virt() to
> return 0 instead of -EINVAL when there is not enough room to store one
> object, as it can be helpful for applications to distinguish this
> specific case.
>
> As this is an ABI change, use symbol versioning to preserve old
> behavior for binary applications.
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
>  examples/ntb/ntb_fwd.c                     |  2 +-
>  lib/librte_mempool/meson.build             |  1 +
>  lib/librte_mempool/rte_mempool.c           | 76 ++++++++++++++++++----
>  lib/librte_mempool/rte_mempool.h           | 14 ++--
>  lib/librte_mempool/rte_mempool_version.map |  7 ++
>  5 files changed, 84 insertions(+), 16 deletions(-)
>
> diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
> index c914256dd..99a43688c 100644
> --- a/examples/ntb/ntb_fwd.c
> +++ b/examples/ntb/ntb_fwd.c
> @@ -1313,7 +1313,7 @@ ntb_mbuf_pool_create(uint16_t mbuf_seg_size, uint32_t nb_mbuf,
>                                         mz->len - ntb_info.ntb_hdr_size,
>                                         ntb_mempool_mz_free,
>                                         (void *)(uintptr_t)mz);
> -               if (ret < 0) {
> +               if (ret <= 0) {
>                         rte_memzone_free(mz);
>                         rte_mempool_free(mp);
>                         return NULL;
> diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
> index f8710b61b..4fe267cd8 100644
> --- a/lib/librte_mempool/meson.build
> +++ b/lib/librte_mempool/meson.build
> @@ -18,3 +18,4 @@ deps += ['ring']
>
>  # memseg walk is not yet part of stable API
>  allow_experimental_apis = true
> +use_function_versioning = true
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index b434d9f17..0ebb21eec 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -31,6 +31,7 @@
>  #include <rte_string_fns.h>
>  #include <rte_spinlock.h>
>  #include <rte_tailq.h>
> +#include <rte_function_versioning.h>
>
>  #include "rte_mempool.h"
>
> @@ -293,12 +294,17 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
>         return 0;
>  }
>
> +__vsym int
> +rte_mempool_populate_iova_v20_0_1(struct rte_mempool *mp, char *vaddr,
> +       rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> +       void *opaque);
> +

For 20.05, this should be 20.0.2.



>  /* Add objects in the pool, using a physically contiguous memory
>   * zone. Return the number of objects added, or a negative value
>   * on error.
>   */
> -static int
> -__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> +__vsym int
> +rte_mempool_populate_iova_v20_0_1(struct rte_mempool *mp, char *vaddr,
>         rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
>         void *opaque)
>  {
> @@ -355,21 +361,34 @@ __rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>         rte_free(memhdr);
>         return ret;
>  }
> +BIND_DEFAULT_SYMBOL(rte_mempool_populate_iova, _v20_0_1, 20.0.1);
> +MAP_STATIC_SYMBOL(
> +       int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> +                               rte_iova_t iova, size_t len,
> +                               rte_mempool_memchunk_free_cb_t *free_cb,
> +                               void *opaque),
> +       rte_mempool_populate_iova_v20_0_1);
> +
> +__vsym int
> +rte_mempool_populate_iova_v20_0(struct rte_mempool *mp, char *vaddr,
> +       rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> +       void *opaque);
>
> -int
> -rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> +__vsym int
> +rte_mempool_populate_iova_v20_0(struct rte_mempool *mp, char *vaddr,
>         rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
>         void *opaque)
>  {
>         int ret;
>
> -       ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
> +       ret = rte_mempool_populate_iova_v20_0_1(mp, vaddr, iova, len, free_cb,
>                                         opaque);
>         if (ret == 0)
>                 ret = -EINVAL;
>
>         return ret;
>  }
> +VERSION_SYMBOL(rte_mempool_populate_iova, _v20_0, 20.0);
>
>  static rte_iova_t
>  get_iova(void *addr)
> @@ -384,11 +403,16 @@ get_iova(void *addr)
>         return ms->iova + RTE_PTR_DIFF(addr, ms->addr);
>  }
>
> +__vsym int
> +rte_mempool_populate_virt_v20_0_1(struct rte_mempool *mp, char *addr,
> +       size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
> +       void *opaque);
> +
>  /* Populate the mempool with a virtual area. Return the number of
>   * objects added, or a negative value on error.
>   */
> -int
> -rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> +__vsym int
> +rte_mempool_populate_virt_v20_0_1(struct rte_mempool *mp, char *addr,
>         size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
>         void *opaque)
>  {
> @@ -421,7 +445,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>                                 break;
>                 }
>
> -               ret = __rte_mempool_populate_iova(mp, addr + off, iova,
> +               ret = rte_mempool_populate_iova_v20_0_1(mp, addr + off, iova,
>                         phys_len, free_cb, opaque);
>                 if (ret == 0)
>                         continue;
> @@ -432,15 +456,41 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>                 cnt += ret;
>         }
>
> -       if (cnt == 0)
> -               return -EINVAL;
> -
>         return cnt;
>
>   fail:
>         rte_mempool_free_memchunks(mp);
>         return ret;
>  }
> +BIND_DEFAULT_SYMBOL(rte_mempool_populate_virt, _v20_0_1, 20.0.1);
> +MAP_STATIC_SYMBOL(
> +       int rte_mempool_populate_virt(struct rte_mempool *mp,
> +                               char *addr, size_t len, size_t pg_sz,
> +                               rte_mempool_memchunk_free_cb_t *free_cb,
> +                               void *opaque),
> +       rte_mempool_populate_virt_v20_0_1);
> +
> +__vsym int
> +rte_mempool_populate_virt_v20_0(struct rte_mempool *mp, char *addr,
> +       size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
> +       void *opaque);
> +
> +__vsym int
> +rte_mempool_populate_virt_v20_0(struct rte_mempool *mp, char *addr,
> +       size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
> +       void *opaque)
> +{
> +       int ret;
> +
> +       ret = rte_mempool_populate_virt_v20_0_1(mp, addr, len, pg_sz,
> +                                               free_cb, opaque);
> +
> +       if (ret == 0)
> +               ret = -EINVAL;
> +
> +       return ret;
> +}
> +VERSION_SYMBOL(rte_mempool_populate_virt, _v20_0, 20.0);
>
>  /* Get the minimal page size used in a mempool before populating it. */
>  int
> @@ -601,6 +651,8 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>                                 mz->len, pg_sz,
>                                 rte_mempool_memchunk_mz_free,
>                                 (void *)(uintptr_t)mz);
> +               if (ret == 0) /* should not happen */
> +                       ret = -ENOBUFS;
>                 if (ret < 0) {
>                         rte_memzone_free(mz);
>                         goto fail;
> @@ -692,6 +744,8 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
>
>         ret = rte_mempool_populate_virt(mp, addr, size, getpagesize(),
>                 rte_mempool_memchunk_anon_free, addr);
> +       if (ret == 0) /* should not happen */
> +               ret = -ENOBUFS;
>         if (ret < 0) {
>                 rte_errno = -ret;
>                 goto fail;
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 0a1dc6059..5a106a432 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -1106,9 +1106,12 @@ rte_mempool_free(struct rte_mempool *mp);
>   * @param opaque
>   *   An opaque argument passed to free_cb.
>   * @return
> - *   The number of objects added on success.
> + *   The number of objects added on success (strictly positive).
>   *   On error, the chunk is not added in the memory list of the
> - *   mempool and a negative errno is returned.
> + *   mempool the following code is returned:
> + *     (0): not enough room in chunk for one object.
> + *     (-ENOSPC): mempool is already populated.
> + *     (-ENOMEM): allocation failure.
>   */
>  int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>         rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> @@ -1133,9 +1136,12 @@ int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>   * @param opaque
>   *   An opaque argument passed to free_cb.
>   * @return
> - *   The number of objects added on success.
> + *   The number of objects added on success (strictly positive).
>   *   On error, the chunk is not added in the memory list of the
> - *   mempool and a negative errno is returned.
> + *   mempool the following code is returned:
> + *     (0): not enough room in chunk for one object.
> + *     (-ENOSPC): mempool is already populated.
> + *     (-ENOMEM): allocation failure.
>   */
>  int
>  rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
> index d002dfc46..34d977e47 100644
> --- a/lib/librte_mempool/rte_mempool_version.map
> +++ b/lib/librte_mempool/rte_mempool_version.map
> @@ -35,6 +35,13 @@ DPDK_20.0 {
>         local: *;
>  };
>
> +DPDK_20.0.1 {
> +       global:
> +
> +       rte_mempool_populate_iova;
> +       rte_mempool_populate_virt;
> +} DPDK_20.0;
> +
>  EXPERIMENTAL {
>         global:
>
> --
> 2.20.1
>



* Re: [dpdk-stable] [PATCH v2 2/3] doc: announce API change for mempool IOVA populate
  2020-01-17 14:57   ` [dpdk-stable] [PATCH v2 2/3] doc: announce API change for mempool IOVA populate Olivier Matz
@ 2020-01-17 20:32     ` David Marchand
  0 siblings, 0 replies; 14+ messages in thread
From: David Marchand @ 2020-01-17 20:32 UTC (permalink / raw)
  To: Olivier Matz
  Cc: dev, Zhang, AlvinX, Burakov, Anatoly, Andrew Rybchenko,
	Bruce Richardson, dpdk stable

On Fri, Jan 17, 2020 at 3:58 PM Olivier Matz <olivier.matz@6wind.com> wrote:
>
> Starting from v20.05, rte_mempool_populate_iova() will return
> -ENOBUFS. The ABI will be preserved through symbol versioning until

This deprecation notice describes a 0 return value, not -ENOBUFS.



> 20.11.
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
>  doc/guides/rel_notes/deprecation.rst | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index afa94b43e..e11c7f436 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -86,3 +86,8 @@ Deprecation Notices
>    to set new power environment if power environment was already initialized.
>    In this case the function will return -1 unless the environment is unset first
>    (using ``rte_power_unset_env``). Other function usage scenarios will not change.
> +
> +* mempool: starting from v20.05, the API of rte_mempool_populate_iova()
> +  and rte_mempool_populate_virt() will change to return 0 instead
> +  of -EINVAL when there is not enough room to store one object. The ABI
> +  will be preserved until 20.11.
> --
> 2.20.1
>

-- 
David Marchand



* Re: [dpdk-stable] [dpdk-dev] [PATCH v2 0/3] mempool: fix mempool virt populate with small chunks
  2020-01-17 14:57 ` [dpdk-stable] [PATCH v2 0/3] " Olivier Matz
                     ` (2 preceding siblings ...)
  2020-01-17 14:57   ` [dpdk-stable] [PATCH v2 3/3] [20.05] mempool: return 0 if area is too small on populate Olivier Matz
@ 2020-01-20 12:02   ` Thomas Monjalon
  3 siblings, 0 replies; 14+ messages in thread
From: Thomas Monjalon @ 2020-01-20 12:02 UTC (permalink / raw)
  To: Olivier Matz
  Cc: dev, Zhang, AlvinX, Burakov, Anatoly, Andrew Rybchenko,
	Bruce Richardson, David Marchand, dpdk stable

17/01/2020 15:57, Olivier Matz:
> rte_mempool_populate_virt() sometimes fails when it calls
> rte_mempool_populate_iova() with an area which is too small to store one
> object. This should not be an error.
> 
> I prepared a v2 which implements ABI compatibility through symbol
> versioning, as suggested [1]. It looks a bit overkill to me, but it
> was an interesting exercise.
[...]
> Olivier Matz (3):
>   mempool: fix mempool virt populate with small chunks
>   doc: announce API change for mempool IOVA populate
>   mempool: return 0 if area is too small on populate

Applied patches 1 & 2 (with commit log fixed)
Patch 3 is for 20.05.



