* [dpdk-dev] [PATCH] mempool: fix mempool virt populate with small chunks
@ 2020-01-09 13:27 Olivier Matz
2020-01-09 13:40 ` David Marchand
` (2 more replies)
0 siblings, 3 replies; 22+ messages in thread
From: Olivier Matz @ 2020-01-09 13:27 UTC (permalink / raw)
To: dev; +Cc: Andrew Rybchenko, Anatoly Burakov, stable
To populate a mempool with a virtual area, the mempool code calls
rte_mempool_populate_iova() for each iova-contiguous area. It happens
(rarely) that this area is too small to store one object. In this case,
rte_mempool_populate_iova() returns an error, which is forwarded by
rte_mempool_populate_virt().
This case should not throw an error in
rte_mempool_populate_virt(). Instead, the area that is too small should
just be ignored.
To fix this issue, change the return value of
rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
so it can be ignored by the caller. As this would be an API change, add
a compat wrapper to keep the current API unchanged. The wrapper will be
removed for 20.11.
Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
Cc: stable@dpdk.org
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
Is there a simple way to ensure that we won't forget to remove the
wrapper for 20.11? Anatoly suggested using versioned symbols, but
it's not clear to me how.
doc/guides/rel_notes/deprecation.rst | 4 ++++
lib/librte_mempool/rte_mempool.c | 28 +++++++++++++++++++++++-----
lib/librte_mempool/rte_mempool.h | 5 ++++-
3 files changed, 31 insertions(+), 6 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index afa94b43e..b6e89d9a2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -86,3 +86,7 @@ Deprecation Notices
to set new power environment if power environment was already initialized.
In this case the function will return -1 unless the environment is unset first
(using ``rte_power_unset_env``). Other function usage scenarios will not change.
+
+* mempool: starting from v20.11, rte_mempool_populate_iova() will
+ return -ENOBUFS instead of -EINVAL when there is not enough room to
+ store one object.
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 78d8eb941..bda361ce6 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -297,8 +297,8 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
* zone. Return the number of objects added, or a negative value
* on error.
*/
-int
-rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+static int
+__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
void *opaque)
{
@@ -332,7 +332,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_MEMPOOL_ALIGN) - vaddr;
if (off > len) {
- ret = -EINVAL;
+ ret = -ENOBUFS;
goto fail;
}
@@ -343,7 +343,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
/* not enough room to store one object */
if (i == 0) {
- ret = -EINVAL;
+ ret = -ENOBUFS;
goto fail;
}
@@ -356,6 +356,22 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
return ret;
}
+/* Compat wrapper, to be removed when changing the API is allowed (v20.11). */
+int
+rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+ rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque)
+{
+ int ret;
+
+ ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
+ opaque);
+ if (ret == -ENOBUFS)
+ ret = -EINVAL;
+
+ return ret;
+}
+
static rte_iova_t
get_iova(void *addr)
{
@@ -406,8 +422,10 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
break;
}
- ret = rte_mempool_populate_iova(mp, addr + off, iova,
+ ret = __rte_mempool_populate_iova(mp, addr + off, iova,
phys_len, free_cb, opaque);
+ if (ret == -ENOBUFS)
+ continue;
if (ret < 0)
goto fail;
/* no need to call the free callback for next chunks */
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index f81152af9..c08bb444f 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1108,7 +1108,10 @@ rte_mempool_free(struct rte_mempool *mp);
* @return
* The number of objects added on success.
* On error, the chunk is not added in the memory list of the
- * mempool and a negative errno is returned.
+ * mempool and a negative errno is returned:
+ * (-ENOBUFS): not enough room in chunk for one object.
+ * (-ENOSPC): mempool is already populated.
+ * (-ENOMEM): allocation failure.
*/
int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
--
2.20.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH] mempool: fix mempool virt populate with small chunks
2020-01-09 13:27 [dpdk-dev] [PATCH] mempool: fix mempool virt populate with small chunks Olivier Matz
@ 2020-01-09 13:40 ` David Marchand
2020-01-09 13:46 ` Olivier Matz
2020-01-09 13:52 ` Burakov, Anatoly
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 0/3] " Olivier Matz
2 siblings, 1 reply; 22+ messages in thread
From: David Marchand @ 2020-01-09 13:40 UTC (permalink / raw)
To: Olivier Matz; +Cc: dev, Andrew Rybchenko, Anatoly Burakov, dpdk stable
On Thu, Jan 9, 2020 at 2:27 PM Olivier Matz <olivier.matz@6wind.com> wrote:
>
> To populate a mempool with a virtual area, the mempool code calls
> rte_mempool_populate_iova() for each iova-contiguous area. It happens
> (rarely) that this area is too small to store one object. In this case,
> rte_mempool_populate_iova() returns an error, which is forwarded by
> rte_mempool_populate_virt().
>
> This case should not throw an error in
> rte_mempool_populate_virt(). Instead, the area that is too small should
> just be ignored.
>
> To fix this issue, change the return value of
> rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
> so it can be ignored by the caller. As this would be an API change, add
> a compat wrapper to keep the current API unchanged. The wrapper will be
> removed for 20.11.
>
> Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
> Cc: stable@dpdk.org
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
>
> Is there a simple way to ensure that we won't forget to remove the
> wrapper for 20.11? Anatoly suggested me to use versioned symbols, but
> it's not clear to me how.
>
> doc/guides/rel_notes/deprecation.rst | 4 ++++
> lib/librte_mempool/rte_mempool.c | 28 +++++++++++++++++++++++-----
> lib/librte_mempool/rte_mempool.h | 5 ++++-
> 3 files changed, 31 insertions(+), 6 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index afa94b43e..b6e89d9a2 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -86,3 +86,7 @@ Deprecation Notices
> to set new power environment if power environment was already initialized.
> In this case the function will return -1 unless the environment is unset first
> (using ``rte_power_unset_env``). Other function usage scenarios will not change.
> +
> +* mempool: starting from v20.11, rte_mempool_populate_iova() will
> + return -ENOBUFS instead of -EINVAL when there is not enough room to
> + store one object.
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 78d8eb941..bda361ce6 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -297,8 +297,8 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
> * zone. Return the number of objects added, or a negative value
> * on error.
> */
> -int
> -rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> +static int
> +__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> void *opaque)
> {
> @@ -332,7 +332,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_MEMPOOL_ALIGN) - vaddr;
>
> if (off > len) {
> - ret = -EINVAL;
> + ret = -ENOBUFS;
> goto fail;
> }
>
> @@ -343,7 +343,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>
> /* not enough room to store one object */
> if (i == 0) {
> - ret = -EINVAL;
> + ret = -ENOBUFS;
> goto fail;
> }
>
> @@ -356,6 +356,22 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> return ret;
> }
>
> +/* Compat wrapper, to be removed when changing the API is allowed (v20.11). */
> +int
> +rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> + rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> + void *opaque)
> +{
> + int ret;
> +
> + ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
> + opaque);
> + if (ret == -ENOBUFS)
> + ret = -EINVAL;
> +
> + return ret;
> +}
> +
> static rte_iova_t
> get_iova(void *addr)
> {
> @@ -406,8 +422,10 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> break;
> }
>
> - ret = rte_mempool_populate_iova(mp, addr + off, iova,
> + ret = __rte_mempool_populate_iova(mp, addr + off, iova,
> phys_len, free_cb, opaque);
> + if (ret == -ENOBUFS)
> + continue;
> if (ret < 0)
> goto fail;
> /* no need to call the free callback for next chunks */
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index f81152af9..c08bb444f 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -1108,7 +1108,10 @@ rte_mempool_free(struct rte_mempool *mp);
> * @return
> * The number of objects added on success.
> * On error, the chunk is not added in the memory list of the
> - * mempool and a negative errno is returned.
> + * mempool and a negative errno is returned:
> + * (-ENOBUFS): not enough room in chunk for one object.
> + * (-ENOSPC): mempool is already populated.
> + * (-ENOMEM): allocation failure.
You can't update the doc before this function does return -ENOBUFS.
--
David Marchand
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH] mempool: fix mempool virt populate with small chunks
2020-01-09 13:40 ` David Marchand
@ 2020-01-09 13:46 ` Olivier Matz
0 siblings, 0 replies; 22+ messages in thread
From: Olivier Matz @ 2020-01-09 13:46 UTC (permalink / raw)
To: David Marchand; +Cc: dev, Andrew Rybchenko, Anatoly Burakov, dpdk stable
On Thu, Jan 09, 2020 at 02:40:06PM +0100, David Marchand wrote:
> On Thu, Jan 9, 2020 at 2:27 PM Olivier Matz <olivier.matz@6wind.com> wrote:
> >
> > To populate a mempool with a virtual area, the mempool code calls
> > rte_mempool_populate_iova() for each iova-contiguous area. It happens
> > (rarely) that this area is too small to store one object. In this case,
> > rte_mempool_populate_iova() returns an error, which is forwarded by
> > rte_mempool_populate_virt().
> >
> > This case should not throw an error in
> > rte_mempool_populate_virt(). Instead, the area that is too small should
> > just be ignored.
> >
> > To fix this issue, change the return value of
> > rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
> > so it can be ignored by the caller. As this would be an API change, add
> > a compat wrapper to keep the current API unchanged. The wrapper will be
> > removed for 20.11.
> >
> > Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > ---
> >
> > Is there a simple way to ensure that we won't forget to remove the
> > wrapper for 20.11? Anatoly suggested me to use versioned symbols, but
> > it's not clear to me how.
> >
> > doc/guides/rel_notes/deprecation.rst | 4 ++++
> > lib/librte_mempool/rte_mempool.c | 28 +++++++++++++++++++++++-----
> > lib/librte_mempool/rte_mempool.h | 5 ++++-
> > 3 files changed, 31 insertions(+), 6 deletions(-)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index afa94b43e..b6e89d9a2 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -86,3 +86,7 @@ Deprecation Notices
> > to set new power environment if power environment was already initialized.
> > In this case the function will return -1 unless the environment is unset first
> > (using ``rte_power_unset_env``). Other function usage scenarios will not change.
> > +
> > +* mempool: starting from v20.11, rte_mempool_populate_iova() will
> > + return -ENOBUFS instead of -EINVAL when there is not enough room to
> > + store one object.
> > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> > index 78d8eb941..bda361ce6 100644
> > --- a/lib/librte_mempool/rte_mempool.c
> > +++ b/lib/librte_mempool/rte_mempool.c
> > @@ -297,8 +297,8 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
> > * zone. Return the number of objects added, or a negative value
> > * on error.
> > */
> > -int
> > -rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> > +static int
> > +__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> > rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> > void *opaque)
> > {
> > @@ -332,7 +332,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> > off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_MEMPOOL_ALIGN) - vaddr;
> >
> > if (off > len) {
> > - ret = -EINVAL;
> > + ret = -ENOBUFS;
> > goto fail;
> > }
> >
> > @@ -343,7 +343,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> >
> > /* not enough room to store one object */
> > if (i == 0) {
> > - ret = -EINVAL;
> > + ret = -ENOBUFS;
> > goto fail;
> > }
> >
> > @@ -356,6 +356,22 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> > return ret;
> > }
> >
> > +/* Compat wrapper, to be removed when changing the API is allowed (v20.11). */
> > +int
> > +rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> > + rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> > + void *opaque)
> > +{
> > + int ret;
> > +
> > + ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
> > + opaque);
> > + if (ret == -ENOBUFS)
> > + ret = -EINVAL;
> > +
> > + return ret;
> > +}
> > +
> > static rte_iova_t
> > get_iova(void *addr)
> > {
> > @@ -406,8 +422,10 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> > break;
> > }
> >
> > - ret = rte_mempool_populate_iova(mp, addr + off, iova,
> > + ret = __rte_mempool_populate_iova(mp, addr + off, iova,
> > phys_len, free_cb, opaque);
> > + if (ret == -ENOBUFS)
> > + continue;
> > if (ret < 0)
> > goto fail;
> > /* no need to call the free callback for next chunks */
> > diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> > index f81152af9..c08bb444f 100644
> > --- a/lib/librte_mempool/rte_mempool.h
> > +++ b/lib/librte_mempool/rte_mempool.h
> > @@ -1108,7 +1108,10 @@ rte_mempool_free(struct rte_mempool *mp);
> > * @return
> > * The number of objects added on success.
> > * On error, the chunk is not added in the memory list of the
> > - * mempool and a negative errno is returned.
> > + * mempool and a negative errno is returned:
> > + * (-ENOBUFS): not enough room in chunk for one object.
> > + * (-ENOSPC): mempool is already populated.
> > + * (-ENOMEM): allocation failure.
>
> You can't update the doc before this function does return -ENOBUFS.
Indeed :)
It should be EINVAL instead of ENOBUFS above.
...and I should add a reminder to update the doc for 20.11
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH] mempool: fix mempool virt populate with small chunks
2020-01-09 13:27 [dpdk-dev] [PATCH] mempool: fix mempool virt populate with small chunks Olivier Matz
2020-01-09 13:40 ` David Marchand
@ 2020-01-09 13:52 ` Burakov, Anatoly
2020-01-09 14:23 ` Olivier Matz
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 0/3] " Olivier Matz
2 siblings, 1 reply; 22+ messages in thread
From: Burakov, Anatoly @ 2020-01-09 13:52 UTC (permalink / raw)
To: Olivier Matz, dev; +Cc: Andrew Rybchenko, stable
On 09-Jan-20 1:27 PM, Olivier Matz wrote:
> To populate a mempool with a virtual area, the mempool code calls
> rte_mempool_populate_iova() for each iova-contiguous area. It happens
> (rarely) that this area is too small to store one object. In this case,
> rte_mempool_populate_iova() returns an error, which is forwarded by
> rte_mempool_populate_virt().
>
> This case should not throw an error in
> rte_mempool_populate_virt(). Instead, the area that is too small should
> just be ignored.
>
> To fix this issue, change the return value of
> rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
> so it can be ignored by the caller. As this would be an API change, add
> a compat wrapper to keep the current API unchanged. The wrapper will be
> removed for 20.11.
>
> Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
> Cc: stable@dpdk.org
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
>
The approach fixes the issue on my end, so
Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>
> Is there a simple way to ensure that we won't forget to remove the
> wrapper for 20.11? Anatoly suggested me to use versioned symbols, but
> it's not clear to me how.
>
Yes, I'd like to do better than "ah shur we won't forget, pinky swear".
Can't we do this with ABI versioning? E.g.
rte_populate_iova_v20() ... returns EINVAL
rte_populate_iova_v21() ... returns ENOBUFS
I'm pretty sure that, even if it doesn't break, it will still be less
likely to be forgotten, because there's almost a guarantee that someone
will grep for symbol versioning macros across the codebase around the
20.11 timeframe.
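For reference, a rough sketch of how that usually looks with the macros
from rte_function_versioning.h (untested, and the version nodes below
are only placeholders), reusing the __rte_mempool_populate_iova()
helper your patch already introduces:
#include <rte_function_versioning.h>
/* new behaviour: default symbol, picked up by newly linked applications */
__vsym int
rte_mempool_populate_iova_v21(struct rte_mempool *mp, char *vaddr,
	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
	void *opaque)
{
	/* forward -ENOBUFS to the caller as-is */
	return __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
		opaque);
}
BIND_DEFAULT_SYMBOL(rte_mempool_populate_iova, _v21, 21);
/* old behaviour: only binaries built against the old ABI bind here */
__vsym int
rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
	void *opaque)
{
	int ret;
	ret = rte_mempool_populate_iova_v21(mp, vaddr, iova, len, free_cb,
		opaque);
	return ret == -ENOBUFS ? -EINVAL : ret;
}
VERSION_SYMBOL(rte_mempool_populate_iova, _v20, 20.0);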
--
Thanks,
Anatoly
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH] mempool: fix mempool virt populate with small chunks
2020-01-09 13:52 ` Burakov, Anatoly
@ 2020-01-09 14:23 ` Olivier Matz
2020-01-09 14:29 ` Burakov, Anatoly
0 siblings, 1 reply; 22+ messages in thread
From: Olivier Matz @ 2020-01-09 14:23 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: dev, Andrew Rybchenko, stable
On Thu, Jan 09, 2020 at 01:52:41PM +0000, Burakov, Anatoly wrote:
> On 09-Jan-20 1:27 PM, Olivier Matz wrote:
> > To populate a mempool with a virtual area, the mempool code calls
> > rte_mempool_populate_iova() for each iova-contiguous area. It happens
> > (rarely) that this area is too small to store one object. In this case,
> > rte_mempool_populate_iova() returns an error, which is forwarded by
> > rte_mempool_populate_virt().
> >
> > This case should not throw an error in
> > rte_mempool_populate_virt(). Instead, the area that is too small should
> > just be ignored.
> >
> > To fix this issue, change the return value of
> > rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
> > so it can be ignored by the caller. As this would be an API change, add
> > a compat wrapper to keep the current API unchanged. The wrapper will be
> > removed for 20.11.
> >
> > Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > ---
> >
>
> The approach fixes the issue on my end, so
>
> Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>
>
> > Is there a simple way to ensure that we won't forget to remove the
> > wrapper for 20.11? Anatoly suggested me to use versioned symbols, but
> > it's not clear to me how.
> >
>
> Yes, i'd like to do better than "ah shur we won't forget pinky swear".
>
> Can't we do this with ABI versioning? E.g.
>
> rte_populate_iova_v20() ... returns EINVAL
>
> rte_populate_iova_v21() ... returns ENOBUFS
>
> I'm pretty sure, even if it doesn't break, it will still be more likely to
> not be forgotten because there's almost a guarantee that someone will grep
> for symbol versioning macros across the codebase around 20.11 timeframe.
Without using symbol versioning, would this be ok too?
int
rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
	void *opaque)
{
	int ret;

	ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb, opaque);

#if RTE_VERSION < RTE_VERSION_NUM(20, 11, 0, 0)
	if (ret == -ENOBUFS)
		ret = -EINVAL;
#endif

	return ret;
}
> --
> Thanks,
> Anatoly
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH] mempool: fix mempool virt populate with small chunks
2020-01-09 14:23 ` Olivier Matz
@ 2020-01-09 14:29 ` Burakov, Anatoly
2020-01-09 14:58 ` Bruce Richardson
0 siblings, 1 reply; 22+ messages in thread
From: Burakov, Anatoly @ 2020-01-09 14:29 UTC (permalink / raw)
To: Olivier Matz; +Cc: dev, Andrew Rybchenko, stable
On 09-Jan-20 2:23 PM, Olivier Matz wrote:
> On Thu, Jan 09, 2020 at 01:52:41PM +0000, Burakov, Anatoly wrote:
>> On 09-Jan-20 1:27 PM, Olivier Matz wrote:
>>> To populate a mempool with a virtual area, the mempool code calls
>>> rte_mempool_populate_iova() for each iova-contiguous area. It happens
>>> (rarely) that this area is too small to store one object. In this case,
>>> rte_mempool_populate_iova() returns an error, which is forwarded by
>>> rte_mempool_populate_virt().
>>>
>>> This case should not throw an error in
>>> rte_mempool_populate_virt(). Instead, the area that is too small should
>>> just be ignored.
>>>
>>> To fix this issue, change the return value of
>>> rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
>>> so it can be ignored by the caller. As this would be an API change, add
>>> a compat wrapper to keep the current API unchanged. The wrapper will be
>>> removed for 20.11.
>>>
>>> Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
>>> ---
>>>
>>
>> The approach fixes the issue on my end, so
>>
>> Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>
>>
>>> Is there a simple way to ensure that we won't forget to remove the
>>> wrapper for 20.11? Anatoly suggested me to use versioned symbols, but
>>> it's not clear to me how.
>>>
>>
>> Yes, i'd like to do better than "ah shur we won't forget pinky swear".
>>
>> Can't we do this with ABI versioning? E.g.
>>
>> rte_populate_iova_v20() ... returns EINVAL
>>
>> rte_populate_iova_v21() ... returns ENOBUFS
>>
>> I'm pretty sure, even if it doesn't break, it will still be more likely to
>> not be forgotten because there's almost a guarantee that someone will grep
>> for symbol versioning macros across the codebase around 20.11 timeframe.
>
> Without using symbol versionning, would this be ok too?
>
> int
> rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> void *opaque)
> {
> int ret;
>
> ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb, opaque);
>
> #if RTE_VERSION < RTE_VERSION_NUM(20, 11, 0, 0)
> if (ret == -ENOBUFS)
> ret = -EINVAL;
> #endif
>
> return ret;
> }
>
>
Well, it would certainly work :) it just makes it more likely that this
will be missed.
How about we leave your patch as is, and then you submit another patch
marked for [20.11] and mark it as Deferred straight away? This is
probably the best way I can think of to make sure it isn't forgotten,
if you're so averse to symbol versioning :D
>
>> --
>> Thanks,
>> Anatoly
--
Thanks,
Anatoly
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH] mempool: fix mempool virt populate with small chunks
2020-01-09 14:29 ` Burakov, Anatoly
@ 2020-01-09 14:58 ` Bruce Richardson
0 siblings, 0 replies; 22+ messages in thread
From: Bruce Richardson @ 2020-01-09 14:58 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: Olivier Matz, dev, Andrew Rybchenko, stable
On Thu, Jan 09, 2020 at 02:29:08PM +0000, Burakov, Anatoly wrote:
> On 09-Jan-20 2:23 PM, Olivier Matz wrote:
> > On Thu, Jan 09, 2020 at 01:52:41PM +0000, Burakov, Anatoly wrote:
> > > On 09-Jan-20 1:27 PM, Olivier Matz wrote:
> > > > To populate a mempool with a virtual area, the mempool code calls
> > > > rte_mempool_populate_iova() for each iova-contiguous area. It happens
> > > > (rarely) that this area is too small to store one object. In this case,
> > > > rte_mempool_populate_iova() returns an error, which is forwarded by
> > > > rte_mempool_populate_virt().
> > > >
> > > > This case should not throw an error in
> > > > rte_mempool_populate_virt(). Instead, the area that is too small should
> > > > just be ignored.
> > > >
> > > > To fix this issue, change the return value of
> > > > rte_mempool_populate_iova() to -ENOBUFS when no object can be populated,
> > > > so it can be ignored by the caller. As this would be an API change, add
> > > > a compat wrapper to keep the current API unchanged. The wrapper will be
> > > > removed for 20.11.
> > > >
> > > > Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
> > > > Cc: stable@dpdk.org
> > > >
> > > > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > > > ---
> > > >
> > >
> > > The approach fixes the issue on my end, so
> > >
> > > Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > >
> > > > Is there a simple way to ensure that we won't forget to remove the
> > > > wrapper for 20.11? Anatoly suggested me to use versioned symbols, but
> > > > it's not clear to me how.
> > > >
> > >
> > > Yes, i'd like to do better than "ah shur we won't forget pinky swear".
> > >
> > > Can't we do this with ABI versioning? E.g.
> > >
> > > rte_populate_iova_v20() ... returns EINVAL
> > >
> > > rte_populate_iova_v21() ... returns ENOBUFS
> > >
> > > I'm pretty sure, even if it doesn't break, it will still be more likely to
> > > not be forgotten because there's almost a guarantee that someone will grep
> > > for symbol versioning macros across the codebase around 20.11 timeframe.
> >
> > Without using symbol versionning, would this be ok too?
> >
> > int
> > rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> > rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> > void *opaque)
> > {
> > int ret;
> >
> > ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb, opaque);
> >
> > #if RTE_VERSION < RTE_VERSION_NUM(20, 11, 0, 0)
> > if (ret == -ENOBUFS)
> > ret = -EINVAL;
> > #endif
> >
> > return ret;
> > }
> >
> >
>
> Well it would certainly work :) it just makes it more likely that this will
> be missed.
>
Depends on your definition of "work". The difference vs having the
ABI-versioned code is that any application linked against 20.02...20.08 will
have the old behaviour and have to change to use 20.11. Using proper
versioning, any newly linked apps immediately get the new behaviour and the
compatibility is provided only for those apps linked against 19.11.
Therefore, I think the ABI compatibility approach is still better, and
should not be that bad, given that I think all that needs to be done for
v20 is to call v21 and then convert an -ENOBUFS to -EINVAL as you do here.
__mempool_populate_iova_v20(....)
{
	ret = __mempool_populate_iova_v21(...);
	return ret == -ENOBUFS ? -EINVAL : ret;
}
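The map file side of it is small too: the new default symbol goes into a
new version node in rte_mempool_version.map, roughly (the node name here
is a guess, whatever the next ABI version ends up being):
DPDK_21 {
	global:

	rte_mempool_populate_iova;
} DPDK_20.0;
plus "use_function_versioning = true" in the library's meson.build.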
Regards,
/Bruce
^ permalink raw reply [flat|nested] 22+ messages in thread
* [dpdk-dev] [PATCH v2 0/3] mempool: fix mempool virt populate with small chunks
2020-01-09 13:27 [dpdk-dev] [PATCH] mempool: fix mempool virt populate with small chunks Olivier Matz
2020-01-09 13:40 ` David Marchand
2020-01-09 13:52 ` Burakov, Anatoly
@ 2020-01-17 14:57 ` Olivier Matz
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 1/3] " Olivier Matz
` (3 more replies)
2 siblings, 4 replies; 22+ messages in thread
From: Olivier Matz @ 2020-01-17 14:57 UTC (permalink / raw)
To: dev
Cc: Zhang, AlvinX, Burakov, Anatoly, Andrew Rybchenko,
Bruce Richardson, David Marchand, dpdk stable
rte_mempool_populate_virt() sometimes fails when it calls
rte_mempool_populate_iova() with an area which is too small to store one
object. This should not be an error.
I prepared a v2 which implements ABI compatibility through symbol
versioning, as suggested [1]. It looks a bit overkill to me, but it
was an interesting exercise.
v2 changes:
- The initial requirement is to fix an issue at mempool creation. As
  the fix will probably be backported to 19.11.x, the first patch
  in the patchset does not break the API/ABI.
- Using symbol versioning helps to preserve the ABI, but for the API
  the breakage has to be announced one release in advance, so there
  is a separate patch for this.
- There is a 3rd patch for 20.05 that makes the new API public and
  implements ABI versioning (if ok, I'll remove it from the patchset in
  the next version and send it separately).
- It appears that returning -ENOBUFS instead of -EINVAL is not ideal:
  in theory, mempool_ops_alloc_once() could also return -ENOBUFS, and
  that error would be forwarded to the caller by
  rte_mempool_populate_iova() too, where it would be misinterpreted as
  "there is not enough room".
  Returning 0 instead of -ENOBUFS, as initially suggested by Anatoly,
  does not suffer from this problem. It is doable if we properly
  document that the memory chunk is not added to the mempool when 0 is
  returned (see the short sketch below). It also has an impact on
  populate_virt(), which has to be versioned too.
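To illustrate the new convention (a sketch of the caller side only, once
the 0 return value is the default; it mirrors what
rte_mempool_populate_virt() does internally in patch 1/3), the caller
skips the chunk instead of failing:
	ret = rte_mempool_populate_iova(mp, addr + off, iova,
		phys_len, free_cb, opaque);
	if (ret == 0)	/* not enough room for one object: chunk not added */
		continue;
	if (ret < 0)	/* real error, e.g. -ENOSPC or -ENOMEM */
		goto fail;
	cnt += ret;	/* strictly positive: number of objects added */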
There are some checkpatch warnings, but I'm not sure if I should
solve them:
- it complains about forward declarations in the .c file, but without
  them, it does not compile due to additional warnings in cflags
- it complains that modified symbols should be marked as experimental
Thanks to David for helping me test and fix the ABI part of the
patchset.
[1] http://patchwork.dpdk.org/patch/64369/
Olivier Matz (3):
mempool: fix mempool virt populate with small chunks
doc: announce API change for mempool IOVA populate
mempool: return 0 if area is too small on populate
doc/guides/rel_notes/deprecation.rst | 5 ++
examples/ntb/ntb_fwd.c | 2 +-
lib/librte_mempool/meson.build | 1 +
lib/librte_mempool/rte_mempool.c | 88 ++++++++++++++++++++--
lib/librte_mempool/rte_mempool.h | 14 +++-
lib/librte_mempool/rte_mempool_version.map | 7 ++
6 files changed, 105 insertions(+), 12 deletions(-)
--
2.20.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* [dpdk-dev] [PATCH v2 1/3] mempool: fix mempool virt populate with small chunks
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 0/3] " Olivier Matz
@ 2020-01-17 14:57 ` Olivier Matz
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 2/3] doc: announce API change for mempool IOVA populate Olivier Matz
` (2 subsequent siblings)
3 siblings, 0 replies; 22+ messages in thread
From: Olivier Matz @ 2020-01-17 14:57 UTC (permalink / raw)
To: dev
Cc: Zhang, AlvinX, Burakov, Anatoly, Andrew Rybchenko,
Bruce Richardson, David Marchand, dpdk stable
To populate a mempool with a virtual area, the mempool code calls
rte_mempool_populate_iova() for each iova-contiguous area. It happens
(rarely) that this area is too small to store one object. In this case,
rte_mempool_populate_iova() returns an error, which is forwarded by
rte_mempool_populate_virt().
This case should not throw an error in
rte_mempool_populate_virt(). Instead, the area that is too small should
just be ignored.
To fix this issue, change the return value of
rte_mempool_populate_iova() to 0 when no object can be populated,
so it can be ignored by the caller. As this would be an API/ABI change,
only do this modification internally for now.
Fixes: 354788b60cfd ("mempool: allow populating with unaligned virtual area")
Cc: stable@dpdk.org
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Zhang Alvin <alvinx.zhang@intel.com>
---
lib/librte_mempool/rte_mempool.c | 30 +++++++++++++++++++++++++-----
1 file changed, 25 insertions(+), 5 deletions(-)
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index f8d453d21..b434d9f17 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -297,8 +297,8 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
* zone. Return the number of objects added, or a negative value
* on error.
*/
-int
-rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+static int
+__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
void *opaque)
{
@@ -332,7 +332,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_MEMPOOL_ALIGN) - vaddr;
if (off > len) {
- ret = -EINVAL;
+ ret = 0;
goto fail;
}
@@ -343,7 +343,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
/* not enough room to store one object */
if (i == 0) {
- ret = -EINVAL;
+ ret = 0;
goto fail;
}
@@ -356,6 +356,21 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
return ret;
}
+int
+rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+ rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque)
+{
+ int ret;
+
+ ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
+ opaque);
+ if (ret == 0)
+ ret = -EINVAL;
+
+ return ret;
+}
+
static rte_iova_t
get_iova(void *addr)
{
@@ -406,8 +421,10 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
break;
}
- ret = rte_mempool_populate_iova(mp, addr + off, iova,
+ ret = __rte_mempool_populate_iova(mp, addr + off, iova,
phys_len, free_cb, opaque);
+ if (ret == 0)
+ continue;
if (ret < 0)
goto fail;
/* no need to call the free callback for next chunks */
@@ -415,6 +432,9 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
cnt += ret;
}
+ if (cnt == 0)
+ return -EINVAL;
+
return cnt;
fail:
--
2.20.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* [dpdk-dev] [PATCH v2 2/3] doc: announce API change for mempool IOVA populate
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 0/3] " Olivier Matz
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 1/3] " Olivier Matz
@ 2020-01-17 14:57 ` Olivier Matz
2020-01-17 20:32 ` David Marchand
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 3/3] [20.05] mempool: return 0 if area is too small on populate Olivier Matz
2020-01-20 12:02 ` [dpdk-dev] [PATCH v2 0/3] mempool: fix mempool virt populate with small chunks Thomas Monjalon
3 siblings, 1 reply; 22+ messages in thread
From: Olivier Matz @ 2020-01-17 14:57 UTC (permalink / raw)
To: dev
Cc: Zhang, AlvinX, Burakov, Anatoly, Andrew Rybchenko,
Bruce Richardson, David Marchand, dpdk stable
Starting from v20.05, rte_mempool_populate_iova() will return
-ENOBUFS. The ABI will be preserved through symbol versioning until
20.11.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
doc/guides/rel_notes/deprecation.rst | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index afa94b43e..e11c7f436 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -86,3 +86,8 @@ Deprecation Notices
to set new power environment if power environment was already initialized.
In this case the function will return -1 unless the environment is unset first
(using ``rte_power_unset_env``). Other function usage scenarios will not change.
+
+* mempool: starting from v20.05, the API of rte_mempool_populate_iova()
+ and rte_mempool_populate_virt() will change to return 0 instead
+ of -EINVAL when there is not enough room to store one object. The ABI
+ will be preserved until 20.11.
--
2.20.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* [dpdk-dev] [PATCH v2 3/3] [20.05] mempool: return 0 if area is too small on populate
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 0/3] " Olivier Matz
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 1/3] " Olivier Matz
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 2/3] doc: announce API change for mempool IOVA populate Olivier Matz
@ 2020-01-17 14:57 ` Olivier Matz
2020-01-17 20:32 ` David Marchand
2020-04-25 22:23 ` [dpdk-dev] [PATCH v3] " Thomas Monjalon
2020-01-20 12:02 ` [dpdk-dev] [PATCH v2 0/3] mempool: fix mempool virt populate with small chunks Thomas Monjalon
3 siblings, 2 replies; 22+ messages in thread
From: Olivier Matz @ 2020-01-17 14:57 UTC (permalink / raw)
To: dev
Cc: Zhang, AlvinX, Burakov, Anatoly, Andrew Rybchenko,
Bruce Richardson, David Marchand, dpdk stable
Change rte_mempool_populate_iova() and rte_mempool_populate_virt() to
return 0 instead of -EINVAL when there is not enough room to store one
object, as it can be helpful for applications to distinguish this
specific case.
As this is an ABI change, use symbol versioning to preserve old
behavior for binary applications.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
examples/ntb/ntb_fwd.c | 2 +-
lib/librte_mempool/meson.build | 1 +
lib/librte_mempool/rte_mempool.c | 76 ++++++++++++++++++----
lib/librte_mempool/rte_mempool.h | 14 ++--
lib/librte_mempool/rte_mempool_version.map | 7 ++
5 files changed, 84 insertions(+), 16 deletions(-)
diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
index c914256dd..99a43688c 100644
--- a/examples/ntb/ntb_fwd.c
+++ b/examples/ntb/ntb_fwd.c
@@ -1313,7 +1313,7 @@ ntb_mbuf_pool_create(uint16_t mbuf_seg_size, uint32_t nb_mbuf,
mz->len - ntb_info.ntb_hdr_size,
ntb_mempool_mz_free,
(void *)(uintptr_t)mz);
- if (ret < 0) {
+ if (ret <= 0) {
rte_memzone_free(mz);
rte_mempool_free(mp);
return NULL;
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index f8710b61b..4fe267cd8 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -18,3 +18,4 @@ deps += ['ring']
# memseg walk is not yet part of stable API
allow_experimental_apis = true
+use_function_versioning = true
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index b434d9f17..0ebb21eec 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -31,6 +31,7 @@
#include <rte_string_fns.h>
#include <rte_spinlock.h>
#include <rte_tailq.h>
+#include <rte_function_versioning.h>
#include "rte_mempool.h"
@@ -293,12 +294,17 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
return 0;
}
+__vsym int
+rte_mempool_populate_iova_v20_0_1(struct rte_mempool *mp, char *vaddr,
+ rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque);
+
/* Add objects in the pool, using a physically contiguous memory
* zone. Return the number of objects added, or a negative value
* on error.
*/
-static int
-__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+__vsym int
+rte_mempool_populate_iova_v20_0_1(struct rte_mempool *mp, char *vaddr,
rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
void *opaque)
{
@@ -355,21 +361,34 @@ __rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
rte_free(memhdr);
return ret;
}
+BIND_DEFAULT_SYMBOL(rte_mempool_populate_iova, _v20_0_1, 20.0.1);
+MAP_STATIC_SYMBOL(
+ int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+ rte_iova_t iova, size_t len,
+ rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque),
+ rte_mempool_populate_iova_v20_0_1);
+
+__vsym int
+rte_mempool_populate_iova_v20_0(struct rte_mempool *mp, char *vaddr,
+ rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque);
-int
-rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+__vsym int
+rte_mempool_populate_iova_v20_0(struct rte_mempool *mp, char *vaddr,
rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
void *opaque)
{
int ret;
- ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
+ ret = rte_mempool_populate_iova_v20_0_1(mp, vaddr, iova, len, free_cb,
opaque);
if (ret == 0)
ret = -EINVAL;
return ret;
}
+VERSION_SYMBOL(rte_mempool_populate_iova, _v20_0, 20.0);
static rte_iova_t
get_iova(void *addr)
@@ -384,11 +403,16 @@ get_iova(void *addr)
return ms->iova + RTE_PTR_DIFF(addr, ms->addr);
}
+__vsym int
+rte_mempool_populate_virt_v20_0_1(struct rte_mempool *mp, char *addr,
+ size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque);
+
/* Populate the mempool with a virtual area. Return the number of
* objects added, or a negative value on error.
*/
-int
-rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
+__vsym int
+rte_mempool_populate_virt_v20_0_1(struct rte_mempool *mp, char *addr,
size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
void *opaque)
{
@@ -421,7 +445,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
break;
}
- ret = __rte_mempool_populate_iova(mp, addr + off, iova,
+ ret = rte_mempool_populate_iova_v20_0_1(mp, addr + off, iova,
phys_len, free_cb, opaque);
if (ret == 0)
continue;
@@ -432,15 +456,41 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
cnt += ret;
}
- if (cnt == 0)
- return -EINVAL;
-
return cnt;
fail:
rte_mempool_free_memchunks(mp);
return ret;
}
+BIND_DEFAULT_SYMBOL(rte_mempool_populate_virt, _v20_0_1, 20.0.1);
+MAP_STATIC_SYMBOL(
+ int rte_mempool_populate_virt(struct rte_mempool *mp,
+ char *addr, size_t len, size_t pg_sz,
+ rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque),
+ rte_mempool_populate_virt_v20_0_1);
+
+__vsym int
+rte_mempool_populate_virt_v20_0(struct rte_mempool *mp, char *addr,
+ size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque);
+
+__vsym int
+rte_mempool_populate_virt_v20_0(struct rte_mempool *mp, char *addr,
+ size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque)
+{
+ int ret;
+
+ ret = rte_mempool_populate_virt_v20_0_1(mp, addr, len, pg_sz,
+ free_cb, opaque);
+
+ if (ret == 0)
+ ret = -EINVAL;
+
+ return ret;
+}
+VERSION_SYMBOL(rte_mempool_populate_virt, _v20_0, 20.0);
/* Get the minimal page size used in a mempool before populating it. */
int
@@ -601,6 +651,8 @@ rte_mempool_populate_default(struct rte_mempool *mp)
mz->len, pg_sz,
rte_mempool_memchunk_mz_free,
(void *)(uintptr_t)mz);
+ if (ret == 0) /* should not happen */
+ ret = -ENOBUFS;
if (ret < 0) {
rte_memzone_free(mz);
goto fail;
@@ -692,6 +744,8 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
ret = rte_mempool_populate_virt(mp, addr, size, getpagesize(),
rte_mempool_memchunk_anon_free, addr);
+ if (ret == 0) /* should not happen */
+ ret = -ENOBUFS;
if (ret < 0) {
rte_errno = -ret;
goto fail;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 0a1dc6059..5a106a432 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1106,9 +1106,12 @@ rte_mempool_free(struct rte_mempool *mp);
* @param opaque
* An opaque argument passed to free_cb.
* @return
- * The number of objects added on success.
+ * The number of objects added on success (strictly positive).
* On error, the chunk is not added in the memory list of the
- * mempool and a negative errno is returned.
+ * mempool and the following code is returned:
+ * (0): not enough room in chunk for one object.
+ * (-ENOSPC): mempool is already populated.
+ * (-ENOMEM): allocation failure.
*/
int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
@@ -1133,9 +1136,12 @@ int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
* @param opaque
* An opaque argument passed to free_cb.
* @return
- * The number of objects added on success.
+ * The number of objects added on success (strictly positive).
* On error, the chunk is not added in the memory list of the
- * mempool and a negative errno is returned.
+ * mempool and the following code is returned:
+ * (0): not enough room in chunk for one object.
+ * (-ENOSPC): mempool is already populated.
+ * (-ENOMEM): allocation failure.
*/
int
rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index d002dfc46..34d977e47 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -35,6 +35,13 @@ DPDK_20.0 {
local: *;
};
+DPDK_20.0.1 {
+ global:
+
+ rte_mempool_populate_iova;
+ rte_mempool_populate_virt;
+} DPDK_20.0;
+
EXPERIMENTAL {
global:
--
2.20.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/3] [20.05] mempool: return 0 if area is too small on populate
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 3/3] [20.05] mempool: return 0 if area is too small on populate Olivier Matz
@ 2020-01-17 20:32 ` David Marchand
2020-04-25 22:23 ` [dpdk-dev] [PATCH v3] " Thomas Monjalon
1 sibling, 0 replies; 22+ messages in thread
From: David Marchand @ 2020-01-17 20:32 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, Zhang, AlvinX, Burakov, Anatoly, Andrew Rybchenko,
Bruce Richardson, dpdk stable
On Fri, Jan 17, 2020 at 3:58 PM Olivier Matz <olivier.matz@6wind.com> wrote:
>
> Change rte_mempool_populate_iova() and rte_mempool_populate_iova() to
> return 0 instead of -EINVAL when there is not enough room to store one
> object, as it can be helpful for applications to distinguish this
> specific case.
>
> As this is an ABI change, use symbol versioning to preserve old
> behavior for binary applications.
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
> examples/ntb/ntb_fwd.c | 2 +-
> lib/librte_mempool/meson.build | 1 +
> lib/librte_mempool/rte_mempool.c | 76 ++++++++++++++++++----
> lib/librte_mempool/rte_mempool.h | 14 ++--
> lib/librte_mempool/rte_mempool_version.map | 7 ++
> 5 files changed, 84 insertions(+), 16 deletions(-)
>
> diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
> index c914256dd..99a43688c 100644
> --- a/examples/ntb/ntb_fwd.c
> +++ b/examples/ntb/ntb_fwd.c
> @@ -1313,7 +1313,7 @@ ntb_mbuf_pool_create(uint16_t mbuf_seg_size, uint32_t nb_mbuf,
> mz->len - ntb_info.ntb_hdr_size,
> ntb_mempool_mz_free,
> (void *)(uintptr_t)mz);
> - if (ret < 0) {
> + if (ret <= 0) {
> rte_memzone_free(mz);
> rte_mempool_free(mp);
> return NULL;
> diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
> index f8710b61b..4fe267cd8 100644
> --- a/lib/librte_mempool/meson.build
> +++ b/lib/librte_mempool/meson.build
> @@ -18,3 +18,4 @@ deps += ['ring']
>
> # memseg walk is not yet part of stable API
> allow_experimental_apis = true
> +use_function_versioning = true
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index b434d9f17..0ebb21eec 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -31,6 +31,7 @@
> #include <rte_string_fns.h>
> #include <rte_spinlock.h>
> #include <rte_tailq.h>
> +#include <rte_function_versioning.h>
>
> #include "rte_mempool.h"
>
> @@ -293,12 +294,17 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
> return 0;
> }
>
> +__vsym int
> +rte_mempool_populate_iova_v20_0_1(struct rte_mempool *mp, char *vaddr,
> + rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> + void *opaque);
> +
For 20.05, this should be 20.0.2.
> /* Add objects in the pool, using a physically contiguous memory
> * zone. Return the number of objects added, or a negative value
> * on error.
> */
> -static int
> -__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> +__vsym int
> +rte_mempool_populate_iova_v20_0_1(struct rte_mempool *mp, char *vaddr,
> rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> void *opaque)
> {
> @@ -355,21 +361,34 @@ __rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> rte_free(memhdr);
> return ret;
> }
> +BIND_DEFAULT_SYMBOL(rte_mempool_populate_iova, _v20_0_1, 20.0.1);
> +MAP_STATIC_SYMBOL(
> + int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> + rte_iova_t iova, size_t len,
> + rte_mempool_memchunk_free_cb_t *free_cb,
> + void *opaque),
> + rte_mempool_populate_iova_v20_0_1);
> +
> +__vsym int
> +rte_mempool_populate_iova_v20_0(struct rte_mempool *mp, char *vaddr,
> + rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> + void *opaque);
>
> -int
> -rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> +__vsym int
> +rte_mempool_populate_iova_v20_0(struct rte_mempool *mp, char *vaddr,
> rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> void *opaque)
> {
> int ret;
>
> - ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
> + ret = rte_mempool_populate_iova_v20_0_1(mp, vaddr, iova, len, free_cb,
> opaque);
> if (ret == 0)
> ret = -EINVAL;
>
> return ret;
> }
> +VERSION_SYMBOL(rte_mempool_populate_iova, _v20_0, 20.0);
>
> static rte_iova_t
> get_iova(void *addr)
> @@ -384,11 +403,16 @@ get_iova(void *addr)
> return ms->iova + RTE_PTR_DIFF(addr, ms->addr);
> }
>
> +__vsym int
> +rte_mempool_populate_virt_v20_0_1(struct rte_mempool *mp, char *addr,
> + size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
> + void *opaque);
> +
> /* Populate the mempool with a virtual area. Return the number of
> * objects added, or a negative value on error.
> */
> -int
> -rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> +__vsym int
> +rte_mempool_populate_virt_v20_0_1(struct rte_mempool *mp, char *addr,
> size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
> void *opaque)
> {
> @@ -421,7 +445,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> break;
> }
>
> - ret = __rte_mempool_populate_iova(mp, addr + off, iova,
> + ret = rte_mempool_populate_iova_v20_0_1(mp, addr + off, iova,
> phys_len, free_cb, opaque);
> if (ret == 0)
> continue;
> @@ -432,15 +456,41 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> cnt += ret;
> }
>
> - if (cnt == 0)
> - return -EINVAL;
> -
> return cnt;
>
> fail:
> rte_mempool_free_memchunks(mp);
> return ret;
> }
> +BIND_DEFAULT_SYMBOL(rte_mempool_populate_virt, _v20_0_1, 20.0.1);
> +MAP_STATIC_SYMBOL(
> + int rte_mempool_populate_virt(struct rte_mempool *mp,
> + char *addr, size_t len, size_t pg_sz,
> + rte_mempool_memchunk_free_cb_t *free_cb,
> + void *opaque),
> + rte_mempool_populate_virt_v20_0_1);
> +
> +__vsym int
> +rte_mempool_populate_virt_v20_0(struct rte_mempool *mp, char *addr,
> + size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
> + void *opaque);
> +
> +__vsym int
> +rte_mempool_populate_virt_v20_0(struct rte_mempool *mp, char *addr,
> + size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
> + void *opaque)
> +{
> + int ret;
> +
> + ret = rte_mempool_populate_virt_v20_0_1(mp, addr, len, pg_sz,
> + free_cb, opaque);
> +
> + if (ret == 0)
> + ret = -EINVAL;
> +
> + return ret;
> +}
> +VERSION_SYMBOL(rte_mempool_populate_virt, _v20_0, 20.0);
>
> /* Get the minimal page size used in a mempool before populating it. */
> int
> @@ -601,6 +651,8 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> mz->len, pg_sz,
> rte_mempool_memchunk_mz_free,
> (void *)(uintptr_t)mz);
> + if (ret == 0) /* should not happen */
> + ret = -ENOBUFS;
> if (ret < 0) {
> rte_memzone_free(mz);
> goto fail;
> @@ -692,6 +744,8 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
>
> ret = rte_mempool_populate_virt(mp, addr, size, getpagesize(),
> rte_mempool_memchunk_anon_free, addr);
> + if (ret == 0) /* should not happen */
> + ret = -ENOBUFS;
> if (ret < 0) {
> rte_errno = -ret;
> goto fail;
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 0a1dc6059..5a106a432 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -1106,9 +1106,12 @@ rte_mempool_free(struct rte_mempool *mp);
> * @param opaque
> * An opaque argument passed to free_cb.
> * @return
> - * The number of objects added on success.
> + * The number of objects added on success (strictly positive).
> * On error, the chunk is not added in the memory list of the
> - * mempool and a negative errno is returned.
> + * mempool the following code is returned:
> + * (0): not enough room in chunk for one object.
> + * (-ENOSPC): mempool is already populated.
> + * (-ENOMEM): allocation failure.
> */
> int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> @@ -1133,9 +1136,12 @@ int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> * @param opaque
> * An opaque argument passed to free_cb.
> * @return
> - * The number of objects added on success.
> + * The number of objects added on success (strictly positive).
> * On error, the chunk is not added in the memory list of the
> - * mempool and a negative errno is returned.
> + * mempool the following code is returned:
> + * (0): not enough room in chunk for one object.
> + * (-ENOSPC): mempool is already populated.
> + * (-ENOMEM): allocation failure.
> */
> int
> rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
> index d002dfc46..34d977e47 100644
> --- a/lib/librte_mempool/rte_mempool_version.map
> +++ b/lib/librte_mempool/rte_mempool_version.map
> @@ -35,6 +35,13 @@ DPDK_20.0 {
> local: *;
> };
>
> +DPDK_20.0.1 {
> + global:
> +
> + rte_mempool_populate_iova;
> + rte_mempool_populate_virt;
> +} DPDK_20.0;
> +
> EXPERIMENTAL {
> global:
>
> --
> 2.20.1
>
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/3] doc: announce API change for mempool IOVA populate
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 2/3] doc: announce API change for mempool IOVA populate Olivier Matz
@ 2020-01-17 20:32 ` David Marchand
0 siblings, 0 replies; 22+ messages in thread
From: David Marchand @ 2020-01-17 20:32 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, Zhang, AlvinX, Burakov, Anatoly, Andrew Rybchenko,
Bruce Richardson, dpdk stable
On Fri, Jan 17, 2020 at 3:58 PM Olivier Matz <olivier.matz@6wind.com> wrote:
>
> Starting from v20.05, rte_mempool_populate_iova() will return
> -ENOBUFS. The ABI will be preserved through symbol versioning until
This deprecation notice describes a 0 return value, not -ENOBUFS.
> 20.11.
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index afa94b43e..e11c7f436 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -86,3 +86,8 @@ Deprecation Notices
> to set new power environment if power environment was already initialized.
> In this case the function will return -1 unless the environment is unset first
> (using ``rte_power_unset_env``). Other function usage scenarios will not change.
> +
> +* mempool: starting from v20.05, the API of rte_mempool_populate_iova()
> + and rte_mempool_populate_virt() will change to return 0 instead
> + of -EINVAL when there is not enough room to store one object. The ABI
> + will be preserved until 20.11.
> --
> 2.20.1
>
--
David Marchand
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH v2 0/3] mempool: fix mempool virt populate with small chunks
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 0/3] " Olivier Matz
` (2 preceding siblings ...)
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 3/3] [20.05] mempool: return 0 if area is too small on populate Olivier Matz
@ 2020-01-20 12:02 ` Thomas Monjalon
3 siblings, 0 replies; 22+ messages in thread
From: Thomas Monjalon @ 2020-01-20 12:02 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, Zhang, AlvinX, Burakov, Anatoly, Andrew Rybchenko,
Bruce Richardson, David Marchand, dpdk stable
17/01/2020 15:57, Olivier Matz:
> rte_mempool_populate_virt() sometimes fail, when it calls
> rte_mempool_populate_iova() with an area which is too small to store one
> object. This should not be an error.
>
> I prepared a v2 which implements an ABI compatibility through symbol
> versioning, as suggested [1]. It looks a bit overkill to me, but it
> was an interresting exercice.
[...]
> Olivier Matz (3):
> mempool: fix mempool virt populate with small chunks
> doc: announce API change for mempool IOVA populate
> mempool: return 0 if area is too small on populate
Applied patches 1 & 2 (with commit log fixed)
Patch 3 is for 20.05.
^ permalink raw reply [flat|nested] 22+ messages in thread
* [dpdk-dev] [PATCH v3] mempool: return 0 if area is too small on populate
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 3/3] [20.05] mempool: return 0 if area is too small on populate Olivier Matz
2020-01-17 20:32 ` David Marchand
@ 2020-04-25 22:23 ` Thomas Monjalon
2020-04-26 16:52 ` [dpdk-dev] [PATCH v4] " Thomas Monjalon
2020-04-27 11:44 ` [dpdk-dev] [PATCH v3] " Ray Kinsella
1 sibling, 2 replies; 22+ messages in thread
From: Thomas Monjalon @ 2020-04-25 22:23 UTC (permalink / raw)
To: dev
Cc: david.marchand, Ray Kinsella, Olivier Matz, Neil Horman,
John McNamara, Marko Kovacevic, Xiaoyun Li, Jingjing Wu,
Andrew Rybchenko
From: Olivier Matz <olivier.matz@6wind.com>
Change rte_mempool_populate_iova() and rte_mempool_populate_iova() to
return 0 instead of -EINVAL when there is not enough room to store one
object, as it can be helpful for applications to distinguish this
specific case.
As this is an ABI change, use symbol versioning to preserve old
behavior for binary applications.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
changes in v3:
- rebase
- remove deprecation notice
- notify API change in release notes
- fix ABI version from 20.0.1 to 20.0.2 (should be 21 maybe)
This v3 cannot be merged because of a false positive ABI check:
2 Removed functions:
'function int rte_mempool_populate_iova(rte_mempool*, char*, rte_iova_t, size_t, rte_mempool_memchunk_free_cb_t*, void*)' {rte_mempool_populate_iova@@DPDK_20.0}
'function int rte_mempool_populate_virt(rte_mempool*, char*, size_t, size_t, rte_mempool_memchunk_free_cb_t*, void*)' {rte_mempool_populate_virt@@DPDK_20.0}
---
doc/guides/rel_notes/deprecation.rst | 5 --
doc/guides/rel_notes/release_20_05.rst | 4 ++
examples/ntb/ntb_fwd.c | 2 +-
lib/librte_mempool/meson.build | 2 +
lib/librte_mempool/rte_mempool.c | 77 ++++++++++++++++++----
lib/librte_mempool/rte_mempool.h | 14 ++--
lib/librte_mempool/rte_mempool_version.map | 7 ++
7 files changed, 90 insertions(+), 21 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 1339f54f5f..20aa745b77 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -65,11 +65,6 @@ Deprecation Notices
structure would be made internal (or removed if all dependencies are cleared)
in future releases.
-* mempool: starting from v20.05, the API of rte_mempool_populate_iova()
- and rte_mempool_populate_virt() will change to return 0 instead
- of -EINVAL when there is not enough room to store one object. The ABI
- will be preserved until 20.11.
-
* ethdev: the legacy filter API, including
``rte_eth_dev_filter_supported()``, ``rte_eth_dev_filter_ctrl()`` as well
as filter types MACVLAN, ETHERTYPE, FLEXIBLE, SYN, NTUPLE, TUNNEL, FDIR,
diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index b124c3f287..ab20a7d021 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -241,6 +241,10 @@ API Changes
Also, make sure to start the actual text at the margin.
=========================================================
+* mempool: The API of ``rte_mempool_populate_iova()`` and
+ ``rte_mempool_populate_virt()`` changed to return 0 instead of -EINVAL
+ when there is not enough room to store one object.
+
ABI Changes
-----------
diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
index d49189e175..eba8ebf9fa 100644
--- a/examples/ntb/ntb_fwd.c
+++ b/examples/ntb/ntb_fwd.c
@@ -1319,7 +1319,7 @@ ntb_mbuf_pool_create(uint16_t mbuf_seg_size, uint32_t nb_mbuf,
mz->len - ntb_info.ntb_hdr_size,
ntb_mempool_mz_free,
(void *)(uintptr_t)mz);
- if (ret < 0) {
+ if (ret <= 0) {
rte_memzone_free(mz);
rte_mempool_free(mp);
return NULL;
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index a6e861cbfc..7dbe6b9bea 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -9,6 +9,8 @@ foreach flag: extra_flags
endif
endforeach
+use_function_versioning = true
+
sources = files('rte_mempool.c', 'rte_mempool_ops.c',
'rte_mempool_ops_default.c', 'mempool_trace_points.c')
headers = files('rte_mempool.h', 'rte_mempool_trace.h',
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 0be8f9f59d..edbdafaafb 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -31,6 +31,7 @@
#include <rte_string_fns.h>
#include <rte_spinlock.h>
#include <rte_tailq.h>
+#include <rte_function_versioning.h>
#include "rte_mempool.h"
#include "rte_mempool_trace.h"
@@ -303,12 +304,17 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
return 0;
}
+__vsym int
+rte_mempool_populate_iova_v20_0_2(struct rte_mempool *mp, char *vaddr,
+ rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque);
+
/* Add objects in the pool, using a physically contiguous memory
* zone. Return the number of objects added, or a negative value
* on error.
*/
-static int
-__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+__vsym int
+rte_mempool_populate_iova_v20_0_2(struct rte_mempool *mp, char *vaddr,
rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
void *opaque)
{
@@ -366,14 +372,27 @@ __rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
return ret;
}
-int
-rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+BIND_DEFAULT_SYMBOL(rte_mempool_populate_iova, _v20_0_2, 20.0.2);
+MAP_STATIC_SYMBOL(
+ int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+ rte_iova_t iova, size_t len,
+ rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque),
+ rte_mempool_populate_iova_v20_0_2);
+
+__vsym int
+rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
+ rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque);
+
+__vsym int
+rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
void *opaque)
{
int ret;
- ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
+ ret = rte_mempool_populate_iova_v20_0_2(mp, vaddr, iova, len, free_cb,
opaque);
if (ret == 0)
ret = -EINVAL;
@@ -381,6 +400,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
rte_mempool_trace_populate_iova(mp, vaddr, iova, len, free_cb, opaque);
return ret;
}
+VERSION_SYMBOL(rte_mempool_populate_iova, _v20_0, 20.0);
static rte_iova_t
get_iova(void *addr)
@@ -395,11 +415,16 @@ get_iova(void *addr)
return ms->iova + RTE_PTR_DIFF(addr, ms->addr);
}
+__vsym int
+rte_mempool_populate_virt_v20_0_2(struct rte_mempool *mp, char *addr,
+ size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque);
+
/* Populate the mempool with a virtual area. Return the number of
* objects added, or a negative value on error.
*/
-int
-rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
+__vsym int
+rte_mempool_populate_virt_v20_0_2(struct rte_mempool *mp, char *addr,
size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
void *opaque)
{
@@ -432,7 +457,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
break;
}
- ret = __rte_mempool_populate_iova(mp, addr + off, iova,
+ ret = rte_mempool_populate_iova_v20_0_2(mp, addr + off, iova,
phys_len, free_cb, opaque);
if (ret == 0)
continue;
@@ -443,9 +468,6 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
cnt += ret;
}
- if (cnt == 0)
- return -EINVAL;
-
rte_mempool_trace_populate_virt(mp, addr, len, pg_sz, free_cb, opaque);
return cnt;
@@ -453,6 +475,35 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
rte_mempool_free_memchunks(mp);
return ret;
}
+BIND_DEFAULT_SYMBOL(rte_mempool_populate_virt, _v20_0_2, 20.0.2);
+MAP_STATIC_SYMBOL(
+ int rte_mempool_populate_virt(struct rte_mempool *mp,
+ char *addr, size_t len, size_t pg_sz,
+ rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque),
+ rte_mempool_populate_virt_v20_0_2);
+
+__vsym int
+rte_mempool_populate_virt_v20(struct rte_mempool *mp, char *addr,
+ size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque);
+
+__vsym int
+rte_mempool_populate_virt_v20(struct rte_mempool *mp, char *addr,
+ size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque)
+{
+ int ret;
+
+ ret = rte_mempool_populate_virt_v20_0_2(mp, addr, len, pg_sz,
+ free_cb, opaque);
+
+ if (ret == 0)
+ ret = -EINVAL;
+
+ return ret;
+}
+VERSION_SYMBOL(rte_mempool_populate_virt, _v20_0, 20.0);
/* Get the minimal page size used in a mempool before populating it. */
int
@@ -609,6 +660,8 @@ rte_mempool_populate_default(struct rte_mempool *mp)
mz->len, pg_sz,
rte_mempool_memchunk_mz_free,
(void *)(uintptr_t)mz);
+ if (ret == 0) /* should not happen */
+ ret = -ENOBUFS;
if (ret < 0) {
rte_memzone_free(mz);
goto fail;
@@ -701,6 +754,8 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
ret = rte_mempool_populate_virt(mp, addr, size, getpagesize(),
rte_mempool_memchunk_anon_free, addr);
+ if (ret == 0) /* should not happen */
+ ret = -ENOBUFS;
if (ret < 0) {
rte_errno = -ret;
goto fail;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 6e0573ea42..652d19f9f1 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1112,9 +1112,12 @@ rte_mempool_free(struct rte_mempool *mp);
* @param opaque
* An opaque argument passed to free_cb.
* @return
- * The number of objects added on success.
+ * The number of objects added on success (strictly positive).
* On error, the chunk is not added in the memory list of the
- * mempool and a negative errno is returned.
+ * mempool the following code is returned:
+ * (0): not enough room in chunk for one object.
+ * (-ENOSPC): mempool is already populated.
+ * (-ENOMEM): allocation failure.
*/
int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
@@ -1139,9 +1142,12 @@ int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
* @param opaque
* An opaque argument passed to free_cb.
* @return
- * The number of objects added on success.
+ * The number of objects added on success (strictly positive).
* On error, the chunk is not added in the memory list of the
- * mempool and a negative errno is returned.
+ * mempool the following code is returned:
+ * (0): not enough room in chunk for one object.
+ * (-ENOSPC): mempool is already populated.
+ * (-ENOMEM): allocation failure.
*/
int
rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 695dd6e04f..9e58093665 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -31,6 +31,13 @@ DPDK_20.0 {
local: *;
};
+DPDK_20.0.2 {
+ global:
+
+ rte_mempool_populate_iova;
+ rte_mempool_populate_virt;
+} DPDK_20.0;
+
EXPERIMENTAL {
global:
--
2.26.0
^ permalink raw reply [flat|nested] 22+ messages in thread
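As a usage sketch of the application-side impact (illustrative only; the helper name is made up, the free callback is left NULL purely for brevity, and the error mapping mirrors the ntb_fwd.c hunk above), a caller that populates a pool chunk by chunk now has to treat 0 as "nothing was added" rather than as success:
#include <errno.h>
#include <rte_mempool.h>

/* Illustrative only: populate one chunk and report failure when it cannot
 * hold a single object, following the `ret < 0` -> `ret <= 0` change made
 * in examples/ntb/ntb_fwd.c above.
 */
static int
populate_one_chunk_sketch(struct rte_mempool *mp, char *vaddr, rte_iova_t iova,
                size_t len)
{
        int ret;

        ret = rte_mempool_populate_iova(mp, vaddr, iova, len,
                        NULL /* free_cb omitted for brevity */, NULL);
        if (ret == 0)           /* chunk too small for one object */
                return -ENOBUFS;
        return ret;             /* <0: -ENOSPC, -ENOMEM, ...; >0: objects added */
}
Whether a too-small chunk is fatal is an application decision; the ntb example chooses to abort pool creation, while rte_mempool_populate_virt() in the same patch simply skips the segment.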
* [dpdk-dev] [PATCH v4] mempool: return 0 if area is too small on populate
2020-04-25 22:23 ` [dpdk-dev] [PATCH v3] " Thomas Monjalon
@ 2020-04-26 16:52 ` Thomas Monjalon
2020-05-04 12:49 ` [dpdk-dev] [PATCH v5] " Olivier Matz
2020-04-27 11:44 ` [dpdk-dev] [PATCH v3] " Ray Kinsella
1 sibling, 1 reply; 22+ messages in thread
From: Thomas Monjalon @ 2020-04-26 16:52 UTC (permalink / raw)
To: dev
Cc: david.marchand, Ray Kinsella, Olivier Matz, Neil Horman,
John McNamara, Marko Kovacevic, Xiaoyun Li, Jingjing Wu,
Andrew Rybchenko
From: Olivier Matz <olivier.matz@6wind.com>
Change rte_mempool_populate_iova() and rte_mempool_populate_iova() to
return 0 instead of -EINVAL when there is not enough room to store one
object, as it can be helpful for applications to distinguish this
specific case.
As this is an ABI change, use symbol versioning to preserve old
behavior for binary applications.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
We are missing the symbols versioned for the previous ABI:
rte_mempool_populate_iova@@DPDK_20.0
rte_mempool_populate_virt@@DPDK_20.0
changes in v4:
- move populate_iova trace in v20_0_2 base function
changes in v3:
- rebase
- remove deprecation notice
- notify API change in release notes
- fix ABI version from 20.0.1 to 20.0.2 (should be 21 maybe)
---
doc/guides/rel_notes/deprecation.rst | 5 --
doc/guides/rel_notes/release_20_05.rst | 4 ++
examples/ntb/ntb_fwd.c | 2 +-
lib/librte_mempool/meson.build | 2 +
lib/librte_mempool/rte_mempool.c | 80 ++++++++++++++++++----
lib/librte_mempool/rte_mempool.h | 14 ++--
lib/librte_mempool/rte_mempool_version.map | 7 ++
7 files changed, 92 insertions(+), 22 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 1339f54f5f..20aa745b77 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -65,11 +65,6 @@ Deprecation Notices
structure would be made internal (or removed if all dependencies are cleared)
in future releases.
-* mempool: starting from v20.05, the API of rte_mempool_populate_iova()
- and rte_mempool_populate_virt() will change to return 0 instead
- of -EINVAL when there is not enough room to store one object. The ABI
- will be preserved until 20.11.
-
* ethdev: the legacy filter API, including
``rte_eth_dev_filter_supported()``, ``rte_eth_dev_filter_ctrl()`` as well
as filter types MACVLAN, ETHERTYPE, FLEXIBLE, SYN, NTUPLE, TUNNEL, FDIR,
diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index b124c3f287..ab20a7d021 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -241,6 +241,10 @@ API Changes
Also, make sure to start the actual text at the margin.
=========================================================
+* mempool: The API of ``rte_mempool_populate_iova()`` and
+ ``rte_mempool_populate_virt()`` changed to return 0 instead of -EINVAL
+ when there is not enough room to store one object.
+
ABI Changes
-----------
diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
index d49189e175..eba8ebf9fa 100644
--- a/examples/ntb/ntb_fwd.c
+++ b/examples/ntb/ntb_fwd.c
@@ -1319,7 +1319,7 @@ ntb_mbuf_pool_create(uint16_t mbuf_seg_size, uint32_t nb_mbuf,
mz->len - ntb_info.ntb_hdr_size,
ntb_mempool_mz_free,
(void *)(uintptr_t)mz);
- if (ret < 0) {
+ if (ret <= 0) {
rte_memzone_free(mz);
rte_mempool_free(mp);
return NULL;
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index a6e861cbfc..7dbe6b9bea 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -9,6 +9,8 @@ foreach flag: extra_flags
endif
endforeach
+use_function_versioning = true
+
sources = files('rte_mempool.c', 'rte_mempool_ops.c',
'rte_mempool_ops_default.c', 'mempool_trace_points.c')
headers = files('rte_mempool.h', 'rte_mempool_trace.h',
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 0be8f9f59d..0a6119d6ad 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -31,6 +31,7 @@
#include <rte_string_fns.h>
#include <rte_spinlock.h>
#include <rte_tailq.h>
+#include <rte_function_versioning.h>
#include "rte_mempool.h"
#include "rte_mempool_trace.h"
@@ -303,12 +304,17 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
return 0;
}
+__vsym int
+rte_mempool_populate_iova_v20_0_2(struct rte_mempool *mp, char *vaddr,
+ rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque);
+
/* Add objects in the pool, using a physically contiguous memory
* zone. Return the number of objects added, or a negative value
* on error.
*/
-static int
-__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+__vsym int
+rte_mempool_populate_iova_v20_0_2(struct rte_mempool *mp, char *vaddr,
rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
void *opaque)
{
@@ -359,6 +365,8 @@ __rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
mp->nb_mem_chunks++;
+
+ rte_mempool_trace_populate_iova(mp, vaddr, iova, len, free_cb, opaque);
return i;
fail:
@@ -366,21 +374,34 @@ __rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
return ret;
}
-int
-rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+BIND_DEFAULT_SYMBOL(rte_mempool_populate_iova, _v20_0_2, 20.0.2);
+MAP_STATIC_SYMBOL(
+ int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+ rte_iova_t iova, size_t len,
+ rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque),
+ rte_mempool_populate_iova_v20_0_2);
+
+__vsym int
+rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
+ rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque);
+
+__vsym int
+rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
void *opaque)
{
int ret;
- ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
+ ret = rte_mempool_populate_iova_v20_0_2(mp, vaddr, iova, len, free_cb,
opaque);
if (ret == 0)
ret = -EINVAL;
- rte_mempool_trace_populate_iova(mp, vaddr, iova, len, free_cb, opaque);
return ret;
}
+VERSION_SYMBOL(rte_mempool_populate_iova, _v20_0, 20.0);
static rte_iova_t
get_iova(void *addr)
@@ -395,11 +416,16 @@ get_iova(void *addr)
return ms->iova + RTE_PTR_DIFF(addr, ms->addr);
}
+__vsym int
+rte_mempool_populate_virt_v20_0_2(struct rte_mempool *mp, char *addr,
+ size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque);
+
/* Populate the mempool with a virtual area. Return the number of
* objects added, or a negative value on error.
*/
-int
-rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
+__vsym int
+rte_mempool_populate_virt_v20_0_2(struct rte_mempool *mp, char *addr,
size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
void *opaque)
{
@@ -432,7 +458,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
break;
}
- ret = __rte_mempool_populate_iova(mp, addr + off, iova,
+ ret = rte_mempool_populate_iova_v20_0_2(mp, addr + off, iova,
phys_len, free_cb, opaque);
if (ret == 0)
continue;
@@ -443,9 +469,6 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
cnt += ret;
}
- if (cnt == 0)
- return -EINVAL;
-
rte_mempool_trace_populate_virt(mp, addr, len, pg_sz, free_cb, opaque);
return cnt;
@@ -453,6 +476,35 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
rte_mempool_free_memchunks(mp);
return ret;
}
+BIND_DEFAULT_SYMBOL(rte_mempool_populate_virt, _v20_0_2, 20.0.2);
+MAP_STATIC_SYMBOL(
+ int rte_mempool_populate_virt(struct rte_mempool *mp,
+ char *addr, size_t len, size_t pg_sz,
+ rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque),
+ rte_mempool_populate_virt_v20_0_2);
+
+__vsym int
+rte_mempool_populate_virt_v20(struct rte_mempool *mp, char *addr,
+ size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque);
+
+__vsym int
+rte_mempool_populate_virt_v20(struct rte_mempool *mp, char *addr,
+ size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque)
+{
+ int ret;
+
+ ret = rte_mempool_populate_virt_v20_0_2(mp, addr, len, pg_sz,
+ free_cb, opaque);
+
+ if (ret == 0)
+ ret = -EINVAL;
+
+ return ret;
+}
+VERSION_SYMBOL(rte_mempool_populate_virt, _v20_0, 20.0);
/* Get the minimal page size used in a mempool before populating it. */
int
@@ -609,6 +661,8 @@ rte_mempool_populate_default(struct rte_mempool *mp)
mz->len, pg_sz,
rte_mempool_memchunk_mz_free,
(void *)(uintptr_t)mz);
+ if (ret == 0) /* should not happen */
+ ret = -ENOBUFS;
if (ret < 0) {
rte_memzone_free(mz);
goto fail;
@@ -701,6 +755,8 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
ret = rte_mempool_populate_virt(mp, addr, size, getpagesize(),
rte_mempool_memchunk_anon_free, addr);
+ if (ret == 0) /* should not happen */
+ ret = -ENOBUFS;
if (ret < 0) {
rte_errno = -ret;
goto fail;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 6e0573ea42..652d19f9f1 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1112,9 +1112,12 @@ rte_mempool_free(struct rte_mempool *mp);
* @param opaque
* An opaque argument passed to free_cb.
* @return
- * The number of objects added on success.
+ * The number of objects added on success (strictly positive).
* On error, the chunk is not added in the memory list of the
- * mempool and a negative errno is returned.
+ * mempool the following code is returned:
+ * (0): not enough room in chunk for one object.
+ * (-ENOSPC): mempool is already populated.
+ * (-ENOMEM): allocation failure.
*/
int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
@@ -1139,9 +1142,12 @@ int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
* @param opaque
* An opaque argument passed to free_cb.
* @return
- * The number of objects added on success.
+ * The number of objects added on success (strictly positive).
* On error, the chunk is not added in the memory list of the
- * mempool and a negative errno is returned.
+ * mempool the following code is returned:
+ * (0): not enough room in chunk for one object.
+ * (-ENOSPC): mempool is already populated.
+ * (-ENOMEM): allocation failure.
*/
int
rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 695dd6e04f..9e58093665 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -31,6 +31,13 @@ DPDK_20.0 {
local: *;
};
+DPDK_20.0.2 {
+ global:
+
+ rte_mempool_populate_iova;
+ rte_mempool_populate_virt;
+} DPDK_20.0;
+
EXPERIMENTAL {
global:
--
2.26.0
^ permalink raw reply [flat|nested] 22+ messages in thread
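The only functional difference between v3 and v4 is where the trace point is emitted. As a control-flow sketch (names simplified and the trace macro stubbed out, so this is an illustration rather than the library code): with the trace moved into the default implementation, callers bound to the new version emit the event, and the v20 compatibility wrapper, which calls into the same function, still emits it too.
#include <errno.h>
#include <rte_mempool.h>

/* Stand-in for rte_mempool_trace_populate_iova(); the real trace point is
 * internal to librte_mempool and is not reproduced here.
 */
#define TRACE_POPULATE_IOVA(mp, vaddr, iova, len) \
        do { (void)(mp); (void)(vaddr); (void)(iova); (void)(len); } while (0)

/* Default (new ABI) implementation: traces, returns 0 when no object fits. */
static int
populate_iova_impl_sketch(struct rte_mempool *mp, char *vaddr, rte_iova_t iova,
                size_t len)
{
        int n = 0;
        /* ... object placement elided in this sketch ... */
        TRACE_POPULATE_IOVA(mp, vaddr, iova, len);
        return n;
}

/* v20 compatibility path: same trace via the call above, old -EINVAL kept. */
static int
populate_iova_compat_v20_sketch(struct rte_mempool *mp, char *vaddr,
                rte_iova_t iova, size_t len)
{
        int ret = populate_iova_impl_sketch(mp, vaddr, iova, len);

        return ret == 0 ? -EINVAL : ret;
}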
* Re: [dpdk-dev] [PATCH v3] mempool: return 0 if area is too small on populate
2020-04-25 22:23 ` [dpdk-dev] [PATCH v3] " Thomas Monjalon
2020-04-26 16:52 ` [dpdk-dev] [PATCH v4] " Thomas Monjalon
@ 2020-04-27 11:44 ` Ray Kinsella
2020-04-27 18:02 ` Lukasz Wojciechowski
1 sibling, 1 reply; 22+ messages in thread
From: Ray Kinsella @ 2020-04-27 11:44 UTC (permalink / raw)
To: Thomas Monjalon, dev
Cc: david.marchand, Olivier Matz, Neil Horman, John McNamara,
Marko Kovacevic, Xiaoyun Li, Jingjing Wu, Andrew Rybchenko
On 25/04/2020 23:23, Thomas Monjalon wrote:
> From: Olivier Matz <olivier.matz@6wind.com>
>
> Change rte_mempool_populate_iova() and rte_mempool_populate_iova() to
> return 0 instead of -EINVAL when there is not enough room to store one
> object, as it can be helpful for applications to distinguish this
> specific case.
>
> As this is an ABI change, use symbol versioning to preserve old
> behavior for binary applications.
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
>
> changes in v3:
> - rebase
> - remove deprecation notice
> - notify API change in release notes
> - fix ABI version from 20.0.1 to 20.0.2 (should be 21 maybe)
yes, should be v21.
>
> This v3 cannot be merged because of a false positive ABI check:
>
> 2 Removed functions:
> 'function int rte_mempool_populate_iova(rte_mempool*, char*, rte_iova_t, size_t, rte_mempool_memchunk_free_cb_t*, void*)' {rte_mempool_populate_iova@@DPDK_20.0}
> 'function int rte_mempool_populate_virt(rte_mempool*, char*, size_t, size_t, rte_mempool_memchunk_free_cb_t*, void*)' {rte_mempool_populate_virt@@DPDK_20.0}
Well it's not really a false positive, as the v20 symbols are now missing.
See notes on VERSION_SYMBOL.
>
> ---
> doc/guides/rel_notes/deprecation.rst | 5 --
> doc/guides/rel_notes/release_20_05.rst | 4 ++
> examples/ntb/ntb_fwd.c | 2 +-
> lib/librte_mempool/meson.build | 2 +
> lib/librte_mempool/rte_mempool.c | 77 ++++++++++++++++++----
> lib/librte_mempool/rte_mempool.h | 14 ++--
> lib/librte_mempool/rte_mempool_version.map | 7 ++
> 7 files changed, 90 insertions(+), 21 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 1339f54f5f..20aa745b77 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -65,11 +65,6 @@ Deprecation Notices
> structure would be made internal (or removed if all dependencies are cleared)
> in future releases.
>
> -* mempool: starting from v20.05, the API of rte_mempool_populate_iova()
> - and rte_mempool_populate_virt() will change to return 0 instead
> - of -EINVAL when there is not enough room to store one object. The ABI
> - will be preserved until 20.11.
> -
> * ethdev: the legacy filter API, including
> ``rte_eth_dev_filter_supported()``, ``rte_eth_dev_filter_ctrl()`` as well
> as filter types MACVLAN, ETHERTYPE, FLEXIBLE, SYN, NTUPLE, TUNNEL, FDIR,
> diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
> index b124c3f287..ab20a7d021 100644
> --- a/doc/guides/rel_notes/release_20_05.rst
> +++ b/doc/guides/rel_notes/release_20_05.rst
> @@ -241,6 +241,10 @@ API Changes
> Also, make sure to start the actual text at the margin.
> =========================================================
>
> +* mempool: The API of ``rte_mempool_populate_iova()`` and
> + ``rte_mempool_populate_virt()`` changed to return 0 instead of -EINVAL
> + when there is not enough room to store one object.
> +
>
> ABI Changes
> -----------
> diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
> index d49189e175..eba8ebf9fa 100644
> --- a/examples/ntb/ntb_fwd.c
> +++ b/examples/ntb/ntb_fwd.c
> @@ -1319,7 +1319,7 @@ ntb_mbuf_pool_create(uint16_t mbuf_seg_size, uint32_t nb_mbuf,
> mz->len - ntb_info.ntb_hdr_size,
> ntb_mempool_mz_free,
> (void *)(uintptr_t)mz);
> - if (ret < 0) {
> + if (ret <= 0) {
> rte_memzone_free(mz);
> rte_mempool_free(mp);
> return NULL;
> diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
> index a6e861cbfc..7dbe6b9bea 100644
> --- a/lib/librte_mempool/meson.build
> +++ b/lib/librte_mempool/meson.build
> @@ -9,6 +9,8 @@ foreach flag: extra_flags
> endif
> endforeach
>
> +use_function_versioning = true
> +
> sources = files('rte_mempool.c', 'rte_mempool_ops.c',
> 'rte_mempool_ops_default.c', 'mempool_trace_points.c')
> headers = files('rte_mempool.h', 'rte_mempool_trace.h',
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 0be8f9f59d..edbdafaafb 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -31,6 +31,7 @@
> #include <rte_string_fns.h>
> #include <rte_spinlock.h>
> #include <rte_tailq.h>
> +#include <rte_function_versioning.h>
>
> #include "rte_mempool.h"
> #include "rte_mempool_trace.h"
> @@ -303,12 +304,17 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
> return 0;
> }
>
> +__vsym int
> +rte_mempool_populate_iova_v20_0_2(struct rte_mempool *mp, char *vaddr,
> + rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> + void *opaque);
> +
> /* Add objects in the pool, using a physically contiguous memory
> * zone. Return the number of objects added, or a negative value
> * on error.
> */
> -static int
> -__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> +__vsym int
> +rte_mempool_populate_iova_v20_0_2(struct rte_mempool *mp, char *vaddr,
> rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> void *opaque)
> {
> @@ -366,14 +372,27 @@ __rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> return ret;
> }
>
> -int
> -rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> +BIND_DEFAULT_SYMBOL(rte_mempool_populate_iova, _v20_0_2, 20.0.2);
as discussed v21.
> +MAP_STATIC_SYMBOL(
> + int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> + rte_iova_t iova, size_t len,
> + rte_mempool_memchunk_free_cb_t *free_cb,
> + void *opaque),
> + rte_mempool_populate_iova_v20_0_2);
> +
> +__vsym int
> +rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
> + rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> + void *opaque);
> +
> +__vsym int
> +rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
> rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> void *opaque)
> {
> int ret;
>
> - ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
> + ret = rte_mempool_populate_iova_v20_0_2(mp, vaddr, iova, len, free_cb,
> opaque);
> if (ret == 0)
> ret = -EINVAL;
> @@ -381,6 +400,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> rte_mempool_trace_populate_iova(mp, vaddr, iova, len, free_cb, opaque);
> return ret;
> }
> +VERSION_SYMBOL(rte_mempool_populate_iova, _v20_0, 20.0);
this should be
VERSION_SYMBOL(rte_mempool_populate_iova, _v20, 20.0);
>
> static rte_iova_t
> get_iova(void *addr)
> @@ -395,11 +415,16 @@ get_iova(void *addr)
> return ms->iova + RTE_PTR_DIFF(addr, ms->addr);
> }
>
> +__vsym int
> +rte_mempool_populate_virt_v20_0_2(struct rte_mempool *mp, char *addr,
> + size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
> + void *opaque);
> +
> /* Populate the mempool with a virtual area. Return the number of
> * objects added, or a negative value on error.
> */
> -int
> -rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> +__vsym int
> +rte_mempool_populate_virt_v20_0_2(struct rte_mempool *mp, char *addr,
> size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
> void *opaque)
> {
> @@ -432,7 +457,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> break;
> }
>
> - ret = __rte_mempool_populate_iova(mp, addr + off, iova,
> + ret = rte_mempool_populate_iova_v20_0_2(mp, addr + off, iova,
> phys_len, free_cb, opaque);
> if (ret == 0)
> continue;
> @@ -443,9 +468,6 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> cnt += ret;
> }
>
> - if (cnt == 0)
> - return -EINVAL;
> -
> rte_mempool_trace_populate_virt(mp, addr, len, pg_sz, free_cb, opaque);
> return cnt;
>
> @@ -453,6 +475,35 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> rte_mempool_free_memchunks(mp);
> return ret;
> }
> +BIND_DEFAULT_SYMBOL(rte_mempool_populate_virt, _v20_0_2, 20.0.2);
as discussed v21.
> +MAP_STATIC_SYMBOL(
> + int rte_mempool_populate_virt(struct rte_mempool *mp,
> + char *addr, size_t len, size_t pg_sz,
> + rte_mempool_memchunk_free_cb_t *free_cb,
> + void *opaque),
> + rte_mempool_populate_virt_v20_0_2);
> +
> +__vsym int
> +rte_mempool_populate_virt_v20(struct rte_mempool *mp, char *addr,
> + size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
> + void *opaque);
> +
> +__vsym int
> +rte_mempool_populate_virt_v20(struct rte_mempool *mp, char *addr,
> + size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
> + void *opaque)
> +{
> + int ret;
> +
> + ret = rte_mempool_populate_virt_v20_0_2(mp, addr, len, pg_sz,
> + free_cb, opaque);
> +
> + if (ret == 0)
> + ret = -EINVAL;
> +
> + return ret;
> +}
> +VERSION_SYMBOL(rte_mempool_populate_virt, _v20_0, 20.0);
this should be
VERSION_SYMBOL(rte_mempool_populate_virt, _v20, 20.0);
>
> /* Get the minimal page size used in a mempool before populating it. */
> int
> @@ -609,6 +660,8 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> mz->len, pg_sz,
> rte_mempool_memchunk_mz_free,
> (void *)(uintptr_t)mz);
> + if (ret == 0) /* should not happen */
> + ret = -ENOBUFS;
> if (ret < 0) {
> rte_memzone_free(mz);
> goto fail;
> @@ -701,6 +754,8 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
>
> ret = rte_mempool_populate_virt(mp, addr, size, getpagesize(),
> rte_mempool_memchunk_anon_free, addr);
> + if (ret == 0) /* should not happen */
> + ret = -ENOBUFS;
> if (ret < 0) {
> rte_errno = -ret;
> goto fail;
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 6e0573ea42..652d19f9f1 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -1112,9 +1112,12 @@ rte_mempool_free(struct rte_mempool *mp);
> * @param opaque
> * An opaque argument passed to free_cb.
> * @return
> - * The number of objects added on success.
> + * The number of objects added on success (strictly positive).
> * On error, the chunk is not added in the memory list of the
> - * mempool and a negative errno is returned.
> + * mempool the following code is returned:
> + * (0): not enough room in chunk for one object.
> + * (-ENOSPC): mempool is already populated.
> + * (-ENOMEM): allocation failure.
> */
> int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
> @@ -1139,9 +1142,12 @@ int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> * @param opaque
> * An opaque argument passed to free_cb.
> * @return
> - * The number of objects added on success.
> + * The number of objects added on success (strictly positive).
> * On error, the chunk is not added in the memory list of the
> - * mempool and a negative errno is returned.
> + * mempool the following code is returned:
> + * (0): not enough room in chunk for one object.
> + * (-ENOSPC): mempool is already populated.
> + * (-ENOMEM): allocation failure.
> */
> int
> rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
> index 695dd6e04f..9e58093665 100644
> --- a/lib/librte_mempool/rte_mempool_version.map
> +++ b/lib/librte_mempool/rte_mempool_version.map
> @@ -31,6 +31,13 @@ DPDK_20.0 {
> local: *;
> };
>
> +DPDK_20.0.2 {
as discussed DPDK_21
> + global:
> +
> + rte_mempool_populate_iova;
> + rte_mempool_populate_virt;
> +} DPDK_20.0;
> +
> EXPERIMENTAL {
> global:
>
>
^ permalink raw reply [flat|nested] 22+ messages in thread
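Putting Ray's remarks on the versioning macros together, a minimal pairing sketch (following the shape of the hunks above with the v21 naming applied later in the thread; the exact macro expansion is DPDK-internal, so treat the bindings described in the comments as an illustration): the second argument is a suffix appended to the base name, so it has to match the _v20/_v21 implementations exactly, and both version nodes must also appear in the map file.
#include <rte_function_versioning.h>
#include <rte_mempool.h>

/* Illustrative pairing only, mirroring the corrected v5 hunks. */
__vsym int rte_mempool_populate_iova_v21(struct rte_mempool *mp, char *vaddr,
                rte_iova_t iova, size_t len,
                rte_mempool_memchunk_free_cb_t *free_cb, void *opaque);
__vsym int rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
                rte_iova_t iova, size_t len,
                rte_mempool_memchunk_free_cb_t *free_cb, void *opaque);

/* Default binding: rte_mempool_populate_iova@@DPDK_21 -> ..._v21 */
BIND_DEFAULT_SYMBOL(rte_mempool_populate_iova, _v21, 21);
/* Compat binding: rte_mempool_populate_iova@DPDK_20.0 -> ..._v20
 * ("_v20_0" would reference a function that does not exist -- Ray's point).
 */
VERSION_SYMBOL(rte_mempool_populate_iova, _v20, 20.0);
Static links are covered separately by MAP_STATIC_SYMBOL(), as in the hunks above.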
* Re: [dpdk-dev] [PATCH v3] mempool: return 0 if area is too small on populate
2020-04-27 11:44 ` [dpdk-dev] [PATCH v3] " Ray Kinsella
@ 2020-04-27 18:02 ` Lukasz Wojciechowski
0 siblings, 0 replies; 22+ messages in thread
From: Lukasz Wojciechowski @ 2020-04-27 18:02 UTC (permalink / raw)
To: Ray Kinsella, Thomas Monjalon, dev
Cc: david.marchand, Olivier Matz, Neil Horman, John McNamara,
Marko Kovacevic, Xiaoyun Li, Jingjing Wu, Andrew Rybchenko
On 27.04.2020 at 13:44, Ray Kinsella wrote:
>
> On 25/04/2020 23:23, Thomas Monjalon wrote:
>> From: Olivier Matz <olivier.matz@6wind.com>
>>
>> Change rte_mempool_populate_iova() and rte_mempool_populate_iova() to
... and rte_mempool_populate_virt()
>> return 0 instead of -EINVAL when there is not enough room to store one
>> object, as it can be helpful for applications to distinguish this
>> specific case.
>>
>> As this is an ABI change, use symbol versioning to preserve old
>> behavior for binary applications.
>>
>> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
>> ---
>>
>> changes in v3:
>> - rebase
>> - remove deprecation notice
>> - notify API change in release notes
>> - fix ABI version from 20.0.1 to 20.0.2 (should be 21 maybe)
> yes, should be v21.
>
>> This v3 cannot be merged because of a false positive ABI check:
>>
>> 2 Removed functions:
>> 'function int rte_mempool_populate_iova(rte_mempool*, char*, rte_iova_t, size_t, rte_mempool_memchunk_free_cb_t*, void*)' {rte_mempool_populate_iova@@DPDK_20.0}
>> 'function int rte_mempool_populate_virt(rte_mempool*, char*, size_t, size_t, rte_mempool_memchunk_free_cb_t*, void*)' {rte_mempool_populate_virt@@DPDK_20.0}
> Well it's not really a false positive, as the v20 symbols are now missing.
> See notes on VERSION_SYMBOL.
>
>> ---
>> doc/guides/rel_notes/deprecation.rst | 5 --
>> doc/guides/rel_notes/release_20_05.rst | 4 ++
>> examples/ntb/ntb_fwd.c | 2 +-
>> lib/librte_mempool/meson.build | 2 +
>> lib/librte_mempool/rte_mempool.c | 77 ++++++++++++++++++----
>> lib/librte_mempool/rte_mempool.h | 14 ++--
>> lib/librte_mempool/rte_mempool_version.map | 7 ++
>> 7 files changed, 90 insertions(+), 21 deletions(-)
>>
>> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
>> index 1339f54f5f..20aa745b77 100644
>> --- a/doc/guides/rel_notes/deprecation.rst
>> +++ b/doc/guides/rel_notes/deprecation.rst
>> @@ -65,11 +65,6 @@ Deprecation Notices
>> structure would be made internal (or removed if all dependencies are cleared)
>> in future releases.
>>
>> -* mempool: starting from v20.05, the API of rte_mempool_populate_iova()
>> - and rte_mempool_populate_virt() will change to return 0 instead
>> - of -EINVAL when there is not enough room to store one object. The ABI
>> - will be preserved until 20.11.
>> -
>> * ethdev: the legacy filter API, including
>> ``rte_eth_dev_filter_supported()``, ``rte_eth_dev_filter_ctrl()`` as well
>> as filter types MACVLAN, ETHERTYPE, FLEXIBLE, SYN, NTUPLE, TUNNEL, FDIR,
>> diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
>> index b124c3f287..ab20a7d021 100644
>> --- a/doc/guides/rel_notes/release_20_05.rst
>> +++ b/doc/guides/rel_notes/release_20_05.rst
>> @@ -241,6 +241,10 @@ API Changes
>> Also, make sure to start the actual text at the margin.
>> =========================================================
>>
>> +* mempool: The API of ``rte_mempool_populate_iova()`` and
>> + ``rte_mempool_populate_virt()`` changed to return 0 instead of -EINVAL
>> + when there is not enough room to store one object.
>> +
>>
>> ABI Changes
>> -----------
>> diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
>> index d49189e175..eba8ebf9fa 100644
>> --- a/examples/ntb/ntb_fwd.c
>> +++ b/examples/ntb/ntb_fwd.c
>> @@ -1319,7 +1319,7 @@ ntb_mbuf_pool_create(uint16_t mbuf_seg_size, uint32_t nb_mbuf,
>> mz->len - ntb_info.ntb_hdr_size,
>> ntb_mempool_mz_free,
>> (void *)(uintptr_t)mz);
>> - if (ret < 0) {
>> + if (ret <= 0) {
>> rte_memzone_free(mz);
>> rte_mempool_free(mp);
>> return NULL;
>> diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
>> index a6e861cbfc..7dbe6b9bea 100644
>> --- a/lib/librte_mempool/meson.build
>> +++ b/lib/librte_mempool/meson.build
>> @@ -9,6 +9,8 @@ foreach flag: extra_flags
>> endif
>> endforeach
>>
>> +use_function_versioning = true
>> +
>> sources = files('rte_mempool.c', 'rte_mempool_ops.c',
>> 'rte_mempool_ops_default.c', 'mempool_trace_points.c')
>> headers = files('rte_mempool.h', 'rte_mempool_trace.h',
>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>> index 0be8f9f59d..edbdafaafb 100644
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -31,6 +31,7 @@
>> #include <rte_string_fns.h>
>> #include <rte_spinlock.h>
>> #include <rte_tailq.h>
>> +#include <rte_function_versioning.h>
>>
>> #include "rte_mempool.h"
>> #include "rte_mempool_trace.h"
>> @@ -303,12 +304,17 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
>> return 0;
>> }
>>
>> +__vsym int
>> +rte_mempool_populate_iova_v20_0_2(struct rte_mempool *mp, char *vaddr,
>> + rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
>> + void *opaque);
>> +
>> /* Add objects in the pool, using a physically contiguous memory
>> * zone. Return the number of objects added, or a negative value
>> * on error.
>> */
>> -static int
>> -__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>> +__vsym int
>> +rte_mempool_populate_iova_v20_0_2(struct rte_mempool *mp, char *vaddr,
>> rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
>> void *opaque)
>> {
>> @@ -366,14 +372,27 @@ __rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>> return ret;
>> }
>>
>> -int
>> -rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>> +BIND_DEFAULT_SYMBOL(rte_mempool_populate_iova, _v20_0_2, 20.0.2);
> as discussed v21.
>
>> +MAP_STATIC_SYMBOL(
>> + int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>> + rte_iova_t iova, size_t len,
>> + rte_mempool_memchunk_free_cb_t *free_cb,
>> + void *opaque),
>> + rte_mempool_populate_iova_v20_0_2);
>> +
>> +__vsym int
>> +rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
>> + rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
>> + void *opaque);
>> +
>> +__vsym int
>> +rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
>> rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
>> void *opaque)
>> {
>> int ret;
>>
>> - ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
>> + ret = rte_mempool_populate_iova_v20_0_2(mp, vaddr, iova, len, free_cb,
>> opaque);
>> if (ret == 0)
>> ret = -EINVAL;
>> @@ -381,6 +400,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>> rte_mempool_trace_populate_iova(mp, vaddr, iova, len, free_cb, opaque);
>> return ret;
>> }
>> +VERSION_SYMBOL(rte_mempool_populate_iova, _v20_0, 20.0);
> this should be
>
> VERSION_SYMBOL(rte_mempool_populate_iova, _v20, 20.0);
>
>>
>> static rte_iova_t
>> get_iova(void *addr)
>> @@ -395,11 +415,16 @@ get_iova(void *addr)
>> return ms->iova + RTE_PTR_DIFF(addr, ms->addr);
>> }
>>
>> +__vsym int
>> +rte_mempool_populate_virt_v20_0_2(struct rte_mempool *mp, char *addr,
>> + size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
>> + void *opaque);
>> +
>> /* Populate the mempool with a virtual area. Return the number of
>> * objects added, or a negative value on error.
>> */
>> -int
>> -rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>> +__vsym int
>> +rte_mempool_populate_virt_v20_0_2(struct rte_mempool *mp, char *addr,
>> size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
>> void *opaque)
>> {
>> @@ -432,7 +457,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>> break;
>> }
>>
>> - ret = __rte_mempool_populate_iova(mp, addr + off, iova,
>> + ret = rte_mempool_populate_iova_v20_0_2(mp, addr + off, iova,
>> phys_len, free_cb, opaque);
>> if (ret == 0)
>> continue;
>> @@ -443,9 +468,6 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>> cnt += ret;
>> }
>>
>> - if (cnt == 0)
>> - return -EINVAL;
>> -
>> rte_mempool_trace_populate_virt(mp, addr, len, pg_sz, free_cb, opaque);
>> return cnt;
>>
>> @@ -453,6 +475,35 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>> rte_mempool_free_memchunks(mp);
>> return ret;
>> }
>> +BIND_DEFAULT_SYMBOL(rte_mempool_populate_virt, _v20_0_2, 20.0.2);
> as discussed v21.
>
>> +MAP_STATIC_SYMBOL(
>> + int rte_mempool_populate_virt(struct rte_mempool *mp,
>> + char *addr, size_t len, size_t pg_sz,
>> + rte_mempool_memchunk_free_cb_t *free_cb,
>> + void *opaque),
>> + rte_mempool_populate_virt_v20_0_2);
>> +
>> +__vsym int
>> +rte_mempool_populate_virt_v20(struct rte_mempool *mp, char *addr,
>> + size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
>> + void *opaque);
>> +
>> +__vsym int
>> +rte_mempool_populate_virt_v20(struct rte_mempool *mp, char *addr,
>> + size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
>> + void *opaque)
>> +{
>> + int ret;
>> +
>> + ret = rte_mempool_populate_virt_v20_0_2(mp, addr, len, pg_sz,
>> + free_cb, opaque);
>> +
>> + if (ret == 0)
>> + ret = -EINVAL;
>> +
>> + return ret;
>> +}
>> +VERSION_SYMBOL(rte_mempool_populate_virt, _v20_0, 20.0);
> this should be
> VERSION_SYMBOL(rte_mempool_populate_virt, _v20, 20.0);
>
>
>>
>> /* Get the minimal page size used in a mempool before populating it. */
>> int
>> @@ -609,6 +660,8 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>> mz->len, pg_sz,
>> rte_mempool_memchunk_mz_free,
>> (void *)(uintptr_t)mz);
>> + if (ret == 0) /* should not happen */
>> + ret = -ENOBUFS;
>> if (ret < 0) {
>> rte_memzone_free(mz);
>> goto fail;
>> @@ -701,6 +754,8 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
>>
>> ret = rte_mempool_populate_virt(mp, addr, size, getpagesize(),
>> rte_mempool_memchunk_anon_free, addr);
>> + if (ret == 0) /* should not happen */
>> + ret = -ENOBUFS;
>> if (ret < 0) {
>> rte_errno = -ret;
>> goto fail;
>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>> index 6e0573ea42..652d19f9f1 100644
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> @@ -1112,9 +1112,12 @@ rte_mempool_free(struct rte_mempool *mp);
>> * @param opaque
>> * An opaque argument passed to free_cb.
>> * @return
>> - * The number of objects added on success.
>> + * The number of objects added on success (strictly positive).
>> * On error, the chunk is not added in the memory list of the
>> - * mempool and a negative errno is returned.
>> + * mempool the following code is returned:
>> + * (0): not enough room in chunk for one object.
>> + * (-ENOSPC): mempool is already populated.
>> + * (-ENOMEM): allocation failure.
>> */
>> int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>> rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
>> @@ -1139,9 +1142,12 @@ int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>> * @param opaque
>> * An opaque argument passed to free_cb.
>> * @return
>> - * The number of objects added on success.
>> + * The number of objects added on success (strictly positive).
>> * On error, the chunk is not added in the memory list of the
>> - * mempool and a negative errno is returned.
>> + * mempool the following code is returned:
>> + * (0): not enough room in chunk for one object.
>> + * (-ENOSPC): mempool is already populated.
>> + * (-ENOMEM): allocation failure.
>> */
>> int
>> rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>> diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
>> index 695dd6e04f..9e58093665 100644
>> --- a/lib/librte_mempool/rte_mempool_version.map
>> +++ b/lib/librte_mempool/rte_mempool_version.map
>> @@ -31,6 +31,13 @@ DPDK_20.0 {
>> local: *;
>> };
>>
>> +DPDK_20.0.2 {
> as discussed DPDK_21
>
>> + global:
>> +
>> + rte_mempool_populate_iova;
>> + rte_mempool_populate_virt;
>> +} DPDK_20.0;
>> +
>> EXPERIMENTAL {
>> global:
>>
>>
--
Lukasz Wojciechowski
Principal Software Engineer
Samsung R&D Institute Poland
Samsung Electronics
Office +48 22 377 88 25
l.wojciechow@partner.samsung.com
^ permalink raw reply [flat|nested] 22+ messages in thread
* [dpdk-dev] [PATCH v5] mempool: return 0 if area is too small on populate
2020-04-26 16:52 ` [dpdk-dev] [PATCH v4] " Thomas Monjalon
@ 2020-05-04 12:49 ` Olivier Matz
2020-05-04 12:54 ` Andrew Rybchenko
0 siblings, 1 reply; 22+ messages in thread
From: Olivier Matz @ 2020-05-04 12:49 UTC (permalink / raw)
To: thomas
Cc: arybchenko, david.marchand, dev, jingjing.wu, john.mcnamara,
marko.kovacevic, mdr, nhorman, olivier.matz, xiaoyun.li,
Lukasz Wojciechowski
Change rte_mempool_populate_iova() and rte_mempool_populate_virt() to
return 0 instead of -EINVAL when there is not enough room to store one
object, as it can be helpful for applications to distinguish this
specific case.
As this is an ABI change, use symbol versioning to preserve old
behavior for binary applications.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
changes in v5:
- fix version: 20.0.2 -> 21
- fix commit log: populate_iova -> populate_virt
changes in v4:
- move populate_iova trace in v20_0_2 base function
changes in v3:
- rebase
- remove deprecation notice
- notify API change in release notes
- fix ABI version from 20.0.1 to 20.0.2 (should be 21 maybe)
doc/guides/rel_notes/deprecation.rst | 5 --
doc/guides/rel_notes/release_20_05.rst | 4 ++
examples/ntb/ntb_fwd.c | 2 +-
lib/librte_mempool/meson.build | 2 +
lib/librte_mempool/rte_mempool.c | 80 ++++++++++++++++++----
lib/librte_mempool/rte_mempool.h | 14 ++--
lib/librte_mempool/rte_mempool_version.map | 7 ++
7 files changed, 92 insertions(+), 22 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 1339f54f5..20aa745b7 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -65,11 +65,6 @@ Deprecation Notices
structure would be made internal (or removed if all dependencies are cleared)
in future releases.
-* mempool: starting from v20.05, the API of rte_mempool_populate_iova()
- and rte_mempool_populate_virt() will change to return 0 instead
- of -EINVAL when there is not enough room to store one object. The ABI
- will be preserved until 20.11.
-
* ethdev: the legacy filter API, including
``rte_eth_dev_filter_supported()``, ``rte_eth_dev_filter_ctrl()`` as well
as filter types MACVLAN, ETHERTYPE, FLEXIBLE, SYN, NTUPLE, TUNNEL, FDIR,
diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index b124c3f28..ab20a7d02 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -241,6 +241,10 @@ API Changes
Also, make sure to start the actual text at the margin.
=========================================================
+* mempool: The API of ``rte_mempool_populate_iova()`` and
+ ``rte_mempool_populate_virt()`` changed to return 0 instead of -EINVAL
+ when there is not enough room to store one object.
+
ABI Changes
-----------
diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
index d49189e17..eba8ebf9f 100644
--- a/examples/ntb/ntb_fwd.c
+++ b/examples/ntb/ntb_fwd.c
@@ -1319,7 +1319,7 @@ ntb_mbuf_pool_create(uint16_t mbuf_seg_size, uint32_t nb_mbuf,
mz->len - ntb_info.ntb_hdr_size,
ntb_mempool_mz_free,
(void *)(uintptr_t)mz);
- if (ret < 0) {
+ if (ret <= 0) {
rte_memzone_free(mz);
rte_mempool_free(mp);
return NULL;
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index a6e861cbf..7dbe6b9be 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -9,6 +9,8 @@ foreach flag: extra_flags
endif
endforeach
+use_function_versioning = true
+
sources = files('rte_mempool.c', 'rte_mempool_ops.c',
'rte_mempool_ops_default.c', 'mempool_trace_points.c')
headers = files('rte_mempool.h', 'rte_mempool_trace.h',
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 0be8f9f59..0bde995b5 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -31,6 +31,7 @@
#include <rte_string_fns.h>
#include <rte_spinlock.h>
#include <rte_tailq.h>
+#include <rte_function_versioning.h>
#include "rte_mempool.h"
#include "rte_mempool_trace.h"
@@ -303,12 +304,17 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
return 0;
}
+__vsym int
+rte_mempool_populate_iova_v21(struct rte_mempool *mp, char *vaddr,
+ rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque);
+
/* Add objects in the pool, using a physically contiguous memory
* zone. Return the number of objects added, or a negative value
* on error.
*/
-static int
-__rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+__vsym int
+rte_mempool_populate_iova_v21(struct rte_mempool *mp, char *vaddr,
rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
void *opaque)
{
@@ -359,6 +365,8 @@ __rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
mp->nb_mem_chunks++;
+
+ rte_mempool_trace_populate_iova(mp, vaddr, iova, len, free_cb, opaque);
return i;
fail:
@@ -366,21 +374,34 @@ __rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
return ret;
}
-int
-rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+BIND_DEFAULT_SYMBOL(rte_mempool_populate_iova, _v21, 21);
+MAP_STATIC_SYMBOL(
+ int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
+ rte_iova_t iova, size_t len,
+ rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque),
+ rte_mempool_populate_iova_v21);
+
+__vsym int
+rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
+ rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque);
+
+__vsym int
+rte_mempool_populate_iova_v20(struct rte_mempool *mp, char *vaddr,
rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
void *opaque)
{
int ret;
- ret = __rte_mempool_populate_iova(mp, vaddr, iova, len, free_cb,
+ ret = rte_mempool_populate_iova_v21(mp, vaddr, iova, len, free_cb,
opaque);
if (ret == 0)
ret = -EINVAL;
- rte_mempool_trace_populate_iova(mp, vaddr, iova, len, free_cb, opaque);
return ret;
}
+VERSION_SYMBOL(rte_mempool_populate_iova, _v20, 20.0);
static rte_iova_t
get_iova(void *addr)
@@ -395,11 +416,16 @@ get_iova(void *addr)
return ms->iova + RTE_PTR_DIFF(addr, ms->addr);
}
+__vsym int
+rte_mempool_populate_virt_v21(struct rte_mempool *mp, char *addr,
+ size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque);
+
/* Populate the mempool with a virtual area. Return the number of
* objects added, or a negative value on error.
*/
-int
-rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
+__vsym int
+rte_mempool_populate_virt_v21(struct rte_mempool *mp, char *addr,
size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
void *opaque)
{
@@ -432,7 +458,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
break;
}
- ret = __rte_mempool_populate_iova(mp, addr + off, iova,
+ ret = rte_mempool_populate_iova_v21(mp, addr + off, iova,
phys_len, free_cb, opaque);
if (ret == 0)
continue;
@@ -443,9 +469,6 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
cnt += ret;
}
- if (cnt == 0)
- return -EINVAL;
-
rte_mempool_trace_populate_virt(mp, addr, len, pg_sz, free_cb, opaque);
return cnt;
@@ -453,6 +476,35 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
rte_mempool_free_memchunks(mp);
return ret;
}
+BIND_DEFAULT_SYMBOL(rte_mempool_populate_virt, _v21, 21);
+MAP_STATIC_SYMBOL(
+ int rte_mempool_populate_virt(struct rte_mempool *mp,
+ char *addr, size_t len, size_t pg_sz,
+ rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque),
+ rte_mempool_populate_virt_v21);
+
+__vsym int
+rte_mempool_populate_virt_v20(struct rte_mempool *mp, char *addr,
+ size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque);
+
+__vsym int
+rte_mempool_populate_virt_v20(struct rte_mempool *mp, char *addr,
+ size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
+ void *opaque)
+{
+ int ret;
+
+ ret = rte_mempool_populate_virt_v21(mp, addr, len, pg_sz,
+ free_cb, opaque);
+
+ if (ret == 0)
+ ret = -EINVAL;
+
+ return ret;
+}
+VERSION_SYMBOL(rte_mempool_populate_virt, _v20, 20.0);
/* Get the minimal page size used in a mempool before populating it. */
int
@@ -609,6 +661,8 @@ rte_mempool_populate_default(struct rte_mempool *mp)
mz->len, pg_sz,
rte_mempool_memchunk_mz_free,
(void *)(uintptr_t)mz);
+ if (ret == 0) /* should not happen */
+ ret = -ENOBUFS;
if (ret < 0) {
rte_memzone_free(mz);
goto fail;
@@ -701,6 +755,8 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
ret = rte_mempool_populate_virt(mp, addr, size, getpagesize(),
rte_mempool_memchunk_anon_free, addr);
+ if (ret == 0) /* should not happen */
+ ret = -ENOBUFS;
if (ret < 0) {
rte_errno = -ret;
goto fail;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 6e0573ea4..652d19f9f 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1112,9 +1112,12 @@ rte_mempool_free(struct rte_mempool *mp);
* @param opaque
* An opaque argument passed to free_cb.
* @return
- * The number of objects added on success.
+ * The number of objects added on success (strictly positive).
* On error, the chunk is not added in the memory list of the
- * mempool and a negative errno is returned.
+ * mempool the following code is returned:
+ * (0): not enough room in chunk for one object.
+ * (-ENOSPC): mempool is already populated.
+ * (-ENOMEM): allocation failure.
*/
int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
@@ -1139,9 +1142,12 @@ int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
* @param opaque
* An opaque argument passed to free_cb.
* @return
- * The number of objects added on success.
+ * The number of objects added on success (strictly positive).
* On error, the chunk is not added in the memory list of the
- * mempool and a negative errno is returned.
+ * mempool the following code is returned:
+ * (0): not enough room in chunk for one object.
+ * (-ENOSPC): mempool is already populated.
+ * (-ENOMEM): allocation failure.
*/
int
rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 695dd6e04..826a0b882 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -31,6 +31,13 @@ DPDK_20.0 {
local: *;
};
+DPDK_21 {
+ global:
+
+ rte_mempool_populate_iova;
+ rte_mempool_populate_virt;
+} DPDK_20.0;
+
EXPERIMENTAL {
global:
--
2.25.1
^ permalink raw reply [flat|nested] 22+ messages in thread
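As a closing usage note, a hedged sketch only (the helper name is made up, the free callback is left NULL for brevity, and it assumes -EINVAL under the old DPDK_20.0 binding corresponds to the too-small case, which is what the compatibility wrappers above map 0 to): source that may be compiled against releases on either side of this change can treat both conventions uniformly.
#include <errno.h>
#include <rte_mempool.h>

/* Illustrative only: tolerate both return conventions while the DPDK_20.0
 * compatibility binding (0 mapped back to -EINVAL) is still around.
 */
static int
populate_chunk_transitional_sketch(struct rte_mempool *mp, char *vaddr,
                rte_iova_t iova, size_t len)
{
        int ret;

        ret = rte_mempool_populate_iova(mp, vaddr, iova, len,
                        NULL /* free_cb omitted for brevity */, NULL);
        if (ret == 0 || ret == -EINVAL) /* area too small for one object */
                return 0;               /* skip the chunk, not a fatal error */
        return ret;                     /* <0: real error; >0: objects added */
}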
* Re: [dpdk-dev] [PATCH v5] mempool: return 0 if area is too small on populate
2020-05-04 12:49 ` [dpdk-dev] [PATCH v5] " Olivier Matz
@ 2020-05-04 12:54 ` Andrew Rybchenko
2020-05-04 15:47 ` Lukasz Wojciechowski
0 siblings, 1 reply; 22+ messages in thread
From: Andrew Rybchenko @ 2020-05-04 12:54 UTC (permalink / raw)
To: Olivier Matz, thomas
Cc: david.marchand, dev, jingjing.wu, john.mcnamara, marko.kovacevic,
mdr, nhorman, xiaoyun.li, Lukasz Wojciechowski
On 5/4/20 3:49 PM, Olivier Matz wrote:
> Change rte_mempool_populate_iova() and rte_mempool_populate_virt() to
> return 0 instead of -EINVAL when there is not enough room to store one
> object, as it can be helpful for applications to distinguish this
> specific case.
>
> As this is an ABI change, use symbol versioning to preserve old
> behavior for binary applications.
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
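As a side note (not part of the patch or of this reply): the symbol versioning referred to
in the commit message follows the generic pattern provided by rte_function_versioning.h.
A rough, simplified sketch with a made-up rte_foo() function, assuming the library's
meson.build sets use_function_versioning = true and the version map gains a DPDK_21 node
as in the rte_mempool_version.map hunk above:

#include <errno.h>
#include <rte_function_versioning.h>

/* Hypothetical public function with two coexisting implementations. */
__vsym int rte_foo_v20(int x);
__vsym int rte_foo_v21(int x);

/* New behaviour: report the "too small" case as 0 instead of an error. */
__vsym int
rte_foo_v21(int x)
{
        return (x > 0) ? x : 0;
}
/* Make _v21 the default implementation, tied to the DPDK_21 ABI node. */
BIND_DEFAULT_SYMBOL(rte_foo, _v21, 21);
MAP_STATIC_SYMBOL(int rte_foo(int x), rte_foo_v21);

/* Old behaviour, kept for binaries linked against the DPDK_20.0 node. */
__vsym int
rte_foo_v20(int x)
{
        int ret = rte_foo_v21(x);

        return (ret == 0) ? -EINVAL : ret;
}
VERSION_SYMBOL(rte_foo, _v20, 20.0);

Dynamically linked applications built before the change keep resolving rte_foo to the
_v20 alias and see the old error code; rebuilt applications and static builds get _v21.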
* Re: [dpdk-dev] [PATCH v5] mempool: return 0 if area is too small on populate
2020-05-04 12:54 ` Andrew Rybchenko
@ 2020-05-04 15:47 ` Lukasz Wojciechowski
2020-05-04 22:30 ` Thomas Monjalon
0 siblings, 1 reply; 22+ messages in thread
From: Lukasz Wojciechowski @ 2020-05-04 15:47 UTC (permalink / raw)
To: Andrew Rybchenko, Olivier Matz, thomas
Cc: david.marchand, dev, jingjing.wu, john.mcnamara, marko.kovacevic,
mdr, nhorman, xiaoyun.li
W dniu 04.05.2020 o 14:54, Andrew Rybchenko pisze:
> On 5/4/20 3:49 PM, Olivier Matz wrote:
>> Change rte_mempool_populate_iova() and rte_mempool_populate_virt() to
>> return 0 instead of -EINVAL when there is not enough room to store one
>> object, as it can be helpful for applications to distinguish this
>> specific case.
>>
>> As this is an ABI change, use symbol versioning to preserve old
>> behavior for binary applications.
>>
>> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
>> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
--
Lukasz Wojciechowski
Principal Software Engineer
Samsung R&D Institute Poland
Samsung Electronics
Office +48 22 377 88 25
l.wojciechow@partner.samsung.com
* Re: [dpdk-dev] [PATCH v5] mempool: return 0 if area is too small on populate
2020-05-04 15:47 ` Lukasz Wojciechowski
@ 2020-05-04 22:30 ` Thomas Monjalon
0 siblings, 0 replies; 22+ messages in thread
From: Thomas Monjalon @ 2020-05-04 22:30 UTC (permalink / raw)
To: Olivier Matz
Cc: Andrew Rybchenko, dev, david.marchand, jingjing.wu,
john.mcnamara, marko.kovacevic, mdr, nhorman, xiaoyun.li,
Lukasz Wojciechowski
04/05/2020 17:47, Lukasz Wojciechowski:
> W dniu 04.05.2020 o 14:54, Andrew Rybchenko pisze:
> > On 5/4/20 3:49 PM, Olivier Matz wrote:
> >> Change rte_mempool_populate_iova() and rte_mempool_populate_virt() to
> >> return 0 instead of -EINVAL when there is not enough room to store one
> >> object, as it can be helpful for applications to distinguish this
> >> specific case.
> >>
> >> As this is an ABI change, use symbol versioning to preserve old
> >> behavior for binary applications.
> >>
> >> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> >> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> > Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
> Acked-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Applied, thanks
end of thread (newest message: 2020-05-04 22:30 UTC)
Thread overview: 22+ messages
2020-01-09 13:27 [dpdk-dev] [PATCH] mempool: fix mempool virt populate with small chunks Olivier Matz
2020-01-09 13:40 ` David Marchand
2020-01-09 13:46 ` Olivier Matz
2020-01-09 13:52 ` Burakov, Anatoly
2020-01-09 14:23 ` Olivier Matz
2020-01-09 14:29 ` Burakov, Anatoly
2020-01-09 14:58 ` Bruce Richardson
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 0/3] " Olivier Matz
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 1/3] " Olivier Matz
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 2/3] doc: announce API change for mempool IOVA populate Olivier Matz
2020-01-17 20:32 ` David Marchand
2020-01-17 14:57 ` [dpdk-dev] [PATCH v2 3/3] [20.05] mempool: return 0 if area is too small on populate Olivier Matz
2020-01-17 20:32 ` David Marchand
2020-04-25 22:23 ` [dpdk-dev] [PATCH v3] " Thomas Monjalon
2020-04-26 16:52 ` [dpdk-dev] [PATCH v4] " Thomas Monjalon
2020-05-04 12:49 ` [dpdk-dev] [PATCH v5] " Olivier Matz
2020-05-04 12:54 ` Andrew Rybchenko
2020-05-04 15:47 ` Lukasz Wojciechowski
2020-05-04 22:30 ` Thomas Monjalon
2020-04-27 11:44 ` [dpdk-dev] [PATCH v3] " Ray Kinsella
2020-04-27 18:02 ` Lukasz Wojciechowski
2020-01-20 12:02 ` [dpdk-dev] [PATCH v2 0/3] mempool: fix mempool virt populate with small chunks Thomas Monjalon