* [dpdk-dev] [PATCH] mempool: fix non-IO flag inference
@ 2021-10-22 7:59 Dmitry Kozlyuk
2021-10-22 21:09 ` [dpdk-dev] [PATCH v2] " Dmitry Kozlyuk
From: Dmitry Kozlyuk @ 2021-10-22 7:59 UTC (permalink / raw)
To: dev; +Cc: Tal Shnaiderman, Olivier Matz, Andrew Rybchenko
When a mempool had been created with the RTE_MEMPOOL_F_NO_IOVA_CONTIG flag
but was later populated with a valid IOVA, RTE_MEMPOOL_F_NON_IO was unset,
while it should have been kept. The unit test did not catch this
because the rte_mempool_populate_default() call it used populates
with RTE_BAD_IOVA.
When the IOVA mode is IOVA-as-VA, the RTE_MEMPOOL_F_NON_IO flag cannot be
reliably inferred from the supplied IOVA unless RTE_MEMPOOL_F_NO_IOVA_CONTIG
is set, because in that mode every virtual address is also a valid IOVA.
However, rte_mempool_populate_iova() was not checking the IOVA mode.
Keep setting RTE_MEMPOOL_F_NON_IO when an empty mempool is created
and add an assert for it in the unit test (removing the separate test case).
Reset the flag using the following logic (sketched below):
* Never reset it if RTE_MEMPOOL_F_NO_IOVA_CONTIG is set.
* Otherwise, always reset it if the IOVA mode is IOVA-as-VA.
* Otherwise, reset it if the supplied IOVA is not RTE_BAD_IOVA.
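In code terms, this amounts to the following check in
rte_mempool_populate_iova() (a restatement of the rules above;
the actual hunk is at the end of this patch):

    never_io  = mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG;
    always_io = rte_eal_iova_mode() == RTE_IOVA_VA;
    /* Clear NON_IO once at least some objects become usable for IO. */
    if (!never_io && (always_io || iova != RTE_BAD_IOVA))
        mp->flags &= ~RTE_MEMPOOL_F_NON_IO;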
The unit test temporarily overrides the IOVA mode in the EAL configuration
before populating mempools. While this could in theory affect rte_malloc()
inside rte_mempool_populate_iova() if it triggered a new hugepage mapping,
that is unlikely because the allocation size is small.
In exchange, a test run under either IOVA mode exercises both modes.
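The override amounts to a save/set/restore around the populate calls,
roughly as follows (simplified from the test changes below;
eal_private.h is included for rte_eal_get_configuration()):

    enum rte_iova_mode saved_mode = rte_eal_iova_mode();

    /* mode is the RTE_IOVA_PA or RTE_IOVA_VA case under test. */
    rte_eal_get_configuration()->iova_mode = mode;
    ret = rte_mempool_populate_iova(mp, virt, iova, size, NULL, NULL);
    rte_eal_get_configuration()->iova_mode = saved_mode; /* always restore */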
Fixes: 11541c5c81dd ("mempool: add non-IO flag")
Reported-by: Tal Shnaiderman <talshn@nvidia.com>
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
---
app/test/test_mempool.c | 88 ++++++++++++++++++++++++---------------
lib/mempool/rte_mempool.c | 7 +++-
2 files changed, 59 insertions(+), 36 deletions(-)
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 4f399b461d..ae1692e6aa 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -28,6 +28,7 @@
#include <rte_malloc.h>
#include <rte_mbuf_pool_ops.h>
#include <rte_mbuf.h>
+#include "eal_private.h"
#include "test.h"
@@ -738,36 +739,57 @@ test_mempool_events_safety(void)
} while (0)
static int
-test_mempool_flag_non_io_set_when_no_iova_contig_set(void)
+test_mempool_flag_non_io_with_no_iova_contig_set(enum rte_iova_mode mode)
{
+ void *virt = NULL;
+ rte_iova_t iova;
+ size_t size = MEMPOOL_ELT_SIZE * 16;
struct rte_mempool *mp = NULL;
+ enum rte_iova_mode saved_mode = rte_eal_iova_mode();
int ret;
+ virt = rte_malloc("test_mempool", size, rte_mem_page_size());
+ RTE_TEST_ASSERT_NOT_NULL(virt, "Cannot allocate memory");
+ iova = rte_mem_virt2iova(virt);
+ RTE_TEST_ASSERT_NOT_EQUAL(iova, RTE_BAD_IOVA, "Cannot get IOVA");
mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
MEMPOOL_ELT_SIZE, 0, 0,
SOCKET_ID_ANY, RTE_MEMPOOL_F_NO_IOVA_CONTIG);
RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
rte_strerror(rte_errno));
rte_mempool_set_ops_byname(mp, rte_mbuf_best_mempool_ops(), NULL);
- ret = rte_mempool_populate_default(mp);
+
+ RTE_TEST_ASSERT(mp->flags & RTE_MEMPOOL_F_NON_IO,
+ "NON_IO flag is not set on an empty mempool");
+
+ /*
+ * Always use valid IOVA so that populate() has no other reason
+ * to infer that the mempool cannot be used for IO.
+ */
+ rte_eal_get_configuration()->iova_mode = mode;
+ ret = rte_mempool_populate_iova(mp, virt, iova, size, NULL, NULL);
RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
rte_strerror(-ret));
RTE_TEST_ASSERT(mp->flags & RTE_MEMPOOL_F_NON_IO,
"NON_IO flag is not set when NO_IOVA_CONTIG is set");
ret = TEST_SUCCESS;
exit:
+ rte_eal_get_configuration()->iova_mode = saved_mode;
rte_mempool_free(mp);
+ rte_free(virt);
return ret;
}
static int
-test_mempool_flag_non_io_unset_when_populated_with_valid_iova(void)
+test_mempool_flag_non_io_when_populated(enum rte_iova_mode mode)
{
void *virt = NULL;
rte_iova_t iova;
size_t total_size = MEMPOOL_ELT_SIZE * MEMPOOL_SIZE;
size_t block_size = total_size / 3;
struct rte_mempool *mp = NULL;
+ bool using_pa = mode == RTE_IOVA_PA;
+ enum rte_iova_mode saved_mode = rte_eal_iova_mode();
int ret;
/*
@@ -785,55 +807,51 @@ test_mempool_flag_non_io_unset_when_populated_with_valid_iova(void)
RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
rte_strerror(rte_errno));
+ RTE_TEST_ASSERT(mp->flags & RTE_MEMPOOL_F_NON_IO,
+ "NON_IO flag is not set on an empty mempool");
+
+ rte_eal_get_configuration()->iova_mode = mode;
+
ret = rte_mempool_populate_iova(mp, RTE_PTR_ADD(virt, 1 * block_size),
RTE_BAD_IOVA, block_size, NULL, NULL);
RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
rte_strerror(-ret));
- RTE_TEST_ASSERT(mp->flags & RTE_MEMPOOL_F_NON_IO,
- "NON_IO flag is not set when mempool is populated with only RTE_BAD_IOVA");
+ if (using_pa)
+ RTE_TEST_ASSERT(mp->flags & RTE_MEMPOOL_F_NON_IO,
+ "NON_IO flag is not set when mempool is populated with only RTE_BAD_IOVA");
+ else
+ RTE_TEST_ASSERT(!(mp->flags & RTE_MEMPOOL_F_NON_IO),
+ "NON_IO flag is set on a mempool with contiguous objects with IOVA-as-VA");
ret = rte_mempool_populate_iova(mp, virt, iova, block_size, NULL, NULL);
RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
rte_strerror(-ret));
- RTE_TEST_ASSERT(!(mp->flags & RTE_MEMPOOL_F_NON_IO),
- "NON_IO flag is not unset when mempool is populated with valid IOVA");
+ if (using_pa)
+ RTE_TEST_ASSERT(!(mp->flags & RTE_MEMPOOL_F_NON_IO),
+ "NON_IO flag is not unset when mempool is populated with valid IOVA");
+ else
+ RTE_TEST_ASSERT(!(mp->flags & RTE_MEMPOOL_F_NON_IO),
+ "NON_IO flag is set when mempool is populated with valid IOVA with IOVA-as-VA");
ret = rte_mempool_populate_iova(mp, RTE_PTR_ADD(virt, 2 * block_size),
RTE_BAD_IOVA, block_size, NULL, NULL);
RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
rte_strerror(-ret));
- RTE_TEST_ASSERT(!(mp->flags & RTE_MEMPOOL_F_NON_IO),
- "NON_IO flag is set even when some objects have valid IOVA");
+ if (using_pa)
+ RTE_TEST_ASSERT(!(mp->flags & RTE_MEMPOOL_F_NON_IO),
+ "NON_IO flag is set even when some objects have valid IOVA");
+ else
+ RTE_TEST_ASSERT(!(mp->flags & RTE_MEMPOOL_F_NON_IO),
+ "NON_IO flag is set for a mempool with contiguous objects with IOVA-as-VA");
ret = TEST_SUCCESS;
exit:
+ rte_eal_get_configuration()->iova_mode = saved_mode;
rte_mempool_free(mp);
rte_free(virt);
return ret;
}
-static int
-test_mempool_flag_non_io_unset_by_default(void)
-{
- struct rte_mempool *mp;
- int ret;
-
- mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
- MEMPOOL_ELT_SIZE, 0, 0,
- SOCKET_ID_ANY, 0);
- RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
- rte_strerror(rte_errno));
- ret = rte_mempool_populate_default(mp);
- RTE_TEST_ASSERT_EQUAL(ret, (int)mp->size, "Failed to populate mempool: %s",
- rte_strerror(-ret));
- RTE_TEST_ASSERT(!(mp->flags & RTE_MEMPOOL_F_NON_IO),
- "NON_IO flag is set by default");
- ret = TEST_SUCCESS;
-exit:
- rte_mempool_free(mp);
- return ret;
-}
-
#pragma pop_macro("RTE_TEST_TRACE_FAILURE")
static int
@@ -1022,11 +1040,13 @@ test_mempool(void)
GOTO_ERR(ret, err);
/* test NON_IO flag inference */
- if (test_mempool_flag_non_io_unset_by_default() < 0)
+ if (test_mempool_flag_non_io_with_no_iova_contig_set(RTE_IOVA_PA) < 0)
+ GOTO_ERR(ret, err);
+ if (test_mempool_flag_non_io_with_no_iova_contig_set(RTE_IOVA_VA) < 0)
GOTO_ERR(ret, err);
- if (test_mempool_flag_non_io_set_when_no_iova_contig_set() < 0)
+ if (test_mempool_flag_non_io_when_populated(RTE_IOVA_PA) < 0)
GOTO_ERR(ret, err);
- if (test_mempool_flag_non_io_unset_when_populated_with_valid_iova() < 0)
+ if (test_mempool_flag_non_io_when_populated(RTE_IOVA_VA) < 0)
GOTO_ERR(ret, err);
rte_mempool_list_dump(stdout);
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index b75d26c82a..ba81db6f67 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -327,6 +327,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
unsigned i = 0;
size_t off;
struct rte_mempool_memhdr *memhdr;
+ bool always_io, never_io;
int ret;
ret = mempool_ops_alloc_once(mp);
@@ -372,8 +373,10 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
mp->nb_mem_chunks++;
- /* At least some objects in the pool can now be used for IO. */
- if (iova != RTE_BAD_IOVA)
+ /* Check if at least some objects in the pool are now usable for IO. */
+ never_io = mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG;
+ always_io = rte_eal_iova_mode() == RTE_IOVA_VA;
+ if (!never_io && (always_io || iova != RTE_BAD_IOVA))
mp->flags &= ~RTE_MEMPOOL_F_NON_IO;
/* Report the mempool as ready only when fully populated. */
--
2.25.1
* [dpdk-dev] [PATCH v2] mempool: fix non-IO flag inference
2021-10-22 7:59 [dpdk-dev] [PATCH] mempool: fix non-IO flag inference Dmitry Kozlyuk
@ 2021-10-22 21:09 ` Dmitry Kozlyuk
2021-10-25 13:33 ` Olivier Matz
From: Dmitry Kozlyuk @ 2021-10-22 21:09 UTC (permalink / raw)
To: dev; +Cc: Olivier Matz, Andrew Rybchenko
When a mempool had been created with the RTE_MEMPOOL_F_NO_IOVA_CONTIG flag
but was later populated with a valid IOVA, RTE_MEMPOOL_F_NON_IO was unset,
while it should have been kept. The unit test did not catch this
because the rte_mempool_populate_default() call it used populates
with RTE_BAD_IOVA.
Keep setting RTE_MEMPOOL_F_NON_IO when an empty mempool is created
and add an assert for it in the unit test (removing the separate case).
Do not reset the flag if RTE_MEMPOOL_F_NO_IOVA_CONTIG is set.
Fixes: 11541c5c81dd ("mempool: add non-IO flag")
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
---
v2: IOVA mode has nothing to do with populating the mempool,
so logic related to it is removed and the unit test simplified.
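A condensed view of what the simplified unit test asserts
(a sketch only, with error handling and mempool ops setup omitted;
the full test is below):

    mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE, MEMPOOL_ELT_SIZE,
                                  0, 0, SOCKET_ID_ANY,
                                  RTE_MEMPOOL_F_NO_IOVA_CONTIG);
    /* NON_IO is set as soon as the empty mempool is created... */
    assert(mp->flags & RTE_MEMPOOL_F_NON_IO);
    /* ...and populating with a valid IOVA must not clear it. */
    rte_mempool_populate_iova(mp, virt, iova, size, NULL, NULL);
    assert(mp->flags & RTE_MEMPOOL_F_NON_IO);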
app/test/test_mempool.c | 45 +++++++++++++++++----------------------
lib/mempool/rte_mempool.c | 4 ++--
2 files changed, 22 insertions(+), 27 deletions(-)
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 4f399b461d..4b0f6b0e7f 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -740,16 +740,31 @@ test_mempool_events_safety(void)
static int
test_mempool_flag_non_io_set_when_no_iova_contig_set(void)
{
+ void *virt = NULL;
+ rte_iova_t iova;
+ size_t size = MEMPOOL_ELT_SIZE * 16;
struct rte_mempool *mp = NULL;
int ret;
+ virt = rte_malloc("test_mempool", size, rte_mem_page_size());
+ RTE_TEST_ASSERT_NOT_NULL(virt, "Cannot allocate memory");
+ iova = rte_mem_virt2iova(virt);
+ RTE_TEST_ASSERT_NOT_EQUAL(iova, RTE_BAD_IOVA, "Cannot get IOVA");
mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
MEMPOOL_ELT_SIZE, 0, 0,
SOCKET_ID_ANY, RTE_MEMPOOL_F_NO_IOVA_CONTIG);
RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
rte_strerror(rte_errno));
rte_mempool_set_ops_byname(mp, rte_mbuf_best_mempool_ops(), NULL);
- ret = rte_mempool_populate_default(mp);
+
+ RTE_TEST_ASSERT(mp->flags & RTE_MEMPOOL_F_NON_IO,
+ "NON_IO flag is not set on an empty mempool");
+
+ /*
+ * Always use valid IOVA so that populate() has no other reason
+ * to infer that the mempool cannot be used for IO.
+ */
+ ret = rte_mempool_populate_iova(mp, virt, iova, size, NULL, NULL);
RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
rte_strerror(-ret));
RTE_TEST_ASSERT(mp->flags & RTE_MEMPOOL_F_NON_IO,
@@ -757,6 +772,7 @@ test_mempool_flag_non_io_set_when_no_iova_contig_set(void)
ret = TEST_SUCCESS;
exit:
rte_mempool_free(mp);
+ rte_free(virt);
return ret;
}
@@ -785,6 +801,9 @@ test_mempool_flag_non_io_unset_when_populated_with_valid_iova(void)
RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
rte_strerror(rte_errno));
+ RTE_TEST_ASSERT(mp->flags & RTE_MEMPOOL_F_NON_IO,
+ "NON_IO flag is not set on an empty mempool");
+
ret = rte_mempool_populate_iova(mp, RTE_PTR_ADD(virt, 1 * block_size),
RTE_BAD_IOVA, block_size, NULL, NULL);
RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
@@ -812,28 +831,6 @@ test_mempool_flag_non_io_unset_when_populated_with_valid_iova(void)
return ret;
}
-static int
-test_mempool_flag_non_io_unset_by_default(void)
-{
- struct rte_mempool *mp;
- int ret;
-
- mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
- MEMPOOL_ELT_SIZE, 0, 0,
- SOCKET_ID_ANY, 0);
- RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
- rte_strerror(rte_errno));
- ret = rte_mempool_populate_default(mp);
- RTE_TEST_ASSERT_EQUAL(ret, (int)mp->size, "Failed to populate mempool: %s",
- rte_strerror(-ret));
- RTE_TEST_ASSERT(!(mp->flags & RTE_MEMPOOL_F_NON_IO),
- "NON_IO flag is set by default");
- ret = TEST_SUCCESS;
-exit:
- rte_mempool_free(mp);
- return ret;
-}
-
#pragma pop_macro("RTE_TEST_TRACE_FAILURE")
static int
@@ -1022,8 +1019,6 @@ test_mempool(void)
GOTO_ERR(ret, err);
/* test NON_IO flag inference */
- if (test_mempool_flag_non_io_unset_by_default() < 0)
- GOTO_ERR(ret, err);
if (test_mempool_flag_non_io_set_when_no_iova_contig_set() < 0)
GOTO_ERR(ret, err);
if (test_mempool_flag_non_io_unset_when_populated_with_valid_iova() < 0)
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index b75d26c82a..f66a561d05 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -372,8 +372,8 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
mp->nb_mem_chunks++;
- /* At least some objects in the pool can now be used for IO. */
- if (iova != RTE_BAD_IOVA)
+ /* Check if at least some objects in the pool are now usable for IO. */
+ if (!(mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG) && iova != RTE_BAD_IOVA)
mp->flags &= ~RTE_MEMPOOL_F_NON_IO;
/* Report the mempool as ready only when fully populated. */
--
2.25.1
* Re: [dpdk-dev] [PATCH v2] mempool: fix non-IO flag inference
2021-10-22 21:09 ` [dpdk-dev] [PATCH v2] " Dmitry Kozlyuk
@ 2021-10-25 13:33 ` Olivier Matz
2021-10-25 15:01 ` Thomas Monjalon
From: Olivier Matz @ 2021-10-25 13:33 UTC (permalink / raw)
To: Dmitry Kozlyuk; +Cc: dev, Andrew Rybchenko
On Sat, Oct 23, 2021 at 12:09:19AM +0300, Dmitry Kozlyuk wrote:
> When a mempool had been created with the RTE_MEMPOOL_F_NO_IOVA_CONTIG flag
> but was later populated with a valid IOVA, RTE_MEMPOOL_F_NON_IO was unset,
> while it should have been kept. The unit test did not catch this
> because the rte_mempool_populate_default() call it used populates
> with RTE_BAD_IOVA.
>
> Keep setting RTE_MEMPOOL_F_NON_IO when an empty mempool is created
> and add an assert for it in the unit test (removing the separate case).
> Do not reset the flag if RTE_MEMPOOL_F_NO_IOVA_CONTIG is set.
>
> Fixes: 11541c5c81dd ("mempool: add non-IO flag")
>
> Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Thanks
* Re: [dpdk-dev] [PATCH v2] mempool: fix non-IO flag inference
2021-10-25 13:33 ` Olivier Matz
@ 2021-10-25 15:01 ` Thomas Monjalon
From: Thomas Monjalon @ 2021-10-25 15:01 UTC (permalink / raw)
To: Dmitry Kozlyuk; +Cc: dev, Andrew Rybchenko, Olivier Matz
25/10/2021 15:33, Olivier Matz:
> On Sat, Oct 23, 2021 at 12:09:19AM +0300, Dmitry Kozlyuk wrote:
> > When a mempool had been created with the RTE_MEMPOOL_F_NO_IOVA_CONTIG flag
> > but was later populated with a valid IOVA, RTE_MEMPOOL_F_NON_IO was unset,
> > while it should have been kept. The unit test did not catch this
> > because the rte_mempool_populate_default() call it used populates
> > with RTE_BAD_IOVA.
> >
> > Keep setting RTE_MEMPOOL_F_NON_IO when an empty mempool is created
> > and add an assert for it in the unit test (removing the separate case).
> > Do not reset the flag if RTE_MEMPOOL_F_NO_IOVA_CONTIG is set.
> >
> > Fixes: 11541c5c81dd ("mempool: add non-IO flag")
> >
> > Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
>
> Acked-by: Olivier Matz <olivier.matz@6wind.com>
Applied, thanks.