DPDK patches and discussions
* [dpdk-dev] [PATCH] mempool: fix slow allocation of large mempools
From: Olivier Matz @ 2020-01-09 13:27 UTC
  To: dev; +Cc: Andrew Rybchenko, Anatoly Burakov, stable

When allocating a mempool which is larger than the largest
available area, it can take a lot of time:

a- the mempool calculates the required memory size and tries
   to allocate it; this fails
b- it then tries to allocate the largest available area (this
   does not request new huge pages)
c- this zone is added to the mempool, which triggers the
   allocation of a mem hdr; this requests a new huge page
d- back to a- until the mempool is populated or there is no
   more memory

This can take a lot of time before finally failing (several minutes):
each pass through step a- takes all the available hugepages on the
system, then releases them once the allocation fails.
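
As a rough illustration (a toy model with made-up numbers, not the
real EAL code), the cost grows quadratically with the number of
hugepages, because every failing full-size reservation in step a-
maps and unmaps every free page:

  #include <stdio.h>

  int main(void)
  {
      unsigned long free_pages = 1024; /* hugepages available on the system */
      unsigned long page_ops = 0;      /* map/unmap operations performed */

      /* Old behavior for a pool that can never fit: each iteration
       * first tries the full remaining size, grabbing and releasing
       * every free page before failing (step a-), then adds one chunk
       * whose mem hdr consumes a page (steps b- and c-).
       */
      while (free_pages > 0) {
          page_ops += 2 * free_pages; /* map + unmap in the failing try */
          free_pages -= 1;            /* steps b-/c- eat into free pages */
      }
      printf("page operations: %lu\n", page_ops); /* ~1024^2, about 1M */
      return 0;
  }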

The problem appeared with commit eba11e364614 ("mempool: reduce wasted
space on populate"), because smaller chunks are now allowed. Previously,
a chunk had to be at least one page in size, which the area allocated in
step b- is not guaranteed to be.

To fix this, implement our own way of allocating the largest available
area instead of relying on the memzone feature: if an allocation fails,
divide the requested size by 2 and retry. When the requested size falls
below min_chunk_size, stop and return an error.
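
A minimal standalone sketch of that retry strategy (reserve() is a
hypothetical stand-in for rte_memzone_reserve_aligned(), and plain
errno stands in for rte_errno; the real change is in the diff below):

  #include <errno.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Stand-in allocator: pretend nothing larger than 64 MB can be
   * reserved, reporting ENOMEM like the memzone allocator would.
   */
  static void *reserve(size_t len)
  {
      if (len > (64u << 20)) {
          errno = ENOMEM;
          return NULL;
      }
      return malloc(len);
  }

  /* Clamp the request to max_alloc_size, halve it on ENOMEM, and
   * give up once it would fall below min_chunk_size.
   */
  static void *alloc_largest(size_t mem_size, size_t min_chunk_size)
  {
      size_t max_alloc_size = SIZE_MAX;
      void *p;

      do {
          size_t len = mem_size < max_alloc_size ? mem_size : max_alloc_size;

          p = reserve(len);
          if (p != NULL || errno != ENOMEM)
              return p; /* success, or a non-retryable error */
          max_alloc_size = len / 2;
      } while (max_alloc_size >= min_chunk_size);

      return NULL; /* nothing of at least min_chunk_size could be reserved */
  }

  int main(void)
  {
      /* Ask for 1 GB with a 2 MB minimum chunk: after four halvings
       * the 64 MB request succeeds instead of the populate failing.
       */
      void *p = alloc_largest(1u << 30, 2u << 20);

      printf("%s\n", p != NULL ? "reserved" : "failed");
      free(p);
      return 0;
  }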

Fixes: eba11e364614 ("mempool: reduce wasted space on populate")
Cc: stable@dpdk.org

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_mempool/rte_mempool.c | 29 ++++++++++++-----------------
 1 file changed, 12 insertions(+), 17 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index bda361ce6..03c8d984c 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -481,6 +481,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	unsigned mz_id, n;
 	int ret;
 	bool need_iova_contig_obj;
+	size_t max_alloc_size = SIZE_MAX;
 
 	ret = mempool_ops_alloc_once(mp);
 	if (ret != 0)
@@ -560,30 +561,24 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 		if (min_chunk_size == (size_t)mem_size)
 			mz_flags |= RTE_MEMZONE_IOVA_CONTIG;
 
-		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
+		/* Allocate a memzone, retrying with a smaller area on ENOMEM */
+		do {
+			mz = rte_memzone_reserve_aligned(mz_name,
+				RTE_MIN((size_t)mem_size, max_alloc_size),
 				mp->socket_id, mz_flags, align);
 
-		/* don't try reserving with 0 size if we were asked to reserve
-		 * IOVA-contiguous memory.
-		 */
-		if (min_chunk_size < (size_t)mem_size && mz == NULL) {
-			/* not enough memory, retry with the biggest zone we
-			 * have
-			 */
-			mz = rte_memzone_reserve_aligned(mz_name, 0,
-					mp->socket_id, mz_flags, align);
-		}
+			if (mz == NULL && rte_errno != ENOMEM)
+				break;
+
+			max_alloc_size = RTE_MIN(max_alloc_size,
+						(size_t)mem_size) / 2;
+		} while (max_alloc_size >= min_chunk_size);
+
 		if (mz == NULL) {
 			ret = -rte_errno;
 			goto fail;
 		}
 
-		if (mz->len < min_chunk_size) {
-			rte_memzone_free(mz);
-			ret = -ENOMEM;
-			goto fail;
-		}
-
 		if (need_iova_contig_obj)
 			iova = mz->iova;
 		else
-- 
2.20.1



Thread overview: 11+ messages
2020-01-09 13:27 [dpdk-dev] [PATCH] mempool: fix slow allocation of large mempools Olivier Matz
2020-01-09 13:57 ` Burakov, Anatoly
2020-01-09 16:06 ` Ali Alnubani
2020-01-09 17:27   ` Olivier Matz
2020-01-10  9:53 ` Andrew Rybchenko
2020-01-17  8:45   ` Olivier Matz
2020-01-17  9:51 ` [dpdk-dev] [PATCH v2] " Olivier Matz
2020-01-17 10:01   ` Olivier Matz
2020-01-17 10:09     ` Andrew Rybchenko
2020-01-20 10:12       ` Thomas Monjalon
2020-01-19 12:29   ` Ali Alnubani
