DPDK patches and discussions
* [PATCH] mempool: test performance with larger bursts
@ 2024-01-21  4:52 Morten Brørup
  2024-01-22  7:10 ` fengchengwen
                   ` (6 more replies)
  0 siblings, 7 replies; 17+ messages in thread
From: Morten Brørup @ 2024-01-21  4:52 UTC (permalink / raw)
  To: andrew.rybchenko; +Cc: dev, Morten Brørup

Bursts of up to 128 packets are not uncommon, so increase the maximum
tested get and put burst sizes from 32 to 128.

Some applications keep more than 512 objects, so increase the maximum
number of kept objects from 512 to 4096.
This exceeds the typical mempool cache size of 512 objects, so the test
also exercises the mempool driver.

Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
---
 app/test/test_mempool_perf.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index 96de347f04..f52106e833 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(c) 2010-2014 Intel Corporation
- * Copyright(c) 2022 SmartShare Systems
+ * Copyright(c) 2022-2024 SmartShare Systems
  */
 
 #include <string.h>
@@ -54,22 +54,24 @@
  *
  *    - Bulk size (*n_get_bulk*, *n_put_bulk*)
  *
- *      - Bulk get from 1 to 32
- *      - Bulk put from 1 to 32
- *      - Bulk get and put from 1 to 32, compile time constant
+ *      - Bulk get from 1 to 128
+ *      - Bulk put from 1 to 128
+ *      - Bulk get and put from 1 to 128, compile time constant
  *
  *    - Number of kept objects (*n_keep*)
  *
  *      - 32
  *      - 128
  *      - 512
+ *      - 1024
+ *      - 4096
  */
 
 #define N 65536
 #define TIME_S 5
 #define MEMPOOL_ELT_SIZE 2048
-#define MAX_KEEP 512
-#define MEMPOOL_SIZE ((rte_lcore_count()*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE))-1)
+#define MAX_KEEP 4096
+#define MEMPOOL_SIZE ((rte_lcore_count()*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE*2))-1)
 
 /* Number of pointers fitting into one cache line. */
 #define CACHE_LINE_BURST (RTE_CACHE_LINE_SIZE / sizeof(uintptr_t))
@@ -204,6 +206,8 @@ per_lcore_mempool_test(void *arg)
 					CACHE_LINE_BURST, CACHE_LINE_BURST);
 		else if (n_get_bulk == 32)
 			ret = test_loop(mp, cache, n_keep, 32, 32);
+		else if (n_get_bulk == 128)
+			ret = test_loop(mp, cache, n_keep, 128, 128);
 		else
 			ret = -1;
 
@@ -289,9 +293,9 @@ launch_cores(struct rte_mempool *mp, unsigned int cores)
 static int
 do_one_mempool_test(struct rte_mempool *mp, unsigned int cores)
 {
-	unsigned int bulk_tab_get[] = { 1, 4, CACHE_LINE_BURST, 32, 0 };
-	unsigned int bulk_tab_put[] = { 1, 4, CACHE_LINE_BURST, 32, 0 };
-	unsigned int keep_tab[] = { 32, 128, 512, 0 };
+	unsigned int bulk_tab_get[] = { 1, 4, CACHE_LINE_BURST, 32, 128, 0 };
+	unsigned int bulk_tab_put[] = { 1, 4, CACHE_LINE_BURST, 32, 128, 0 };
+	unsigned int keep_tab[] = { 32, 128, 512, 1024, 4096, 0 };
 	unsigned *get_bulk_ptr;
 	unsigned *put_bulk_ptr;
 	unsigned *keep_ptr;
@@ -301,6 +305,9 @@ do_one_mempool_test(struct rte_mempool *mp, unsigned int cores)
 		for (put_bulk_ptr = bulk_tab_put; *put_bulk_ptr; put_bulk_ptr++) {
 			for (keep_ptr = keep_tab; *keep_ptr; keep_ptr++) {
 
+				if (*keep_ptr < *get_bulk_ptr || *keep_ptr < *put_bulk_ptr)
+					continue;
+
 				use_constant_values = 0;
 				n_get_bulk = *get_bulk_ptr;
 				n_put_bulk = *put_bulk_ptr;
-- 
2.17.1
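
For readers unfamiliar with the benchmark, the measured inner loop amounts to
the pattern sketched below (a simplified, hypothetical sketch, not the actual
test_loop() from test_mempool_perf.c; the function name is made up and keep is
assumed to be a multiple of both bulk sizes). When keep exceeds the per-lcore
cache size, the gets spill past the cache into the mempool driver backend,
which is what the larger keep values added above are intended to exercise.

#include <rte_mempool.h>

static int
burst_loop_sketch(struct rte_mempool *mp, struct rte_mempool_cache *cache,
		  unsigned int keep, unsigned int get_bulk, unsigned int put_bulk)
{
	void *obj_table[4096];	/* covers the largest keep value in the test */
	unsigned int i;

	/* Fetch "keep" objects from the mempool in bursts of get_bulk. */
	for (i = 0; i < keep; i += get_bulk)
		if (rte_mempool_generic_get(mp, &obj_table[i], get_bulk, cache) < 0)
			return -1;

	/* Return all of them in bursts of put_bulk. */
	for (i = 0; i < keep; i += put_bulk)
		rte_mempool_generic_put(mp, &obj_table[i], put_bulk, cache);

	return 0;
}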


* [RFC] mbuf: performance optimization
@ 2024-01-21  5:32 Morten Brørup
  2024-01-22 14:27 ` [PATCH v2] mempool: test performance with larger bursts Morten Brørup
  0 siblings, 1 reply; 17+ messages in thread
From: Morten Brørup @ 2024-01-21  5:32 UTC (permalink / raw)
  To: dev; +Cc: andrew.rybchenko

What is the largest realistic value of mbuf->priv_size (the size of an mbuf's application private data area) in any use case?

I am wondering if its size could be reduced from 16 to 8 bits. If a maximum value of 255 isn't enough, then, since the private data area must be aligned to 8 bytes (RTE_MBUF_PRIV_ALIGN), the field could instead hold the size divided by 8. That would raise the maximum to 255 * 8 = 2040 bytes, nearly 2 KB.
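
As an illustration of that divide-by-8 encoding (a hypothetical sketch, not existing or proposed DPDK API; the helper names are made up):

#include <stdint.h>
#include <assert.h>

/* RTE_MBUF_PRIV_ALIGN is 8 in rte_mbuf_core.h, so an 8-bit field storing
 * priv_size / 8 covers up to 255 * 8 = 2040 bytes. */
#define PRIV_ALIGN 8

static inline uint8_t
priv_size_encode(uint16_t priv_size)
{
	/* priv_size is already required to be a multiple of the alignment. */
	assert(priv_size % PRIV_ALIGN == 0 && priv_size / PRIV_ALIGN <= UINT8_MAX);
	return (uint8_t)(priv_size / PRIV_ALIGN);
}

static inline uint16_t
priv_size_decode(uint8_t stored)
{
	return (uint16_t)stored * PRIV_ALIGN;
}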

I suppose that reducing mbuf->nb_segs from 16 to 8 bits is also realistic, considering that even a maximum-size IP packet (64 KB) is unlikely to use much more than 64 segments. Does anyone know of any use case with more than 255 segments in an mbuf?

These two changes would allow us to move mbuf->priv_size from the second to the first cache line.


Furthermore, mbuf->timesync should be a dynamic field. This, along with the above changes, would give us one more available 32-bit dynamic field.


Med venlig hilsen / Kind regards,
-Morten Brørup



Thread overview: 17+ messages
2024-01-21  4:52 [PATCH] mempool: test performance with larger bursts Morten Brørup
2024-01-22  7:10 ` fengchengwen
2024-01-22 14:34 ` [PATCH v2] " Morten Brørup
2024-01-24  2:41   ` fengchengwen
2024-01-24  8:58 ` [PATCH v3] " Morten Brørup
2024-01-24  9:10 ` [PATCH v4] " Morten Brørup
2024-01-24 11:21 ` [PATCH v5] " Morten Brørup
2024-02-18 18:03   ` Thomas Monjalon
2024-02-20 13:49     ` Morten Brørup
2024-02-21 10:22       ` Thomas Monjalon
2024-02-21 10:38         ` Morten Brørup
2024-02-21 10:40           ` Bruce Richardson
2024-02-20 14:01 ` [PATCH v6] " Morten Brørup
2024-03-02 20:04 ` [PATCH v7] " Morten Brørup
2024-04-04  9:26   ` Morten Brørup
2024-01-21  5:32 [RFC] mbuf: performance optimization Morten Brørup
2024-01-22 14:27 ` [PATCH v2] mempool: test performance with larger bursts Morten Brørup
2024-01-22 14:39   ` Morten Brørup
