From: "Morten Brørup" <mb@smartsharesystems.com>
To: andrew.rybchenko@oktetlabs.ru, fengchengwen@huawei.com
Cc: dev@dpdk.org, "Morten Brørup" <mb@smartsharesystems.com>
Subject: [PATCH v3] mempool: test performance with larger bursts
Date: Wed, 24 Jan 2024 09:58:00 +0100
Message-ID: <20240124085800.83509-1-mb@smartsharesystems.com>
In-Reply-To: <20240121045249.22465-1-mb@smartsharesystems.com>

Bursts of up to 64 or 128 packets are not uncommon, so increase the
maximum tested get and put burst sizes from 32 to 128.
For convenience, also test get and put burst sizes of
RTE_MEMPOOL_CACHE_MAX_SIZE.

Some applications keep more than 512 objects, so increase the maximum
number of kept objects from 512 to 32768, still in jumps of a factor of
four.
This exceeds the typical mempool cache size of 512 objects, so the test
also exercises the mempool driver.
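
As an illustration of the pattern each test case times (a minimal sketch,
not the test's actual code; the real loop is test_loop() in
app/test/test_mempool_perf.c, and burst_loop_128() is a hypothetical name):

#include <rte_mempool.h>

/* Illustrative only: repeatedly get and put a burst of objects.
 * Both the get and the put burst size are 128 here, one of the
 * sizes covered by the test matrix.
 */
static int
burst_loop_128(struct rte_mempool *mp, unsigned int rounds)
{
	void *objs[128];
	unsigned int i;

	for (i = 0; i < rounds; i++) {
		/* Returns 0 on success, negative if the pool cannot
		 * supply all 128 objects at once.
		 */
		if (rte_mempool_get_bulk(mp, objs, 128) < 0)
			return -1;
		/* A real application would use the objects here. */
		rte_mempool_put_bulk(mp, objs, 128);
	}
	return 0;
}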

Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
---

v3:
* Increased max number of kept objects to 32768.
* Added get and put burst sizes of RTE_MEMPOOL_CACHE_MAX_SIZE objects.
* Print error if unable to allocate mempool.
* Initialize use_external_cache with each test.
  A previous version of this patch had a bug, where all test runs
  following the first would use the external cache. (Chengwen Feng)
  (See the user-owned cache sketch after this changelog.)
v2: Addressed feedback by Chengwen Feng
* Added get and put burst sizes of 64 objects, which is probably also a
  common packet burst size.
* Fixed the list of kept-object counts so the list remains in jumps of a
  factor of four.
* Added three derivative test cases, for faster testing.
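
For context on the user-owned cache mentioned above: when use_external_cache
is set, the test allocates its own cache and passes it explicitly to the
generic get/put API. A minimal sketch of that usage follows; it is
illustrative only, and external_cache_example() is a hypothetical name, not
code from this patch.

#include <rte_memory.h>
#include <rte_mempool.h>

/* Illustrative only: drive a mempool through a caller-owned cache,
 * as the "with user-owned cache" test case does.
 */
static void
external_cache_example(struct rte_mempool *mp)
{
	struct rte_mempool_cache *cache;
	void *objs[32];

	/* Create a cache owned by the caller instead of the mempool. */
	cache = rte_mempool_cache_create(RTE_MEMPOOL_CACHE_MAX_SIZE,
			SOCKET_ID_ANY);
	if (cache == NULL)
		return;

	/* The generic variants take the cache as an explicit argument. */
	if (rte_mempool_generic_get(mp, objs, 32, cache) == 0)
		rte_mempool_generic_put(mp, objs, 32, cache);

	/* Return cached objects to the pool before freeing the cache. */
	rte_mempool_cache_flush(cache, mp);
	rte_mempool_cache_free(cache);
}
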
---
 app/test/test_mempool_perf.c | 42 ++++++++++++++++++++++--------------
 1 file changed, 26 insertions(+), 16 deletions(-)

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index a5a7d43608..481263186a 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -54,9 +54,9 @@
  *
  *    - Bulk size (*n_get_bulk*, *n_put_bulk*)
  *
- *      - Bulk get from 1 to 128
- *      - Bulk put from 1 to 128
- *      - Bulk get and put from 1 to 128, compile time constant
+ *      - Bulk get from 1 to 128, and RTE_MEMPOOL_CACHE_MAX_SIZE
+ *      - Bulk put from 1 to 128, and RTE_MEMPOOL_CACHE_MAX_SIZE
+ *      - Bulk get and put from 1 to 128, and RTE_MEMPOOL_CACHE_MAX_SIZE, compile time constant
  *
  *    - Number of kept objects (*n_keep*)
  *
@@ -65,12 +65,13 @@
  *      - 512
  *      - 2048
  *      - 8192
+ *      - 32768
  */
 
-#define N 65536
+#define N 262144
 #define TIME_S 5
 #define MEMPOOL_ELT_SIZE 2048
-#define MAX_KEEP 8192
+#define MAX_KEEP 32768
 #define MEMPOOL_SIZE ((rte_lcore_count()*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE*2))-1)
 
 /* Number of pointers fitting into one cache line. */
@@ -210,6 +211,9 @@ per_lcore_mempool_test(void *arg)
 			ret = test_loop(mp, cache, n_keep, 64, 64);
 		else if (n_get_bulk == 128)
 			ret = test_loop(mp, cache, n_keep, 128, 128);
+		else if (n_get_bulk == RTE_MEMPOOL_CACHE_MAX_SIZE)
+			ret = test_loop(mp, cache, n_keep,
+					RTE_MEMPOOL_CACHE_MAX_SIZE, RTE_MEMPOOL_CACHE_MAX_SIZE);
 		else
 			ret = -1;
 
@@ -293,11 +297,13 @@ launch_cores(struct rte_mempool *mp, unsigned int cores)
 
 /* for a given number of core, launch all test cases */
 static int
-do_one_mempool_test(struct rte_mempool *mp, unsigned int cores)
+do_one_mempool_test(struct rte_mempool *mp, unsigned int cores, int external_cache)
 {
-	unsigned int bulk_tab_get[] = { 1, 4, CACHE_LINE_BURST, 32, 64, 128, 0 };
-	unsigned int bulk_tab_put[] = { 1, 4, CACHE_LINE_BURST, 32, 64, 128, 0 };
-	unsigned int keep_tab[] = { 32, 128, 512, 2048, 8192, 0 };
+	unsigned int bulk_tab_get[] = { 1, 4, CACHE_LINE_BURST, 32, 64, 128,
+			RTE_MEMPOOL_CACHE_MAX_SIZE, 0 };
+	unsigned int bulk_tab_put[] = { 1, 4, CACHE_LINE_BURST, 32, 64, 128,
+			RTE_MEMPOOL_CACHE_MAX_SIZE, 0 };
+	unsigned int keep_tab[] = { 32, 128, 512, 2048, 8192, 32768, 0 };
 	unsigned *get_bulk_ptr;
 	unsigned *put_bulk_ptr;
 	unsigned *keep_ptr;
@@ -310,6 +316,7 @@ do_one_mempool_test(struct rte_mempool *mp, unsigned int cores)
 				if (*keep_ptr < *get_bulk_ptr || *keep_ptr < *put_bulk_ptr)
 					continue;
 
+				use_external_cache = external_cache;
 				use_constant_values = 0;
 				n_get_bulk = *get_bulk_ptr;
 				n_put_bulk = *put_bulk_ptr;
@@ -346,8 +353,10 @@ do_all_mempool_perf_tests(unsigned int cores)
 					NULL, NULL,
 					my_obj_init, NULL,
 					SOCKET_ID_ANY, 0);
-	if (mp_nocache == NULL)
+	if (mp_nocache == NULL) {
+		printf("cannot allocate mempool (without cache)\n");
 		goto err;
+	}
 
 	/* create a mempool (with cache) */
 	mp_cache = rte_mempool_create("perf_test_cache", MEMPOOL_SIZE,
@@ -356,8 +365,10 @@ do_all_mempool_perf_tests(unsigned int cores)
 				      NULL, NULL,
 				      my_obj_init, NULL,
 				      SOCKET_ID_ANY, 0);
-	if (mp_cache == NULL)
+	if (mp_cache == NULL) {
+		printf("cannot allocate mempool (with cache)\n");
 		goto err;
+	}
 
 	default_pool_ops = rte_mbuf_best_mempool_ops();
 	/* Create a mempool based on Default handler */
@@ -386,21 +397,20 @@ do_all_mempool_perf_tests(unsigned int cores)
 	rte_mempool_obj_iter(default_pool, my_obj_init, NULL);
 
 	printf("start performance test (without cache)\n");
-	if (do_one_mempool_test(mp_nocache, cores) < 0)
+	if (do_one_mempool_test(mp_nocache, cores, 0) < 0)
 		goto err;
 
 	printf("start performance test for %s (without cache)\n",
 	       default_pool_ops);
-	if (do_one_mempool_test(default_pool, cores) < 0)
+	if (do_one_mempool_test(default_pool, cores, 0) < 0)
 		goto err;
 
 	printf("start performance test (with cache)\n");
-	if (do_one_mempool_test(mp_cache, cores) < 0)
+	if (do_one_mempool_test(mp_cache, cores, 0) < 0)
 		goto err;
 
 	printf("start performance test (with user-owned cache)\n");
-	use_external_cache = 1;
-	if (do_one_mempool_test(mp_nocache, cores) < 0)
+	if (do_one_mempool_test(mp_nocache, cores, 1) < 0)
 		goto err;
 
 	rte_mempool_list_dump(stdout);
-- 
2.17.1
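
Editorial sizing note: assuming RTE_MEMPOOL_CACHE_MAX_SIZE keeps its default
value of 512, each lcore may hold up to MAX_KEEP + 2 * 512 objects, so with
MAX_KEEP at 32768 a run on e.g. 4 lcores creates a pool of

	(4 * (32768 + 512 * 2)) - 1 = 4 * 33792 - 1 = 135167

objects. Raising N to 262144 keeps the test's inner loop, which runs
N / n_keep times per round, at 262144 / 32768 = 8 iterations even for the
largest keep size.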


Thread overview: 15+ messages
2024-01-21  4:52 [PATCH] " Morten Brørup
2024-01-22  7:10 ` fengchengwen
2024-01-22 14:34 ` [PATCH v2] " Morten Brørup
2024-01-24  2:41   ` fengchengwen
2024-01-24  8:58 ` Morten Brørup [this message]
2024-01-24  9:10 ` [PATCH v4] " Morten Brørup
2024-01-24 11:21 ` [PATCH v5] " Morten Brørup
2024-02-18 18:03   ` Thomas Monjalon
2024-02-20 13:49     ` Morten Brørup
2024-02-21 10:22       ` Thomas Monjalon
2024-02-21 10:38         ` Morten Brørup
2024-02-21 10:40           ` Bruce Richardson
2024-02-20 14:01 ` [PATCH v6] " Morten Brørup
2024-03-02 20:04 ` [PATCH v7] " Morten Brørup
2024-04-04  9:26   ` Morten Brørup
