From: Santosh Shukla <santosh.shukla@caviumnetworks.com>
To: olivier.matz@6wind.com, dev@dpdk.org
Cc: thomas.monjalon@6wind.com, hemant.agrawal@nxp.com,
shreyansh.jain@nxp.com,
Santosh Shukla <santosh.shukla@caviumnetworks.com>
Subject: [dpdk-dev] [PATCH v3 3/3] test/test/mempool_perf: support default mempool autotest
Date: Tue, 18 Apr 2017 14:04:48 +0530
Message-ID: <20170418083448.24743-3-santosh.shukla@caviumnetworks.com>
In-Reply-To: <20170418083448.24743-1-santosh.shukla@caviumnetworks.com>

The mempool_perf autotest currently runs the perf regression for:
 * nocache
 * cache

Introduce default_pool, mainly targeted at ext-mempool regression
testing. Ext-mempool does not need the 'cache' mode, so only a test
case for the 'nocache' mode is added (see the sketch below).
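Context note (not part of the patch): the cache/nocache distinction
above is simply the cache_size argument of rte_mempool_create(). A
minimal sketch, assuming <rte_mempool.h>, the test's MEMPOOL_SIZE and
MEMPOOL_ELT_SIZE macros and its my_obj_init() object-init callback;
the pool names here are illustrative only:

	struct rte_mempool *nocache, *cache;

	/* cache_size == 0: no per-lcore object cache, every get/put
	 * reaches the pool's common backing store */
	nocache = rte_mempool_create("perf_nocache", MEMPOOL_SIZE,
				     MEMPOOL_ELT_SIZE, 0, 0,
				     NULL, NULL, my_obj_init, NULL,
				     SOCKET_ID_ANY, 0);

	/* non-zero cache_size: objects are cached per lcore and only
	 * flushed/refilled in bulk */
	cache = rte_mempool_create("perf_cache", MEMPOOL_SIZE,
				   MEMPOOL_ELT_SIZE,
				   RTE_MEMPOOL_CACHE_MAX_SIZE, 0,
				   NULL, NULL, my_obj_init, NULL,
				   SOCKET_ID_ANY, 0);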
So, to run the ext-mempool perf regression, the user has to set
RTE_MBUF_DEFAULT_MEMPOOL_OPS="<>".

Note there is a chance of duplication, i.e. if the user sets
RTE_MBUF_DEFAULT_MEMPOOL_OPS="ring_mp_mc", the regression will run
twice for 'ring_mp_mc'.
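Usage note (not part of the patch; assumptions about the make-based
build of this era): the RTE_MBUF_DEFAULT_MEMPOOL_OPS macro is generated
from CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS in config/common_base, so one
way to exercise a non-default handler is to set, for example,
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="stack" there, rebuild, and run the
'mempool_perf_autotest' command in the test application ("stack" is
only an example of an alternative handler). This is also why the patch
uses rte_mempool_create_empty() + rte_mempool_set_ops_byname() +
rte_mempool_populate_default() rather than rte_mempool_create(): the
latter always selects one of the ring-based ops from its flags, whereas
the former picks up whatever handler name is configured.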
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
v1 --> v2:
- Patch context fix.
test/test/test_mempool_perf.c | 44 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 44 insertions(+)
diff --git a/test/test/test_mempool_perf.c b/test/test/test_mempool_perf.c
index f29718dc4..f663e63db 100644
--- a/test/test/test_mempool_perf.c
+++ b/test/test/test_mempool_perf.c
@@ -314,6 +314,7 @@ test_mempool_perf(void)
 	struct rte_mempool *mp = NULL;
 	struct rte_mempool *mp_cache = NULL;
 	struct rte_mempool *mp_nocache = NULL;
+	struct rte_mempool *default_pool = NULL;
 	int ret = -1;
 
 	rte_atomic32_init(&synchro);
@@ -337,6 +338,34 @@ test_mempool_perf(void)
 	if (mp_cache == NULL)
 		goto err;
 
+	/* Create a mempool based on Default handler */
+	default_pool = rte_mempool_create_empty("default_pool",
+						MEMPOOL_SIZE,
+						MEMPOOL_ELT_SIZE,
+						0, 0,
+						SOCKET_ID_ANY, 0);
+
+	if (default_pool == NULL) {
+		printf("cannot allocate %s mempool\n",
+		       RTE_MBUF_DEFAULT_MEMPOOL_OPS);
+		goto err;
+	}
+
+	if (rte_mempool_set_ops_byname(default_pool,
+				       RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL)
+				       < 0) {
+		printf("cannot set %s handler\n", RTE_MBUF_DEFAULT_MEMPOOL_OPS);
+		goto err;
+	}
+
+	if (rte_mempool_populate_default(default_pool) < 0) {
+		printf("cannot populate %s mempool\n",
+		       RTE_MBUF_DEFAULT_MEMPOOL_OPS);
+		goto err;
+	}
+
+	rte_mempool_obj_iter(default_pool, my_obj_init, NULL);
+
 	/* performance test with 1, 2 and max cores */
 	printf("start performance test (without cache)\n");
 	mp = mp_nocache;
@@ -351,6 +380,20 @@
 		goto err;
 
 	/* performance test with 1, 2 and max cores */
+	printf("start performance test for %s (without cache)\n",
+	       RTE_MBUF_DEFAULT_MEMPOOL_OPS);
+	mp = default_pool;
+
+	if (do_one_mempool_test(mp, 1) < 0)
+		goto err;
+
+	if (do_one_mempool_test(mp, 2) < 0)
+		goto err;
+
+	if (do_one_mempool_test(mp, rte_lcore_count()) < 0)
+		goto err;
+
+	/* performance test with 1, 2 and max cores */
 	printf("start performance test (with cache)\n");
 	mp = mp_cache;
 
@@ -384,6 +427,7 @@
 err:
 	rte_mempool_free(mp_cache);
 	rte_mempool_free(mp_nocache);
+	rte_mempool_free(default_pool);
 	return ret;
 }
--
2.11.0