From mboxrd@z Thu Jan  1 00:00:00 1970
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v2] app/eventdev: add option to enable per port pool
Date: Thu, 1 Jul 2021 11:37:59 +0530
Message-ID: <20210701060800.1096-1-pbhagavatula@marvell.com>
In-Reply-To: <20210615103149.4194-1-pbhagavatula@marvell.com>
References: <20210615103149.4194-1-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1
List-Id: DPDK patches and discussions <dev.dpdk.org>

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add an option to configure a unique mempool for each ethernet device
port. It can be used with the `pipeline_atq` and `pipeline_queue` tests.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
v2 Changes:
- Fix compilation.
- Rebase on next-event.
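For readers skimming the diff below: the option boils down to creating one
pktmbuf pool per probed ethdev port instead of a single shared pool, and
indexing that pool array by port id when setting up Rx queues. The following
is a minimal standalone sketch of that pattern, not code from the patch; the
helper name setup_per_port_pools, its parameters, and the "pipeline-%u" pool
name are illustrative only.

    #include <errno.h>
    #include <stdio.h>

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /*
     * Illustrative helper: create one pktmbuf pool per probed ethdev port,
     * indexed by port id, similar to what --per_port_pool enables in
     * pipeline_mempool_setup().
     */
    static int
    setup_per_port_pools(struct rte_mempool *pool[RTE_MAX_ETHPORTS],
                         unsigned int pool_sz, uint16_t mbuf_sz)
    {
            char name[RTE_MEMPOOL_NAMESIZE];
            uint16_t port;

            RTE_ETH_FOREACH_DEV(port) {
                    /* Each pool gets a unique name derived from the port id. */
                    snprintf(name, sizeof(name), "pipeline-%u", port);
                    pool[port] = rte_pktmbuf_pool_create(name, pool_sz,
                                                         0, /* cache size */
                                                         0, /* priv size */
                                                         mbuf_sz,
                                                         rte_socket_id());
                    if (pool[port] == NULL)
                            return -ENOMEM;
            }

            return 0;
    }
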
 app/test-eventdev/evt_common.h           |  1 +
 app/test-eventdev/evt_options.c          | 10 +++++
 app/test-eventdev/evt_options.h          |  1 +
 app/test-eventdev/test_pipeline_common.c | 52 +++++++++++++++++-------
 app/test-eventdev/test_pipeline_common.h |  2 +-
 doc/guides/tools/testeventdev.rst        |  8 ++++
 6 files changed, 59 insertions(+), 15 deletions(-)

diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index 0e228258e..28afb114b 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -55,6 +55,7 @@ struct evt_options {
 	uint8_t timdev_cnt;
 	uint8_t nb_timer_adptrs;
 	uint8_t timdev_use_burst;
+	uint8_t per_port_pool;
 	uint8_t sched_type_list[EVT_MAX_STAGES];
 	uint16_t mbuf_sz;
 	uint16_t wkr_deq_dep;
diff --git a/app/test-eventdev/evt_options.c b/app/test-eventdev/evt_options.c
index 061b63e12..b0bcbc6c9 100644
--- a/app/test-eventdev/evt_options.c
+++ b/app/test-eventdev/evt_options.c
@@ -297,6 +297,13 @@ evt_parse_eth_queues(struct evt_options *opt, const char *arg)
 	return ret;
 }
 
+static int
+evt_parse_per_port_pool(struct evt_options *opt, const char *arg __rte_unused)
+{
+	opt->per_port_pool = 1;
+	return 0;
+}
+
 static void
 usage(char *program)
 {
@@ -333,6 +340,7 @@ usage(char *program)
 		"\t--enable_vector    : enable event vectorization.\n"
 		"\t--vector_size      : Max vector size.\n"
 		"\t--vector_tmo_ns    : Max vector timeout in nanoseconds\n"
+		"\t--per_port_pool    : Configure unique pool per ethdev port\n"
 		);
 	printf("available tests:\n");
 	evt_test_dump_names();
@@ -408,6 +416,7 @@ static struct option lgopts[] = {
 	{ EVT_ENA_VECTOR,          0, 0, 0 },
 	{ EVT_VECTOR_SZ,           1, 0, 0 },
 	{ EVT_VECTOR_TMO,          1, 0, 0 },
+	{ EVT_PER_PORT_POOL,       0, 0, 0 },
 	{ EVT_HELP,                0, 0, 0 },
 	{ NULL,                    0, 0, 0 }
 };
@@ -446,6 +455,7 @@ evt_opts_parse_long(int opt_idx, struct evt_options *opt)
 		{ EVT_ENA_VECTOR, evt_parse_ena_vector},
 		{ EVT_VECTOR_SZ, evt_parse_vector_size},
 		{ EVT_VECTOR_TMO, evt_parse_vector_tmo_ns},
+		{ EVT_PER_PORT_POOL, evt_parse_per_port_pool},
 	};
 
 	for (i = 0; i < RTE_DIM(parsermap); i++) {
diff --git a/app/test-eventdev/evt_options.h b/app/test-eventdev/evt_options.h
index 1cea2a3e1..6436200b4 100644
--- a/app/test-eventdev/evt_options.h
+++ b/app/test-eventdev/evt_options.h
@@ -46,6 +46,7 @@
 #define EVT_ENA_VECTOR           ("enable_vector")
 #define EVT_VECTOR_SZ            ("vector_size")
 #define EVT_VECTOR_TMO           ("vector_tmo_ns")
+#define EVT_PER_PORT_POOL        ("per_port_pool")
 #define EVT_HELP                 ("help")
 
 void evt_options_default(struct evt_options *opt);
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index d5ef90500..6ee530d4c 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -259,9 +259,10 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 		}
 
 		for (j = 0; j < opt->eth_queues; j++) {
-			if (rte_eth_rx_queue_setup(i, j, NB_RX_DESC,
-					rte_socket_id(), &rx_conf,
-					t->pool) < 0) {
+			if (rte_eth_rx_queue_setup(
+				    i, j, NB_RX_DESC, rte_socket_id(), &rx_conf,
+				    opt->per_port_pool ? t->pool[i] :
+							 t->pool[0]) < 0) {
 				evt_err("Failed to setup eth port [%d] rx_queue: %d.",
 					i, 0);
 				return -EINVAL;
@@ -569,18 +570,35 @@ pipeline_mempool_setup(struct evt_test *test, struct evt_options *opt)
 			if (data_size > opt->mbuf_sz)
 				opt->mbuf_sz = data_size;
 		}
+		if (opt->per_port_pool) {
+			char name[RTE_MEMPOOL_NAMESIZE];
+
+			snprintf(name, RTE_MEMPOOL_NAMESIZE, "%s-%d",
+				 test->name, i);
+			t->pool[i] = rte_pktmbuf_pool_create(
+				name,	      /* mempool name */
+				opt->pool_sz, /* number of elements*/
+				0,	      /* cache size*/
+				0, opt->mbuf_sz, opt->socket_id); /* flags */
+
+			if (t->pool[i] == NULL) {
+				evt_err("failed to create mempool %s", name);
+				return -ENOMEM;
+			}
+		}
 	}
 
-	t->pool = rte_pktmbuf_pool_create(test->name, /* mempool name */
+	if (!opt->per_port_pool) {
+		t->pool[0] = rte_pktmbuf_pool_create(
+			test->name,   /* mempool name */
 			opt->pool_sz, /* number of elements*/
-			512, /* cache size*/
-			0,
-			opt->mbuf_sz,
-			opt->socket_id); /* flags */
-
-	if (t->pool == NULL) {
-		evt_err("failed to create mempool");
-		return -ENOMEM;
+			0,	      /* cache size*/
+			0, opt->mbuf_sz, opt->socket_id); /* flags */
+
+		if (t->pool[0] == NULL) {
+			evt_err("failed to create mempool");
+			return -ENOMEM;
+		}
 	}
 
 	return 0;
@@ -589,10 +607,16 @@ pipeline_mempool_setup(struct evt_test *test, struct evt_options *opt)
 void
 pipeline_mempool_destroy(struct evt_test *test, struct evt_options *opt)
 {
-	RTE_SET_USED(opt);
 	struct test_pipeline *t = evt_test_priv(test);
+	int i;
 
-	rte_mempool_free(t->pool);
+	RTE_SET_USED(opt);
+	if (opt->per_port_pool) {
+		RTE_ETH_FOREACH_DEV(i)
+			rte_mempool_free(t->pool[i]);
+	} else {
+		rte_mempool_free(t->pool[0]);
+	}
 }
 
 int
diff --git a/app/test-eventdev/test_pipeline_common.h b/app/test-eventdev/test_pipeline_common.h
index 800a90616..d69e2f8a3 100644
--- a/app/test-eventdev/test_pipeline_common.h
+++ b/app/test-eventdev/test_pipeline_common.h
@@ -47,7 +47,7 @@ struct test_pipeline {
 	enum evt_test_result result;
 	uint32_t nb_flows;
 	uint64_t outstand_pkts;
-	struct rte_mempool *pool;
+	struct rte_mempool *pool[RTE_MAX_ETHPORTS];
 	struct worker_data worker[EVT_MAX_PORTS];
 	struct evt_options *opt;
 	uint8_t sched_type_list[EVT_MAX_STAGES] __rte_cache_aligned;
diff --git a/doc/guides/tools/testeventdev.rst b/doc/guides/tools/testeventdev.rst
index f252dc2c1..b81340471 100644
--- a/doc/guides/tools/testeventdev.rst
+++ b/doc/guides/tools/testeventdev.rst
@@ -178,6 +178,12 @@ The following are the application command-line options:
         Vector timeout nanoseconds to be configured for the Rx adapter.
         Only applicable for `pipeline_atq` and `pipeline_queue` tests.
 
+* ``--per_port_pool``
+
+        Configure a unique mempool per ethernet device; the size of each pool
+        is equal to `pool_sz`.
+        Only applicable for `pipeline_atq` and `pipeline_queue` tests.
+
 Eventdev Tests
 --------------
 
@@ -631,6 +637,7 @@ Supported application command line options are following::
         --enable_vector
         --vector_size
         --vector_tmo_ns
+        --per_port_pool
 
 .. Note::
 
@@ -734,6 +741,7 @@ Supported application command line options are following::
         --enable_vector
         --vector_size
         --vector_tmo_ns
+        --per_port_pool
 
 .. Note::

-- 
2.17.1
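
As a usage sketch (not part of the patch), the new flag would be passed
alongside the existing pipeline options, for example:

    sudo <build_dir>/app/dpdk-test-eventdev -l 0-3 -a <EVENTDEV> -a <ETHDEV> -- \
            --test=pipeline_queue --wlcores=2,3 --prod_type_ethdev \
            --stlist=a --per_port_pool

Here the build directory and device identifiers are placeholders, and the
other options are the existing ones documented in testeventdev.rst. With
--per_port_pool set, each probed ethdev port gets its own mempool of
pool_sz elements instead of sharing a single pool.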