From mboxrd@z Thu Jan 1 00:00:00 1970
From: Zhiyong Yang
To: dev@dpdk.org
Cc: thomas@monjalon.net, ferruh.yigit@intel.com, keith.wiles@intel.com,
 stephen@networkplumber.org, Zhiyong Yang
Date: Mon, 4 Sep 2017 13:57:34 +0800
Message-Id: <20170904055734.21354-5-zhiyong.yang@intel.com>
X-Mailer: git-send-email 2.13.3
In-Reply-To: <20170904055734.21354-1-zhiyong.yang@intel.com>
References: <20170809084203.17562-1-zhiyong.yang@intel.com>
 <20170904055734.21354-1-zhiyong.yang@intel.com>
Subject: [dpdk-dev] [PATCH v2 4/4] testpmd: add flexibility to mbuf allocation
List-Id: DPDK patches and discussions

The current mechanism of allocating mbufs depends on RTE_MAX_ETHPORTS,
which is hardcoded at compile time. Once users need a large number of
ports, mempool creation can easily fail for lack of enough hugepage
memory.

This patch introduces a policy to limit the maximum memory size: first
try to allocate a mempool sized for nb_mbuf_per_pool * nb_ports mbufs;
if this fails for lack of enough hugepages, keep halving the size until
the allocation succeeds or the minimum memory requirement is reached.
The policy follows OvS's solution.

Signed-off-by: Zhiyong Yang
---
 app/test-pmd/testpmd.c | 71 ++++++++++++++++++++++++++++++++++++++------------
 app/test-pmd/testpmd.h |  3 +++
 2 files changed, 58 insertions(+), 16 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index ed42a4c08..fb01f28ae 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -482,13 +482,14 @@ set_def_fwd_config(void)
 /*
  * Configuration initialisation done once at init time.
  */
-static void
+static int
 mbuf_pool_create(uint16_t mbuf_seg_size, unsigned nb_mbuf,
 		 unsigned int socket_id)
 {
 	char pool_name[RTE_MEMPOOL_NAMESIZE];
 	struct rte_mempool *rte_mp = NULL;
 	uint32_t mb_size;
+	int ret = 0;
 
 	mb_size = sizeof(struct rte_mbuf) + mbuf_seg_size;
 	mbuf_poolname_build(socket_id, pool_name, sizeof(pool_name));
@@ -531,13 +532,12 @@ mbuf_pool_create(uint16_t mbuf_seg_size, unsigned nb_mbuf,
 	}
 
 err:
-	if (rte_mp == NULL) {
-		rte_exit(EXIT_FAILURE,
-			"Creation of mbuf pool for socket %u failed: %s\n",
-			socket_id, rte_strerror(rte_errno));
-	} else if (verbose_level > 0) {
+	if (rte_mp == NULL)
+		ret = -1;
+	else if (verbose_level > 0)
 		rte_mempool_dump(stdout, rte_mp);
-	}
+
+	return ret;
 }
 
 /*
@@ -570,6 +570,8 @@ init_config(void)
 	unsigned int nb_mbuf_per_pool;
 	lcoreid_t lc_id;
 	uint8_t port_per_socket[RTE_MAX_NUMA_NODES];
+	uint16_t num_ports;
+	int32_t ret = -1;
 
 	memset(port_per_socket,0,RTE_MAX_NUMA_NODES);
 
@@ -632,24 +634,61 @@ init_config(void)
 	if (param_total_num_mbufs)
 		nb_mbuf_per_pool = param_total_num_mbufs;
 	else {
+		num_ports = RTE_MIN(MAX_MULTIPLIER_POOL, RTE_MAX_ETHPORTS);
+		num_ports = RTE_MAX(MIN_MULTIPLIER_POOL, num_ports);
 		nb_mbuf_per_pool = RTE_TEST_RX_DESC_MAX +
 			(nb_lcores * mb_mempool_cache) + RTE_TEST_TX_DESC_MAX +
 			MAX_PKT_BURST;
-		nb_mbuf_per_pool *= RTE_MAX_ETHPORTS;
+		nb_mbuf_per_pool *= num_ports;
 	}
 
+	/* Try to allocate a mempool with nb_mbuf_per_pool * num_ports mbufs.
+	 * If this fails for lack of enough hugepages, keep halving the
+	 * number until the allocation succeeds or the minimum memory
+	 * requirement is reached.
+	 */
+
 	if (numa_support) {
 		uint8_t i;
+		unsigned int nb_mbuf = nb_mbuf_per_pool;
+		uint16_t nb_ports = num_ports;
+
+		for (i = 0; i < num_sockets; i++) {
+			nb_mbuf_per_pool = nb_mbuf;
+			num_ports = nb_ports;
+			do {
+				ret = mbuf_pool_create(mbuf_data_size,
+						       nb_mbuf_per_pool,
+						       socket_ids[i]);
+				nb_mbuf_per_pool /= 2;
+				num_ports /= 2;
+			} while (ret < 0 && num_ports > MIN_MULTIPLIER_POOL);
+
+			if (ret < 0)
+				rte_exit(EXIT_FAILURE,
+					"Creation of socket %u mbuf pool failed: %s\n",
+					socket_ids[i],
+					rte_strerror(rte_errno));
+		}
 
-		for (i = 0; i < num_sockets; i++)
-			mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool,
-					 socket_ids[i]);
 	} else {
-		if (socket_num == UMA_NO_CONFIG)
-			mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool, 0);
-		else
-			mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool,
-						 socket_num);
+
+		do {
+			if (socket_num == UMA_NO_CONFIG)
+				ret = mbuf_pool_create(mbuf_data_size,
+						       nb_mbuf_per_pool, 0);
+			else
+				ret = mbuf_pool_create(mbuf_data_size,
+						       nb_mbuf_per_pool,
+						       socket_num);
+			nb_mbuf_per_pool /= 2;
+			num_ports /= 2;
+		} while (ret < 0 && num_ports > MIN_MULTIPLIER_POOL);
+
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				"Creation of mbuf pool for socket %u failed: %s\n",
+				socket_num, rte_strerror(rte_errno));
	}
 
 	init_port_config();
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index e00d9eb2d..50cb9c78c 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -90,6 +90,9 @@ enum {
 	PORT_TOPOLOGY_LOOP,
 };
 
+#define MAX_MULTIPLIER_POOL 64
+#define MIN_MULTIPLIER_POOL 4
+
 #ifdef RTE_TEST_PMD_RECORD_BURST_STATS
 /**
  * The data structure associated with RX and TX packet burst statistics
-- 
2.13.3
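
For reference, the halving policy can be sketched in isolation against the
public rte_pktmbuf_pool_create() API rather than testpmd's internal
mbuf_pool_create(); the helper name try_scaled_pool_create() and the
MIN_PORTS floor below are illustrative names for this sketch, not part of
the patch:

#include <stdio.h>

#include <rte_errno.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define MIN_PORTS 4	/* mirrors MIN_MULTIPLIER_POOL in the patch */

/*
 * Minimal sketch of the halving policy, assuming rte_eal_init() has
 * already run: request mbufs for nb_ports ports and halve the request
 * on every failure until creation succeeds or the port multiplier
 * drops to the MIN_PORTS floor.
 */
static struct rte_mempool *
try_scaled_pool_create(unsigned int nb_mbuf_per_port, unsigned int nb_ports,
		       unsigned int cache_size, uint16_t seg_size,
		       int socket_id)
{
	unsigned int nb_mbuf = nb_mbuf_per_port * nb_ports;
	struct rte_mempool *mp;

	do {
		mp = rte_pktmbuf_pool_create("sketch_pool", nb_mbuf,
					     cache_size, 0, seg_size,
					     socket_id);
		if (mp != NULL)
			return mp;
		printf("pool of %u mbufs failed: %s, halving\n",
		       nb_mbuf, rte_strerror(rte_errno));
		/* Halve the total request and the port counter that
		 * bounds the retries, as the patch does in init_config().
		 */
		nb_mbuf /= 2;
		nb_ports /= 2;
	} while (nb_ports > MIN_PORTS);

	return NULL;	/* caller decides whether to rte_exit() */
}

As in init_config() above, nb_ports serves only as the retry counter: the
total request nb_mbuf is what actually halves each iteration, so the floor
bounds the number of halvings rather than a literal port count.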