From mboxrd@z Thu Jan  1 00:00:00 1970
From: Phil Yang <phil.yang@arm.com>
To: dev@dpdk.org
Cc: nd@arm.com, gavin.hu@arm.com
Date: Mon, 27 Aug 2018 17:33:18 +0800
Message-Id: <1535362398-6526-1-git-send-email-phil.yang@arm.com>
X-Mailer: git-send-email 2.7.4
Subject: [dpdk-dev] [PATCH] app/testpmd: optimize mbuf pool allocation

By default, testpmd creates an mbuf pool for every NUMA node and
ignores the EAL core mask/core list configuration, because each
lcore's socket is recorded before the lcore-enabled check.

Count the available NUMA nodes according to the EAL core mask or
core list configuration, and create mbuf pools only for those nodes.

Fixes: d5aeab6542f ("app/testpmd: fix mempool creation by socket id")

Signed-off-by: Phil Yang <phil.yang@arm.com>
---
 app/test-pmd/testpmd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index ee48db2..a56af2b 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -476,6 +476,8 @@ set_default_fwd_lcores_config(void)
 
 	nb_lc = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (!rte_lcore_is_enabled(i))
+			continue;
 		sock_num = rte_lcore_to_socket_id(i);
 		if (new_socket_id(sock_num)) {
 			if (num_sockets >= RTE_MAX_NUMA_NODES) {
@@ -485,8 +487,6 @@ set_default_fwd_lcores_config(void)
 			}
 			socket_ids[num_sockets++] = sock_num;
 		}
-		if (!rte_lcore_is_enabled(i))
-			continue;
 		if (i == rte_get_master_lcore())
 			continue;
 		fwd_lcores_cpuids[nb_lc++] = i;
--
2.7.4
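
For reviewers, a minimal standalone sketch of the enumeration order this
patch establishes. It assumes only the public rte_lcore_* API;
count_enabled_sockets() is a hypothetical name used for illustration,
while new_socket_id(), socket_ids[] and num_sockets mirror the testpmd
symbols visible in the hunk context:

#include <rte_lcore.h>

static unsigned int socket_ids[RTE_MAX_NUMA_NODES];
static unsigned int num_sockets;

/* Return 1 if socket_id has not been seen yet; mirrors the
 * new_socket_id() helper referenced in the hunk context. */
static int
new_socket_id(unsigned int socket_id)
{
	unsigned int i;

	for (i = 0; i < num_sockets; i++)
		if (socket_ids[i] == socket_id)
			return 0;
	return 1;
}

/* Hypothetical helper: record the socket of each *enabled* lcore only.
 * The rte_lcore_is_enabled() check runs before the socket is recorded,
 * which is exactly the reordering the hunk above performs. */
static void
count_enabled_sockets(void)
{
	unsigned int i;
	unsigned int sock_num;

	num_sockets = 0;
	for (i = 0; i < RTE_MAX_LCORE; i++) {
		if (!rte_lcore_is_enabled(i))
			continue;
		sock_num = rte_lcore_to_socket_id(i);
		if (new_socket_id(sock_num) &&
				num_sockets < RTE_MAX_NUMA_NODES)
			socket_ids[num_sockets++] = sock_num;
	}
}

With this ordering, running testpmd with e.g. -l 0-3 on a two-socket
machine whose cores 0-3 all live on socket 0 leaves socket 1 out of
socket_ids[], so no mbuf pool is allocated there; before the patch,
every detected socket got a pool regardless of the core list.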