From: Phil Yang <phil.yang@arm.com>
To: dev@dpdk.org
Cc: bernard.iremonger@intel.com, gavin.hu@arm.com, stable@dpdk.org,
	phil.yang@arm.com
Date: Wed, 12 Sep 2018 09:54:26 +0800
Message-Id: <1536717266-6363-1-git-send-email-phil.yang@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1535362398-6526-1-git-send-email-phil.yang@arm.com>
References: <1535362398-6526-1-git-send-email-phil.yang@arm.com>
Subject: [dpdk-stable] [PATCH v2] app/testpmd: optimize mbuf pool allocation

By default, testpmd creates an mbuf pool for every NUMA node in the
system and ignores the EAL configuration. Count the NUMA nodes that are
actually available according to the EAL core mask or core list
configuration, and create mbuf pools only for those nodes.

Fixes: c9cafcc ("app/testpmd: fix mempool creation by socket id")

Signed-off-by: Phil Yang <phil.yang@arm.com>
Acked-by: Gavin Hu <gavin.hu@arm.com>
---
 app/test-pmd/testpmd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index ee48db2..a56af2b 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -476,6 +476,8 @@ set_default_fwd_lcores_config(void)
 
 	nb_lc = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (!rte_lcore_is_enabled(i))
+			continue;
 		sock_num = rte_lcore_to_socket_id(i);
 		if (new_socket_id(sock_num)) {
 			if (num_sockets >= RTE_MAX_NUMA_NODES) {
@@ -485,8 +487,6 @@ set_default_fwd_lcores_config(void)
 			}
 			socket_ids[num_sockets++] = sock_num;
 		}
-		if (!rte_lcore_is_enabled(i))
-			continue;
 		if (i == rte_get_master_lcore())
 			continue;
 		fwd_lcores_cpuids[nb_lc++] = i;
-- 
2.7.4
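
For reference, the effect of hoisting the rte_lcore_is_enabled() check
can be reproduced with a small standalone program. The sketch below is
not testpmd code: MAX_LCORE, MAX_NUMA_NODES, lcore_enabled[] and
lcore_socket[] are hypothetical stand-ins for RTE_MAX_LCORE,
RTE_MAX_NUMA_NODES, rte_lcore_is_enabled() and
rte_lcore_to_socket_id(), modelling a two-socket machine launched with
-l 0-1.

#include <stdbool.h>
#include <stdio.h>

#define MAX_LCORE 8		/* stand-in for RTE_MAX_LCORE */
#define MAX_NUMA_NODES 4	/* stand-in for RTE_MAX_NUMA_NODES */

/* Hypothetical EAL state for a two-socket box started with -l 0-1:
 * lcores 0-3 sit on socket 0, lcores 4-7 on socket 1, and only
 * lcores 0 and 1 are enabled.
 */
static const bool lcore_enabled[MAX_LCORE] = {
	true, true, false, false, false, false, false, false
};
static const unsigned int lcore_socket[MAX_LCORE] = {
	0, 0, 0, 0, 1, 1, 1, 1
};

/* Count distinct sockets the way set_default_fwd_lcores_config() does.
 * skip_disabled selects the v2 ordering, where a disabled lcore is
 * skipped before its socket is recorded.
 */
static unsigned int
count_sockets(bool skip_disabled)
{
	unsigned int sockets[MAX_NUMA_NODES];
	unsigned int num_sockets = 0;
	unsigned int i, s;

	for (i = 0; i < MAX_LCORE; i++) {
		if (skip_disabled && !lcore_enabled[i])
			continue;

		unsigned int sock = lcore_socket[i];
		bool seen = false;

		for (s = 0; s < num_sockets; s++)
			if (sockets[s] == sock)
				seen = true;
		if (!seen && num_sockets < MAX_NUMA_NODES)
			sockets[num_sockets++] = sock;
	}
	return num_sockets;
}

int
main(void)
{
	/* Before the patch every detected socket got an mbuf pool;
	 * after it, only sockets with enabled lcores do.
	 */
	printf("mbuf pools before the patch: %u\n", count_sockets(false));
	printf("mbuf pools after the patch:  %u\n", count_sockets(true));
	return 0;
}

Run as-is, this prints 2 pools for the pre-patch ordering and 1 for the
patched one: socket 1 is detected but has no enabled lcore, so the
patched loop never records it and testpmd allocates no pool there.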