From: Sergio Gonzalez Monroy
To: Pavan Nikhilesh Bhagavatula, 王志克
Cc: dev@dpdk.org
Date: Wed, 6 Sep 2017 08:37:19 +0100
Subject: Re: [dpdk-dev] long initialization of rte_eal_hugepage_init

On 06/09/2017 05:35, Pavan Nikhilesh Bhagavatula wrote:
> On Wed, Sep 06, 2017 at 03:24:52AM +0000, 王志克 wrote:
>> Hi All,
>>
>> I observed that rte_eal_hugepage_init() will take quite a long time if there are lots of huge pages. For example, I have 500 1G huge pages and it takes about 2 minutes. That is too long, especially for the application restart case.
>>
>> If the application only needs a limited number of huge pages while the host has lots of them, the algorithm is not very efficient. For example, we only need 1G of memory from each socket.
>>
> There is an EAL option, --socket-mem, which can be used to limit the memory acquired from each socket.
>
>> What is the proposal from the DPDK community? Any solution?
>>
>> Note I tried DPDK version 16.11.
>>
>> Br,
>> Wang Zhike
> -Pavan

Since DPDK 17.08 we use libnuma to first get the number of pages we need from each socket, then as many more as we can. So you can set up your huge page mount point or cgroups to limit the number of pages you can get.

So basically:
1. set up a mount quota or cgroup limit
2. use the --socket-mem option to limit the amount per socket
(see the P.S. below for a quick example)

Note that pre-17.08 we did not have libnuma support, so if you have a low quota/limit and need memory from both sockets, the allocation would likely fail.

Thanks,
Sergio
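
P.S. A quick sketch of what I mean above; the mount point and sizes are just examples (assuming 1G pages and a two-socket system), so adjust them to your setup:

    # cap the hugetlbfs mount at 2G worth of 1G pages
    mount -t hugetlbfs nodev /mnt/huge_1g -o pagesize=1G,size=2G

    # request 1G per socket from EAL, using that mount
    ./your_app --huge-dir /mnt/huge_1g --socket-mem 1024,1024

With the mount capped at 2G, EAL can only map two 1G pages instead of walking all 500, so rte_eal_hugepage_init() should return much faster.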