From: Sergio Gonzalez Monroy
To: Ilya Maximets, dev@dpdk.org, David Marchand
Cc: Heetae Ahn, Yuanhan Liu, Jianfeng Tan, Neil Horman, Yulong Pei,
 stable@dpdk.org, Thomas Monjalon, Bruce Richardson
Date: Wed, 8 Mar 2017 13:46:26 +0000
Subject: Re: [dpdk-stable] [PATCH] mem: balanced allocation of hugepages
In-Reply-To: <50517d4c-5174-f4b2-e77e-143f7aac2c00@samsung.com>
References: <1487250070-13973-1-git-send-email-i.maximets@samsung.com>
 <50517d4c-5174-f4b2-e77e-143f7aac2c00@samsung.com>

Hi Ilya,

I have done similar tests and, as you already pointed out, 'numactl
--interleave' does not seem to work as expected. I have also checked that
the issue can be reproduced with a quota limit on the hugetlbfs mount
point.

I would be inclined towards *adding libnuma as a dependency* to DPDK to
make memory allocation a bit more reliable.

Currently, at a high level, hugepage allocation per NUMA node works as
follows:
1) Try to map all free hugepages. The total number of mapped hugepages
   depends on whether there are any limits, such as cgroups or a quota
   on the mount point.
2) Find out the NUMA node of each hugepage.
3) Check if we have enough hugepages for the requested memory on each
   NUMA socket/node.

Using libnuma we could instead try to allocate hugepages per NUMA node:
1) Try to map as many hugepages as possible from NUMA node 0.
2) Check if we have enough hugepages for the requested memory on NUMA
   node 0.
3) Try to map as many hugepages as possible from NUMA node 1.
4) Check if we have enough hugepages for the requested memory on NUMA
   node 1.
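In rough C terms, that per-node mapping step could look something like the
sketch below. This is only an illustration of the idea, not EAL code:
map_hugepages_on_node(), the rtemap file naming and the error handling are
made up for the example; only the libnuma calls (numa_available(),
numa_set_preferred(), numa_set_localalloc()) are the real API.

/* Sketch: map up to 'needed' hugepages, preferring pages from 'node'.
 * Hypothetical helper for illustration only; not the actual EAL code. */
#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <numa.h>   /* numa_available(), numa_set_preferred() */

#define HUGEPAGE_SZ (1ULL << 30)        /* 1 GB pages for the example */

static int
map_hugepages_on_node(const char *hugedir, int node, int needed, void **va)
{
    int mapped = 0;

    if (numa_available() < 0)
        return -1;

    /* Ask the kernel to satisfy the following faults from 'node'
     * while it still has free hugepages there. */
    numa_set_preferred(node);

    for (int i = 0; i < needed; i++) {
        char path[512];

        snprintf(path, sizeof(path), "%s/rtemap_node%d_%d", hugedir, node, i);
        int fd = open(path, O_CREAT | O_RDWR, 0600);
        if (fd < 0)
            break;

        /* MAP_POPULATE faults the page in now, under the preferred policy. */
        void *addr = mmap(NULL, HUGEPAGE_SZ, PROT_READ | PROT_WRITE,
                          MAP_SHARED | MAP_POPULATE, fd, 0);
        close(fd);
        if (addr == MAP_FAILED) {
            unlink(path);
            break;   /* out of hugepages (pool, quota or cgroup limit) */
        }
        va[mapped++] = addr;
    }

    /* Back to the default policy before moving on to the next node. */
    numa_set_localalloc();
    return mapped;
}

The caller would then compare the return value against the number of pages
needed to satisfy --socket-mem for that node before repeating the same
loop for node 1.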
This approach would improve the failing scenarios caused by limits, but it
would still not fix the issues regarding non-contiguous hugepages (worst
case: each hugepage is a separate memseg). The non-contiguous hugepage
issue is not as critical now that mempools can span multiple
memsegs/hugepages, but it is still a problem for any other library
requiring big chunks of contiguous memory.

Potentially, if we were to add an option such as 'iommu-only' for when all
devices are bound to vfio-pci, we could have a reliable way to allocate
hugepages by just requesting the number of pages from each NUMA node.

Thoughts?

Sergio

On 06/03/2017 09:34, Ilya Maximets wrote:
> Hi all.
>
> So, what about this change?
>
> Best regards, Ilya Maximets.
>
> On 16.02.2017 16:01, Ilya Maximets wrote:
>> Currently EAL allocates hugepages one by one, not paying
>> attention to the NUMA node the allocation was made from.
>>
>> Such behaviour leads to allocation failures if the number of
>> hugepages available to the application is limited by cgroups
>> or hugetlbfs and memory is requested not only from the first
>> socket.
>>
>> Example:
>> # 90 x 1GB hugepages available in the system
>>
>> cgcreate -g hugetlb:/test
>> # Limit to 32GB of hugepages
>> cgset -r hugetlb.1GB.limit_in_bytes=34359738368 test
>> # Request 4GB from each of 2 sockets
>> cgexec -g hugetlb:test testpmd --socket-mem=4096,4096 ...
>>
>> EAL: SIGBUS: Cannot mmap more hugepages of size 1024 MB
>> EAL: 32 not 90 hugepages of size 1024 MB allocated
>> EAL: Not enough memory available on socket 1!
>>      Requested: 4096MB, available: 0MB
>> PANIC in rte_eal_init():
>> Cannot init memory
>>
>> This happens because all allocated pages are
>> on socket 0.
>>
>> Fix this issue by setting the mempolicy MPOL_PREFERRED for each
>> hugepage to one of the requested nodes in a round-robin fashion.
>> In this case all allocated pages will be fairly distributed
>> between all requested nodes.
>>
>> A new config option, RTE_LIBRTE_EAL_NUMA_AWARE_HUGEPAGES, is
>> introduced and disabled by default because of the external
>> dependency on libnuma.
>>
>> Cc:
>> Fixes: 77988fc08dc5 ("mem: fix allocating all free hugepages")
>>
>> Signed-off-by: Ilya Maximets
>> ---
>>  config/common_base                       |  1 +
>>  lib/librte_eal/Makefile                  |  4 ++
>>  lib/librte_eal/linuxapp/eal/eal_memory.c | 66 ++++++++++++++++++++++++++++++++
>>  mk/rte.app.mk                            |  3 ++
>>  4 files changed, 74 insertions(+)
>>
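For reference, the round-robin MPOL_PREFERRED scheme described in the
commit message above boils down to something along these lines (a rough,
untested sketch; the helper name prefer_node_for_page() and the
requested_nodes array are illustrative, not taken from the patch):

#include <numaif.h>   /* set_mempolicy(), MPOL_PREFERRED */

/* Before mapping hugepage number 'page_idx', prefer the next requested
 * node in round-robin order so the pages end up fairly distributed. */
static void
prefer_node_for_page(const int *requested_nodes, int nb_nodes, int page_idx)
{
    int node = requested_nodes[page_idx % nb_nodes];
    unsigned long nodemask = 1UL << node;

    /* Subsequent page faults will preferably be served from 'node',
     * falling back to other nodes only if it runs out of hugepages. */
    set_mempolicy(MPOL_PREFERRED, &nodemask, sizeof(nodemask) * 8);
}

Since MPOL_PREFERRED still falls back to other nodes once the preferred
one runs out of hugepages, the existing step of finding out the NUMA node
of each mapped page afterwards remains necessary.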