Subject: Re: [dpdk-dev] CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES: no difference in memory pool allocations, when enabling/disabling this configuration
From: "Burakov, Anatoly"
To: Asaf Sinai, "dev@dpdk.org", Ilya Maximets, Thomas Monjalon
Date: Mon, 26 Nov 2018 12:50:41 +0000
Message-ID: <12283bd1-ea0d-38d1-f64d-508596e48cd9@intel.com>
In-Reply-To: <518f9333-8d80-0fa2-d391-b4c8df181508@intel.com>

On 26-Nov-18 11:43 AM, Burakov, Anatoly wrote:
> On 26-Nov-18 11:33 AM, Asaf Sinai wrote:
>> Hi Anatoly,
>>
>> We did not check it with "testpmd", only with our application.
>> From the beginning, we did not enable this configuration (look at
>> attached files), and everything works fine.
>> Of course we rebuild DPDK, when we change configuration.
>> Please note that we use DPDK 17.11.3, maybe this is why it works fine?
>
> Just tested with DPDK 17.11, and yes, it does work the way you are
> describing. This is not intended behavior. I will look into it.
>
> +CC author of commit introducing CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES.

Looking at the code, I think this config option needs to be reworked,
and we should clarify what we mean by it. It appears that I had
misunderstood what this option is actually intended to do, and I also
think its naming could be improved, because it is confusing and
misleading.

In 17.11, this option does *not* prevent EAL from using NUMA - it
merely disables using libnuma to perform memory allocation. This looks
like intended (if counter-intuitive) behavior: disabling the option
simply reverts DPDK to working as it did before the option was
introduced, i.e. best-effort allocation. This is why your code still
works - EAL still allocates memory on socket 1, and *knows* that it is
socket 1 memory. NUMA is still supported.

The commit message for these changes states that the actual purpose of
the option is to enable "balanced" hugepage allocation. Previously,
under cgroups limits, DPDK would exhaust all hugepages on the master
core's socket before attempting to allocate from other sockets, so by
the time the cgroups limit on the number of hugepages was reached, we
might not have gotten to socket 1 at all, and thus missed out on pages
we could have allocated. Using libnuma solves this issue, because now
we can allocate pages on the sockets we want, instead of hoping we
won't run out of hugepages before we get the memory we need.
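For reference, this is roughly the mechanism libnuma gives us - a
minimal standalone sketch of the technique, not the actual EAL code
(the 2M page size and node numbers are just example assumptions): it
binds a hugepage mapping to a chosen node with mbind(), then asks the
kernel via get_mempolicy() which node the page actually landed on.

/*
 * Rough standalone sketch (not EAL code): reserve one hugepage on a
 * chosen NUMA node via mbind(), then ask the kernel which node the
 * page actually ended up on. Assumes 2M hugepages are reserved on the
 * system and that the target node exists. Build with: gcc ... -lnuma
 */
#define _GNU_SOURCE
#include <numaif.h>	/* mbind(), get_mempolicy(), MPOL_* */
#include <sys/mman.h>	/* mmap(), munmap(), MAP_HUGETLB */
#include <stdio.h>
#include <string.h>

#define HUGEPAGE_SZ (2UL * 1024 * 1024)	/* assuming 2M hugepages */

static void *alloc_hugepage_on_node(int node)
{
	unsigned long nodemask = 1UL << node;
	void *va;

	va = mmap(NULL, HUGEPAGE_SZ, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (va == MAP_FAILED)
		return NULL;

	/* bind the (not yet faulted) range to the requested node only */
	if (mbind(va, HUGEPAGE_SZ, MPOL_BIND, &nodemask,
			sizeof(nodemask) * 8, 0) < 0) {
		munmap(va, HUGEPAGE_SZ);
		return NULL;
	}

	/*
	 * Touch the page so it is actually allocated (the process gets
	 * SIGBUS if the target node has no free hugepages).
	 */
	memset(va, 0, HUGEPAGE_SZ);
	return va;
}

int main(void)
{
	void *va = alloc_hugepage_on_node(1);
	int node = -1;

	if (va == NULL) {
		perror("hugepage allocation failed");
		return 1;
	}
	/* verify which node the page *actually* came from */
	get_mempolicy(&node, NULL, 0, va, MPOL_F_NODE | MPOL_F_ADDR);
	printf("hugepage at %p resides on node %d\n", va, node);
	return 0;
}

Without the mbind() call, the memset() would typically fault the page
in on whatever node the calling thread happens to be running on, which
is exactly the best-effort behavior described above.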
From 18.05 onwards, this option works differently (and arguably
incorrectly). More specifically, it disallows allocations on sockets
other than 0, and it also makes EAL not check which socket the memory
*actually* came from. So not only is allocating memory from socket 1
disabled, but allocating from socket 0 may even get you memory from
socket 1!

+CC Thomas

The CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES option is a misnomer, because
it makes it seem like this option is what enables NUMA support, which
is not the case. I would also argue that it is not relevant to the
18.05+ memory subsystem and should only take effect in legacy mode,
because it is *impossible* to make it work right in the new memory
subsystem. Here is why:

Without libnuma, we have no way of "asking" the kernel to allocate a
hugepage on a specific socket - any allocation will most likely be
satisfied from the socket the allocating thread is running on. For
example, if the user program's lcore is on socket 1, an allocation
"for socket 0" will actually produce a page on socket 1. If we do not
check the page's NUMA node affinity (which is what currently happens),
we get performance degradation, because we may unintentionally end up
with memory on the wrong NUMA node. If we do check it, then allocating
memory for socket 1 from an lcore on socket 0 will almost never
succeed, because the kernel will keep giving us pages on socket 0.

Put simply, there is no sane way to make this option work with the new
memory subsystem - IMO it should be dropped, and libnuma should be made
a hard dependency on Linux.

-- 
Thanks,
Anatoly