From: "Burakov, Anatoly"
To: Asaf Sinai, dev@dpdk.org
Subject: Re: [dpdk-dev] CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES: no difference in memory pool allocations, when enabling/disabling this configuration
Date: Mon, 26 Nov 2018 11:09:58 +0000
Message-ID: <2b09cec8-0883-2ed2-0264-aeef871ea6a9@intel.com>

On 26-Nov-18 9:15 AM, Asaf Sinai wrote:
> Hi,
>
> We have two NUMA nodes in our system, and we try to allocate a single DPDK memory pool on each one.
> However, we see no difference when enabling or disabling the "CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES" configuration option.
> We expected that disabling it would allocate pools on only one NUMA node (probably NUMA 0), but pools are actually allocated on both nodes, according to the "socket_id" parameter passed to the "rte_mempool_create" API.
> We have 192GB of memory, so NUMA 1 memory starts at address 0x1800000000.
> As you can see below, "undDpdkPoolNameSocket_1" was indeed allocated on NUMA 1, as we wanted, even though "CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES" is disabled:
>
> CONFIG_RTE_LIBRTE_VHOST_NUMA=n
> CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
>
> created poolName=undDpdkPoolNameSocket_0, nbufs=887808, bufferSize=2432, total=2059MB
>   (memZone: name=MP_undDpdkPoolNameSocket_0, socket_id=0, vaddr=0x1f2c0427d00-0x1f2c05abe00, paddr=0x178e627d00-0x178e7abe00, len=1589504, hugepage_sz=2MB)
> created poolName=undDpdkPoolNameSocket_1, nbufs=887808, bufferSize=2432, total=2059MB
>   (memZone: name=MP_undDpdkPoolNameSocket_1, socket_id=1, vaddr=0x1f57fa7be40-0x1f57fbfff40, paddr=0x2f8247be40-0x2f825fff40, len=1589504, hugepage_sz=2MB)
>
> Does anyone know what the "CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES" configuration option is used for?
>
> Thanks,
> Asaf

Hi Asaf,

I cannot reproduce this behavior. I just tried running testpmd with DPDK 18.08 as well as the latest master [1], and DPDK could not successfully allocate a mempool on socket 1. Did you reconfigure and recompile DPDK after this config change?

[1] The latest master will crash on init in this configuration; fix: http://patches.dpdk.org/patch/48338/

--
Thanks,
Anatoly
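
For reference, here is a minimal sketch of the kind of per-socket mempool allocation discussed above. It reuses the buffer count and size from Asaf's log, but the helper name create_per_socket_pools, the cache size, and the use of rte_socket_count() to iterate over sockets are illustrative assumptions, not taken from the original code.

#include <stdio.h>
#include <rte_mempool.h>
#include <rte_lcore.h>   /* rte_socket_count() */

#define NB_BUFS    887808 /* values from the log above */
#define BUF_SIZE   2432
#define CACHE_SIZE 512    /* illustrative per-lcore cache size */

/* Create one mempool on each physical socket. The socket_id argument to
 * rte_mempool_create() selects the NUMA node the pool's memory is
 * reserved on. */
static int create_per_socket_pools(void)
{
    unsigned int socket;

    for (socket = 0; socket < rte_socket_count(); socket++) {
        char name[RTE_MEMPOOL_NAMESIZE];
        struct rte_mempool *mp;

        snprintf(name, sizeof(name), "undDpdkPoolNameSocket_%u", socket);

        mp = rte_mempool_create(name, NB_BUFS, BUF_SIZE,
                                CACHE_SIZE, 0,   /* cache, private data size */
                                NULL, NULL,      /* pool constructor + arg */
                                NULL, NULL,      /* object init + arg */
                                (int)socket, 0); /* socket id, flags */
        if (mp == NULL) {
            printf("mempool allocation on socket %u failed\n", socket);
            return -1;
        }
    }
    return 0;
}

In Anatoly's testpmd run with the option disabled, the allocation for socket 1 fails, so in a sketch like this the NULL check is where the difference between the two settings would show up.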