Hi Anatoly,

We did not check it with "testpmd", only with our application.
From the beginning, we did not enable this configuration (see the attached files), and everything worked fine.
Of course, we rebuild DPDK whenever we change the configuration.
Please note that we use DPDK 17.11.3; maybe that is why it works fine?

Thanks,
Asaf

-----Original Message-----
From: Burakov, Anatoly
Sent: Monday, November 26, 2018 01:10 PM
To: Asaf Sinai; dev@dpdk.org
Subject: Re: [dpdk-dev] CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES: no difference in memory pool allocations, when enabling/disabling this configuration

On 26-Nov-18 9:15 AM, Asaf Sinai wrote:
> Hi,
>
> We have 2 NUMA nodes in our system, and we try to allocate a single DPDK memory pool on each NUMA node.
> However, we see no difference when enabling/disabling the "CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES" configuration.
> We expected that disabling it would allocate pools only on one NUMA node (probably NUMA 0), but it actually allocates pools on both NUMA nodes, according to the "socket_id" parameter passed to the "rte_mempool_create" API.
> We have 192GB of memory, so NUMA 1 memory starts at address 0x1800000000.
> As you can see below, "undDpdkPoolNameSocket_1" was indeed allocated on NUMA 1, as we wanted, although "CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES" is disabled:
>
> CONFIG_RTE_LIBRTE_VHOST_NUMA=n
> CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
>
> created poolName=undDpdkPoolNameSocket_0, nbufs=887808, bufferSize=2432, total=2059MB
> (memZone: name=MP_undDpdkPoolNameSocket_0, socket_id=0, vaddr=0x1f2c0427d00-0x1f2c05abe00, paddr=0x178e627d00-0x178e7abe00, len=1589504, hugepage_sz=2MB)
> created poolName=undDpdkPoolNameSocket_1, nbufs=887808, bufferSize=2432, total=2059MB
> (memZone: name=MP_undDpdkPoolNameSocket_1, socket_id=1, vaddr=0x1f57fa7be40-0x1f57fbfff40, paddr=0x2f8247be40-0x2f825fff40, len=1589504, hugepage_sz=2MB)
>
> Does anyone know what the "CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES" configuration is used for?
>
> Thanks,
> Asaf
>

Hi Asaf,

I cannot reproduce this behavior. I just tried running testpmd with DPDK 18.08 as well as the latest master [1], and DPDK could not successfully allocate a mempool on socket 1. Did you reconfigure and recompile DPDK after this config change?

[1] Latest master will crash on init in this configuration; fix: http://patches.dpdk.org/patch/48338/

--
Thanks, Anatoly
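
P.S. For reference, a minimal sketch of the per-socket pool creation described above. This is a toy example, not our actual application code: the EAL arguments, cache/flags parameters, and error handling are simplified, and the buffer count and size are copied from the log output purely for illustration.

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_debug.h>
    #include <rte_errno.h>
    #include <rte_mempool.h>

    #define NB_SOCKETS 2      /* two NUMA nodes, as in the system above */
    #define NB_BUFS    887808 /* illustrative, taken from the log above */
    #define BUF_SIZE   2432   /* illustrative, taken from the log above */

    int main(int argc, char **argv)
    {
        struct rte_mempool *pools[NB_SOCKETS];
        char name[RTE_MEMPOOL_NAMESIZE];
        int socket;

        if (rte_eal_init(argc, argv) < 0)
            rte_panic("EAL init failed\n");

        for (socket = 0; socket < NB_SOCKETS; socket++) {
            snprintf(name, sizeof(name), "undDpdkPoolNameSocket_%d", socket);
            /* socket_id tells the allocator which NUMA node's
             * hugepages to back the pool with */
            pools[socket] = rte_mempool_create(name, NB_BUFS, BUF_SIZE,
                                               0,          /* cache_size */
                                               0,          /* private_data_size */
                                               NULL, NULL, /* mp_init, arg */
                                               NULL, NULL, /* obj_init, arg */
                                               socket,
                                               0);         /* flags */
            if (pools[socket] == NULL)
                printf("failed to create pool on socket %d: %s\n",
                       socket, rte_strerror(rte_errno));
        }
        return 0;
    }

With CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES enabled, EAL sorts hugepages by NUMA node at init, so a request for socket 1 should be satisfied from socket 1 memory; the question above is why the per-socket placement still appears to work with it disabled.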