From: Sergio Gonzalez Monroy
To: "Dorsett, Michal", users@dpdk.org
Date: Mon, 29 May 2017 11:34:50 +0100
Subject: Re: [dpdk-users] Allocating hugepages for all sockets on single numa

Hi Michal,

This looks very much like a VM configuration issue. Hopefully the following link contains all the information and guidance that you need:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/Virtualization_Tuning_and_Optimization_Guide/index.html

Thanks,
Sergio

On 28/05/2017 21:58, Dorsett, Michal wrote:
> Hi,
>
> I am running DPDK 2.0.0 on a RH 6.4 VM.
> I have 512 2 MB hugepages specified in my grub configuration file.
>
> When the EAL maps hugepages for use by my application, it creates them only for socket 0, which is not what I want; I would like to use sockets 1 and 2.
>
> I tried providing the --socket-mem parameter like so:
>
> --socket-mem=0,256,256
>
> but to no avail.
>
> I see that the mappings created for hugepages in /proc/self/maps are all N0=1, which explains why hugepage_info specifies socket_id = 0.
>
> This is what I get when I run numactl --hardware:
>
> available: 1 nodes (0)
> node 0 cpus: 0 1 2 3
> node 0 size: 8191 MB
> node 0 free: 5076 MB
> node distances:
> node   0
>   0:  10
>
> Here is my lscpu output:
>
> Architecture:          x86_64
> CPU op-mode(s):        32-bit, 64-bit
> Byte Order:            Little Endian
> CPU(s):                4
> On-line CPU(s) list:   0-3
> Thread(s) per core:    1
> Core(s) per socket:    1
> Socket(s):             4
> NUMA node(s):          1
> Vendor ID:             GenuineIntel
> CPU family:            6
> Model:                 37
> Stepping:              1
> CPU MHz:               2194.711
> BogoMIPS:              4389.42
> Hypervisor vendor:     VMware
> Virtualization type:   full
> L1d cache:             32K
> L1i cache:             32K
> L2 cache:              256K
> L3 cache:              20480K
> NUMA node0 CPU(s):     0-3
>
> I would like to understand why hugepage mapping happens only on socket 0, and how I can make it map for the other sockets as well.
>
> Your assistance is much appreciated.
>
> Thanks,
>
> Michal Dorsett
> Developer, Strategic IP Group
> Desk: +972 962 4350
> Mobile: +972 50 771 6689
> Verint Cyber Intelligence
> www.verint.com
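
Addendum: the numactl output above shows the guest exposes only a single NUMA node ("available: 1 nodes"), so the EAL can only place hugepages on socket 0 and --socket-mem=0,256,256 requests memory on nodes the guest does not have. The fix is at the hypervisor level: give the VM a virtual NUMA topology. As a minimal sketch for a KVM/libvirt guest (the lscpu output here shows VMware, where the equivalent is the vNUMA settings in the VM configuration), a two-node guest definition could look roughly like the following; the memory sizes, CPU ranges, and host node IDs are illustrative assumptions, not values taken from this thread:

    <cpu>
      <topology sockets='2' cores='2' threads='1'/>
      <numa>
        <cell id='0' cpus='0-1' memory='4194304' unit='KiB'/>  <!-- guest node 0 -->
        <cell id='1' cpus='2-3' memory='4194304' unit='KiB'/>  <!-- guest node 1 -->
      </numa>
    </cpu>
    <numatune>
      <!-- optionally pin each guest node to a host node -->
      <memnode cellid='0' mode='strict' nodeset='0'/>
      <memnode cellid='1' mode='strict' nodeset='1'/>
    </numatune>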
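
Once numactl --hardware in the guest reports more than one node, the hugepages reserved at boot are distributed across those nodes and --socket-mem takes one value (in MB) per guest node, counted from node 0; a value list of 0,256,256 therefore describes a three-node system. A minimal sketch for a two-node guest, using testpmd as a stand-in application (the core mask and sizes are placeholders):

    # verify the guest now shows two NUMA nodes
    numactl --hardware

    # reserve 256 MB of hugepage memory on each guest node (nodes 0 and 1)
    ./testpmd -c 0xf -n 4 --socket-mem=256,256 -- -i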