DPDK usage discussions
* [dpdk-users] Allocating hugepages for all sockets on single numa
@ 2017-05-28 20:58 Dorsett, Michal
  2017-05-29 10:34 ` Sergio Gonzalez Monroy
  0 siblings, 1 reply; 2+ messages in thread
From: Dorsett, Michal @ 2017-05-28 20:58 UTC (permalink / raw)
  To: users

Hi,

I am running DPDK 2.0.0 on a RHEL 6.4 VM.
I have 512 2MB hugepages specified in my grub configuration file.
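
For reference, the hugepage part of my kernel command line is essentially as below (the kernel version and root device are placeholders, and the rest of the kernel line is omitted):

  # /boot/grub/grub.conf (GRUB legacy on RHEL 6) - hugepage options only
  kernel /vmlinuz-<version> ro root=<root-device> default_hugepagesz=2M hugepagesz=2M hugepages=512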

When the EAL maps hugepages for use by my application, it assigns all of them to socket 0, which is not what I want. I would like to use sockets 1 and 2 as well.

I tried providing the --socket-mem parameter like so:

--socket-mem=0,256,256

but to no avail.
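
For context, the EAL part of the command line I use is along these lines (the binary name, core mask and channel count below are placeholders rather than my exact values):

  ./my_app -c 0xf -n 4 --socket-mem=0,256,256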

I see that the hugepage mappings in /proc/self/numa_maps are all marked N0=1, which explains why hugepage_info reports socket_id = 0 for every page.
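
This is roughly what the relevant entries look like (the addresses and the hugetlbfs mount point below are illustrative, not copied verbatim from my system):

  $ grep -i huge /proc/<pid>/numa_maps
  2aaaaac00000 default file=/mnt/huge/rtemap_0 huge dirty=1 N0=1
  2aaaaae00000 default file=/mnt/huge/rtemap_1 huge dirty=1 N0=1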

This is what I get when I run numactl --hardware:

available: 1 nodes (0)
node 0 cpus: 0 1 2 3
node 0 size: 8191 MB
node 0 free: 5076 MB
node distances:
node   0
  0:  10

Here is my lscpu output:

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             4
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 37
Stepping:              1
CPU MHz:               2194.711
BogoMIPS:              4389.42
Hypervisor vendor:     VMware
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              20480K
NUMA node0 CPU(s):     0-3

I would like to understand why hugepage mapping happens only on socket 0, and how I can make the EAL map pages for the other sockets as well.

Your assistance is much appreciated.

Thanks,

Michal Dorsett
Developer, Strategic IP Group
Desk: +972 962 4350
Mobile: +972 50 771 6689
Verint Cyber Intelligence
www.verint.com




* Re: [dpdk-users] Allocating hugepages for all sockets on single numa
  2017-05-28 20:58 [dpdk-users] Allocating hugepages for all sockets on single numa Dorsett, Michal
@ 2017-05-29 10:34 ` Sergio Gonzalez Monroy
  0 siblings, 0 replies; 2+ messages in thread
From: Sergio Gonzalez Monroy @ 2017-05-29 10:34 UTC (permalink / raw)
  To: Dorsett, Michal, users

Hi Michal,

This looks very much like a VM configuration issue. Your numactl and
lscpu output show that the guest sees only a single NUMA node
(NUMA node(s): 1), so as far as the guest kernel is concerned all memory
really does belong to socket 0, and the EAL can only place hugepages on
the nodes the guest exposes. That is why --socket-mem=0,256,256 has
nothing to allocate from on sockets 1 and 2. To change this you need to
configure the hypervisor to present a virtual NUMA topology (vNUMA) to
the VM.

Hopefully the following link contains all the information and
guidance that you need:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/Virtualization_Tuning_and_Optimization_Guide/index.html
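
As an illustration only (the guide above has the full details, and on VMware the equivalent is done through the VM's vNUMA settings rather than libvirt), a KVM/libvirt guest NUMA topology is described in the domain XML along these lines; the vCPU ranges and memory sizes are placeholders to adapt to your VM:

  <cpu>
    <numa>
      <!-- two guest NUMA nodes; memory is in KiB -->
      <cell cpus='0-1' memory='2097152'/>
      <cell cpus='2-3' memory='2097152'/>
    </numa>
  </cpu>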

Thanks,
Sergio

On 28/05/2017 21:58, Dorsett, Michal wrote:
> [...]
> I would like to understand why hugepage mapping happens only on socket 0, and how I can make the EAL map pages for the other sockets as well.
