DPDK usage discussions
* [dpdk-users] Unable to setup hugepages
@ 2021-05-31 15:35 Gabriel Danjon
  2021-06-01  7:58 ` Thomas Monjalon
  0 siblings, 1 reply; 3+ messages in thread
From: Gabriel Danjon @ 2021-05-31 15:35 UTC (permalink / raw)
  To: users
  Cc: Alexis DANJON, Antoine LORIN, Laurent CHABENET, gregory.fresnais,
	Julien RAMET

Hello,

After successfully installing DPDK 20.11 on my CentOS 8-Stream
(minimal) machine, I am trying to configure hugepages but am
encountering a lot of difficulties.


I am trying to reserve 4 hugepages of 1GB.


Here are the steps I followed from the documentation
(https://doc.dpdk.org/guides-20.11/linux_gsg/sys_reqs.html):

Additional information from meminfo:

cat /proc/meminfo
MemTotal:       32619404 kB
MemFree:        27331024 kB
MemAvailable:   27415524 kB
Buffers:            4220 kB
Cached:           328628 kB
SwapCached:            0 kB
Active:           194828 kB
Inactive:         210156 kB
Active(anon):       1744 kB
Inactive(anon):    83384 kB
Active(file):     193084 kB
Inactive(file):   126772 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:      16474108 kB
SwapFree:       16474108 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:         72136 kB
Mapped:            84016 kB
Shmem:             12992 kB
KReclaimable:     211956 kB
Slab:             372852 kB
SReclaimable:     211956 kB
SUnreclaim:       160896 kB
KernelStack:        9120 kB
PageTables:         6852 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    30686656 kB
Committed_AS:     270424 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
Percpu:            28416 kB
HardwareCorrupted:     0 kB
AnonHugePages:     10240 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
FileHugePages:         0 kB
FilePmdMapped:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:         4194304 kB
DirectMap4k:      225272 kB
DirectMap2M:     4919296 kB
DirectMap1G:    30408704 kB

Step 1: Following the documentation

bash -c 'echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages'

As we are working on a NUMA machine, we do this too. (We also keep the
previous step, because without it we get even more errors.)

bash -c 'echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages' && \
bash -c 'echo 2048 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages'

mkdir /mnt/huge
mount -t hugetlbfs pagesize=1GB /mnt/huge

bash -c 'echo nodev /mnt/huge hugetlbfs pagesize=1GB 0 0 >> /etc/fstab'
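
As a side note, the mount invocation above appears to be missing the -o
flag and a device argument, so "pagesize=1GB" is taken as the device
name and the filesystem is mounted with the kernel's default hugepage
size (this is visible in the mount output later in this message). A
minimal sketch of the intended commands, assuming the same /mnt/huge
mount point:

```shell
# Mount a hugetlbfs instance with an explicit 1 GB page size.
# Without -o, mount treats "pagesize=1GB" as the device name and the
# filesystem silently falls back to the default hugepage size.
mkdir -p /mnt/huge
mount -t hugetlbfs -o pagesize=1G nodev /mnt/huge

# Matching persistent entry for /etc/fstab:
echo 'nodev /mnt/huge hugetlbfs pagesize=1G 0 0' >> /etc/fstab
```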

Here is the result of cat /proc/meminfo | grep Huge:

AnonHugePages:     10240 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:         4194304 kB

It looks strange that there are no total or free hugepages.

I tried dpdk-testpmd as shown in the DPDK documentation:
dpdk-testpmd -l 0-3 -n 4 -- -i --nb-cores=2

EAL: Detected 48 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs 
found for that size
EAL: No free hugepages reported in hugepages-1048576kB
EAL: No free hugepages reported in hugepages-1048576kB
EAL: No available hugepages reported in hugepages-1048576kB
EAL: FATAL: Cannot get hugepage information.
EAL: Cannot get hugepage information.
EAL: Error - exiting with code: 1
   Cause: Cannot init EAL: Permission denied


So I checked /mnt/huge to see whether files had been created
(ls -la /mnt/huge/): the folder is empty.

Then I checked whether my folder was correctly mounted: mount | grep huge
pagesize=1GB on /mnt/huge type hugetlbfs 
(rw,relatime,seclabel,pagesize=1024M)

Then I tried the helloworld example (make clean && make && 
./build/helloworld):

EAL: Detected 48 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs 
found for that size
EAL: No free 1048576 kB hugepages reported on node 0
EAL: No free 1048576 kB hugepages reported on node 1
EAL: No available 1048576 kB hugepages reported
EAL: FATAL: Cannot get hugepage information.
EAL: Cannot get hugepage information.
PANIC in main():
Cannot init EAL
5: [./build/helloworld() [0x40079e]]
4: [/lib64/libc.so.6(__libc_start_main+0xf3) [0x7ff43a6f6493]]
3: [./build/helloworld() [0x4006e6]]
2: [/usr/local/lib64/librte_eal.so.21(__rte_panic+0xba) [0x7ff43aaa4b93]]
1: [/usr/local/lib64/librte_eal.so.21(rte_dump_stack+0x1b) [0x7ff43aac79fb]]
Aborted (core dumped)


So I guessed the problem came from Hugepagesize: 1048576 kB (from cat
/proc/meminfo | grep Huge), i.e. the default hugepage size on this
machine is 1 GB.


Step 2: Adapting the documentation

Then I decided to set the values for the 1048576 kB page size:

bash -c 'echo 4 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages'
bash -c 'echo 4 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages'
bash -c 'echo 4 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages'
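
Writing to nr_hugepages is only a request: with 1 GB pages the kernel
may grant fewer pages than asked if memory is fragmented, so it is
worth reading the counters back. A small check, assuming a two-node
machine with 1 GB hugepage support:

```shell
# Read back the per-node 1 GB hugepage counters and the aggregate view;
# the values may be lower than what was written above.
for f in /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages; do
    [ -e "$f" ] && echo "$f: $(cat "$f")"
done
grep -E 'HugePages_(Total|Free)' /proc/meminfo
```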


Here is the result of cat /proc/meminfo | grep Huge:

AnonHugePages:     10240 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:       4
HugePages_Free:        4
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:         8388608 kB

So here I have my 4 pages set.

Then I retried the previous steps, and here is what I got:

dpdk-testpmd -l 0-3 -n 4 -- -i --nb-cores=2
EAL: Detected 48 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs 
found for that size
EAL: Probing VFIO support...
testpmd: No probed ethernet devices
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done
testpmd>
Bye...


make clean && make && ./build/helloworld
EAL: Detected 48 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs 
found for that size
TELEMETRY: No legacy callbacks, legacy socket not created


cat /proc/meminfo | grep Huge
AnonHugePages:     10240 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:       4
HugePages_Free:        3
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:         8388608 kB

One hugepage appears to have been used.
ls -l /mnt/huge/
total 1048576
1073741824 rtemap_0

So yes, one has been created, but the warning "2048 hugepages of size
2097152 reserved, but no mounted hugetlbfs found for that size" still
appears.

To try to understand what happens, I reset
hugepages-2048kB/nr_hugepages to 0:
bash -c 'echo 0 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages' && \
bash -c 'echo 0 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages' && \
bash -c 'echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages'

but:

dpdk-testpmd -l 0-3 -n 4 -- -i --nb-cores=2
EAL: Detected 48 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
testpmd: No probed ethernet devices
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done

This part I really do not understand: my /proc/meminfo tells me to use
the 1048576 kB size, but dpdk-testpmd complains about the 2048 kB one.

Then I searched for an alternative to these commands in your
documentation and found: dpdk-hugpages.py


Step 3: Alternative (dpdk-hugepages.py)

https://doc.dpdk.org/guides-20.11/tools/hugepages.html
(There is an error in the documentation: dpdk-hugpages instead of
dpdk-hugepages.)

So I reset all the files, and removed my mount and my folder.

umount /mnt/huge
rm -rf /mnt/huge
bash -c 'echo 0 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages'
bash -c 'echo 0 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages'
bash -c 'echo 0 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages'

cat /proc/meminfo | grep Huge
AnonHugePages:     10240 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:               0 kB


dpdk-hugepages.py -s

Node Pages Size Total

Hugepages not mounted


So here I have a clean hugepage environment.
Then I tried to reallocate hugepages with the python script: 
dpdk-hugepages.py -p 1G --setup 4G

dpdk-hugepages.py -s
Node Pages Size Total
0    4     1Gb    4Gb
1    4     1Gb    4Gb


So I got my 4 pages of 1 GB and retried the previous steps:

cat /proc/meminfo | grep Huge
AnonHugePages:     10240 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:       8
HugePages_Free:        8
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:         8388608 kB

Here it says I have 8 hugepages of 1 GB, which I do not understand,
because the python script seems to say otherwise.
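
(For reference, dpdk-hugepages.py -s prints a per-NUMA-node breakdown,
while HugePages_Total in /proc/meminfo aggregates over all nodes, so
the two outputs may actually agree. A quick sanity check of the
arithmetic, with the counts taken from the outputs above:)

```shell
# dpdk-hugepages.py -s reports pages per NUMA node;
# /proc/meminfo's HugePages_Total sums over all nodes.
nodes=2            # "EAL: Detected 2 NUMA nodes"
pages_per_node=4   # 4 x 1Gb on node 0 and 4 x 1Gb on node 1
total=$(( nodes * pages_per_node ))
echo "expected HugePages_Total: $total"
```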

dpdk-testpmd -l 0-3 -n 4 -- -i --nb-cores=2
EAL: Detected 48 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
testpmd: No probed ethernet devices
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done

Same for the helloworld.

Then I cleared my environment:

dpdk-hugepages.py -u && dpdk-hugepages.py -c && dpdk-hugepages.py -s
Node Pages Size Total

Hugepages not mounted


Then, as the error says there are no available hugepages reported in
hugepages-2048kB, I tried specifying the size in megabytes:
dpdk-hugepages.py -p 1024M --setup 4G && dpdk-hugepages.py -s
Node Pages Size Total
0    4     1Gb    4Gb
1    4     1Gb    4Gb

But the same message appeared.


Step 4: Question

So I have not succeeded in resolving this issue and testing DPDK with
helloworld and dpdk-testpmd.

Have I missed something in the creation of the hugepages?

Could you please provide help?

Best,


-- 

Gabriel Danjon



* Re: [dpdk-users] Unable to setup hugepages
  2021-05-31 15:35 [dpdk-users] Unable to setup hugepages Gabriel Danjon
@ 2021-06-01  7:58 ` Thomas Monjalon
  2021-06-02 15:35   ` Gabriel Danjon
  0 siblings, 1 reply; 3+ messages in thread
From: Thomas Monjalon @ 2021-06-01  7:58 UTC (permalink / raw)
  To: Gabriel Danjon
  Cc: users, Alexis DANJON, Antoine LORIN, Laurent CHABENET,
	gregory.fresnais, Julien RAMET

31/05/2021 17:35, Gabriel Danjon:
> Hello,
> 
> After successfully installing DPDK 20.11 on my CentOS 8-Stream
> (minimal) machine, I am trying to configure hugepages but am
> encountering a lot of difficulties.

There's some confusing info below.
Let's forget all the details and focus on simple things:
	1/ use dpdk-hugepages.py
	2/ choose one page size (2M or 1G)
	3/ check which node requires memory with lstopo
	4/ don't be confused by warnings about an unused page size
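
Concretely, that flow can be sketched as follows (lstopo comes from the
hwloc package; 4G is just an example amount, which on this NUMA machine
was reserved on each node, as shown in the outputs above):

```shell
# 1/ reserve 1G pages and mount hugetlbfs in one step
dpdk-hugepages.py -p 1G --setup 4G

# 2/ confirm the per-node reservations and the mount
dpdk-hugepages.py -s

# 3/ check which NUMA node the NICs sit on (lstopo is part of hwloc)
lstopo
```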



> [...]






* Re: [dpdk-users] Unable to setup hugepages
  2021-06-01  7:58 ` Thomas Monjalon
@ 2021-06-02 15:35   ` Gabriel Danjon
  0 siblings, 0 replies; 3+ messages in thread
From: Gabriel Danjon @ 2021-06-02 15:35 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: users, Alexis DANJON, Antoine LORIN, Laurent CHABENET,
	gregory.fresnais, Julien RAMET

Hello,

After looking at the hugepage_info_init function in
dpdk-20.11/lib/librte_eal/linux/eal_hugepage_info.c, we finally
understood why the warning can safely be ignored.


Thanks to your help, we managed to generate traffic using testpmd.


Gabriel Danjon

Cyber Test Systems

On 6/1/21 9:58 AM, Thomas Monjalon wrote:
> 31/05/2021 17:35, Gabriel Danjon:
>> Hello,
>>
>> After successfully installing DPDK 20.11 on my CentOS 8-Stream
>> (minimal) machine, I am trying to configure hugepages but am
>> encountering a lot of difficulties.
> There's some confusing info below.
> Let's forget all the details and focus on simple things:
> 	1/ use dpdk-hugepages.py
> 	2/ choose one page size (2M or 1G)
> 	3/ check which node requires memory with lstopo
> 	4/ don't be confused by warnings about an unused page size
>
>
>
>> I am trying to reserve 4 hugepages of 1GB.
>>
>>
>> Here the steps I have done following the documentation
>> (https://doc.dpdk.org/guides-20.11/linux_gsg/sys_reqs.html):
>>
>> Additional information about meminfo :
>>
>> cat /proc/meminfo
>> MemTotal:       32619404 kB
>> MemFree:        27331024 kB
>> MemAvailable:   27415524 kB
>> Buffers:            4220 kB
>> Cached:           328628 kB
>> SwapCached:            0 kB
>> Active:           194828 kB
>> Inactive:         210156 kB
>> Active(anon):       1744 kB
>> Inactive(anon):    83384 kB
>> Active(file):     193084 kB
>> Inactive(file):   126772 kB
>> Unevictable:           0 kB
>> Mlocked:               0 kB
>> SwapTotal:      16474108 kB
>> SwapFree:       16474108 kB
>> Dirty:                 0 kB
>> Writeback:             0 kB
>> AnonPages:         72136 kB
>> Mapped:            84016 kB
>> Shmem:             12992 kB
>> KReclaimable:     211956 kB
>> Slab:             372852 kB
>> SReclaimable:     211956 kB
>> SUnreclaim:       160896 kB
>> KernelStack:        9120 kB
>> PageTables:         6852 kB
>> NFS_Unstable:          0 kB
>> Bounce:                0 kB
>> WritebackTmp:          0 kB
>> CommitLimit:    30686656 kB
>> Committed_AS:     270424 kB
>> VmallocTotal:   34359738367 kB
>> VmallocUsed:           0 kB
>> VmallocChunk:          0 kB
>> Percpu:            28416 kB
>> HardwareCorrupted:     0 kB
>> AnonHugePages:     10240 kB
>> ShmemHugePages:        0 kB
>> ShmemPmdMapped:        0 kB
>> FileHugePages:         0 kB
>> FilePmdMapped:         0 kB
>> HugePages_Total:       0
>> HugePages_Free:        0
>> HugePages_Rsvd:        0
>> HugePages_Surp:        0
>> Hugepagesize:    1048576 kB
>> Hugetlb:         4194304 kB
>> DirectMap4k:      225272 kB
>> DirectMap2M:     4919296 kB
>> DirectMap1G:    30408704 kB
>>
>> 1 Step follow documentation
>>
>> bash -c 'echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages'
>>
>> As we're working on a NUMA machine we do this too. (We even do the
>> previous step because without it, it provides more errors)
>>
>> bash -c 'echo 2048 >
>> /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages' && \
>> bash -c 'echo 2048 >
>> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages'
>>
>> mkdir /mnt/huge
>> mount -t hugetlbfs pagesize=1GB /mnt/huge
>>
>> bash -c 'echo nodev /mnt/huge hugetlbfs pagesize=1GB 0 0 >> /etc/fstab'
>>
>> So here the result of my meminfo (cat /proc/meminfo | grep Huge) :
>>
>> AnonHugePages:     10240 kB
>> ShmemHugePages:        0 kB
>> FileHugePages:         0 kB
>> HugePages_Total:       0
>> HugePages_Free:        0
>> HugePages_Rsvd:        0
>> HugePages_Surp:        0
>> Hugepagesize:    1048576 kB
>> Hugetlb:         4194304 kB
>>
>> It looks strange that there is no total and free hugepages.
>>
>> I tried the dpdk-testpmd using the DPDK documentation : dpdk-testpmd -l
>> 0-3 -n 4 -- -i --nb-cores=2
>>
>> EAL: Detected 48 lcore(s)
>> EAL: Detected 2 NUMA nodes
>> EAL: Detected static linkage of DPDK
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Selected IOVA mode 'PA'
>> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs
>> found for that size
>> EAL: No free hugepages reported in hugepages-1048576kB
>> EAL: No free hugepages reported in hugepages-1048576kB
>> EAL: No available hugepages reported in hugepages-1048576kB
>> EAL: FATAL: Cannot get hugepage information.
>> EAL: Cannot get hugepage information.
>> EAL: Error - exiting with code: 1
>>     Cause: Cannot init EAL: Permission denied
>>
>>
>> So I checked in the /mnt/huge to look if files had been created (ls
>> /mnt/huge/ -la) : Empty folder
>>
>> Then I checked if my folder was correctly mounted : mount | grep huge
>> pagesize=1GB on /mnt/huge type hugetlbfs
>> (rw,relatime,seclabel,pagesize=1024M)
>>
>> Then I tried the helloworld example (make clean && make &&
>> ./build/helloworld):
>>
>> EAL: Detected 48 lcore(s)
>> EAL: Detected 2 NUMA nodes
>> EAL: Detected shared linkage of DPDK
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Selected IOVA mode 'PA'
>> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs
>> found for that size
>> EAL: No free 1048576 kB hugepages reported on node 0
>> EAL: No free 1048576 kB hugepages reported on node 1
>> EAL: No available 1048576 kB hugepages reported
>> EAL: FATAL: Cannot get hugepage information.
>> EAL: Cannot get hugepage information.
>> PANIC in main():
>> Cannot init EAL
>> 5: [./build/helloworld() [0x40079e]]
>> 4: [/lib64/libc.so.6(__libc_start_main+0xf3) [0x7ff43a6f6493]]
>> 3: [./build/helloworld() [0x4006e6]]
>> 2: [/usr/local/lib64/librte_eal.so.21(__rte_panic+0xba) [0x7ff43aaa4b93]]
>> 1: [/usr/local/lib64/librte_eal.so.21(rte_dump_stack+0x1b) [0x7ff43aac79fb]]
>> Aborted (core dumped)
>>
>>
>> So I guessed the problem came from the : Hugepagesize:    1048576 kB
>> (from cat /proc/meminfo | grep Huge).
>>
>>
>> 2 Step adapt documentation
>>
>> Then I decided to set the values for 1048576KB:
>>
>> bash -c 'echo 4 >
>> /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages'
>> bash -c 'echo 4 >
>> /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages'
>> bash -c 'echo 4 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages'
>>
>>
>> So here the result of my meminfo (cat /proc/meminfo | grep Huge) :
>>
>> AnonHugePages:     10240 kB
>> ShmemHugePages:        0 kB
>> FileHugePages:         0 kB
>> HugePages_Total:       4
>> HugePages_Free:        4
>> HugePages_Rsvd:        0
>> HugePages_Surp:        0
>> Hugepagesize:    1048576 kB
>> Hugetlb:         8388608 kB
>>
>> So here I have my 4 pages sat.
>>
>> Then I retried the previous steps and here what I got :
>>
>> dpdk-testpmd -l 0-3 -n 4 -- -i --nb-cores=2
>> EAL: Detected 48 lcore(s)
>> EAL: Detected 2 NUMA nodes
>> EAL: Detected static linkage of DPDK
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Selected IOVA mode 'PA'
>> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs
>> found for that size
>> EAL: Probing VFIO support...
>> testpmd: No probed ethernet devices
>> Interactive-mode selected
>> testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc
>> Done
>> testpmd>
>> Bye...
>>
>>
>> make clean && make && ./build/helloworld
>> EAL: Detected 48 lcore(s)
>> EAL: Detected 2 NUMA nodes
>> EAL: Detected shared linkage of DPDK
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Selected IOVA mode 'PA'
>> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs
>> found for that size
>> TELEMETRY: No legacy callbacks, legacy socket not created
>>
>>
>> cat /proc/meminfo | grep Huge
>> AnonHugePages:     10240 kB
>> ShmemHugePages:        0 kB
>> FileHugePages:         0 kB
>> HugePages_Total:       4
>> HugePages_Free:        3
>> HugePages_Rsvd:        0
>> HugePages_Surp:        0
>> Hugepagesize:    1048576 kB
>> Hugetlb:         8388608 kB
>>
>> One huge page looks like have been used.
>> ls -l /mnt/huge/
>> total 1048576
>> 1073741824 rtemap_0
>>
>> So yes one has been created, but 2048 hugepages of size 2097152
>> reserved, but no mounted hugetlbfs found for that size, happens.
>>
>> So to try to understand what happens I reset
>> hugepages-2048kB/nr_hugepages to 0 :
>> bash -c 'echo 0 >
>> /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages' && \
>> bash -c 'echo 0 >
>> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages' && \
>> bash -c 'echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages'
>>
>> but :
>>
>> dpdk-testpmd -l 0-3 -n 4 -- -i --nb-cores=2
>> EAL: Detected 48 lcore(s)
>> EAL: Detected 2 NUMA nodes
>> EAL: Detected static linkage of DPDK
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Selected IOVA mode 'PA'
>> EAL: No available hugepages reported in hugepages-2048kB
>> EAL: Probing VFIO support...
>> testpmd: No probed ethernet devices
>> Interactive-mode selected
>> testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc
>> Done
>>
>> This part I really don't understand: /proc/meminfo tells me the
>> 1048576 kB pages are the ones configured, but dpdk-testpmd complains
>> about the 2048 kB ones.
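One way out, sketched under the assumption that the 1 GB pages are the ones actually wanted (paths are illustrative and all commands require root, so this is a sketch rather than a verified recipe):

```shell
# Reserve four 1 GB pages and mount a hugetlbfs with a matching,
# explicit pagesize, then point EAL at that directory.
echo 4 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
mkdir -p /mnt/huge-1G
mount -t hugetlbfs -o pagesize=1G nodev /mnt/huge-1G

# Either clear the stale 2 MB reservation so EAL stops looking
# for a 2 MB mount...
echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# ...or give the 2 MB pages their own mount instead:
# mkdir -p /mnt/huge-2M
# mount -t hugetlbfs -o pagesize=2M nodev /mnt/huge-2M

# Then tell testpmd which hugepage directory to use:
dpdk-testpmd -l 0-3 -n 4 --huge-dir=/mnt/huge-1G -- -i --nb-cores=2
```

--huge-dir is a standard EAL option; restricting EAL to one mount avoids the mismatch between the sizes reserved in sysfs and the sizes actually mounted.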
>>
>> Then I searched for an alternative to these commands in your
>> documentation and found : dpdk-hugpages.py
>>
>>
>> 3 Step alternative
>>
>> https://doc.dpdk.org/guides-20.11/tools/hugepages.html
>> (There is an error in the documentation : it says dpdk-hugpages
>> instead of dpdk-hugepages)
>>
>> So I reset every file, and removed my mount point and my folder.
>>
>> umount /mnt/huge
>> rm -rf /mnt/huge
>> bash -c 'echo 0 >
>> /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages'
>> bash -c 'echo 0 >
>> /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages'
>> bash -c 'echo 0 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages'
>>
>> cat /proc/meminfo | grep Huge
>> AnonHugePages:     10240 kB
>> ShmemHugePages:        0 kB
>> FileHugePages:         0 kB
>> HugePages_Total:       0
>> HugePages_Free:        0
>> HugePages_Rsvd:        0
>> HugePages_Surp:        0
>> Hugepagesize:    1048576 kB
>> Hugetlb:               0 kB
>>
>>
>> dpdk-hugpages.py -s
>>
>> Node Pages Size Total
>>
>> Hugepages not mounted
>>
>>
>> So here I have a clean hugepage environment.
>> Then I tried to reallocate hugepages with the python script:
>> dpdk-hugepages.py -p 1G --setup 4G
>>
>> dpdk-hugepages.py -s
>> Node Pages Size Total
>> 0    4     1Gb    4Gb
>> 1    4     1Gb    4Gb
>>
>>
>> So I got my 4 pages of 1GB and retried the previous steps :
>>
>> cat /proc/meminfo | grep Huge
>> AnonHugePages:     10240 kB
>> ShmemHugePages:        0 kB
>> FileHugePages:         0 kB
>> HugePages_Total:       8
>> HugePages_Free:        8
>> HugePages_Rsvd:        0
>> HugePages_Surp:        0
>> Hugepagesize:    1048576 kB
>> Hugetlb:         8388608 kB
>>
>> Here it says I have 8 hugepages of 1GB, which I don't understand,
>> because the python script reports a different number.
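A likely explanation (an inference from the transcript, not a confirmed diagnosis): dpdk-hugepages.py prints pages per NUMA node, while /proc/meminfo prints the machine-wide total, so 4 pages on each of 2 nodes show up as 8. A minimal model of that accounting, using the numbers from the transcript:

```shell
# Per-node count times node count gives the machine-wide total that
# /proc/meminfo reports. Numbers are taken from the transcript above.
pages_per_node=4
num_nodes=2
page_kb=1048576   # one 1 GB page expressed in kB, as /proc/meminfo does

total=$((pages_per_node * num_nodes))
hugetlb_kb=$((total * page_kb))

echo "HugePages_Total: $total"   # 8, matching /proc/meminfo
echo "Hugetlb: $hugetlb_kb kB"   # 8388608 kB, matching /proc/meminfo
```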
>>
>> dpdk-testpmd -l 0-3 -n 4 -- -i --nb-cores=2
>> EAL: Detected 48 lcore(s)
>> EAL: Detected 2 NUMA nodes
>> EAL: Detected static linkage of DPDK
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Selected IOVA mode 'PA'
>> EAL: No available hugepages reported in hugepages-2048kB
>> EAL: Probing VFIO support...
>> testpmd: No probed ethernet devices
>> Interactive-mode selected
>> testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc
>> Done
>>
>> Same for the helloworld.
>>
>> Then I cleared my environment:
>>
>> dpdk-hugepages.py -u && dpdk-hugepages.py -c && dpdk-hugepages.py -s
>> Node Pages Size Total
>>
>> Hugepages not mounted
>>
>>
>> Then, as the error says there are no available hugepages reported in
>> hugepages-2048kB, I tried with MB :
>> dpdk-hugepages.py -p 1024M --setup 4G && dpdk-hugepages.py -s
>> Node Pages Size Total
>> 0    4     1Gb    4Gb
>> 1    4     1Gb    4Gb
>>
>> But same error happened.
>>
>>
>> 4 Question
>>
>> So I have not succeeded in resolving this issue and testing DPDK with
>> helloworld and dpdk-testpmd.
>>
>> Have I missed something in the creation of the hugepages?
>>
>> Could you please provide help?
>>
>> Best,
>>
>>
>>
>
>
>


end of thread, other threads:[~2021-06-03 16:11 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-31 15:35 [dpdk-users] Unable to setup hugepages Gabriel Danjon
2021-06-01  7:58 ` Thomas Monjalon
2021-06-02 15:35   ` Gabriel Danjon
