From: Gabriel Danjon
To: Thomas Monjalon
Cc: users@dpdk.org, Alexis DANJON, Antoine LORIN, Laurent CHABENET,
 "gregory.fresnais@cybertestsystems.com.sg", Julien RAMET
Date: Wed, 2 Jun 2021 17:35:14 +0200
Subject: Re: [dpdk-users] Unable to setup hugepages

Hello,

After looking at the hugepage_info_init function from
dpdk-20.11/lib/librte_eal/linux/eal_hugepage_info.c, we finally
understood why we could safely ignore the warning.

Thanks to your help, we managed to generate traffic using testpmd.

Gabriel Danjon
Cyber Test Systems

On 6/1/21 9:58 AM, Thomas Monjalon wrote:
> 31/05/2021 17:35, Gabriel Danjon:
>> Hello,
>>
>> After successfully installing DPDK 20.11 on my CentOS 8 Stream
>> (minimal), I am trying to configure the hugepages but am running
>> into a lot of difficulties.
> There's some confusing info below.
> Let's forget all the details and focus on simple things:
> 1/ use dpdk-hugepages.py
> 2/ choose one page size (2M or 1G)
> 3/ check which node requires memory with lstopo
> 4/ don't be confused by warnings about unused page sizes
>
>> I am trying to reserve 4 hugepages of 1GB.
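A quick check that the rest of the thread keeps coming back to: the
kernel's default hugepage size on this machine. A minimal sketch using
standard procfs/sysfs paths:

# "Hugepagesize" is the DEFAULT hugepage size; the HugePages_* counters
# in /proc/meminfo refer to that default size only.
grep Hugepagesize /proc/meminfo

# Each supported size has its own subdirectory with its own counters;
# here: hugepages-2048kB and hugepages-1048576kB.
ls /sys/kernel/mm/hugepages/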
>>
>> Here are the steps I took, following the documentation
>> (https://doc.dpdk.org/guides-20.11/linux_gsg/sys_reqs.html):
>>
>> Additional information about meminfo:
>>
>> cat /proc/meminfo
>> MemTotal: 32619404 kB
>> MemFree: 27331024 kB
>> MemAvailable: 27415524 kB
>> Buffers: 4220 kB
>> Cached: 328628 kB
>> SwapCached: 0 kB
>> Active: 194828 kB
>> Inactive: 210156 kB
>> Active(anon): 1744 kB
>> Inactive(anon): 83384 kB
>> Active(file): 193084 kB
>> Inactive(file): 126772 kB
>> Unevictable: 0 kB
>> Mlocked: 0 kB
>> SwapTotal: 16474108 kB
>> SwapFree: 16474108 kB
>> Dirty: 0 kB
>> Writeback: 0 kB
>> AnonPages: 72136 kB
>> Mapped: 84016 kB
>> Shmem: 12992 kB
>> KReclaimable: 211956 kB
>> Slab: 372852 kB
>> SReclaimable: 211956 kB
>> SUnreclaim: 160896 kB
>> KernelStack: 9120 kB
>> PageTables: 6852 kB
>> NFS_Unstable: 0 kB
>> Bounce: 0 kB
>> WritebackTmp: 0 kB
>> CommitLimit: 30686656 kB
>> Committed_AS: 270424 kB
>> VmallocTotal: 34359738367 kB
>> VmallocUsed: 0 kB
>> VmallocChunk: 0 kB
>> Percpu: 28416 kB
>> HardwareCorrupted: 0 kB
>> AnonHugePages: 10240 kB
>> ShmemHugePages: 0 kB
>> ShmemPmdMapped: 0 kB
>> FileHugePages: 0 kB
>> FilePmdMapped: 0 kB
>> HugePages_Total: 0
>> HugePages_Free: 0
>> HugePages_Rsvd: 0
>> HugePages_Surp: 0
>> Hugepagesize: 1048576 kB
>> Hugetlb: 4194304 kB
>> DirectMap4k: 225272 kB
>> DirectMap2M: 4919296 kB
>> DirectMap1G: 30408704 kB
>>
>> Step 1: follow the documentation
>>
>> bash -c 'echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages'
>>
>> As we're working on a NUMA machine, we do this too. (We even keep the
>> previous step, because without it we get more errors.)
>>
>> bash -c 'echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages' && \
>> bash -c 'echo 2048 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages'
>>
>> mkdir /mnt/huge
>> mount -t hugetlbfs pagesize=1GB /mnt/huge
>>
>> bash -c 'echo nodev /mnt/huge hugetlbfs pagesize=1GB 0 0 >> /etc/fstab'
>>
>> So here is the result of my meminfo (cat /proc/meminfo | grep Huge):
>>
>> AnonHugePages: 10240 kB
>> ShmemHugePages: 0 kB
>> FileHugePages: 0 kB
>> HugePages_Total: 0
>> HugePages_Free: 0
>> HugePages_Rsvd: 0
>> HugePages_Surp: 0
>> Hugepagesize: 1048576 kB
>> Hugetlb: 4194304 kB
>>
>> It looks strange that there are no total and free hugepages.
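A possible explanation for this (a hedged note): the HugePages_Total and
HugePages_Free counters in /proc/meminfo only cover the default page size
(1GB here), while the 2048 pages reserved above are 2MB ones; the Hugetlb
line (4194304 kB = 2048 x 2MB) does account for them. The per-size sysfs
counters should make that visible:

# The 2MB pool: should report the 2048 pages reserved with the echo above.
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages

# The 1GB pool (the default size, which /proc/meminfo summarizes): still 0.
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages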
>> I tried dpdk-testpmd as in the DPDK documentation:
>>
>> dpdk-testpmd -l 0-3 -n 4 -- -i --nb-cores=2
>> EAL: Detected 48 lcore(s)
>> EAL: Detected 2 NUMA nodes
>> EAL: Detected static linkage of DPDK
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Selected IOVA mode 'PA'
>> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
>> EAL: No free hugepages reported in hugepages-1048576kB
>> EAL: No free hugepages reported in hugepages-1048576kB
>> EAL: No available hugepages reported in hugepages-1048576kB
>> EAL: FATAL: Cannot get hugepage information.
>> EAL: Cannot get hugepage information.
>> EAL: Error - exiting with code: 1
>> Cause: Cannot init EAL: Permission denied
>>
>> So I checked /mnt/huge to see whether files had been created
>> (ls /mnt/huge/ -la): empty folder.
>>
>> Then I checked that my folder was correctly mounted: mount | grep huge
>> pagesize=1GB on /mnt/huge type hugetlbfs (rw,relatime,seclabel,pagesize=1024M)
>>
>> Then I tried the helloworld example (make clean && make && ./build/helloworld):
>>
>> EAL: Detected 48 lcore(s)
>> EAL: Detected 2 NUMA nodes
>> EAL: Detected shared linkage of DPDK
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Selected IOVA mode 'PA'
>> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
>> EAL: No free 1048576 kB hugepages reported on node 0
>> EAL: No free 1048576 kB hugepages reported on node 1
>> EAL: No available 1048576 kB hugepages reported
>> EAL: FATAL: Cannot get hugepage information.
>> EAL: Cannot get hugepage information.
>> PANIC in main():
>> Cannot init EAL
>> 5: [./build/helloworld() [0x40079e]]
>> 4: [/lib64/libc.so.6(__libc_start_main+0xf3) [0x7ff43a6f6493]]
>> 3: [./build/helloworld() [0x4006e6]]
>> 2: [/usr/local/lib64/librte_eal.so.21(__rte_panic+0xba) [0x7ff43aaa4b93]]
>> 1: [/usr/local/lib64/librte_eal.so.21(rte_dump_stack+0x1b) [0x7ff43aac79fb]]
>> Aborted (core dumped)
>>
>> So I guessed the problem came from the "Hugepagesize: 1048576 kB" line
>> (from cat /proc/meminfo | grep Huge).
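For what it's worth, the warning is consistent with that guess: the 2MB
pages reserved in step 1 have no hugetlbfs mount of their own, and no 1GB
pages were reserved for the 1GB mount. hugetlbfs takes the page size as a
mount option, so one mount per size in use would look roughly like this
(a sketch; the mount points are arbitrary):

mkdir -p /mnt/huge_1G /mnt/huge_2M
mount -t hugetlbfs -o pagesize=1G nodev /mnt/huge_1G   # backs 1GB pages
mount -t hugetlbfs -o pagesize=2M nodev /mnt/huge_2M   # backs 2MB pages
mount | grep hugetlbfs   # verify each mount got the intended pagesize=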
>> Step 2: adapt the documentation
>>
>> Then I decided to set the values for 1048576kB:
>>
>> bash -c 'echo 4 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages'
>> bash -c 'echo 4 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages'
>> bash -c 'echo 4 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages'
>>
>> So here is the result of my meminfo (cat /proc/meminfo | grep Huge):
>>
>> AnonHugePages: 10240 kB
>> ShmemHugePages: 0 kB
>> FileHugePages: 0 kB
>> HugePages_Total: 4
>> HugePages_Free: 4
>> HugePages_Rsvd: 0
>> HugePages_Surp: 0
>> Hugepagesize: 1048576 kB
>> Hugetlb: 8388608 kB
>>
>> So here I have my 4 pages set.
>>
>> Then I retried the previous steps, and here is what I got:
>>
>> dpdk-testpmd -l 0-3 -n 4 -- -i --nb-cores=2
>> EAL: Detected 48 lcore(s)
>> EAL: Detected 2 NUMA nodes
>> EAL: Detected static linkage of DPDK
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Selected IOVA mode 'PA'
>> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
>> EAL: Probing VFIO support...
>> testpmd: No probed ethernet devices
>> Interactive-mode selected
>> testpmd: create a new mbuf pool : n=171456, size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc
>> Done
>> testpmd>
>> Bye...
>>
>> make clean && make && ./build/helloworld
>> EAL: Detected 48 lcore(s)
>> EAL: Detected 2 NUMA nodes
>> EAL: Detected shared linkage of DPDK
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Selected IOVA mode 'PA'
>> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
>> TELEMETRY: No legacy callbacks, legacy socket not created
>>
>> cat /proc/meminfo | grep Huge
>> AnonHugePages: 10240 kB
>> ShmemHugePages: 0 kB
>> FileHugePages: 0 kB
>> HugePages_Total: 4
>> HugePages_Free: 3
>> HugePages_Rsvd: 0
>> HugePages_Surp: 0
>> Hugepagesize: 1048576 kB
>> Hugetlb: 8388608 kB
>>
>> One huge page seems to have been used.
>>
>> ls -l /mnt/huge/
>> total 1048576
>> 1073741824 rtemap_0
>>
>> So yes, one has been created, but "2048 hugepages of size 2097152
>> reserved, but no mounted hugetlbfs found for that size" still happens.
>>
>> So, to try to understand what happens, I reset
>> hugepages-2048kB/nr_hugepages to 0:
>>
>> bash -c 'echo 0 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages' && \
>> bash -c 'echo 0 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages' && \
>> bash -c 'echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages'
>>
>> but:
>>
>> dpdk-testpmd -l 0-3 -n 4 -- -i --nb-cores=2
>> EAL: Detected 48 lcore(s)
>> EAL: Detected 2 NUMA nodes
>> EAL: Detected static linkage of DPDK
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Selected IOVA mode 'PA'
>> EAL: No available hugepages reported in hugepages-2048kB
>> EAL: Probing VFIO support...
>> testpmd: No probed ethernet devices
>> Interactive-mode selected
>> testpmd: create a new mbuf pool : n=171456, size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc
>> Done
>>
>> So this part I really don't understand: my /proc/meminfo tells me to
>> use the 1048576kB size, but dpdk-testpmd warns about the 2048kB one.
>>
>> Then I searched for an alternative to these commands in your
>> documentation and found dpdk-hugepages.py.
>>
>> Step 3: alternative
>>
>> https://doc.dpdk.org/guides-20.11/tools/hugepages.html
>> (There is an error in the documentation: dpdk-hugpages instead of
>> dpdk-hugepages.)
>>
>> So I reset all the files, and removed my mount and my folder:
>>
>> umount /mnt/huge
>> rm -rf /mnt/huge
>> bash -c 'echo 0 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages'
>> bash -c 'echo 0 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages'
>> bash -c 'echo 0 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages'
>>
>> cat /proc/meminfo | grep Huge
>> AnonHugePages: 10240 kB
>> ShmemHugePages: 0 kB
>> FileHugePages: 0 kB
>> HugePages_Total: 0
>> HugePages_Free: 0
>> HugePages_Rsvd: 0
>> HugePages_Surp: 0
>> Hugepagesize: 1048576 kB
>> Hugetlb: 0 kB
>>
>> dpdk-hugepages.py -s
>> Node Pages Size Total
>>
>> Hugepages not mounted
>>
>> So here I have a clean hugepage environment.
>> Then I tried to reallocate hugepages with the python script:
>> dpdk-hugepages.py -p 1G --setup 4G
>>
>> dpdk-hugepages.py -s
>> Node Pages Size Total
>> 0    4     1Gb  4Gb
>> 1    4     1Gb  4Gb
>>
>> So I got my 4 pages of 1GB and retried the previous steps:
>>
>> cat /proc/meminfo | grep Huge
>> AnonHugePages: 10240 kB
>> ShmemHugePages: 0 kB
>> FileHugePages: 0 kB
>> HugePages_Total: 8
>> HugePages_Free: 8
>> HugePages_Rsvd: 0
>> HugePages_Surp: 0
>> Hugepagesize: 1048576 kB
>> Hugetlb: 8388608 kB
>>
>> Here it says I have 8 hugepages of 1GB, which I don't understand,
>> because the python script says the opposite.
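This is probably not a contradiction: /proc/meminfo reports a system-wide
total, while dpdk-hugepages.py lists pages per NUMA node, and --setup
appears to reserve the requested amount on each node, so 4 pages on node 0
plus 4 on node 1 gives HugePages_Total: 8. The per-node counters should
confirm it:

# One line per node; expect "4" twice, matching the script's per-node view.
cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages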
>> dpdk-testpmd -l 0-3 -n 4 -- -i --nb-cores=2
>> EAL: Detected 48 lcore(s)
>> EAL: Detected 2 NUMA nodes
>> EAL: Detected static linkage of DPDK
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Selected IOVA mode 'PA'
>> EAL: No available hugepages reported in hugepages-2048kB
>> EAL: Probing VFIO support...
>> testpmd: No probed ethernet devices
>> Interactive-mode selected
>> testpmd: create a new mbuf pool : n=171456, size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc
>> Done
>>
>> Same for the helloworld.
>>
>> Then I cleared my environment:
>>
>> dpdk-hugepages.py -u && dpdk-hugepages.py -c && dpdk-hugepages.py -s
>> Node Pages Size Total
>>
>> Hugepages not mounted
>>
>> Then, as the error says that there are no available hugepages reported
>> in hugepages-2048kB, I tried with megabytes:
>>
>> dpdk-hugepages.py -p 1024M --setup 4G && dpdk-hugepages.py -s
>> Node Pages Size Total
>> 0    4     1Gb  4Gb
>> 1    4     1Gb  4Gb
>>
>> But the same error happened.
>>
>> Question
>>
>> So I have not succeeded in resolving this issue and testing DPDK with
>> helloworld and dpdk-testpmd.
>>
>> Have I missed something in the creation of the hugepages?
>>
>> Could you please provide help?
>>
>> Best,
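Putting Thomas's four points together, the sequence this thread converges
on looks roughly like the following (a sketch against DPDK 20.11's
usertools, with the sizes used above):

# 1/ use dpdk-hugepages.py: start from a clean state...
dpdk-hugepages.py -u   # unmount its hugepage mount point, if any
dpdk-hugepages.py -c   # clear existing hugepage reservations

# 2/ ...and pick ONE page size; this reserves 4G of 1GB pages per node
dpdk-hugepages.py -p 1G --setup 4G
dpdk-hugepages.py -s   # expect 4 x 1Gb on each node

# 3/ check which NUMA node actually needs the memory (lstopo is in hwloc)
lstopo

# 4/ remaining "hugepages reported in hugepages-2048kB" messages refer to
# the unused 2MB size; per hugepage_info_init in eal_hugepage_info.c they
# can be ignored as long as one usable page size remains.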