From: Thomas Monjalon
To: Gabriel Danjon
Cc: users@dpdk.org, Alexis DANJON, Antoine LORIN, Laurent CHABENET,
 "gregory.fresnais@cybertestsystems.com.sg", Julien RAMET
Date: Tue, 01 Jun 2021 09:58:07 +0200
Message-ID: <17638494.Raf4tdhBNn@thomas>
In-Reply-To: <48a96b06-3ffd-97df-5176-97096a0758bc@cybertestsystems.com.sg>
Subject: Re: [dpdk-users] Unable to setup hugepages

31/05/2021 17:35, Gabriel Danjon:
> Hello,
>
> After successfully installing DPDK 20.11 on my CentOS 8-Stream
> (minimal), I am trying to configure the hugepages but am encountering
> a lot of difficulties.

There's some confusing info below. Let's forget all the details and
focus on simple things:
1/ use dpdk-hugepages.py
2/ choose one page size (2M or 1G)
3/ check which node requires memory with lstopo
4/ don't be confused by warnings about an unused page size
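
For example, a minimal sequence (a sketch, not the only way; note that
--setup reserves the requested amount on each NUMA node):

    dpdk-hugepages.py -p 1G --setup 4G   # reserve 1G pages and mount hugetlbfs
    dpdk-hugepages.py -s                 # show what is reserved and mounted
    lstopo                               # see which NUMA node the NIC is attached to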

> I am trying to reserve 4 hugepages of 1GB.
>
> Here are the steps I followed from the documentation
> (https://doc.dpdk.org/guides-20.11/linux_gsg/sys_reqs.html):
>
> Additional information from meminfo:
>
> cat /proc/meminfo
> MemTotal: 32619404 kB
> MemFree: 27331024 kB
> MemAvailable: 27415524 kB
> Buffers: 4220 kB
> Cached: 328628 kB
> SwapCached: 0 kB
> Active: 194828 kB
> Inactive: 210156 kB
> Active(anon): 1744 kB
> Inactive(anon): 83384 kB
> Active(file): 193084 kB
> Inactive(file): 126772 kB
> Unevictable: 0 kB
> Mlocked: 0 kB
> SwapTotal: 16474108 kB
> SwapFree: 16474108 kB
> Dirty: 0 kB
> Writeback: 0 kB
> AnonPages: 72136 kB
> Mapped: 84016 kB
> Shmem: 12992 kB
> KReclaimable: 211956 kB
> Slab: 372852 kB
> SReclaimable: 211956 kB
> SUnreclaim: 160896 kB
> KernelStack: 9120 kB
> PageTables: 6852 kB
> NFS_Unstable: 0 kB
> Bounce: 0 kB
> WritebackTmp: 0 kB
> CommitLimit: 30686656 kB
> Committed_AS: 270424 kB
> VmallocTotal: 34359738367 kB
> VmallocUsed: 0 kB
> VmallocChunk: 0 kB
> Percpu: 28416 kB
> HardwareCorrupted: 0 kB
> AnonHugePages: 10240 kB
> ShmemHugePages: 0 kB
> ShmemPmdMapped: 0 kB
> FileHugePages: 0 kB
> FilePmdMapped: 0 kB
> HugePages_Total: 0
> HugePages_Free: 0
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 1048576 kB
> Hugetlb: 4194304 kB
> DirectMap4k: 225272 kB
> DirectMap2M: 4919296 kB
> DirectMap1G: 30408704 kB
>
> 1 Step follow documentation
>
> bash -c 'echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages'
>
> As we're working on a NUMA machine we do this too. (We even do the
> previous step because without it, it produces more errors.)
>
> bash -c 'echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages' && \
> bash -c 'echo 2048 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages'
>
> mkdir /mnt/huge
> mount -t hugetlbfs pagesize=1GB /mnt/huge
> bash -c 'echo nodev /mnt/huge hugetlbfs pagesize=1GB 0 0 >> /etc/fstab'
>
> So here is the result of my meminfo (cat /proc/meminfo | grep Huge):
>
> AnonHugePages: 10240 kB
> ShmemHugePages: 0 kB
> FileHugePages: 0 kB
> HugePages_Total: 0
> HugePages_Free: 0
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 1048576 kB
> Hugetlb: 4194304 kB
>
> It looks strange that there are no total or free hugepages.
>
> I tried dpdk-testpmd as in the DPDK documentation:
> dpdk-testpmd -l 0-3 -n 4 -- -i --nb-cores=2
>
> EAL: Detected 48 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs
> found for that size
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: No available hugepages reported in hugepages-1048576kB
> EAL: FATAL: Cannot get hugepage information.
> EAL: Cannot get hugepage information.
> EAL: Error - exiting with code: 1
>   Cause: Cannot init EAL: Permission denied
>
> So I checked /mnt/huge to see if files had been created
> (ls /mnt/huge/ -la): empty folder.
>
> Then I checked that my folder was correctly mounted: mount | grep huge
> pagesize=1GB on /mnt/huge type hugetlbfs
> (rw,relatime,seclabel,pagesize=1024M)
>
> Then I tried the helloworld example (make clean && make &&
> ./build/helloworld):
>
> EAL: Detected 48 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Detected shared linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs
> found for that size
> EAL: No free 1048576 kB hugepages reported on node 0
> EAL: No free 1048576 kB hugepages reported on node 1
> EAL: No available 1048576 kB hugepages reported
> EAL: FATAL: Cannot get hugepage information.
> EAL: Cannot get hugepage information.
> PANIC in main():
> Cannot init EAL
> 5: [./build/helloworld() [0x40079e]]
> 4: [/lib64/libc.so.6(__libc_start_main+0xf3) [0x7ff43a6f6493]]
> 3: [./build/helloworld() [0x4006e6]]
> 2: [/usr/local/lib64/librte_eal.so.21(__rte_panic+0xba) [0x7ff43aaa4b93]]
> 1: [/usr/local/lib64/librte_eal.so.21(rte_dump_stack+0x1b) [0x7ff43aac79fb]]
> Aborted (core dumped)
>
> So I guessed the problem came from: Hugepagesize: 1048576 kB
> (from cat /proc/meminfo | grep Huge).
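
That guess is right: on this machine the kernel's default hugepage size
is 1G (the "Hugepagesize" line), so reserving only 2M pages could not
satisfy the 1G mount. Note also that the HugePages_* counters in
/proc/meminfo cover only the default size; to inspect each size
separately, read the per-size sysfs counters, for example:

    cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
    cat /sys/kernel/mm/hugepages/hugepages-1048576kB/free_hugepages

Incidentally, in the mount command above, "pagesize=1GB" was given as
the device name rather than as an option (-o pagesize=1G); it mounted
anyway, with the default 1G page size, as your mount output shows.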

> 2 Step adapt documentation
>
> Then I decided to set the values for 1048576 kB pages:
>
> bash -c 'echo 4 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages'
> bash -c 'echo 4 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages'
> bash -c 'echo 4 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages'
>
> So here is the result of my meminfo (cat /proc/meminfo | grep Huge):
>
> AnonHugePages: 10240 kB
> ShmemHugePages: 0 kB
> FileHugePages: 0 kB
> HugePages_Total: 4
> HugePages_Free: 4
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 1048576 kB
> Hugetlb: 8388608 kB
>
> So here my 4 pages are set.
>
> Then I retried the previous steps, and here is what I got:
>
> dpdk-testpmd -l 0-3 -n 4 -- -i --nb-cores=2
> EAL: Detected 48 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs
> found for that size
> EAL: Probing VFIO support...
> testpmd: No probed ethernet devices
> Interactive-mode selected
> testpmd: create a new mbuf pool : n=171456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Done
> testpmd>
> Bye...
>
> make clean && make && ./build/helloworld
> EAL: Detected 48 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Detected shared linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs
> found for that size
> TELEMETRY: No legacy callbacks, legacy socket not created
>
> cat /proc/meminfo | grep Huge
> AnonHugePages: 10240 kB
> ShmemHugePages: 0 kB
> FileHugePages: 0 kB
> HugePages_Total: 4
> HugePages_Free: 3
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 1048576 kB
> Hugetlb: 8388608 kB
>
> One huge page looks like it has been used.
>
> ls -l /mnt/huge/
> total 1048576
> 1073741824 rtemap_0
>
> So yes, one has been created, but "2048 hugepages of size 2097152
> reserved, but no mounted hugetlbfs found for that size" still happens.
>
> So, to try to understand what happens, I reset
> hugepages-2048kB/nr_hugepages to 0:
> bash -c 'echo 0 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages' && \
> bash -c 'echo 0 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages' && \
> bash -c 'echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages'
>
> but:
> dpdk-testpmd -l 0-3 -n 4 -- -i --nb-cores=2
> EAL: Detected 48 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: No available hugepages reported in hugepages-2048kB
> EAL: Probing VFIO support...
> testpmd: No probed ethernet devices
> Interactive-mode selected
> testpmd: create a new mbuf pool : n=171456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Done
>
> Here I really don't understand: /proc/meminfo tells me I need to use
> the 1048576 kB pages, but dpdk-testpmd talks about the 2048 kB ones.
>
> Then I searched for an alternative to these commands in your
> documentation and found dpdk-hugepages.py.
>
> 3 Step alternative
>
> https://doc.dpdk.org/guides-20.11/tools/hugepages.html
> (There is an error in the documentation: "dpdk-hugpages" instead of
> "dpdk-hugepages".)
>
> So I reset every file, and removed my mount and my folder:
>
> umount /mnt/huge
> rm -rf /mnt/huge
> bash -c 'echo 0 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages'
> bash -c 'echo 0 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages'
> bash -c 'echo 0 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages'
>
> cat /proc/meminfo | grep Huge
> AnonHugePages: 10240 kB
> ShmemHugePages: 0 kB
> FileHugePages: 0 kB
> HugePages_Total: 0
> HugePages_Free: 0
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 1048576 kB
> Hugetlb: 0 kB
>
> dpdk-hugepages.py -s
> Node Pages Size Total
> Hugepages not mounted
>
> So here I have a clean hugepage environment.
> Then I tried to reallocate hugepages with the python script:
> dpdk-hugepages.py -p 1G --setup 4G
>
> dpdk-hugepages.py -s
> Node Pages Size Total
> 0    4     1Gb  4Gb
> 1    4     1Gb  4Gb
>
> So I got my 4 pages of 1GB and retried the previous steps:
>
> cat /proc/meminfo | grep Huge
> AnonHugePages: 10240 kB
> ShmemHugePages: 0 kB
> FileHugePages: 0 kB
> HugePages_Total: 8
> HugePages_Free: 8
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 1048576 kB
> Hugetlb: 8388608 kB
>
> Here it says I have 8 hugepages of 1GB, which I don't understand,
> because the python script says otherwise.
>
> dpdk-testpmd -l 0-3 -n 4 -- -i --nb-cores=2
> EAL: Detected 48 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: No available hugepages reported in hugepages-2048kB
> EAL: Probing VFIO support...
> testpmd: No probed ethernet devices
> Interactive-mode selected
> testpmd: create a new mbuf pool : n=171456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Done
>
> Same for the helloworld.
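
The meminfo count is not the opposite of the script's output: --setup
reserved 4 pages on each of your 2 NUMA nodes, and /proc/meminfo
reports the system-wide total, hence 8. If only one node needs the
memory (point 3 above, check with lstopo), the reservation can be
restricted; a sketch, assuming the --node and -r (reserve) options of
dpdk-hugepages.py:

    dpdk-hugepages.py -p 1G --node 0 -r 4G   # reserve 4G of 1G pages on node 0 only
    dpdk-hugepages.py -m                     # then mount hugetlbfs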

> Then I cleared my environment:
>
> dpdk-hugepages.py -u && dpdk-hugepages.py -c && dpdk-hugepages.py -s
> Node Pages Size Total
> Hugepages not mounted
>
> Then, as the error says that there are no available hugepages reported
> in hugepages-2048kB, I tried with MB:
> dpdk-hugepages.py -p 1024M --setup 4G && dpdk-hugepages.py -s
> Node Pages Size Total
> 0    4     1Gb  4Gb
> 1    4     1Gb  4Gb
>
> But the same error happened.
>
> 4 Question
>
> So I have not succeeded in resolving this issue of testing DPDK with
> helloworld and dpdk-testpmd.
>
> Have I missed something in the creation of the hugepages?
>
> Could you please provide help?
>
> Best,
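
To come back to point 4 above: "EAL: No available hugepages reported in
hugepages-2048kB" is an informational message, not an error. It only
says that no 2M pages are reserved, which is expected once you settled
on 1G pages; note that both testpmd runs above reached the prompt fine.
A clean sequence to start from, using only commands already shown in
this thread:

    dpdk-hugepages.py -u && dpdk-hugepages.py -c   # unmount and clear old reservations
    dpdk-hugepages.py -p 1G --setup 4G             # reserve 1G pages and mount
    dpdk-hugepages.py -s                           # verify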