DPDK usage discussions
* No free hugepages reported
@ 2024-03-31 10:58 Lokesh Chakka
  2024-03-31 19:06 ` Stephen Hemminger
  0 siblings, 1 reply; 4+ messages in thread
From: Lokesh Chakka @ 2024-03-31 10:58 UTC (permalink / raw)
  To: users


Hello,

I've installed DPDK on Ubuntu 23.10 with the command "sudo apt -y install dpdk*",

added "nodev /mnt/huge hugetlbfs pagesize=1GB 0 0" to /etc/fstab,
and added "vm.nr_hugepages=1024" to /etc/sysctl.conf.

rebooted the machine, then bound the devices using the following command:

sudo modprobe vfio-pci && sudo dpdk-devbind.py --bind=vfio-pci 63:00.0 63:00.1

Hugepage info is as follows:

*************************************************
$ cat /proc/meminfo | grep Huge
AnonHugePages:      6144 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:    1024
HugePages_Free:     1023
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:         2097152 kB
*************************************************

Output of "dpdk-devbind.py -s" is as follows:

*************************************************

Network devices using DPDK-compatible driver
============================================
0000:63:00.0 'Ethernet Controller E810-C for QSFP 1592' drv=vfio-pci unused=ice
0000:63:00.1 'Ethernet Controller E810-C for QSFP 1592' drv=vfio-pci unused=ice

*************************************************

I see the following error when I try to run dpdk-test:

*************************************************
$ sudo dpdk-test
EAL: Detected CPU lcores: 128
EAL: Detected NUMA nodes: 8
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No free 2048 kB hugepages reported on node 0
EAL: No free 2048 kB hugepages reported on node 1
EAL: No free 2048 kB hugepages reported on node 3
EAL: No free 2048 kB hugepages reported on node 4
EAL: No free 2048 kB hugepages reported on node 5
EAL: No free 2048 kB hugepages reported on node 7
EAL: No free 1048576 kB hugepages reported on node 0
EAL: No free 1048576 kB hugepages reported on node 1
EAL: No free 1048576 kB hugepages reported on node 2
EAL: No free 1048576 kB hugepages reported on node 3
EAL: No free 1048576 kB hugepages reported on node 4
EAL: No free 1048576 kB hugepages reported on node 5
EAL: No free 1048576 kB hugepages reported on node 6
EAL: No free 1048576 kB hugepages reported on node 7
EAL: VFIO support initialized
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
EAL: Using IOMMU type 1 (Type 1)
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
EAL: Probe PCI driver: net_ice (8086:1592) device: 0000:63:00.0 (socket 3)
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
PANIC in eth_dev_shared_data_prepare():
Cannot allocate ethdev shared data
0: /lib/x86_64-linux-gnu/librte_eal.so.23 (rte_dump_stack+0x41) [788e7385d0b1]
1: /lib/x86_64-linux-gnu/librte_eal.so.23 (__rte_panic+0xc1) [788e7383e1c7]
2: /lib/x86_64-linux-gnu/librte_ethdev.so.23 (788e736f5000+0x8b16) [788e736fdb16]
3: /lib/x86_64-linux-gnu/librte_ethdev.so.23 (rte_eth_dev_allocate+0x31) [788e73709971]
4: /usr/lib/x86_64-linux-gnu/dpdk/pmds-23.0/librte_net_ice.so.23.0 (788e705d1000+0x67465) [788e70638465]
5: /lib/x86_64-linux-gnu/librte_bus_pci.so.23 (788e72fcc000+0x4c76) [788e72fd0c76]
6: /lib/x86_64-linux-gnu/librte_bus_pci.so.23 (788e72fcc000+0x8af4) [788e72fd4af4]
7: /lib/x86_64-linux-gnu/librte_eal.so.23 (rte_bus_probe+0x23) [788e7384bab3]
8: /lib/x86_64-linux-gnu/librte_eal.so.23 (788e73831000+0x123bf) [788e738433bf]
9: dpdk-test (59eca0915000+0x6c9e7) [59eca09819e7]
10: /lib/x86_64-linux-gnu/libc.so.6 (788e72c00000+0x28150) [788e72c28150]
11: /lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main+0x89) [788e72c28209]
12: dpdk-test (59eca0915000+0x6ee85) [59eca0983e85]
Aborted
*************************************************

Can someone please help me identify the issue?


Thanks & Regards
--
Lokesh Chakka.


^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: No free hugepages reported
  2024-03-31 10:58 No free hugepages reported Lokesh Chakka
@ 2024-03-31 19:06 ` Stephen Hemminger
  2024-03-31 21:20   ` Lokesh Chakka
  0 siblings, 1 reply; 4+ messages in thread
From: Stephen Hemminger @ 2024-03-31 19:06 UTC (permalink / raw)
  To: Lokesh Chakka; +Cc: users

On Sun, 31 Mar 2024 16:28:19 +0530
Lokesh Chakka <lvenkatakumarchakka@gmail.com> wrote:

> Hello,
> 
> I've installed dpdk in Ubuntu 23.10 with the command "sudo apt -y install
> dpdk*"
> 
> added  "nodev /mnt/huge hugetlbfs pagesize=1GB 0 0" in /etc/fstab
> added "vm.nr_hugepages=1024" in /etc/sysctl.conf
> 
> rebooted the machine and then did devbind using the following command:
> 
> sudo modprobe vfio-pci && sudo dpdk-devbind.py --bind=vfio-pci 63:00.0
> 63:00.1
> 
> Huge page info is as follows :
> 
> *************************************************
> $ cat /proc/meminfo | grep Huge
> AnonHugePages:      6144 kB
> ShmemHugePages:        0 kB
> FileHugePages:         0 kB
> HugePages_Total:    1024
> HugePages_Free:     1023
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB
> Hugetlb:         2097152 kB
> *************************************************

Your hugepages are not set up correctly: the hugetlbfs mount requests 1G pages,
but the vm.nr_hugepages sysctl reserves pages of the default size, which is 2M here.

Did you try using the dpdk-hugepages script?
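
The mismatch shows up directly in sysfs: vm.nr_hugepages only fills the
default-size pool (2M on this system), while the 1G pool has its own counter.
A rough sketch of checking and fixing it by hand (the count of 16 is purely
illustrative; pick whatever your application needs):

```shell
# vm.nr_hugepages only controls the default-size pool (2 MB on this system)
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# the 1 GB pool is a separate counter, still zero here
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
# reserve 1 GB pages explicitly, to match the hugetlbfs pagesize=1GB mount
echo 16 | sudo tee /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
```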


* Re: No free hugepages reported
  2024-03-31 19:06 ` Stephen Hemminger
@ 2024-03-31 21:20   ` Lokesh Chakka
  2024-04-02  7:14     ` Lokesh Chakka
  0 siblings, 1 reply; 4+ messages in thread
From: Lokesh Chakka @ 2024-03-31 21:20 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: users


hi Stephen,

Thanks for the reply. Here is what I observe:

*************************************************************
$ dpdk-hugepages.py -s
Node Pages Size Total
2    512   2Mb    1Gb
6    512   2Mb    1Gb

Hugepages mounted on /dev/hugepages /mnt/huge

$ sudo dpdk-hugepages.py -p 1G --setup 2G
Unable to set pages (0 instead of 2 in /sys/devices/system/node/node4/hugepages/hugepages-1048576kB/nr_hugepages).
*************************************************************


Regards
--
Lokesh Chakka.


On Mon, Apr 1, 2024 at 12:36 AM Stephen Hemminger <
stephen@networkplumber.org> wrote:

> On Sun, 31 Mar 2024 16:28:19 +0530
> Lokesh Chakka <lvenkatakumarchakka@gmail.com> wrote:
>
> > Hello,
> >
> > I've installed dpdk in Ubuntu 23.10 with the command "sudo apt -y install
> > dpdk*"
> >
> > added  "nodev /mnt/huge hugetlbfs pagesize=1GB 0 0" in /etc/fstab
> > added "vm.nr_hugepages=1024" in /etc/sysctl.conf
> >
> > rebooted the machine and then did devbind using the following command:
> >
> > sudo modprobe vfio-pci && sudo dpdk-devbind.py --bind=vfio-pci 63:00.0
> > 63:00.1
> >
> > Huge page info is as follows :
> >
> > *************************************************
> > $ cat /proc/meminfo | grep Huge
> > AnonHugePages:      6144 kB
> > ShmemHugePages:        0 kB
> > FileHugePages:         0 kB
> > HugePages_Total:    1024
> > HugePages_Free:     1023
> > HugePages_Rsvd:        0
> > HugePages_Surp:        0
> > Hugepagesize:       2048 kB
> > Hugetlb:         2097152 kB
> > *************************************************
>
> Your hugepages are not setup correctly. The mount is for 1G pages
> and the sysctl entry makes 2M pages.
>
> Did you try using the dpdk-hugepages script?
>



* Re: No free hugepages reported
  2024-03-31 21:20   ` Lokesh Chakka
@ 2024-04-02  7:14     ` Lokesh Chakka
  0 siblings, 0 replies; 4+ messages in thread
From: Lokesh Chakka @ 2024-04-02  7:14 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: users


hi,

To add more information: the server I'm using has two CPU sockets and two
NUMA nodes, one numbered Node 2 and the other Node 6.
One more observation: the following commands execute successfully:

$ sudo dpdk-hugepages.py -p 1G --setup 2G -n 2
$ sudo dpdk-hugepages.py -p 1G --setup 2G -n 6

After executing the first command, 1G hugepages are created. After
executing the second command, the hugepages under node 2 are deleted.
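
It seems --setup clears the existing reservation on each run, so writing the
per-node sysfs counters directly should keep pages on both nodes at once (a
sketch; the node numbers and the page count of 2 are just what this system uses):

```shell
# reserve two 1 GB pages on each populated NUMA node (nodes 2 and 6 here)
for node in 2 6; do
    echo 2 | sudo tee \
        "/sys/devices/system/node/node${node}/hugepages/hugepages-1048576kB/nr_hugepages"
done
# verify the reservation with the DPDK helper
dpdk-hugepages.py -s
```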

Following is the output of the dpdk-testpmd command:

****************************************************************************************************
EAL: Detected CPU lcores: 128
EAL: Detected NUMA nodes: 8
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No free 1048576 kB hugepages reported on node 0
EAL: No free 1048576 kB hugepages reported on node 1
EAL: No free 1048576 kB hugepages reported on node 3
EAL: No free 1048576 kB hugepages reported on node 4
EAL: No free 1048576 kB hugepages reported on node 5
EAL: No free 1048576 kB hugepages reported on node 6
EAL: No free 1048576 kB hugepages reported on node 7
EAL: VFIO support initialized
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
EAL: Using IOMMU type 1 (Type 1)
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
EAL: Probe PCI driver: net_ice (8086:1592) device: 0000:63:00.0 (socket 3)
set_mempolicy: Invalid argument
PANIC in eth_dev_shared_data_prepare():
Cannot allocate ethdev shared data
0: /lib/x86_64-linux-gnu/librte_eal.so.23 (rte_dump_stack+0x41) [7dbb0fe000b1]
1: /lib/x86_64-linux-gnu/librte_eal.so.23 (__rte_panic+0xc1) [7dbb0fde11c7]
2: /lib/x86_64-linux-gnu/librte_ethdev.so.23 (7dbb0fedb000+0x8b16) [7dbb0fee3b16]
3: /lib/x86_64-linux-gnu/librte_ethdev.so.23 (rte_eth_dev_allocate+0x31) [7dbb0feef971]
4: /usr/lib/x86_64-linux-gnu/dpdk/pmds-23.0/librte_net_ice.so.23 (7dbb0f70e000+0x67465) [7dbb0f775465]
5: /usr/lib/x86_64-linux-gnu/dpdk/pmds-23.0/librte_bus_pci.so.23 (7dbb0fc64000+0x4c76) [7dbb0fc68c76]
6: /usr/lib/x86_64-linux-gnu/dpdk/pmds-23.0/librte_bus_pci.so.23 (7dbb0fc64000+0x8af4) [7dbb0fc6caf4]
7: /lib/x86_64-linux-gnu/librte_eal.so.23 (rte_bus_probe+0x23) [7dbb0fdeeab3]
8: /lib/x86_64-linux-gnu/librte_eal.so.23 (7dbb0fdd4000+0x123bf) [7dbb0fde63bf]
9: dpdk-testpmd (5813e0022000+0x45150) [5813e0067150]
10: /lib/x86_64-linux-gnu/libc.so.6 (7dbb0ec00000+0x28150) [7dbb0ec28150]
11: /lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main+0x89) [7dbb0ec28209]
12: dpdk-testpmd (5813e0022000+0x48e55) [5813e006ae55]
Aborted

****************************************************************************************************



Thanks & Regards
--
Lokesh Chakka.


On Mon, Apr 1, 2024 at 2:50 AM Lokesh Chakka <lvenkatakumarchakka@gmail.com>
wrote:

> hi Stephen,
>
> Thanks for the reply. Following is the observation...
>
> *************************************************************
> $ dpdk-hugepages.py -s
> Node Pages Size Total
> 2    512   2Mb    1Gb
> 6    512   2Mb    1Gb
>
> Hugepages mounted on /dev/hugepages /mnt/huge
>
> $ sudo dpdk-hugepages.py -p 1G --setup 2G
> Unable to set pages (0 instead of 2 in
> /sys/devices/system/node/node4/hugepages/hugepages-1048576kB/nr_hugepages).
> *************************************************************
>
>
> Regards
> --
> Lokesh Chakka.
>
>
> On Mon, Apr 1, 2024 at 12:36 AM Stephen Hemminger <
> stephen@networkplumber.org> wrote:
>
>> On Sun, 31 Mar 2024 16:28:19 +0530
>> Lokesh Chakka <lvenkatakumarchakka@gmail.com> wrote:
>>
>> > Hello,
>> >
>> > I've installed dpdk in Ubuntu 23.10 with the command "sudo apt -y
>> install
>> > dpdk*"
>> >
>> > added  "nodev /mnt/huge hugetlbfs pagesize=1GB 0 0" in /etc/fstab
>> > added "vm.nr_hugepages=1024" in /etc/sysctl.conf
>> >
>> > rebooted the machine and then did devbind using the following command:
>> >
>> > sudo modprobe vfio-pci && sudo dpdk-devbind.py --bind=vfio-pci 63:00.0
>> > 63:00.1
>> >
>> > Huge page info is as follows :
>> >
>> > *************************************************
>> > $ cat /proc/meminfo | grep Huge
>> > AnonHugePages:      6144 kB
>> > ShmemHugePages:        0 kB
>> > FileHugePages:         0 kB
>> > HugePages_Total:    1024
>> > HugePages_Free:     1023
>> > HugePages_Rsvd:        0
>> > HugePages_Surp:        0
>> > Hugepagesize:       2048 kB
>> > Hugetlb:         2097152 kB
>> > *************************************************
>>
>> Your hugepages are not setup correctly. The mount is for 1G pages
>> and the sysctl entry makes 2M pages.
>>
>> Did you try using the dpdk-hugepages script?
>>
>


