Hi,

To add more information: the server I'm using has two CPU sockets and two NUMA nodes, one numbered node 2 and the other node 6.
One more observation is that the following command executes successfully:
$ sudo dpdk-hugepages.py -p 1G --setup 2G -n 2
as does the following one:
$ sudo dpdk-hugepages.py -p 1G --setup 2G -n 6
After executing the first command, 1G hugepages are created. After executing the second command, the hugepages under node 2 are deleted.
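
One way to confirm this independently of the script is to read the per-node counters straight from sysfs (a quick check, assuming the standard Linux layout, which matches the paths the script itself reports):

$ cat /sys/devices/system/node/node2/hugepages/hugepages-1048576kB/nr_hugepages
$ cat /sys/devices/system/node/node6/hugepages/hugepages-1048576kB/nr_hugepages

Each file holds the number of 1G pages currently reserved on that node.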

Following is the output of the dpdk-testpmd command:

****************************************************************************************************
EAL: Detected CPU lcores: 128
EAL: Detected NUMA nodes: 8
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No free 1048576 kB hugepages reported on node 0
EAL: No free 1048576 kB hugepages reported on node 1
EAL: No free 1048576 kB hugepages reported on node 3
EAL: No free 1048576 kB hugepages reported on node 4
EAL: No free 1048576 kB hugepages reported on node 5
EAL: No free 1048576 kB hugepages reported on node 6
EAL: No free 1048576 kB hugepages reported on node 7
EAL: VFIO support initialized
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
EAL: Using IOMMU type 1 (Type 1)
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
EAL: Probe PCI driver: net_ice (8086:1592) device: 0000:63:00.0 (socket 3)
set_mempolicy: Invalid argument
PANIC in eth_dev_shared_data_prepare():
Cannot allocate ethdev shared data
0: /lib/x86_64-linux-gnu/librte_eal.so.23 (rte_dump_stack+0x41) [7dbb0fe000b1]
1: /lib/x86_64-linux-gnu/librte_eal.so.23 (__rte_panic+0xc1) [7dbb0fde11c7]
2: /lib/x86_64-linux-gnu/librte_ethdev.so.23 (7dbb0fedb000+0x8b16) [7dbb0fee3b16]
3: /lib/x86_64-linux-gnu/librte_ethdev.so.23 (rte_eth_dev_allocate+0x31) [7dbb0feef971]
4: /usr/lib/x86_64-linux-gnu/dpdk/pmds-23.0/librte_net_ice.so.23 (7dbb0f70e000+0x67465) [7dbb0f775465]
5: /usr/lib/x86_64-linux-gnu/dpdk/pmds-23.0/librte_bus_pci.so.23 (7dbb0fc64000+0x4c76) [7dbb0fc68c76]
6: /usr/lib/x86_64-linux-gnu/dpdk/pmds-23.0/librte_bus_pci.so.23 (7dbb0fc64000+0x8af4) [7dbb0fc6caf4]
7: /lib/x86_64-linux-gnu/librte_eal.so.23 (rte_bus_probe+0x23) [7dbb0fdeeab3]
8: /lib/x86_64-linux-gnu/librte_eal.so.23 (7dbb0fdd4000+0x123bf) [7dbb0fde63bf]
9: dpdk-testpmd (5813e0022000+0x45150) [5813e0067150]
10: /lib/x86_64-linux-gnu/libc.so.6 (7dbb0ec00000+0x28150) [7dbb0ec28150]
11: /lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main+0x89) [7dbb0ec28209]
12: dpdk-testpmd (5813e0022000+0x48e55) [5813e006ae55]
Aborted

****************************************************************************************************
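Given that only nodes 2 and 6 actually have hugepages, would restricting EAL to those nodes help? Something along these lines (a sketch, assuming --socket-mem takes one value in MB per detected NUMA node, so eight fields here):

$ sudo dpdk-testpmd --socket-mem=0,0,1024,0,0,0,1024,0 -- -i

I have not yet verified whether that avoids the set_mempolicy errors.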



Thanks & Regards
--
Lokesh Chakka.


On Mon, Apr 1, 2024 at 2:50 AM Lokesh Chakka <lvenkatakumarchakka@gmail.com> wrote:
Hi Stephen,

Thanks for the reply. Following is what I observed:

*************************************************************
$ dpdk-hugepages.py -s
Node Pages Size Total
2    512   2Mb    1Gb
6    512   2Mb    1Gb

Hugepages mounted on /dev/hugepages /mnt/huge

$ sudo dpdk-hugepages.py -p 1G --setup 2G
Unable to set pages (0 instead of 2 in /sys/devices/system/node/node4/hugepages/hugepages-1048576kB/nr_hugepages).
*************************************************************
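
The failure appears to be on node 4, which presumably has no memory. As a cross-check, the counts on the nodes that do have memory can be written directly through sysfs (a sketch, using the same path the error message reports):

$ echo 2 | sudo tee /sys/devices/system/node/node2/hugepages/hugepages-1048576kB/nr_hugepages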


Regards
--
Lokesh Chakka.


On Mon, Apr 1, 2024 at 12:36 AM Stephen Hemminger <stephen@networkplumber.org> wrote:
On Sun, 31 Mar 2024 16:28:19 +0530
Lokesh Chakka <lvenkatakumarchakka@gmail.com> wrote:

> Hello,
>
> I've installed dpdk in Ubuntu 23.10 with the command "sudo apt -y install
> dpdk*"
>
> added  "nodev /mnt/huge hugetlbfs pagesize=1GB 0 0" in /etc/fstab
> added "vm.nr_hugepages=1024" in /etc/sysctl.conf
>
> rebooted the machine and then did devbind using the following command:
>
> sudo modprobe vfio-pci && sudo dpdk-devbind.py --bind=vfio-pci 63:00.0
> 63:00.1
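>
> To verify the binding afterwards:
>
> $ dpdk-devbind.py --status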
>
> Huge page info is as follows :
>
> *************************************************
> $ cat /proc/meminfo | grep Huge
> AnonHugePages:      6144 kB
> ShmemHugePages:        0 kB
> FileHugePages:         0 kB
> HugePages_Total:    1024
> HugePages_Free:     1023
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB
> Hugetlb:         2097152 kB
> *************************************************

Your hugepages are not set up correctly. The mount is for 1G pages
and the sysctl entry makes 2M pages.
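
Note that vm.nr_hugepages only sizes the pool for the default hugepage size (2M on x86 unless overridden). For 1G pages at boot, the usual approach is the kernel command line instead (a sketch, assuming a GRUB-based system and four pages as an example count):

GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=4"

followed by update-grub and a reboot.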

Did you try using the dpdk-hugepages script?
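For example, to reserve 1G pages and mount them in one step:

$ sudo dpdk-hugepages.py -p 1G --setup 2G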