From: "Lombardo, Ed" <Ed.Lombardo@netscout.com>
To: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: hugepage allocation mapping failure
Date: Wed, 4 Sep 2024 22:23:06 +0000
Message-ID: <CH3PR01MB847096E58EAF53571CE9115B8F9C2@CH3PR01MB8470.prod.exchangelabs.com>


Hi Dmitry,
I hope you don't mind me reaching out to you about a hugepage memory mapping to memseg list issue that occurs intermittently.

We are seeing the DPDK allocation of hugepages fail on occasion.
DPDK version 22.11.2
Oracle Linux 9.1 with kernel 5.14.0-284
The VM is configured with 32GB memory and 8 vCPU cores.
The setup uses 2 x 1GB = 2GB of hugepages in total.
We allocate the hugepages dynamically in a bash script before our application starts; they are not reserved via GRUB.
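Roughly, the script does something like the following (a simplified sketch, not the exact script; it uses the standard sysfs interface, and /mnt/huge is the mount point that shows up in the EAL log below):

#!/bin/bash
# Reserve two 1GB hugepages at runtime; the VM has a single NUMA node (node 0).
echo 2 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

# Mount hugetlbfs where EAL later creates the rtemap_* files (see the log below).
mkdir -p /mnt/huge
mount -t hugetlbfs -o pagesize=1G nodev /mnt/huge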

I turned on EAL debug in our application, which shows debug messages during EAL init.


Enable dpdk log EAL in nsprobe.
EAL: lib.eal log level changed from info to debug
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Detected lcore 2 as core 0 on socket 0
EAL: Detected lcore 3 as core 0 on socket 0
EAL: Detected lcore 4 as core 0 on socket 0
EAL: Detected lcore 5 as core 0 on socket 0
EAL: Detected lcore 6 as core 0 on socket 0
EAL: Detected lcore 7 as core 0 on socket 0
EAL: Maximum logical cores by configuration: 128
EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Checking presence of .so 'librte_eal.so.23.0'
EAL: Checking presence of .so 'librte_eal.so.23'
EAL: Checking presence of .so 'librte_eal.so'
EAL: Detected static linkage of DPDK
EAL: Ask a virtual area of 0x2000 bytes
EAL: Virtual area found at 0x100000000 (size = 0x2000)
[New Thread 0x7fed931ff640 (LWP 287600)]
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
[New Thread 0x7fed929fe640 (LWP 287601)]
EAL: PCI driver net_iavf for device 0000:00:05.0 wants IOVA as 'PA'
EAL: PCI driver net_ice_dcf for device 0000:00:05.0 wants IOVA as 'PA'
EAL: PCI driver net_iavf for device 0000:00:06.0 wants IOVA as 'PA'
EAL: PCI driver net_ice_dcf for device 0000:00:06.0 wants IOVA as 'PA'
EAL: Bus pci wants IOVA as 'PA'
EAL: Bus vdev wants IOVA as 'DC'
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
EAL: VFIO modules not loaded, skipping VFIO support...
EAL: Ask a virtual area of 0x2e000 bytes
EAL: Virtual area found at 0x100002000 (size = 0x2e000)
EAL: Setting up physically contiguous memory...
EAL: Setting maximum number of open files to 1024
EAL: Detected memory type: socket_id:0 hugepage_sz:1073741824
EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
EAL: Creating 1 segment lists: n_segs:2 socket_id:0 hugepage_sz:1073741824
EAL: Ask a virtual area of 0x1000 bytes
EAL: Virtual area found at 0x100030000 (size = 0x1000)
EAL: Memseg list allocated at socket 0, page size 0x100000kB
EAL: Ask a virtual area of 0x80000000 bytes
EAL: Virtual area found at 0x140000000 (size = 0x80000000)
EAL: VA reserved for memseg list at 0x140000000, size 80000000
EAL: Creating 1 segment lists: n_segs:1024 socket_id:0 hugepage_sz:2097152
EAL: Ask a virtual area of 0xd000 bytes
EAL: Virtual area found at 0x1c0000000 (size = 0xd000)
EAL: Memseg list allocated at socket 0, page size 0x800kB
EAL: Ask a virtual area of 0x80000000 bytes
EAL: Virtual area found at 0x1c0200000 (size = 0x80000000)
EAL: VA reserved for memseg list at 0x1c0200000, size 80000000
EAL: Trying to obtain current memory policy.
EAL: Setting policy MPOL_PREFERRED for socket 0
EAL: Setting policy MPOL_PREFERRED for socket 0
EAL: Restoring previous memory policy: 0
EAL: Hugepage /mnt/huge/rtemap_1 is on socket 0
EAL: Hugepage /mnt/huge/rtemap_0 is on socket 0
EAL: Requesting 2 pages of size 1024MB from socket 0    <<<< Same on good and bad
EAL: Attempting to map 1024M on socket 0      <<<< here, on a good VM this states "Attempting to map 2048M on socket 0"; we have one NUMA node / one socket.
EAL: Allocated 1024M on socket 0                         <<<< here, it allocated 1024M on socket 0.
EAL: Attempting to map 1024M on socket 0      <<<< here, it attempts to map the last 1G on socket 0.
EAL: Could not find space for memseg. Please increase 1024 and/or 2048 in configuration.   <<<
EAL: Couldn't remap hugepage files into memseg lists      <<<<
EAL: FATAL: Cannot init memory
EAL: Cannot init memory


// good VM, for comparison
EAL: Hugepage /mnt/huge/rtemap_1 is on socket 0
EAL: Hugepage /mnt/huge/rtemap_0 is on socket 0
EAL: Requesting 2 pages of size 1024MB from socket 0
EAL: Attempting to map 2048M on socket 0
EAL: Allocated 2048M on socket 0
EAL: Added 2048M to heap on socket 0

Could it be that the hugepages are not physically contiguous, and a reboot clears the issue? I am not able to confirm this.
I tried rebooting the VM 10 times and could not get it to fail.
I have tried multiple VMs, and it only fails sometimes.
The issue has been seen on both VMware and OpenStack VMs.

A few months back you helped me reduce the VIRT memory of our application.

I added the following before building the DPDK static libraries that are used in our application build.

#define DPDK_REDUCE_VIRT_8G   // selects the reduced memseg-list (MSL) configuration below

#if defined(DPDK_ORIGINAL) // original, VIRT: 36.6 GB
#define RTE_MAX_MEMSEG_LISTS 128
#define RTE_MAX_MEMSEG_PER_LIST 8192
#define RTE_MAX_MEM_MB_PER_LIST 32768
#define RTE_MAX_MEMSEG_PER_TYPE 32768
#define RTE_MAX_MEM_MB_PER_TYPE 65536
#endif

#if defined(DPDK_REDUCE_VIRT_8G)  // VIRT: 5.9 GB
#define RTE_MAX_MEMSEG_LISTS 2
#define RTE_MAX_MEMSEG_PER_LIST 1024
#define RTE_MAX_MEM_MB_PER_LIST 2048
#define RTE_MAX_MEMSEG_PER_TYPE 1024
#define RTE_MAX_MEM_MB_PER_TYPE 2048
#endif

We provide to rte_eal_init() the following arguments:
'app_name, -c0x2, -n4, --socket-mem=2048, --legacy-mem, --no-telemetry'
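For reference, this is roughly how those arguments are handed to rte_eal_init() (a simplified sketch, not our actual init code; "app_name" and the error handling are placeholders, the flags are the ones listed above):

#include <stdio.h>
#include <rte_eal.h>
#include <rte_errno.h>

static int init_eal(void)
{
    /* Same EAL options as listed above; "app_name" is a placeholder. */
    char *eal_argv[] = {
        "app_name",
        "-c0x2",
        "-n4",
        "--socket-mem=2048",
        "--legacy-mem",
        "--no-telemetry",
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        /* This is where "EAL: FATAL: Cannot init memory" surfaces. */
        fprintf(stderr, "rte_eal_init() failed: %s\n", rte_strerror(rte_errno));
        return -1;
    }
    return 0;
}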

What do you suggest to eliminate this intermittent hugepage-to-memseg-list mapping issue?

Thanks,
Ed
