DPDK usage discussions
From: 洪全 <hongquan@iie.ac.cn>
To: "Stephen Hemminger" <stephen@networkplumber.org>
Cc: users@dpdk.org
Subject: Re: Re: Issue with Cannot allocate memory when using 32-bit DPDK application
Date: Wed, 3 Jul 2024 10:18:34 +0800 (GMT+08:00)
Message-ID: <40fa5d47.836d6.1907662f372.Coremail.hongquan@iie.ac.cn>
In-Reply-To: <20240702080837.5e925cbb@hermes.local>




> ----- Original Message -----
> From: "Stephen Hemminger" <stephen@networkplumber.org>
> Sent: 2024-07-02 23:08:37 (Tuesday)
> To: "洪全" <hongquan@iie.ac.cn>
> Cc: users@dpdk.org
> Subject: Re: Issue with Cannot allocate memory when using 32-bit DPDK application
> 
> On Tue, 2 Jul 2024 18:17:08 +0800 (GMT+08:00)
> 洪全 <hongquan@iie.ac.cn> wrote:
> 
> > Dear DPDK community,
> > 
> > 
> > I am encountering an issue when attempting to run a 32-bit DPDK application on Linux. Specifically, I am facing a "Cannot allocate memory" error during initialization. While I can mitigate this issue by using the `--no-huge` option, it adversely affects the performance of my application.
> > 
> > 
> > Here is the error output I receive:
> > 
> > 
> > ```
> > sudo ./app -l 0-1 --proc-type=primary --file-prefix=pmd1 --vdev=net_tap001,iface=tap001 --no-pci
> > EAL: Detected CPU lcores: 2
> > EAL: Detected NUMA nodes: 1
> > EAL: Detected shared linkage of DPDK
> > EAL: Multi-process socket /var/run/dpdk/pmd1/mp_socket
> > EAL: Selected IOVA mode 'PA'
> > EAL: Cannot get a virtual area: Cannot allocate memory
> > EAL: Cannot allocate VA space for memseg list, retrying with different page size
> > EAL: Cannot allocate VA space on socket 0
> > EAL: FATAL: Cannot init memory
> > EAL: Cannot init memory
> > app: main.c:284: main: Assertion `(ret = rte_eal_init(argc, (char **) argv)) >= 0' failed.
> > Aborted
> > ```
> > 
> > 
> > When debugging with `--log-level=eal,8`, the relevant portion of the output indicates attempts to allocate memory:
> > 
> > 
> > ```
> > EAL: Attempting to preallocate 2048M on socket 0
> > EAL: Ask a virtual area of 0xc000 bytes
> > EAL: Virtual area found at 0xeb077000 (size = 0xc000)
> > EAL: Memseg list allocated at socket 0, page size 0x800kB
> > EAL: Ask a virtual area of 0x80000000 bytes
> > EAL: Cannot mmap((nil), 0x80200000, 0x0, 0x22, -1, 0x0): Cannot allocate memory
> > EAL: Cannot get a virtual area: Cannot allocate memory
> > EAL: Cannot allocate VA space for memseg list, retrying with different page size
> > EAL: Cannot allocate VA space on socket 0
> > EAL: FATAL: Cannot init memory
> > EAL: Cannot init memory
> > app: main.c:284: main: Assertion `(ret = rte_eal_init(argc, (char **) argv)) >= 0' failed.
> > Aborted
> > ```
> > 
> > 
> > System information:
> > - Hugepages configured: `echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages`
> > - Hugepages mounted: `mount -t hugetlbfs hugetlbfs /dev/hugepages`
> > - NUMA node information: `numactl --hardware`
> >   - available: 1 nodes (0)
> >   - node 0 cpus: 0 1
> >   - node 0 size: 7896 MB
> >   - node 0 free: 3915 MB
> >   - node distances:
> >     - node 0: 10
> > 
> > 
> > DPDK version: 22.03
> > Distribution: Ubuntu 22.04
> > Kernel information: Linux hq-virtual-machine 6.5.0-35-generic #35~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue May 7 09:00:52 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
> > 
> > 
> > I believe the issue stems from the attempt to preallocate 2048M on socket 0, but using the `-m` or `--socket-mem` options did not resolve the problem.
> > 
> > 
> > Could you please provide guidance on how to properly configure DPDK to avoid this memory allocation issue while maximizing performance?
> 
> What CPU architecture? Traditionally, 32-bit x86 gives 3GB of address space to userspace,
> with the remaining 1GB reserved for the kernel mapping. Some other architectures may do
> the same thing.
> 
> In userspace, some of that space is then taken by the program's code, data, and stack. The
> DPDK EAL init then tries to map all of the available huge pages (2G) and fails to find
> enough contiguous virtual address space to fit them.
> 
> Try a smaller amount of huge pages.

My CPU architecture is x86-64.
Does trying a smaller amount of huge pages mean allocating several 1GB huge pages instead?
According to the output, EAL calls mmap((nil), 0x80200000, 0x0, 0x22, -1, 0x0), i.e. it asks for roughly 2GB (2GiB + 2MiB) of virtual address space. Do I need to allocate three 1GB huge pages to satisfy this request? As you said, 32-bit x86 has 3GB for userspace. If I allocate all 3GB to huge pages, will there be an error?
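Or does "a smaller amount" simply mean keeping the 2MB page size but configuring fewer pages, something like the sketch below? (The count of 256 is only a guess on my part.)

```
# My guess at the suggestion: configure fewer 2MB hugepages so that the
# EAL preallocation fits inside the 32-bit virtual address space
# (256 pages * 2MB = 512MB here; the count is illustrative).
echo 256 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
```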

In addition, I tried to reduce the amount of memory requested in the mmap above by using the --socket-mem or -m option, but it did not work.
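For reference, the kind of invocation I mean is roughly the following (the 512 figure is only illustrative, not necessarily the exact value I used):

```
# Illustrative only: limit the amount of hugepage memory EAL
# pre-allocates on socket 0 at startup (value in MB).
sudo ./app -l 0-1 --proc-type=primary --file-prefix=pmd1 \
     --socket-mem=512 \
     --vdev=net_tap001,iface=tap001 --no-pci
```

Even with a limit like this, rte_eal_init() still failed for me.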


Thread overview: 5+ messages
2024-07-02 10:17 洪全
2024-07-02 15:08 ` Stephen Hemminger
2024-07-03  2:18   ` 洪全 [this message]
2024-07-03  5:43     ` Stephen Hemminger
2024-07-03  6:04       ` 洪全
