From: James Huang <jamsphon@gmail.com>
To: users@dpdk.org
Subject: [dpdk-users] DPDK program huge core file size
Date: Fri, 19 Feb 2021 11:18:39 -0800
Message-ID: <CAFpuyR7Mf-MMOx3Jyzy7rZNfqf6e6WwxBqdAuUAOxGwZHdjrtw@mail.gmail.com>
On CentOS 7, we observed that our program (based on DPDK 19.11) creates a
huge core file, 100+ GB, far larger than the expected <4 GB, even though
the system has only 16 GB of memory installed and reserves 1 GB hugepages
at boot time. This happens regardless of whether the core file is created
by a program crash (segfault) or with the gcore tool.
On CentOS 6, the same program (based on DPDK 17.05) produces a core file
of the expected size.
On CentOS 7, we tried various combinations of the process's
coredump_filter bits and found that only clearing bit 0 avoids the huge
core size. However, with bit 0 cleared the core file is small (200 MB) but
useless for debugging; for example, gdb's bt command produces no output.
Is there a way to avoid dumping the hugepage memory while keeping the
other memory in the core file?
The following is a comparison of the program's pmap output.
On CentOS 6, the hugepage mappings reside in the process's user space:
...
00007f4e80000000 1048576K rw-s- /mnt/huge_1GB/rtemap_0
00007f4ec0000000    2048K rw-s- /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.0/resource0
00007f4ec0200000      16K rw-s- /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.0/resource4
00007f4ec0204000    2048K rw-s- /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.1/resource0
00007f4ec0404000      16K rw-s- /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.1/resource4
...
On CentOS 7, the hugepage mappings reside in the process's system space:
...
0000000100000000 20K rw-s- config
0000000100005000 184K rw-s- fbarray_memzone
0000000100033000 4K rw-s- fbarray_memseg-1048576k-0-0
0000000140000000 1048576K rw-s- rtemap_0
0000000180000000 32505856K r---- [ anon ]
0000000940000000 4K rw-s- fbarray_memseg-1048576k-0-1
0000000980000000 33554432K r---- [ anon ]
0000001180000000 4K rw-s- fbarray_memseg-1048576k-0-2
00000011c0000000 33554432K r---- [ anon ]
00000019c0000000 4K rw-s- fbarray_memseg-1048576k-0-3
0000001a00000000 33554432K r---- [ anon ]
0000002200000000 1024K rw-s- resource0
0000002200100000 16K rw-s- resource3
0000002200104000 1024K rw-s- resource0
0000002200204000 16K rw-s- resource3
...
Thanks,
-James
Thread overview: 6+ messages
2021-02-19 19:18 James Huang [this message]
2021-02-23 19:22 ` James Huang
2021-02-24 3:59 ` Li Feng
2021-02-25 17:37 ` James Huang
2021-02-25 18:23 ` James Huang
2021-02-26 16:00 ` David Marchand