From: "Michael Hu (NSBU)" <humichael@vmware.com>
To: "dev@dpdk.org" <dev@dpdk.org>
Subject: [dpdk-dev] dpdk starting issue with descending virtual address allocation in new kernel
Date: Wed, 10 Sep 2014 22:40:36 +0000 [thread overview]
Message-ID: <D03621C0.E4C3%humichael@vmware.com> (raw)
Hi All,
We have a kernel config question to consult you about.
DPDK fails to start due to an mbuf creation issue on a new kernel, 3.14.17 with grsecurity patches.
We traced the issue down: the kernel now allocates huge page virtual addresses from high to low, whereas DPDK expects them to run from low to high in order to treat the pages as consecutive. See the dumped virtual addresses below: the first page is mapped at 0x710421400000 and the next at 0x710421200000, where previously it would be 0x710421200000 first, then 0x710421400000. The pages are still contiguous, just mapped in descending order. (A small standalone sketch of this mapping behavior follows the dump.)
----
Initialize Port 0 -- TxQ 1, RxQ 1, Src MAC 00:0c:29:b3:30:db
Create: Default RX 0:0 - Memory used (MBUFs 4096 x (size 1984 + Hdr 64)) + 790720 = 8965 KB
Zone 0: name:<RG_MP_log_history>, phys:0x6ac00000, len:0x2080, virt:0x710421400000, socket_id:0, flags:0
Zone 1: name:<MP_log_history>, phys:0x6ac02080, len:0x1d10c0, virt:0x710421402080, socket_id:0, flags:0
Zone 2: name:<MALLOC_S0_HEAP_0>, phys:0x6ae00000, len:0x160000, virt:0x710421200000, socket_id:0, flags:0
Zone 3: name:<rte_eth_dev_data>, phys:0x6add3140, len:0x11a00, virt:0x7104215d3140, socket_id:0, flags:0
Zone 4: name:<rte_vmxnet3_pmd_0_shared>, phys:0x6ade4b40, len:0x300, virt:0x7104215e4b40, socket_id:0, flags:0
Zone 5: name:<rte_vmxnet3_pmd_0_queuedesc>, phys:0x6ade4e80, len:0x200, virt:0x7104215e4e80, socket_id:0, flags:0
Zone 6: name:<RG_MP_Default RX 0:0>, phys:0x6ade5080, len:0x10080, virt:0x7104215e5080, socket_id:0, flags:0
Segment 0: phys:0x6ac00000, len:2097152, virt:0x710421400000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: phys:0x6ae00000, len:2097152, virt:0x710421200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: phys:0x6b000000, len:2097152, virt:0x710421000000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: phys:0x6b200000, len:2097152, virt:0x710420e00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: phys:0x6b400000, len:2097152, virt:0x710420c00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 5: phys:0x6b600000, len:2097152, virt:0x710420a00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 6: phys:0x6b800000, len:2097152, virt:0x710420800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 7: phys:0x6ba00000, len:2097152, virt:0x710420600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 8: phys:0x6bc00000, len:2097152, virt:0x710420400000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 9: phys:0x6be00000, len:2097152, virt:0x710420200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
---
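For reference, here is a minimal standalone sketch (our own illustration, not DPDK code; the anonymous MAP_HUGETLB mapping and the 2 MB page size are assumptions) that shows whether a kernel hands out successive mmap() addresses bottom-up or top-down:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define HUGE_2MB (2UL * 1024 * 1024)

int main(void)
{
        /* map two anonymous 2 MB huge pages back to back */
        void *a = mmap(NULL, HUGE_2MB, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        void *b = mmap(NULL, HUGE_2MB, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

        if (a == MAP_FAILED || b == MAP_FAILED) {
                perror("mmap"); /* requires reserved 2 MB huge pages */
                return 1;
        }

        /* On a bottom-up kernel, b is typically a + 0x200000; on a
         * top-down kernel (as with grsecurity here), b is typically
         * a - 0x200000, matching the descending segment dump above. */
        printf("first  map: %p\n", a);
        printf("second map: %p\n", b);
        return 0;
}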
The related DPDK code is in
dpdk/lib/librte_eal/linuxapp/eal/eal_memory.c :: rte_eal_hugepage_init():
    for (i = 0; i < nr_hugefiles; i++) {
        new_memseg = 0;

        /* if this is a new section, create a new memseg */
        if (i == 0)
            new_memseg = 1;
        else if (hugepage[i].socket_id != hugepage[i-1].socket_id)
            new_memseg = 1;
        else if (hugepage[i].size != hugepage[i-1].size)
            new_memseg = 1;
        else if ((hugepage[i].physaddr - hugepage[i-1].physaddr) !=
                 hugepage[i].size)
            new_memseg = 1;
        else if (((unsigned long)hugepage[i].final_va -
                  (unsigned long)hugepage[i-1].final_va) != hugepage[i].size) {
            new_memseg = 1;
        }
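Plugging the dumped addresses into that last check (our own arithmetic, not DPDK output): for Segment 1, final_va minus the previous final_va is 0x710421200000 - 0x710421400000, which wraps around in unsigned arithmetic and can never equal the huge page size, so every page after the first starts a new memseg and the ~9 MB mbuf pool cannot find a large enough contiguous zone:

#include <stdio.h>

int main(void)
{
        unsigned long prev_va = 0x710421400000UL; /* Segment 0 virt above */
        unsigned long cur_va  = 0x710421200000UL; /* Segment 1 virt above */
        unsigned long sz      = 0x200000UL;       /* 2 MB huge page */

        /* prints delta = 0xffffffffffe00000, expected 0x200000: with a
         * descending mapping this check fails for every page */
        printf("delta = 0x%lx, expected 0x%lx\n", cur_va - prev_va, sz);
        return 0;
}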
Is this a known issue? Is there a workaround? Or could you advise which kernel config option may be related to this change in kernel behavior?
Thanks,
Michael