* [dpdk-dev] l2fwd mmap memory failed
@ 2014-09-14 9:01 jerry
2014-09-14 11:12 ` Zhang, Jerry
0 siblings, 1 reply; 3+ messages in thread
From: jerry @ 2014-09-14 9:01 UTC (permalink / raw)
To: dev; +Cc: luonengjun
Hi all,
The l2fwd sample application fails to start in my environment, which has 90000 2M hugepages set up.
It reports "mmap failed: Cannot allocate memory".
Is there a maximum number of hugepages, or a memory size limit, for DPDK to mmap EAL memory?
Some information follows:
1. Environment:
Host OS: Suse11 Sp3 x86_64
NIC: intel 82599
DPDK: 1.6.0r2
2. Hugepage Configuration in Kernel command line:
hugepagesz=2M default_hugepagesz=2M hugepages=90000
3. l2fwd command line:
./build/l2fwd -c 0x3 -n 3 --socket-mem 2048 -- -q 1 -p 3
4. cat /proc/meminfo:
HugePages_Total: 90000
HugePages_Free: 90000
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
5. Error messages in dpdk log:
Sun Sep 14 08:37:28 2014:EAL: Detected lcore 0 as core 0 on socket 0
Sun Sep 14 08:37:28 2014:EAL: Detected lcore 1 as core 1 on socket 0
Sun Sep 14 08:37:28 2014:EAL: Setting up memory...
Sun Sep 14 08:38:44 2014:EAL: map_all_hugepages(): mmap failed: Cannot allocate memory
Sun Sep 14 08:38:44 2014:EAL: Failed to mmap 2 MB hugepages
Sun Sep 14 08:38:44 2014:EAL: Cannot init memory
Sun Sep 14 08:38:44 2014:EAL: Error - exiting with code: 1
Cause: Sun Sep 14 08:38:44 2014:Invalid EAL parameters
Does anyone know the cause of the issue and the corresponding fix? Thanks.
B.R.
Jerry
* Re: [dpdk-dev] l2fwd mmap memory failed
2014-09-14 9:01 [dpdk-dev] l2fwd mmap memory failed jerry
@ 2014-09-14 11:12 ` Zhang, Jerry
2014-09-15 0:59 ` jerry
0 siblings, 1 reply; 3+ messages in thread
From: Zhang, Jerry @ 2014-09-14 11:12 UTC (permalink / raw)
To: jerry, dev; +Cc: luonengjun
Hi,
>-----Original Message-----
>From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of jerry
>Sent: Sunday, September 14, 2014 5:01 PM
>To: dev@dpdk.org
>Cc: luonengjun@huawei.com
>Subject: [dpdk-dev] l2fwd mmap memory failed
>
>Hi all,
>
>The l2fwd sample application fails to start in my environment, which has
>90000 2M hugepages set up.
>It reports "mmap failed: Cannot allocate memory".
>Is there a maximum number of hugepages, or a memory size limit, for DPDK to
>mmap EAL memory?
>
>Some information follows:
>
>1. Environment:
>Host OS: Suse11 Sp3 x86_64
>NIC: intel 82599
>DPDK: 1.6.0r2
>
>2. Hugepage Configuration in Kernel command line:
>hugepagesz=2M default_hugepagesz=2M hugepages=90000
>
>3. l2fwd command line:
>./build/l2fwd -c 0x3 -n 3 --socket-mem 2048 -- -q 1 -p 3
>
>4. cat /proc/meminfo:
>HugePages_Total: 90000
>HugePages_Free: 90000
>HugePages_Rsvd: 0
>HugePages_Surp: 0
>Hugepagesize: 2048 kB
>
>5. Error messages in dpdk log:
>Sun Sep 14 08:37:28 2014:EAL: Detected lcore 0 as core 0 on socket 0
>Sun Sep 14 08:37:28 2014:EAL: Detected lcore 1 as core 1 on socket 0
>Sun Sep 14 08:37:28 2014:EAL: Setting up memory...
>Sun Sep 14 08:38:44 2014:EAL: map_all_hugepages(): mmap failed: Cannot allocate memory
>Sun Sep 14 08:38:44 2014:EAL: Failed to mmap 2 MB hugepages
>Sun Sep 14 08:38:44 2014:EAL: Cannot init memory
>Sun Sep 14 08:38:44 2014:EAL: Error - exiting with code: 1
> Cause: Sun Sep 14 08:38:44 2014:Invalid EAL parameters
>
>
>Does anyone know the cause of the issue and the corresponding fix? Thanks.
It's a system mmap failure rather than a DPDK-specific limit.
Since you have to map so many hugepages, I suggest checking /proc/sys/vm/max_map_count to see whether the number of mappings exceeds the maximum allowed.
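For reference, a quick way to check and temporarily raise the limit might look like this (the value 200000 is only an example; size it to the number of hugepages you expect EAL to map):
cat /proc/sys/vm/max_map_count            # current per-process mapping limit (the default is often 65530)
sysctl -w vm.max_map_count=200000         # raise it until the next reboot (run as root)
echo 200000 > /proc/sys/vm/max_map_count  # equivalent to the sysctl command above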
* Re: [dpdk-dev] l2fwd mmap memory failed
2014-09-14 11:12 ` Zhang, Jerry
@ 2014-09-15 0:59 ` jerry
0 siblings, 0 replies; 3+ messages in thread
From: jerry @ 2014-09-15 0:59 UTC (permalink / raw)
To: Zhang, Jerry, dev; +Cc: luonengjun
Hi,
On 2014/9/14 19:12, Zhang, Jerry wrote:
> Hi,
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of jerry
>> Sent: Sunday, September 14, 2014 5:01 PM
>> To: dev@dpdk.org
>> Cc: luonengjun@huawei.com
>> Subject: [dpdk-dev] l2fwd mmap memory failed
>>
>> Hi all,
>>
>> The l2fwd sample application fails to start in my environment, which has
>> 90000 2M hugepages set up.
>> It reports "mmap failed: Cannot allocate memory".
>> Is there a maximum number of hugepages, or a memory size limit, for DPDK to
>> mmap EAL memory?
>>
>> Some information follows:
>>
>> 1. Environment:
>> Host OS: Suse11 Sp3 x86_64
>> NIC: intel 82599
>> DPDK: 1.6.0r2
>>
>> 2. Hugepage Configuration in Kernel command line:
>> hugepagesz=2M default_hugepagesz=2M hugepages=90000
>>
>> 3. l2fwd command line:
>> ./build/l2fwd -c 0x3 -n 3 --socket-mem 2048 -- -q 1 -p 3
>>
>> 4. cat /proc/meminfo:
>> HugePages_Total: 90000
>> HugePages_Free: 90000
>> HugePages_Rsvd: 0
>> HugePages_Surp: 0
>> Hugepagesize: 2048 kB
>>
>> 5. Error messages in dpdk log:
>> Sun Sep 14 08:37:28 2014:EAL: Detected lcore 0 as core 0 on socket 0
>> Sun Sep 14 08:37:28 2014:EAL: Detected lcore 1 as core 1 on socket 0
>> Sun Sep 14 08:37:28 2014:EAL: Setting up memory...
>> Sun Sep 14 08:38:44 2014:EAL: map_all_hugepages(): mmap failed: Cannot allocate memory
>> Sun Sep 14 08:38:44 2014:EAL: Failed to mmap 2 MB hugepages
>> Sun Sep 14 08:38:44 2014:EAL: Cannot init memory
>> Sun Sep 14 08:38:44 2014:EAL: Error - exiting with code: 1
>> Cause: Sun Sep 14 08:38:44 2014:Invalid EAL parameters
>>
>>
>> Does anyone know the cause of the issue and the corresponding fix? Thanks.
>
> It's a system mmap failure rather than a DPDK-specific limit.
> Since you have to map so many hugepages, I suggest checking /proc/sys/vm/max_map_count to see whether the number of mappings exceeds the maximum allowed.
>
>
That works after running echo 200000 > /proc/sys/vm/max_map_count.
I had also tried a value of 100000 and it still failed; the reason may be that DPDK calls map_all_hugepages() twice during initialization.
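As a rough check of the numbers, assuming both map_all_hugepages() passes hold a mapping per page at the same time: 90000 pages x 2 mappings = 180000, which is above 100000 but below 200000, matching what I saw. To keep the setting across reboots, a line like the following in /etc/sysctl.conf should work:
vm.max_map_count = 200000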
Thanks for your help.