DPDK patches and discussions
From: John Wei <johntwei@gmail.com>
To: "Tan, Jianfeng" <jianfeng.tan@intel.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Fwd: EAL: map_all_hugepages(): mmap failed: Cannot allocate memory
Date: Fri, 18 Mar 2016 10:24:32 -0700	[thread overview]
Message-ID: <CAGaeUppf6vfQnOozJJF5Ck3w2itaFNh5cP-FznzHjTGTAUzuLA@mail.gmail.com> (raw)
In-Reply-To: <56EB6D29.9020907@intel.com>

Thanks for the reply. Upon further debugging, I was able to root-cause the
issue. In the cgroup, in addition to limiting the CPUs, I also limited the
NUMA node from which my OVS can allocate memory (cpuset.mems). I understand
that DPDK first grabs all the hugepages, then picks the best pages, then
releases the rest. But in my case this takes a long time, because I start
many OVS instances on the same host: each DPDK app has to wait for the
previous app to release the memory before it can proceed.
In addition, since I have specified (through cgroup cpuset.mems) not to
take memory from other nodes, maybe the DPDK library could skip grabbing
memory from these excluded nodes?
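The node-skipping idea above could start from the cpuset itself: on Linux the
allowed nodes appear as a range list (e.g. "0" or "0,2-3") in the cgroup's
cpuset.mems file and in the Mems_allowed_list field of /proc/self/status. A
minimal sketch of a hypothetical helper (not DPDK code) that expands such a
list into a node bitmask, which a reservation loop could consult before
touching pages on an excluded node:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical helper: expand a cpuset range list such as "0,2-3"
 * (the format of cpuset.mems / Mems_allowed_list) into a bitmask of
 * allowed NUMA nodes. Illustration only, not DPDK's actual API. */
static unsigned long parse_mems_allowed(const char *list)
{
    unsigned long mask = 0;
    const char *p = list;

    while (*p) {
        char *end;
        long lo = strtol(p, &end, 10);
        long hi = lo;

        if (*end == '-')                 /* a range like "2-3" */
            hi = strtol(end + 1, &end, 10);
        for (long n = lo; n <= hi; n++)
            mask |= 1UL << n;
        p = (*end == ',') ? end + 1 : end;  /* skip list separator */
    }
    return mask;
}
```

With a mask like this in hand, a loop that reserves hugepages could test
`mask & (1UL << node)` and skip nodes the cgroup forbids, instead of
grabbing and releasing their pages.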

Just some thoughts.

John


On Thu, Mar 17, 2016 at 7:51 PM, Tan, Jianfeng <jianfeng.tan@intel.com>
wrote:

>
>
> On 3/18/2016 6:41 AM, John Wei wrote:
>
> I am setting up OVS inside a Linux container. This OVS is built using the
> DPDK library.
> During the startup of ovs-vswitchd, it core dumped due to a failed mmap
> in eal_memory.c:
>        virtaddr = mmap(vma_addr, hugepage_sz, PROT_READ | PROT_WRITE,
>                 MAP_SHARED, fd, 0);
>
> This call is made inside a for loop that iterates over all the pages and
> mmaps them.
> My server has two CPU sockets, and I allocated 8192 2MB pages.
> The mmap calls for the first 4096 pages were successful; the call failed
> when trying to map the 4097th page.
>
> Can someone help me understand why the mmap calls for the first 4096 pages
> succeeded but the 4097th failed?
>
>
> In my limited experience, there are a few scenarios that may lead to such a
> failure: a. a size option was specified when mounting hugetlbfs; b. a cgroup
> limitation, /sys/fs/cgroup/hugetlb/<cgroup
> name>/hugetlb.2MB.limit_in_bytes; c. the open-files limit set by ulimit...
>
> Workaround: since only "--socket-mem 128,128" is needed (256 MB in total,
> i.e. 128 2MB pages), you can reduce the total number of 2M hugepages from
> 8192 to 512 (or something else).
> In addition, this is the kind of case that motivated a patchset I sent:
> http://dpdk.org/dev/patchwork/patch/11194/
>
> Thanks,
> Jianfeng
>
>
>
> John
>
>
>
> ovs-vswitchd --dpdk -c 0x1 -n 4 -l 1 --file-prefix ct0000- --socket-mem
> 128,128 -- unix:$DB_SOCK --pidfile --detach --log-file=ct.log
>
>
> EAL: Detected lcore 23 as core 5 on socket 1
> EAL: Support maximum 128 logical core(s) by configuration.
> EAL: Detected 24 lcore(s)
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: VFIO modules not all loaded, skip VFIO support...
> EAL: Setting up physically contiguous memory...
> EAL: map_all_hugepages(): mmap failed: Cannot allocate memory
> EAL: Failed to mmap 2 MB hugepages
> PANIC in rte_eal_init():
> Cannot init memory
> 7: [ovs-vswitchd() [0x411f15]]
> 6: [/lib64/libc.so.6(__libc_start_main+0xf5) [0x7ff5f6133b15]]
> 5: [ovs-vswitchd() [0x4106f9]]
> 4: [ovs-vswitchd() [0x66917d]]
> 3: [ovs-vswitchd() [0x42b6f5]]
> 2: [ovs-vswitchd() [0x40dd8c]]
> 1: [ovs-vswitchd() [0x56b3ba]]
> Aborted (core dumped)
>
>
>

Thread overview: 3+ messages
     [not found] <CAGaeUppJkrWXxcx5tMyeeJiW2ivGmPVAimYW1zNA4A=pSV2Z_g@mail.gmail.com>
2016-03-17 22:41 ` John Wei
2016-03-18  2:51   ` Tan, Jianfeng
2016-03-18 17:24     ` John Wei [this message]
