From: "Tan, Jianfeng"
To: John Wei, dev@dpdk.org
Date: Fri, 18 Mar 2016 10:51:21 +0800
Message-ID: <56EB6D29.9020907@intel.com>
Subject: Re: [dpdk-dev] Fwd: EAL: map_all_hugepages(): mmap failed: Cannot allocate memory
List-Id: patches and discussions about DPDK

On 3/18/2016 6:41 AM, John Wei wrote:
> I am setting up OVS inside a Linux container. This OVS is built using the
> DPDK library. During startup, ovs-vswitchd dumped core because an mmap()
> call in eal_memory.c failed:
>
>     virtaddr = mmap(vma_addr, hugepage_sz, PROT_READ | PROT_WRITE,
>                     MAP_SHARED, fd, 0);
>
> This call is made inside a for loop that iterates over all the hugepages
> and maps them. My server has two cores, and I allocated 8192 2MB pages.
> The mmap for the first 4096 pages succeeded; it failed when trying to map
> the 4096th page (counting from zero).
>
> Can someone help me understand why the mmap for the first 4096 pages
> succeeded and then failed on the next one?
In my limited experience, several scenarios can lead to such a failure:

a. A size option was given when mounting hugetlbfs (it caps the total bytes the mount can hold);
b. A cgroup limit, /sys/fs/cgroup/hugetlb//hugetlb.2MB.limit_in_bytes;
c. The open-file limit (ulimit -n), since mapping each hugepage opens a file on hugetlbfs.

The failure at exactly 4096 pages (4096 x 2MB = 8GB) hints that one of these limits is being hit at that point.

Workaround: since only "--socket-mem 128,128" is needed, you can reduce the total number of 2MB hugepages from 8192 to 512 (or some other smaller value).

In addition, this kind of case is one reason I sent this patchset: http://dpdk.org/dev/patchwork/patch/11194/

Thanks,
Jianfeng

>
> John
>
> ovs-vswitchd --dpdk -c 0x1 -n 4 -l 1 --file-prefix ct0000- --socket-mem
> 128,128 -- unix:$DB_SOCK --pidfile --detach --log-file=ct.log
>
> EAL: Detected lcore 23 as core 5 on socket 1
> EAL: Support maximum 128 logical core(s) by configuration.
> EAL: Detected 24 lcore(s)
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: VFIO modules not all loaded, skip VFIO support...
> EAL: Setting up physically contiguous memory...
> EAL: map_all_hugepages(): mmap failed: Cannot allocate memory
> EAL: Failed to mmap 2 MB hugepages
> PANIC in rte_eal_init():
> Cannot init memory
> 7: [ovs-vswitchd() [0x411f15]]
> 6: [/lib64/libc.so.6(__libc_start_main+0xf5) [0x7ff5f6133b15]]
> 5: [ovs-vswitchd() [0x4106f9]]
> 4: [ovs-vswitchd() [0x66917d]]
> 3: [ovs-vswitchd() [0x42b6f5]]
> 2: [ovs-vswitchd() [0x40dd8c]]
> 1: [ovs-vswitchd() [0x56b3ba]]
> Aborted (core dumped)
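P.S. The limits Jianfeng lists can be checked from a shell. A rough sketch follows; the cgroup glob path is an assumption (the exact path depends on how the container's cgroups are set up), and the workaround at the end needs root:

```shell
#!/bin/sh
# a. hugetlbfs mount options: a "size=" option caps the bytes the mount can hold.
mount | grep hugetlbfs || echo "no hugetlbfs mounted"

# b. hugetlb cgroup limit (glob path is illustrative; adjust for your container).
cat /sys/fs/cgroup/hugetlb/*/hugetlb.2MB.limit_in_bytes 2>/dev/null

# c. open-file limit: mapping 8192 hugepage files needs roughly that many
#    descriptors, since EAL keeps one file per page on hugetlbfs.
ulimit -n

# Workaround (as root): shrink the hugepage pool. 512 x 2MB = 1GB, which is
# well above the 128+128 MB requested via --socket-mem.
# echo 512 > /proc/sys/vm/nr_hugepages
```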