Date: Thu, 5 Feb 2015 09:59:32 -0500
From: Neil Horman
To: "Damjan Marion (damarion)"
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] mmap fails with more than 40000 hugepages

On Thu, Feb 05, 2015 at 01:20:01PM +0000, Damjan Marion (damarion) wrote:
>
> > On 05 Feb 2015, at 13:59, Neil Horman wrote:
> >
> > On Thu, Feb 05, 2015 at 12:00:48PM +0000, Damjan Marion (damarion) wrote:
> >> Hi,
> >>
> >> I have a system with 2 NUMA nodes and 256G of RAM total. I noticed that DPDK crashes in rte_eal_init()
> >> when the number of available hugepages is around 40000 or above.
> >> Everything works fine with lower values (e.g. 30000).
> >>
> >> I also tried allocating 40000 on node0 and 0 on node1; the same crash happens.
> >>
> >> Any idea what might be causing this?
> >>
> >> Thanks,
> >>
> >> Damjan
> >>
> >> $ cat /sys/devices/system/node/node[01]/hugepages/hugepages-2048kB/nr_hugepages
> >> 20000
> >> 20000
> >>
> >> $ grep -i huge /proc/meminfo
> >> AnonHugePages:    706560 kB
> >> HugePages_Total:   40000
> >> HugePages_Free:    40000
> >> HugePages_Rsvd:        0
> >> HugePages_Surp:        0
> >> Hugepagesize:       2048 kB
> >>
> > What's your shmmax value set to? 40000 2MB hugepages is way above the default
> > setting for how much shared RAM a system will allow. I've not done the math on
> > your logs below, but judging by the size of some of the mapped segments, I'm
> > betting you're hitting the default limit of 4GB.
>
> $ cat /proc/sys/kernel/shmmax
> 33554432
>
> $ sysctl -w kernel.shmmax=8589934592
> kernel.shmmax = 8589934592
>
> Same crash :(
>
> Thanks,
>
> Damjan

What about the shmmni and shmmax values? The shmmax value will also need to be
set to at least 80G (more if you have other shared memory needs), and shmmni
will need to be larger than 40,000 to handle all the segments you're creating.

Neil
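
For reference, a rough sizing sketch of the limits Neil describes, assuming 40000 x 2MB hugepages (80GB total) and a 4kB base page size; shmall (counted in base pages) often needs raising alongside shmmax and shmmni, and these numbers are only a lower bound if anything else on the box uses SysV shared memory:

$ # 40000 hugepages * 2MB = 85899345920 bytes (80GB)
$ sysctl -w kernel.shmmax=85899345920   # largest single shm segment, in bytes
$ sysctl -w kernel.shmall=20971520      # total shm allowed, in 4kB pages (80GB / 4kB)
$ sysctl -w kernel.shmmni=65536         # max number of shm segments, comfortably above 40000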