DPDK patches and discussions
From: Neil Horman <nhorman@tuxdriver.com>
To: "Damjan Marion (damarion)" <damarion@cisco.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] mmap fails with more than 40000 hugepages
Date: Thu, 5 Feb 2015 09:59:32 -0500	[thread overview]
Message-ID: <20150205145932.GD28355@hmsreliant.think-freely.org> (raw)
In-Reply-To: <2F27D954-A245-45DC-A1BE-0CA3E17AAD3B@cisco.com>

On Thu, Feb 05, 2015 at 01:20:01PM +0000, Damjan Marion (damarion) wrote:
> 
> > On 05 Feb 2015, at 13:59, Neil Horman <nhorman@tuxdriver.com> wrote:
> > 
> > On Thu, Feb 05, 2015 at 12:00:48PM +0000, Damjan Marion (damarion) wrote:
> >> Hi,
> >> 
> >> I have a system with 2 NUMA nodes and 256G RAM total. I noticed that DPDK crashes in rte_eal_init()
> >> when the number of available hugepages is around 40000 or above.
> >> Everything works fine with lower values (e.g. 30000).
> >> 
> >> I also tried with allocating 40000 on node0 and 0 on node1, same crash happens.
> >> 
> >> 
> >> Any idea what might be causing this?
> >> 
> >> Thanks,
> >> 
> >> Damjan
> >> 
> >> 
> >> $ cat /sys/devices/system/node/node[01]/hugepages/hugepages-2048kB/nr_hugepages
> >> 20000
> >> 20000
> >> 
> >> $ grep -i huge /proc/meminfo
> >> AnonHugePages:    706560 kB
> >> HugePages_Total:   40000
> >> HugePages_Free:    40000
> >> HugePages_Rsvd:        0
> >> HugePages_Surp:        0
> >> Hugepagesize:       2048 kB
> >> 
> > What's your shmmax value set to? 40000 2MB hugepages is way above the default
> > setting for how much shared RAM a system will allow.  I haven't done the math on
> > your logs below, but judging by the size of some of the mapped segments, I'm
> > betting you're hitting the default limit of 4GB.
> 
> $ cat /proc/sys/kernel/shmmax
> 33554432
> 
> $ sysctl -w kernel.shmmax=8589934592
> kernel.shmmax = 8589934592
> 
> same crash :(
> 
> Thanks,
> 
> Damjan

What about the shmmni and shmmax values?  The shmmax value will also need to be
set to at least 80G (more if you have other shared memory needs), and shmmni
will need to be larger than 40,000 to handle all the segments you're creating.
Neil
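
[Editor's note: the sizing above can be sketched as a short script. This is
a minimal illustration, not from the thread itself: the 80G figure follows
from 40000 x 2MB pages, and the shmmni value of 65536 is just an assumed
round number above the 40,000 segments; adjust both for other shared-memory
users on the box. It prints the sysctl commands rather than running them,
since `sysctl -w` needs root.]

```shell
#!/bin/sh
# Sketch: size SysV shared-memory limits for 40000 x 2MB hugepages.

PAGES=40000
PAGE_BYTES=$((2 * 1024 * 1024))       # one 2MB hugepage, in bytes
NEED=$((PAGES * PAGE_BYTES))          # total shared memory needed, ~80G

echo "total hugepage bytes: $NEED"

# shmmax caps the largest single segment, shmall caps total shared memory
# system-wide (counted in 4KB pages), and shmmni caps the number of
# segments; shmmni must exceed the segment count, here 40000.
echo "sysctl -w kernel.shmmax=$NEED"
echo "sysctl -w kernel.shmall=$((NEED / 4096))"
echo "sysctl -w kernel.shmmni=65536"
```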

  reply	other threads:[~2015-02-05 14:59 UTC|newest]

Thread overview: 13+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2015-02-05 12:00 Damjan Marion (damarion)
2015-02-05 12:59 ` Neil Horman
2015-02-05 13:20   ` Damjan Marion (damarion)
2015-02-05 14:59     ` Neil Horman [this message]
2015-02-05 15:41       ` Damjan Marion (damarion)
2015-02-05 16:35         ` Neil Horman
2015-02-05 13:22 ` Jay Rolette
2015-02-05 13:36   ` Damjan Marion (damarion)
2015-02-05 15:09     ` Jay Rolette
2015-02-06  1:46 ` Linhaifeng
2015-02-06  2:04 ` Linhaifeng
2015-02-06 10:26   ` De Lara Guarch, Pablo
2015-02-06 10:31     ` Damjan Marion (damarion)
