DPDK patches and discussions
From: Alex Markuze <alex@weka.io>
To: Newman Poborsky <newman555p@gmail.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] rte_mempool_create fails with ENOMEM
Date: Thu, 18 Dec 2014 16:21:19 +0200	[thread overview]
Message-ID: <CAKfHP0WH+7ZJO9L5ME96uwkeW4h4zjRCG=mn9Ef2=ptD_mybOA@mail.gmail.com>
In-Reply-To: <CAHW=9PuuEjnA5jnuGaHkB28aaznrJdNysh=398oE3LwOFovQQg@mail.gmail.com>

I've also seen a similar issue when trying to run a DPDK app that
allocates huge pools (~0.5 GB) after a memory-heavy operation on the machine.

I've come to the same conclusion as you did: internal fragmentation is
causing the pool creation failures.
It seems that rte_mempool_xmem_create/rte_memzone_reserve_aligned attempt
to create physically contiguous pools. This may offer a slight performance
gain (?), but it can cause unpredictable allocation failures, which is a big
risk for DC deployments where hundreds or even thousands of machines may be
running a DPDK app and fail inexplicably.
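To make that concrete, here is a minimal sketch of the usual pktmbuf pool
setup (the pool name, element count, and cache size below are made up, not
taken from your app). With a call like this, the object area behind the pool
is reserved as one memzone, i.e. one physically contiguous stretch of
hugepages, which is exactly what fragmentation breaks:

    #include <stdio.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_errno.h>
    #include <rte_lcore.h>

    #define NB_MBUF   8192   /* made-up pool size, adjust to your app */
    #define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)

    static struct rte_mempool *
    create_packet_pool(void)
    {
            struct rte_mempool *mp;

            /* The object area for the whole pool is reserved as a single
             * physically contiguous memzone on the caller's socket. */
            mp = rte_mempool_create("packet_pool", NB_MBUF, MBUF_SIZE, 32,
                                    sizeof(struct rte_pktmbuf_pool_private),
                                    rte_pktmbuf_pool_init, NULL,
                                    rte_pktmbuf_init, NULL,
                                    rte_socket_id(), 0);
            if (mp == NULL)
                    printf("mempool create failed: %s\n",
                           rte_strerror(rte_errno));
            return mp;
    }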

I didn't really get the chance to dig into the memory management internals
of DPDK, so feel free to correct me where I'm off.
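As a quick sanity check, something like the following (a rough, untested
sketch; dump_physmem_layout is just a name I made up) can walk the physical
segment layout after EAL init and show whether any single contiguous segment
is still large enough for the pool you are about to create, or whether
everything has degraded to 2 MB pieces as you observed:

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_memory.h>

    /* Rough sketch: print every populated memory segment. */
    static void
    dump_physmem_layout(void)
    {
            const struct rte_memseg *ms = rte_eal_get_physmem_layout();
            unsigned i;

            for (i = 0; i < RTE_MAX_MEMSEG; i++) {
                    if (ms[i].addr == NULL || ms[i].len == 0)
                            continue;
                    printf("seg %u: phys=0x%" PRIx64 " len=%zu hugepage_sz=%" PRIu64 "\n",
                           i, (uint64_t)ms[i].phys_addr, ms[i].len,
                           ms[i].hugepage_sz);
            }
    }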

Thanks.

On Thu, Dec 18, 2014 at 3:25 PM, Newman Poborsky <newman555p@gmail.com>
wrote:
>
> Hi,
>
> could someone please explain why mempool creation sometimes fails with
> ENOMEM?
>
> I run my test app several times without any problems, and then I start
> getting an ENOMEM error when creating the mempools that are used for
> packets. I tried deleting everything from /mnt/huge, increasing the number
> of huge pages, and remounting /mnt/huge, but nothing helps.
>
> There is more than enough memory on the server. I tried to debug the
> rte_mempool_create() call, and it seems that after the server is restarted
> the free mem segments are bigger than 2 MB, but after running the test app
> several times, all free mem segments have a size of 2 MB; since I am
> requesting 8 MB for my packet mempool, this fails. I'm not really sure
> that this conclusion is correct.
>
> Does anybody have any idea what to check and how running my test app
> several times affects hugepages?
>
> For me, this doesn't make any sense, because after the test app exits, the
> resources should be freed, right?
>
> This has been driving me crazy for days now. I tried reading a bit more
> theory about hugepages, but didn't find anything that could help me.
> Maybe it's something else, completely trivial, but I can't figure it out,
> so any help is appreciated.
>
> Thank you!
>
> BR,
> Newman P.
>

Thread overview: 9+ messages
2014-12-18 13:25 Newman Poborsky
2014-12-18 14:21 ` Alex Markuze [this message]
2014-12-18 17:42 ` Ananyev, Konstantin
2014-12-18 20:03   ` Ananyev, Konstantin
2014-12-19 20:13     ` Newman Poborsky
2014-12-20  1:34       ` Stephen Hemminger
2014-12-22 10:48         ` Newman Poborsky
2015-01-08  8:19           ` Newman Poborsky
2015-01-10 19:26             ` Liran Zvibel