DPDK patches and discussions
From: "Richardson, Bruce" <bruce.richardson@intel.com>
To: "Gooch, Stephen (Wind River)" <stephen.gooch@windriver.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] mmap() hint address
Date: Fri, 13 Jun 2014 21:27:55 +0000	[thread overview]
Message-ID: <59AF69C657FD0841A61C55336867B5B01AA361A6@IRSMSX103.ger.corp.intel.com> (raw)
In-Reply-To: <9205DC19ECCD944CA2FAC59508A772BABCEFF60C@ALA-MBA.corp.ad.wrs.com>

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Gooch, Stephen
> Sent: Friday, June 13, 2014 2:03 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] mmap() hint address
> 
> Hello,
> 
> I have seen a case where a secondary DPDK process tries to map a uio resource,
> where mmap() is normally passed the corresponding virtual address as a hint
> address.  However, in some instances mmap() returns a virtual address that is
> not the hint address, which results in rte_panic() and the secondary process
> going defunct.
> 
> This happens from time to time on an embedded device when nr_hugepages is
> set to 128, but never when nr_hugepages is set to 256 on the same device.  My
> question is: if mmap() can find the correct memory regions when nr_hugepages
> is set to 256, would it not require fewer resources (and therefore be more
> likely to succeed) at a lower value such as 128?
> 
> Any ideas what would cause this mmap() behavior at a lower nr_hugepages
> value?
> 
> - Stephen

Hi Stephen,

That's a strange one!
I don't know for certain why this is happening, but here is one possible theory. :-)

It could be due to the size of the memory blocks that are getting mmapped. When you use 256 pages, the blocks of memory being mapped may well be larger (depending on how fragmented in memory the 2MB pages are), and so may be getting mapped at a higher set of address ranges where there is more free memory. That set of address ranges is then also free in the secondary process, so it is similarly able to map the memory.
With 128 hugepages, you may be looking for smaller amounts of memory, and so the addresses get mapped in at a different spot in the virtual address space, one that may be more heavily used. Then, when the secondary process tries to duplicate the mappings, it already has memory in use in that region and the mapping fails.
In short: one theory is that having bigger blocks to map causes the memory to be placed at a different location in the virtual address space, one which is free from conflicts in the secondary process.
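
For reference, the step that blows up is essentially the following pattern (a simplified sketch of what the secondary process effectively does, not the actual EAL code): mmap() is called with the exact virtual address recorded by the primary, and getting any other address back means that range is already occupied in the secondary.

/*
 * Simplified sketch (not the actual DPDK EAL code) of the re-mapping
 * step in a secondary process: ask for the exact virtual address the
 * primary used, and treat anything else as a failure.
 */
#include <stddef.h>
#include <sys/mman.h>

static void *map_at_exact_addr(void *hint, size_t len, int fd)
{
	void *va = mmap(hint, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	if (va == MAP_FAILED)
		return NULL;
	if (va != hint) {
		/* The kernel placed the mapping elsewhere because the
		 * requested range is already in use in this process -
		 * this is the situation that ends in rte_panic(). */
		munmap(va, len);
		return NULL;
	}
	return va;
}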

So, how to confirm or refute this, and generally debug this issue?
Well, in general we would need to look at the messages printed out at startup by the primary process to see how big the blocks being mapped are in each case, and where they end up in the virtual address space.
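
If it helps, you could also dump /proc/self/maps from both the primary and the secondary after initialization to see exactly where the hugepage segments landed and what else occupies the nearby address ranges. Something like this illustrative helper (not part of DPDK) would do:

/* Illustrative helper (not part of DPDK): print this process's own
 * memory map so the hugepage segment addresses can be compared
 * between the primary and secondary processes. */
#include <stdio.h>

static void dump_own_maps(void)
{
	char line[256];
	FILE *f = fopen("/proc/self/maps", "r");

	if (f == NULL)
		return;
	while (fgets(line, sizeof(line), f) != NULL)
		fputs(line, stdout);
	fclose(f);
}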

Regards,
/Bruce


Thread overview: 5+ messages
2014-06-13 21:02 Gooch, Stephen
2014-06-13 21:27 ` Richardson, Bruce [this message]
2014-06-16  8:00   ` Burakov, Anatoly
2014-06-20 14:36     ` Gooch, Stephen
2014-06-20 14:42       ` Burakov, Anatoly
