DPDK patches and discussions
From: "Mario Gianni" <m.gianni@engineer.com>
To: "Bruce Richardson" <bruce.richardson@intel.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Cannot mmap device resource in DPDK 1.7.0 multi-process/multi-thread
Date: Fri, 24 Oct 2014 15:04:26 +0200
Message-ID: <trinity-04385791-6db2-49d2-933c-4389af48016d-1414155865899@3capp-mailcom-lxa11>
In-Reply-To: <20141024120803.GA5804@BRICHA3-MOBL>

Hi Bruce,
thank you for your answer. Adding cores to the primary's coremask didn't help; what did help was manually passing the --base-virtaddr parameter, set to the first "Virtual area found" address that EAL reports when it starts the primary process.

Honestly, I don't understand why it works this way. As a workaround in this experimental phase that is fine, but in the final program I have to automate the step. Do you have any suggestions? For example, is there a way to find a suitable virtual area before starting the primary process?
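
To make the question concrete, this is roughly the kind of automation I have in mind. It is an untested sketch; the anonymous-mmap probe, the 1 GB probe size, and the EAL argument list are all just my guesses:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#include <rte_eal.h>

int
main(int argc, char **argv)
{
    /* Assumption: 1 GB is enough to cover the hugepage and BAR mappings. */
    size_t probe_sz = 1UL << 30;

    /* Ask the kernel for any free virtual region of that size... */
    void *probe = mmap(NULL, probe_sz, PROT_READ,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (probe == MAP_FAILED)
        return EXIT_FAILURE;
    /* ...and release it immediately: we only wanted the address. */
    munmap(probe, probe_sz);

    char virtaddr_opt[64];
    snprintf(virtaddr_opt, sizeof(virtaddr_opt),
             "--base-virtaddr=%p", probe);

    /* Hypothetical EAL arguments for the primary process. */
    char *eal_argv[] = { argv[0], "--proc-type=primary", "-c", "0x3",
                         virtaddr_opt };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL init failed\n");
        return EXIT_FAILURE;
    }
    /* ... */
    return 0;
}

Of course there is a window between the munmap() and EAL re-mapping the region in which something else could grab that address, so I'm not sure how reliable this would be, which is why I'm asking.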
 
Mario
 

Sent: Friday, October 24, 2014 at 2:08 PM
From: "Bruce Richardson" <bruce.richardson@intel.com>
To: "Mario Gianni" <m.gianni@engineer.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Cannot mmap device resource in DPDK 1.7.0 multi-process/multi-thread
On Fri, Oct 24, 2014 at 01:21:08PM +0200, Mario Gianni wrote:
> Hi all, I have had a problem since I updated to version 1.7.0.
> I have a multi-process, multi-threaded application:
> first I launch a master process, then I launch a secondary process with multiple threads in it.
> When the number of lcores reserved for the secondary process exceeds a certain number (e.g. 4), I get an error from rte_eal_init() in the secondary process when it tries to map PCI memory:
>
> EAL: pci_map_resource(): cannot mmap(12, 0x7ffff2e96000, 0x800000, 0x1000): Success (0x7ffff559b000)
> EAL: Cannot mmap device resource
> EAL: Error - exiting with code: 1
> Cause: Requested device 0000:01:00.0 cannot be used
>
> Can you help me?

This could be because the additional memory/stack space used by the pthreads
for the cores in the secondary process is overlapping the space used in the
primary process for hugepage or device memory. You could perhaps try adding
a few cores to the primary process's coremask (without actually using those
cores) and see if that helps.
Alternatively, there is a --base-virtaddr parameter that can be passed to the
primary process to adjust the starting address at which it maps memory. Look
at where it starts mapping memory right now, then try hinting it to map the
pages at a slightly higher or lower address and see if that helps.
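
For example, something like this in the primary (untested, and the hint value
below is made up; base it on the "EAL: Virtual area found at ..." messages
your primary prints today):

#include <stdio.h>

#include <rte_eal.h>

int
main(void)
{
    char *eal_argv[] = {
        "primary_app",                     /* argv[0]; the name is arbitrary */
        "--proc-type=primary",
        "-c", "0x3",
        "--base-virtaddr=0x7f0000000000",  /* hypothetical hint, nudge up/down */
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL init failed\n");
        return 1;
    }
    /* ... rest of the primary's startup ... */
    return 0;
}

If the first address you try still collides, move the hint by a few hundred
megabytes in either direction; the point is just to steer the primary's
mappings away from the region the secondary's thread stacks end up in.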

/Bruce


Thread overview: 6+ messages
2014-10-24 11:21 Mario Gianni
2014-10-24 12:08 ` Bruce Richardson
2014-10-24 13:04   ` Mario Gianni [this message]
2014-10-24 13:39     ` Bruce Richardson
2014-10-24 15:03       ` Mario Gianni
2014-10-28 12:19         ` Richardson, Bruce
