From: "Burakov, Anatoly" <anatoly.burakov@intel.com>
To: Kypriotis Angelos <Aggelos.Kypriotis@nsn.com>,
"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] buffer allocation failure in NUMA platform
Date: Fri, 30 May 2014 12:43:37 +0000 [thread overview]
Message-ID: <C6ECDF3AB251BE4894318F4E451236976CC9908F@IRSMSX101.ger.corp.intel.com> (raw)
In-Reply-To: <53886F51.6060807@nsn.com>
Hi Kypriotis,
> cat /sys/devices/system/node/node1/hugepages/hugepages-
> 2048kB/nr_hugepages
> --> 1024
> cat /sys/devices/system/node/node0/hugepages/hugepages-
> 2048kB/nr_hugepages
> --> 1024
>
> When I try to pin my process to specific nodes like this :
>
> numactl --physcpubind=12-15 -- ./my_simulator -c f000 -n 2 -b
> 0000:01:00.0 --socket-mem=0,2048 -- my_simulator.conf ,
>
> I get 'RING:cannot reserve memory' error. Here is the relevant part :
>
> EAL: Setting up memory...
> EAL: Ask a virtual area of 0x1200000 bytes
> EAL: Virtual area found at 0x7f217a600000 (size = 0x1200000)
> EAL: Ask a virtual area of 0x3ec00000 bytes
> EAL: Virtual area found at 0x7f213b800000 (size = 0x3ec00000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7f213b400000 (size = 0x200000)
> EAL: Ask a virtual area of 0x3fc00000 bytes
> EAL: Virtual area found at 0x7f20fb600000 (size = 0x3fc00000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7f20fb200000 (size = 0x200000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7f20fae00000 (size = 0x200000)
> EAL: Ask a virtual area of 0x7fc00000 bytes
> EAL: Virtual area found at 0x7f207b000000 (size = 0x7fc00000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7f207ac00000 (size = 0x200000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7f207a800000 (size = 0x200000)
> EAL: Requesting 1024 pages of size 2MB from socket 1
> EAL: TSC frequency is ~1600000 KHz
> EAL: Master core 12 is ready (tid=81c69840)
> EAL: Core 13 is ready (tid=7fd55700)
> EAL: Core 15 is ready (tid=7ed53700)
> EAL: Core 14 is ready (tid=7f554700)
> RING: Cannot reserve memory
> Failed to allocate packet buffer pool.
>
> If I do not use --socket-mem=0,2048 memory ( 1024 pages ) is requested
> from both sockets 0 and 1 and it works, but I want to use node 1 only.
>
> In addition in order to request for 1024 pages in node 1 I must pass the --
> socket-mem parameter as above. I would expect that what I should pass is --
> socket-mem=0,1024, but then the actual memory requested is 512. Am I
> missing something here or is it a bug?
This is by design. The values passed to --socket-mem are in megabytes, not pages. The "512" you see in the log is a page count: each hugepage on your system is 2 MB, so 512 pages * 2 MB == 1024 MB, which is exactly what --socket-mem=0,1024 asks for. To have all 1024 hugepages (2048 MB) reserved on socket 1, pass --socket-mem=0,2048, as you did in your first command line.
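The accounting above can be sketched as follows (a minimal illustration of the arithmetic, not actual DPDK/EAL code; the function name is made up for this example):

```python
# --socket-mem values are megabytes per socket; the EAL converts them
# to a hugepage count using the system hugepage size.
HUGEPAGE_SIZE_MB = 2  # 2048 kB hugepages, as on the reporter's system

def socket_mem_to_pages(mem_mb):
    """Return the number of 2 MB hugepages needed for mem_mb megabytes."""
    return mem_mb // HUGEPAGE_SIZE_MB

print(socket_mem_to_pages(1024))  # 512 -> the "512" seen with --socket-mem=0,1024
print(socket_mem_to_pages(2048))  # 1024 -> fills all 1024 pages on socket 1
```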
Best regards,
Anatoly Burakov
DPDK SW Engineer
Thread overview: 2+ messages
2014-05-30 11:45 Kypriotis Angelos
2014-05-30 12:43 ` Burakov, Anatoly [this message]