DPDK usage discussions
From: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
To: Renata Saiakhova <Renata.Saiakhova@oneaccess-net.com>, users@dpdk.org
Subject: Re: [dpdk-users] rte_segments: hugepages are not in contiguous memory
Date: Tue, 4 Oct 2016 09:27:54 +0100	[thread overview]
Message-ID: <c1201b6d-4211-dfc1-3c21-3498e81b5230@intel.com>
In-Reply-To: <57F36199.5020100@oneaccess-net.com>

On 04/10/2016 09:00, Renata Saiakhova wrote:
> Hi all,
>
> I'm using dpdk 16.04 (I tried 16.07 with the same results) and Linux
> kernel 4.4.20 in a virtual machine (using the libvirt framework). I
> pass a parameter on the kernel command line to allocate 512 hugepages
> of 2 MB at boot time, and they are successfully allocated. When an
> application using dpdk starts, it calls rte_pktmbuf_pool_create(),
> which in turn internally requests 649363712 bytes. That memory should
> be allocated from a single rte_memseg; an rte_memseg describes a
> contiguous portion of memory (both physical and virtual) built on
> hugepages. The allocation fails because there is no rte_memseg of that
> size (or bigger).
> Further debugging shows that the hugepages are allocated in
> non-contiguous physical memory, and the rte_memsegs are therefore built
> respecting the gaps in physical memory.
> Below are the sizes of segments built on hugepages (in bytes)
> 2097152
> 6291456
> 2097152
> 524288000
> 2097152
> 532676608
> 2097152
> 2097152
> So there are 5 segments which include only one hugepage!
> This behavior is completely different from what I observe with Linux
> kernel 3.8 (used with the same application and dpdk), where all
> hugepages are allocated in contiguous memory.
> Does anyone experience the same issue? Could there be some kernel
> option that does the magic? If not, and the kernel can allocate
> hugepages in non-contiguous memory, how is dpdk going to resolve it?
>
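
For reference, the per-segment sizes you list can be read back from the
EAL at runtime. A minimal sketch against the memseg API as of
16.04/16.07 (this assumes rte_eal_get_physmem_layout() and
RTE_MAX_MEMSEG, and that rte_eal_init() has already run):

#include <stdio.h>
#include <inttypes.h>
#include <rte_config.h>
#include <rte_memory.h>

/* Print physical address, length and hugepage size of every memseg
 * the EAL built at init time. */
static void
dump_memsegs(void)
{
	const struct rte_memseg *ms = rte_eal_get_physmem_layout();
	unsigned int i;

	for (i = 0; i < RTE_MAX_MEMSEG; i++) {
		if (ms[i].addr == NULL)	/* unused entries end the array */
			break;
		printf("memseg %u: phys=0x%" PRIx64 " len=%zu hugepage_sz=%" PRIu64 "\n",
		       i, (uint64_t)ms[i].phys_addr, ms[i].len,
		       ms[i].hugepage_sz);
	}
}

(rte_dump_physmem_layout(stdout) prints much the same information in a
single call.)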

I don't think there is anything we can do to force the kernel to
pre-allocate contiguous hugepages at boot. If there were, we wouldn't
need to do all the mapping, sorting and grouping that we do in DPDK; we
could simply rely on the kernel giving us pre-allocated contiguous
hugepages.
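
To give an idea of what that grouping amounts to, here is a simplified
sketch (not the actual EAL code, which lives in eal_memory.c and also
takes virtual addresses and NUMA sockets into account): sort the
hugepage mappings by physical address and start a new segment whenever
the next page is not physically adjacent to the previous one.

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical per-hugepage record, used only for this illustration. */
struct page_map {
	uint64_t phys;	/* physical address of the hugepage */
	uint64_t size;	/* hugepage size, e.g. 2 MB */
};

static int
cmp_phys(const void *a, const void *b)
{
	const struct page_map *pa = a, *pb = b;
	return (pa->phys > pb->phys) - (pa->phys < pb->phys);
}

/* Count how many physically contiguous segments the pages split into. */
static unsigned int
count_segments(struct page_map *pages, unsigned int n)
{
	unsigned int i, nseg = 0;

	qsort(pages, n, sizeof(*pages), cmp_phys);
	for (i = 0; i < n; i++) {
		/* a gap in physical memory starts a new segment */
		if (i == 0 ||
		    pages[i].phys != pages[i - 1].phys + pages[i - 1].size)
			nseg++;
	}
	return nseg;
}

In your case that count comes out as 8, with 5 of the segments holding a
single 2 MB page.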

If you have plenty of memory, one possible workaround would be to
increase the number of default hugepages so that we are more likely to
find larger contiguous runs.

Is using 1GB hugepages a possibility in your case?
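
For completeness, both of those boil down to kernel command-line
parameters; the exact counts below are just examples:

  # more 2 MB pages:
  default_hugepagesz=2M hugepagesz=2M hugepages=1024

  # or 1 GB pages instead (these must be reserved at boot, and inside a
  # VM the guest CPU model has to expose the pdpe1gb flag):
  default_hugepagesz=1G hugepagesz=1G hugepages=4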

Sergio

> Thanks in advance,
> Renata
>

Thread overview: 9+ messages
2016-10-04  8:00 Renata Saiakhova
2016-10-04  8:27 ` Sergio Gonzalez Monroy [this message]
2016-10-04  9:38   ` Renata Saiakhova
2016-10-04 10:13     ` Andriy Berestovskyy
2016-10-04 10:48       ` Renata Saiakhova
2016-10-04 11:27         ` Andriy Berestovskyy
2016-10-04 12:02           ` tom.barbette
2016-10-04 14:09             ` Sergio Gonzalez Monroy
2016-10-06 11:02               ` tom.barbette
