DPDK usage discussions
From: Renata Saiakhova <Renata.Saiakhova@oneaccess-net.com>
To: Andriy Berestovskyy <aber@semihalf.com>
Cc: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>,
	users <users@dpdk.org>
Subject: Re: [dpdk-users] rte_segments: hugepages are not in contiguous memory
Date: Tue, 4 Oct 2016 12:48:05 +0200
Message-ID: <57F388E5.3010405@oneaccess-net.com>
In-Reply-To: <CAOysbxoxT2nqxTmM0yc+evC4ih+aXS7jGV9PJw3e=p3gCvWJpQ@mail.gmail.com>

Hi Andriy,

Thanks for your reply. I guess contiguous memory is requested for
performance reasons. Do you know whether I can expect a noticeable
performance drop when using non-contiguous memory?

Renata

On 10/04/2016 12:13 PM, Andriy Berestovskyy wrote:
> Hi Renata,
> DPDK supports non-contiguous memory pools, but
> rte_pktmbuf_pool_create() uses rte_mempool_create_empty() with flags
> set to zero, i.e. it requests contiguous memory.
>
> As a workaround, in rte_pktmbuf_pool_create() try passing the
> MEMPOOL_F_NO_PHYS_CONTIG flag as the last argument to
> rte_mempool_create_empty().
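
A rough sketch of that change, mirroring the 16.07
rte_pktmbuf_pool_create() code path (the wrapper name below is made up
and some error handling is trimmed, so treat it as illustrative only):

#include <rte_config.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Same steps as rte_pktmbuf_pool_create() in 16.07, but passing
 * MEMPOOL_F_NO_PHYS_CONTIG instead of 0 as the mempool flags. */
static struct rte_mempool *
pktmbuf_pool_create_nocontig(const char *name, unsigned n,
    unsigned cache_size, uint16_t priv_size,
    uint16_t data_room_size, int socket_id)
{
    struct rte_pktmbuf_pool_private mbp_priv;
    struct rte_mempool *mp;
    unsigned elt_size;

    elt_size = sizeof(struct rte_mbuf) + priv_size + data_room_size;
    mbp_priv.mbuf_data_room_size = data_room_size;
    mbp_priv.mbuf_priv_size = priv_size;

    /* Non-zero flags: objects may span non-contiguous pages. */
    mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
        sizeof(struct rte_pktmbuf_pool_private), socket_id,
        MEMPOOL_F_NO_PHYS_CONTIG);
    if (mp == NULL)
        return NULL;

    if (rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS,
            NULL) != 0)
        goto fail;

    rte_pktmbuf_pool_init(mp, &mbp_priv);

    if (rte_mempool_populate_default(mp) < 0)
        goto fail;

    rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
    return mp;

fail:
    rte_mempool_free(mp);
    return NULL;
}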
>
> Note that KNI and some PMDs in 16.07 still require contiguous memory
> pools, so the trick might not work for your setup. For KNI, try
> DPDK's master branch, which includes this commit by Ferruh Yigit:
>
> 8451269 kni: remove continuous memory restriction
>
> Regards,
> Andriy
>
>
> On Tue, Oct 4, 2016 at 11:38 AM, Renata Saiakhova
> <Renata.Saiakhova@oneaccess-net.com> wrote:
>> Hi Sergio,
>>
>> thank you for your quick answer. I also tried to allocate 1GB hugepage, but
>> seems kernel fails to allocate it: previously I've seen that HugePages_Total
>> in /proc/meminfo is set to 0, now - kernel hangs at boot time (don't know
>> why).
>> But anyway, if there is no way to control hugepage allocation in the sense
>> they are in contiguous memory there is only way to accept it and adapt the
>> code that it creates several pools which in total satisfy the requested
>> size.
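
Splitting one large pool into several smaller ones could look roughly
like the sketch below (pool count, sizes and names are purely
illustrative); each pool then only needs a smaller physically
contiguous region:

#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_POOLS    4                      /* illustrative */
#define NB_MBUFS    (262144 / NB_POOLS)    /* mbufs per pool */
#define MBUF_CACHE  256

static struct rte_mempool *pools[NB_POOLS];

/* Create several smaller pools instead of one big one, so that each
 * pool fits into a smaller contiguous memseg. */
static int
create_split_pools(int socket_id)
{
    char name[RTE_MEMPOOL_NAMESIZE];
    unsigned i;

    for (i = 0; i < NB_POOLS; i++) {
        snprintf(name, sizeof(name), "mbuf_pool_%u", i);
        pools[i] = rte_pktmbuf_pool_create(name, NB_MBUFS,
            MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
        if (pools[i] == NULL)
            return -1;
    }
    return 0;
}

The application then has to distribute its ports or queues across the
pools, e.g. one pool per port.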
>>
>> Renata
>>
>>
>> On 10/04/2016 10:27 AM, Sergio Gonzalez Monroy wrote:
>>> On 04/10/2016 09:00, Renata Saiakhova wrote:
>>>> Hi all,
>>>>
>>>> I'm using DPDK 16.04 (I tried 16.07 with the same results) and Linux
>>>> kernel 4.4.20 in a virtual machine (using the libvirt framework). I pass a
>>>> parameter on the kernel command line to allocate 512 hugepages of 2 MB at
>>>> boot time. They are successfully allocated. When an application with DPDK
>>>> starts, it calls rte_pktmbuf_pool_create(), which in turn internally
>>>> requests 649363712 bytes. Those bytes should be allocated from one of the
>>>> rte_memsegs. An rte_memseg describes a contiguous portion of memory (both
>>>> physical and virtual) built on hugepages. This allocation fails because
>>>> there is no rte_memseg of this size (or bigger). Further debugging shows
>>>> that the hugepages are allocated in non-contiguous physical memory and
>>>> therefore the rte_memsegs are built respecting the gaps in physical memory.
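
The layout can be checked from inside the application; a minimal
sketch using the 16.04/16.07 EAL call rte_eal_get_physmem_layout()
(illustrative only, not the exact code used here):

#include <stdio.h>
#include <inttypes.h>
#include <rte_config.h>
#include <rte_memory.h>

/* Print the physical memory segments EAL built from the hugepages;
 * the array has RTE_MAX_MEMSEG entries, unused ones have addr == NULL. */
static void
dump_memsegs(void)
{
    const struct rte_memseg *ms = rte_eal_get_physmem_layout();
    unsigned i;

    for (i = 0; i < RTE_MAX_MEMSEG && ms[i].addr != NULL; i++)
        printf("memseg %u: phys 0x%" PRIx64 ", len %zu, hugepage size %" PRIu64 "\n",
               i, (uint64_t)ms[i].phys_addr, ms[i].len,
               ms[i].hugepage_sz);
}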
>>>> Below are the sizes of segments built on hugepages (in bytes)
>>>> 2097152
>>>> 6291456
>>>> 2097152
>>>> 524288000
>>>> 2097152
>>>> 532676608
>>>> 2097152
>>>> 2097152
>>>> So there are 5 segments which include only a single hugepage!
>>>> This behavior is completely different from what I observed with Linux
>>>> kernel 3.8 (used with the same DPDK application), where all hugepages
>>>> were allocated in contiguous memory.
>>>> Has anyone experienced the same issue? Could there be some kernel option
>>>> that does the magic? If not, and the kernel can allocate hugepages in
>>>> non-contiguous memory, how is DPDK going to deal with it?
>>>>
>>> I don't think there is anything we can do to force the kernel to
>>> pre-allocate contiguous hugepages on boot. If there were, we wouldn't need
>>> to do all the mapping sorting and grouping we do in DPDK, as we would rely
>>> on the kernel giving us pre-allocated contiguous hugepages.
>>>
>>> If you have plenty of memory, one possible workaround would be to increase
>>> the number of default hugepages so that we are more likely to find
>>> contiguous ones.
>>>
>>> Is using 1GB hugepages a possibility in your case?
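
For reference, boot-time hugepage reservation is set on the kernel
command line; for example (counts purely illustrative):

    hugepagesz=2M hugepages=1024

for more 2 MB pages, or

    default_hugepagesz=1G hugepagesz=1G hugepages=2

for 1 GB pages. Note that 1 GB pages must be reserved at boot and, in a
VM, require the pdpe1gb CPU flag to be exposed to the guest (e.g. a
host-passthrough CPU model in libvirt), which might explain the failed
1 GB attempt mentioned earlier.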
>>>
>>> Sergio
>>>
>>>> Thanks in advance,
>>>> Renata
>>>>
>>>
>
>

Thread overview: 9+ messages
2016-10-04  8:00 Renata Saiakhova
2016-10-04  8:27 ` Sergio Gonzalez Monroy
2016-10-04  9:38   ` Renata Saiakhova
2016-10-04 10:13     ` Andriy Berestovskyy
2016-10-04 10:48       ` Renata Saiakhova [this message]
2016-10-04 11:27         ` Andriy Berestovskyy
2016-10-04 12:02           ` tom.barbette
2016-10-04 14:09             ` Sergio Gonzalez Monroy
2016-10-06 11:02               ` tom.barbette
