From: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
To: Olivier Matz <olivier.matz@6wind.com>,
Thomas Monjalon <thomas.monjalon@6wind.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] memory allocation requirements
Date: Fri, 15 Apr 2016 09:47:33 +0100
Message-ID: <5710AAA5.5090003@intel.com>
In-Reply-To: <5710946A.9080001@6wind.com>
On 15/04/2016 08:12, Olivier Matz wrote:
> Hi,
>
> On 04/14/2016 05:39 PM, Sergio Gonzalez Monroy wrote:
>>> Just to mention that some evolutions [1] are planned in mempool in
>>> 16.07, allowing a mempool to be populated with several chunks of
>>> memory while still ensuring that the objects are physically
>>> contiguous. This completely removes the need to allocate a big
>>> virtually contiguous memory zone (one that is also physically
>>> contiguous when rte_mempool_create_xmem() is not used, which is
>>> probably the case in most applications).
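>>>
>>> As a rough illustration (a minimal sketch only, assuming the API
>>> shape proposed in the series: rte_mempool_create_empty() followed
>>> by a populate step), the flow would look like:
>>>
>>>   #include <stdlib.h>
>>>   #include <rte_debug.h>
>>>   #include <rte_mempool.h>
>>>
>>>   /* Create the pool shell first, without reserving any memory. */
>>>   struct rte_mempool *mp = rte_mempool_create_empty("pool", 8192,
>>>           2048, 256, 0, SOCKET_ID_ANY, 0);
>>>   if (mp == NULL)
>>>       rte_exit(EXIT_FAILURE, "cannot create empty mempool\n");
>>>
>>>   /* Populate it: the library may grab several memory chunks, and
>>>    * each object still lands inside one physically contiguous
>>>    * chunk, so no big contiguous zone is needed. */
>>>   if (rte_mempool_populate_default(mp) < 0)
>>>       rte_exit(EXIT_FAILURE, "cannot populate mempool\n");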
>>>
>>> Knowing this, the code that remaps the hugepages to get the largest
>>> possible physically contiguous zone probably becomes useless after
>>> the mempool series. Changing it to only one mmap(file) in hugetlbfs
>>> per NUMA socket would clearly simplify this part of EAL.
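>>>
>>> (A minimal sketch of that simpler model; the hugetlbfs path and the
>>> length are illustrative assumptions, with one such file per socket:)
>>>
>>>   #define _GNU_SOURCE   /* for MAP_POPULATE */
>>>   #include <fcntl.h>
>>>   #include <stddef.h>
>>>   #include <sys/mman.h>
>>>   #include <unistd.h>
>>>
>>>   /* Map one big hugetlbfs file for a socket in a single mmap(). */
>>>   static void *
>>>   map_socket_hugemem(const char *path, size_t len)
>>>   {
>>>       int fd = open(path, O_CREAT | O_RDWR, 0600);
>>>       if (fd < 0 || ftruncate(fd, (off_t)len) < 0)
>>>           return NULL;
>>>       void *va = mmap(NULL, len, PROT_READ | PROT_WRITE,
>>>                       MAP_SHARED | MAP_POPULATE, fd, 0);
>>>       return va == MAP_FAILED ? NULL : va;
>>>   }
>>>
>>>   /* e.g.: map_socket_hugemem("/mnt/huge/rtemap_0", 1UL << 30); */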
>>>
>> Are you suggesting to make those changes after the mempool series
>> has been applied but keeping the current memzone/malloc behavior?
> I wonder if the default property of memzone/malloc, which is to
> allocate physically contiguous memory, shouldn't be dropped. It could
> remain optional, knowing that allocating a physically contiguous zone
> larger than a page cannot be guaranteed.
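>
> (Sketching what that opt-in could look like; the flag name
> RTE_MEMZONE_PHYS_CONTIG below is hypothetical, just to show the
> intent:)
>
>   #include <rte_memzone.h>
>
>   /* Default: virtually contiguous only; the zone may span several
>    * pages with no physical-contiguity guarantee. */
>   const struct rte_memzone *mz =
>       rte_memzone_reserve("state", 4 << 20, SOCKET_ID_ANY, 0);
>
>   /* Hypothetical opt-in for consumers that really need physically
>    * contiguous memory, e.g. for DMA. */
>   const struct rte_memzone *dma_mz =
>       rte_memzone_reserve("dma", 2 << 20, SOCKET_ID_ANY,
>                           RTE_MEMZONE_PHYS_CONTIG);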
>
> But yes, I'm in favor of doing these changes in eal_memory.c, it would
> drop a lot of complex code (all the rtemap* stuff), and today I'm not
> seeing any big issue with doing it... maybe we'll find one during the
> discussion :)
I'm in favor of doing those changes, but then I think we need to support
allocating non-contiguous memory through memzone/malloc; otherwise,
libraries such as librte_hash may not be able to get the memory they
need, right? Without that, every library would need a rework like the
mempool series to deal with non-contiguous memory.

For contiguous memory, I would prefer a new API for DMA areas (something
similar to rte_eth_dma_zone_reserve() in ethdev) that would
transparently deal with the case where we have multiple hugepage sizes.
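To make the idea concrete, here is a rough sketch of such an API; the
name rte_dma_zone_reserve() and its fallback logic are hypothetical,
loosely modeled on rte_eth_dma_zone_reserve():

  #include <rte_memzone.h>

  /* Hypothetical: reserve a physically contiguous DMA area, trying
   * the available hugepage sizes from largest to smallest. */
  static const struct rte_memzone *
  rte_dma_zone_reserve(const char *name, size_t len, unsigned align,
                       int socket_id)
  {
      const struct rte_memzone *mz;

      /* Try 1G pages first, then fall back to 2M pages. */
      mz = rte_memzone_reserve_aligned(name, len, socket_id,
                                       RTE_MEMZONE_1GB, align);
      if (mz != NULL)
          return mz;
      return rte_memzone_reserve_aligned(name, len, socket_id,
                                         RTE_MEMZONE_2MB, align);
  }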
Sergio
> Regards,
> Olivier
Thread overview: 8+ messages
2016-04-13 16:03 Thomas Monjalon
2016-04-13 17:00 ` Wiles, Keith
2016-04-14 8:48 ` Sergio Gonzalez Monroy
2016-04-14 14:46 ` Olivier MATZ
2016-04-14 15:39 ` Sergio Gonzalez Monroy
2016-04-15 7:12 ` Olivier Matz
2016-04-15 8:47 ` Sergio Gonzalez Monroy [this message]
2016-05-18 10:28 ` Alejandro Lucero