DPDK patches and discussions
From: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
To: Olivier MATZ <olivier.matz@6wind.com>,
	Thomas Monjalon <thomas.monjalon@6wind.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] memory allocation requirements
Date: Thu, 14 Apr 2016 16:39:02 +0100
Message-ID: <570FB996.4070801@intel.com>
In-Reply-To: <570FAD3E.6040509@6wind.com>

On 14/04/2016 15:46, Olivier MATZ wrote:
> Hi,
>
> On 04/13/2016 06:03 PM, Thomas Monjalon wrote:
>> After looking at the patches for container support, it appears that
>> some changes are needed in the memory management:
>> http://thread.gmane.org/gmane.comp.networking.dpdk.devel/32786/focus=32788
>>
>> I think it is time to collect the needs and expectations of the
>> DPDK memory allocator. The goal is to satisfy every need while
>> cleaning up the API.
>> Here is a first try to start the discussion.
>>
>> The memory allocator has 2 classes of API in DPDK.
>> First, the user/application allows or requires DPDK to take over some
>> memory resources of the system. The characteristics can be:
>>     - numa node
>>     - page size
>>     - swappable or not
>>     - contiguous (cannot be guaranteed) or not
>>     - physical address (as root only)
>> Then the drivers or other libraries use the memory through:
>>     - rte_malloc
>>     - rte_memzone
>>     - rte_mempool
>> I think we can integrate the characteristics of the requested memory
>> into rte_malloc. Then rte_memzone would be only a named rte_malloc,
>> and rte_mempool would still focus on collections of objects with a cache.
>
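For reference, a minimal sketch of the two API classes quoted above as
they exist today (rte_malloc_socket() and rte_memzone_reserve() are the
real signatures; the unified call in the last comment is hypothetical):

  #include <rte_malloc.h>
  #include <rte_memzone.h>

  /* Anonymous allocation: NUMA node picked explicitly, no name. */
  void *buf = rte_malloc_socket("ring_buf", 4096, RTE_CACHE_LINE_SIZE, 0);

  /* Named allocation: backed the same way, but other code (or another
   * process) can find the zone by name with rte_memzone_lookup(). */
  const struct rte_memzone *mz =
          rte_memzone_reserve("ring_mz", 4096, 0 /* socket */, 0 /* flags */);

  /* Folding memzone into malloc as proposed could look like this
   * (hypothetical, not an existing DPDK function):
   * void *p = rte_malloc_named("ring_mz", 4096, align, socket, flags);
   */
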
> Just to mention that some evolutions [1] are planned in mempool for
> 16.07, allowing a mempool to be populated with several chunks of
> memory while still ensuring that the objects are physically
> contiguous. It completely removes the need to allocate a big
> virtually contiguous memory zone (one that must also be physically
> contiguous when rte_mempool_create_xmem() is not used, which is
> probably the case in most applications).
>
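To illustrate the direction of the series, a rough sketch of the flow it
proposes (names taken from [1]; the series was not merged at this point,
so details may still change):

  #include <rte_lcore.h>
  #include <rte_mempool.h>

  /* Create an empty pool, then let the library populate it from as
   * many memory chunks as needed, instead of one big zone. */
  struct rte_mempool *mp = rte_mempool_create_empty("pkt_pool",
          8192,              /* number of objects */
          2048,              /* object size */
          256,               /* per-lcore cache size */
          0,                 /* private data size */
          rte_socket_id(), 0 /* flags */);

  /* Objects never cross a chunk boundary, so each object remains
   * physically contiguous even if the chunks themselves are not. */
  if (mp != NULL)
          rte_mempool_populate_default(mp);
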
> Knowing this, the code that remaps the hugepages to get the largest
> possible physically contiguous zone probably becomes useless once the
> mempool series is applied. Changing it to a single mmap(file) in
> hugetlbfs per NUMA socket would clearly simplify this part of the EAL.
>
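If I read the suggestion right, the simplified per-socket mapping would
be roughly this (illustrative path and size; plain hugetlbfs and POSIX
calls only):

  #include <fcntl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  /* One file per NUMA socket in hugetlbfs, mapped in a single shot.
   * The kernel backs it with huge pages because the file lives in
   * hugetlbfs; the mapping is virtually contiguous, and the physical
   * layout is whatever the kernel hands out. */
  int fd = open("/dev/hugepages/rtemap_socket0", O_CREAT | O_RDWR, 0600);
  size_t len = (size_t)1024 * 2 * 1024 * 1024;  /* e.g. 1024 x 2MB pages */
  if (fd >= 0 && ftruncate(fd, len) == 0) {
          void *va = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
          (void)va;  /* va == MAP_FAILED on error; otherwise this is
                      * the socket's whole hugepage arena. */
  }
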

Are you suggesting making those changes after the mempool series
has been applied, while keeping the current memzone/malloc behavior?

Regards,
Sergio

> For other allocations that must be physically contiguous (e.g. zones
> shared with the hardware), a page-sized granularity is maybe enough.
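For that case a single page through the existing memzone API is already
physically contiguous by definition, e.g. (sketch; zone name and size
are illustrative):

  #include <rte_lcore.h>
  #include <rte_memzone.h>

  /* One 4K page, page-aligned, for a small hardware descriptor area. */
  const struct rte_memzone *mz = rte_memzone_reserve_aligned(
          "hw_desc", 4096, rte_socket_id(), 0 /* flags */, 4096);
  if (mz != NULL) {
          /* Program the device with mz->phys_addr and use mz->addr
           * from the CPU side. */
  }
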
>
> Regards,
> Olivier
>
> [1] http://dpdk.org/ml/archives/dev/2016-April/037464.html

Thread overview: 8+ messages
2016-04-13 16:03 Thomas Monjalon
2016-04-13 17:00 ` Wiles, Keith
2016-04-14  8:48 ` Sergio Gonzalez Monroy
2016-04-14 14:46 ` Olivier MATZ
2016-04-14 15:39   ` Sergio Gonzalez Monroy [this message]
2016-04-15  7:12     ` Olivier Matz
2016-04-15  8:47       ` Sergio Gonzalez Monroy
2016-05-18 10:28 ` Alejandro Lucero
