Date: Thu, 14 Apr 2016 16:46:22 +0200
From: Olivier MATZ
To: Thomas Monjalon, sergio.gonzalez.monroy@intel.com
CC: dev@dpdk.org
Message-ID: <570FAD3E.6040509@6wind.com>
In-Reply-To: <1500486.8lzTDt5Q91@xps13>
Subject: Re: [dpdk-dev] memory allocation requirements

Hi,

On 04/13/2016 06:03 PM, Thomas Monjalon wrote:
> After looking at the patches for container support, it appears that
> some changes are needed in the memory management:
> http://thread.gmane.org/gmane.comp.networking.dpdk.devel/32786/focus=32788
>
> I think it is time to collect the needs and expectations for the DPDK
> memory allocator. The goal is to satisfy every need while cleaning
> the API.
> Here is a first try to start the discussion.
>
> The memory allocator has 2 classes of API in DPDK.
> First, the user/application allows or requires DPDK to take over some
> memory resources of the system. The characteristics can be:
> - numa node
> - page size
> - swappable or not
> - contiguous (cannot be guaranteed) or not
> - physical address (as root only)
> Then the drivers or other libraries use the memory through:
> - rte_malloc
> - rte_memzone
> - rte_mempool
> I think we can integrate the characteristics of the requested memory
> in rte_malloc. Then rte_memzone would only be a named rte_malloc.
> The rte_mempool would still focus on collections of objects with a
> cache.

Just to mention that some evolutions [1] are planned in mempool for
16.07, allowing a mempool to be populated with several chunks of
memory while still ensuring that the objects are physically
contiguous. This completely removes the need to allocate a big
virtually contiguous memory zone (and also physically contiguous when
rte_mempool_create_xmem() is not used, which is probably the case for
most applications).

Knowing this, the code that remaps the hugepages to get the largest
possible physically contiguous zone probably becomes useless after the
mempool series. Changing it to a single mmap() of one hugetlbfs file
per NUMA socket would clearly simplify this part of the EAL.

For other allocations that must be physically contiguous (e.g. zones
shared with the hardware), a page-sized granularity may be enough.

To make the discussion a bit more concrete, a few rough sketches are
appended below.

Regards,
Olivier

[1] http://dpdk.org/ml/archives/dev/2016-April/037464.html
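
As a purely hypothetical illustration of integrating the
characteristics of the requested memory in rte_malloc: none of the
names below exist in DPDK, this is only one possible shape such an API
could take.

/* Hypothetical sketch: none of these names exist in DPDK today. */
#include <stddef.h>

struct rte_malloc_req {
	int socket_id;        /* NUMA node, or any socket if negative */
	size_t page_size;     /* preferred page size, 0 = no preference */
	unsigned int flags;   /* e.g. physically contiguous, swappable, ... */
};

/* A named request would then also cover today's rte_memzone use case
 * ("rte_memzone would only be a named rte_malloc"). */
void *rte_malloc_with_req(const char *name, size_t size, unsigned int align,
			  const struct rte_malloc_req *req);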
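
A rough sketch of building a mempool from several memory chunks with
the populate-style interface proposed in [1]; the function names below
(rte_mempool_create_empty(), rte_mempool_set_ops_byname(),
rte_mempool_populate_default(), rte_mempool_free()) are an assumption
based on the series and the related mempool handler work, and may
differ from what is finally merged in 16.07.

/* Rough sketch only: assumes the populate-style mempool API from [1]. */
#include <rte_mempool.h>

static struct rte_mempool *
make_pool(unsigned int nb_objs, unsigned int obj_size, int socket_id)
{
	struct rte_mempool *mp;

	/* Create an empty pool: no object memory is reserved yet. */
	mp = rte_mempool_create_empty("test_pool", nb_objs, obj_size,
				      256 /* cache size */,
				      0 /* private data size */,
				      socket_id, 0 /* flags */);
	if (mp == NULL)
		return NULL;

	/* Use the default ring-based handler for the free-object ring. */
	if (rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL) < 0)
		goto fail;

	/* Populate the pool: the memory may come as several chunks, each
	 * physically contiguous, instead of one big contiguous zone. */
	if (rte_mempool_populate_default(mp) < 0)
		goto fail;

	return mp;

fail:
	rte_mempool_free(mp);
	return NULL;
}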
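
And a minimal sketch of the "one mmap() in hugetlbfs per NUMA socket"
idea, using plain POSIX calls only; the mount point and file name are
placeholders, and binding the pages to the right NUMA node (e.g. with
mbind()) is left out.

/* Rough sketch only: one hugetlbfs file mapped per NUMA socket. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static void *
map_socket_mem(int socket_id, size_t len)
{
	char path[64];
	int fd;
	void *va;

	/* Placeholder path; the EAL discovers the hugetlbfs mount point
	 * at runtime. len must be a multiple of the hugepage size. */
	snprintf(path, sizeof(path), "/dev/hugepages/rtemap_socket%d",
		 socket_id);

	fd = open(path, O_CREAT | O_RDWR, 0600);
	if (fd < 0)
		return NULL;
	if (ftruncate(fd, len) < 0) {
		close(fd);
		return NULL;
	}

	/* One mapping per socket instead of remapping many hugepages to
	 * search for the largest physically contiguous zone. */
	va = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	close(fd);

	return va == MAP_FAILED ? NULL : va;
}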