Subject: Re: [dpdk-dev] memory allocation requirements
From: Sergio Gonzalez Monroy
To: Olivier MATZ, Thomas Monjalon
Cc: dev@dpdk.org
Date: Thu, 14 Apr 2016 16:39:02 +0100
Message-ID: <570FB996.4070801@intel.com>
In-Reply-To: <570FAD3E.6040509@6wind.com>

On 14/04/2016 15:46, Olivier MATZ wrote:
> Hi,
>
> On 04/13/2016 06:03 PM, Thomas Monjalon wrote:
>> After looking at the patches for container support, it appears that
>> some changes are needed in the memory management:
>> http://thread.gmane.org/gmane.comp.networking.dpdk.devel/32786/focus=32788
>>
>> I think it is time to collect the needs and expectations for the
>> DPDK memory allocator. The goal is to satisfy every need while
>> cleaning up the API.
>> Here is a first try to start the discussion.
>>
>> The memory allocator has two classes of API in DPDK.
>> First, the user/application allows or requires DPDK to take over some
>> memory resources of the system. The characteristics can be:
>> - NUMA node
>> - page size
>> - swappable or not
>> - contiguous (cannot be guaranteed) or not
>> - physical address (as root only)
>> Then the drivers or other libraries use the memory through:
>> - rte_malloc
>> - rte_memzone
>> - rte_mempool
>> I think we can integrate the characteristics of the requested memory
>> into rte_malloc. Then rte_memzone would be only a named rte_malloc.
>> rte_mempool would still focus on collections of objects with a cache.
>
> Just to mention that some evolutions [1] are planned for mempool in
> 16.07, allowing a mempool to be populated with several chunks of
> memory while still ensuring that the objects are physically
> contiguous. This completely removes the need to allocate a big
> virtually contiguous memory zone (which also has to be physically
> contiguous when rte_mempool_create_xmem() is not used, which is
> probably the case in most applications).
>
> Knowing this, the code that remaps the hugepages to get the largest
> possible physically contiguous zone probably becomes useless after
> the mempool series. Changing it to a single mmap() of one hugetlbfs
> file per NUMA socket would clearly simplify this part of the EAL.
>

Are you suggesting making those changes after the mempool series has
been applied, while keeping the current memzone/malloc behavior?

Regards,
Sergio

> For other allocations that must be physically contiguous (e.g. zones
> shared with the hardware), a page-sized granularity is maybe enough.
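
For reference, here is a rough sketch of how a pool could be populated
from several smaller chunks with the API proposed in the series
(referenced as [1] below). The function names are taken from my reading
of the patches, so they may well differ from what finally lands in 16.07:

#include <rte_mempool.h>

/*
 * Sketch only: create an empty mempool and let the library populate it
 * from several smaller chunks instead of one big contiguous zone.
 * Function names follow the patch series in [1] and may change before
 * they are merged.
 */
static struct rte_mempool *
create_chunked_pool(const char *name, unsigned int n_objs,
                    unsigned int obj_size, int socket_id)
{
        struct rte_mempool *mp;

        /* Only the pool header is allocated here, no object memory yet. */
        mp = rte_mempool_create_empty(name, n_objs, obj_size,
                                      256 /* per-lcore cache */, 0,
                                      socket_id, 0);
        if (mp == NULL)
                return NULL;

        /*
         * Populate from as many chunks as required; each chunk only needs
         * to be physically contiguous within itself, so the hugepages no
         * longer have to be remapped into one large contiguous zone.
         */
        if (rte_mempool_populate_default(mp) < 0) {
                rte_mempool_free(mp);
                return NULL;
        }

        /* Per-object init (if any) would then go through rte_mempool_obj_iter(). */
        return mp;
}
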
>
> Regards,
> Olivier
>
> [1] http://dpdk.org/ml/archives/dev/2016-April/037464.html
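
To make the idea of folding the characteristics into rte_malloc a bit
more concrete, here is a purely hypothetical sketch. None of the names
below (rte_malloc_attr, rte_malloc_attrs, RTE_MALLOC_F_*) exist in DPDK;
the fallback to the existing rte_malloc_socket() is only there so the
snippet is self-contained:

#include <stddef.h>
#include <rte_malloc.h>   /* existing rte_malloc_socket() */
#include <rte_memory.h>   /* SOCKET_ID_ANY */

/*
 * Hypothetical only: an attribute structure carrying the characteristics
 * listed above (NUMA node, page size, swappable, contiguous, ...).
 * None of these names exist in DPDK today.
 */
struct rte_malloc_attr {
        int      socket_id;   /* NUMA node, or SOCKET_ID_ANY */
        size_t   page_size;   /* preferred page size, 0 = any */
        unsigned flags;       /* RTE_MALLOC_F_* below */
};

#define RTE_MALLOC_F_PHYS_CONTIG (1u << 0) /* best effort, cannot be guaranteed */
#define RTE_MALLOC_F_UNSWAPPABLE (1u << 1) /* must not be swapped out */

/*
 * A named, attribute-aware allocation. The body simply falls back to the
 * existing rte_malloc_socket() and ignores the attributes it cannot
 * honour, just to keep the sketch self-contained.
 */
static inline void *
rte_malloc_attrs(const char *name, size_t size, unsigned int align,
                 const struct rte_malloc_attr *attr)
{
        int socket = (attr != NULL) ? attr->socket_id : SOCKET_ID_ANY;

        return rte_malloc_socket(name, size, align, socket);
}

With something along these lines, rte_memzone_reserve() would indeed
reduce to a named rte_malloc carrying its attributes, as suggested above.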