Subject: Re: [dpdk-dev] Big spike in DPDK VSZ
From: "Burakov, Anatoly"
To: siddarth rai
Cc: David Marchand, dev
Date: Tue, 4 Feb 2020 16:18:47 +0000
List-Id: DPDK patches and discussions

On 04-Feb-20 12:07 PM, siddarth rai wrote:
> Hi Anatoly,
>
> You mentioned that the maximum size of mempool is limited.
> Can you tell what is the limit and where is it specified?
>
> Regards,
> Siddarth

The mempool size itself isn't limited.
However, due to the nature of how the memory subsystem works, there is always an upper limit on how much memory you can reserve: once you run out of pre-reserved address space, you effectively run out of memory. Given that the current hardcoded upper limit for reserved address space is 128GB, this is effectively the limit you are referring to - you won't be able to create a mempool larger than 128GB, because there is not enough reserved address space in which to put a bigger mempool.

The same applies to any other memory allocation, with the caveat that while a mempool can be non-contiguous in terms of VA memory (i.e. consist of several discontiguous VA areas), a single unit of memory allocation (i.e. one call to rte_malloc or rte_memzone_create) can only be as big as the largest contiguous chunk of address space. Given that the current hardcoded limit is 16GB for 2M pages and 32GB for 1G pages, each allocation can be at most 16GB or 32GB long, depending on the page size from which you are allocating.

So, on a system with 1G and 2M pages, a mempool can only be as big as 128GB, and each individual chunk of memory in that mempool can only be as big as 16GB for 2M pages, or 32GB for 1G pages. These are big numbers, so in practice no one hits these limitations.

Again, this is just the price you have to pay for supporting dynamic memory allocation in secondary processes. There is simply no other way to guarantee that all shared memory will reside in the same address space in all processes. Notably, device hotplug doesn't provide such a guarantee, which is why device hotplug (or even initialization) can fail in a secondary process. However, in device hotplug I suppose this is acceptable (or the community thinks it is). In the memory subsystem, I chose to be conservative and always guarantee correctness, at the cost of placing an upper limit on memory allocations.

-- 
Thanks, 
Anatoly