From: "Burakov, Anatoly"
To: siddarth rai
Cc: David Marchand, dev@dpdk.org
Subject: Re: [dpdk-dev] Big spike in DPDK VSZ
Date: Tue, 4 Feb 2020 11:13:28 +0000
Message-ID: <66cc7e86-8a4e-b976-032b-52e2950fd243@intel.com>

On 04-Feb-20 10:55 AM, siddarth rai wrote:
> Hi Anatoly,
>
> I don't need a secondary process.

I understand that you don't; however, that doesn't negate the fact that
the codepath expects that you do.

> I tried out Julien's suggestion and set the param 'RTE_MAX_MEM_MB'
> value to 8192 (the original value was over 500K). This works as a cap.
> The virtual size dropped down to less than 8G. So this seems to be
> working for me.
>
> I have a few queries/concerns though.
> Is it safe to reduce RTE_MAX_MEM_MB to such a low value? Can I reduce
> it further? What will be the impact of doing so? Will it limit the
> maximum size of the mbuf pool which I create?

It depends on your use case. The maximum size of a mempool is limited as
it is; the better question is where to place that limit. In my
experience, testpmd mempools are typically around 400MB per socket, so
an 8G upper limit should not interfere with testpmd very much. However,
depending on what else is there and what kind of allocations you do, it
may have other effects (see the pool-sizing sketch further down).

Currently, the size of each internal per-NUMA-node, per-page-size page
table is dictated by three constraints: the maximum amount of memory per
page table (so that we don't attempt to reserve thousands of 1G pages),
the maximum number of pages per page table (so that we aren't left with
a few hundred megabytes' worth of 2M pages), and the total maximum
amount of memory (which places an upper limit on the sum of all page
tables' memory amounts).

You have lowered the last of these to 8G, which means that, depending on
your system configuration, you will have at most 2G to 4G per page
table. It is not possible to limit it further (for example, to skip
reservation on certain nodes or for certain page sizes). Whether this
will have an effect on your actual workload will depend on your use
case.
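If memory serves, those three constraints correspond to the following
build-time constants (defaults shown are from the 19.xx-era config;
treat the exact names and values as something to verify against your
own tree's rte_config.h / config/common_base):

    /* max pages per <NUMA node, page size> page table */
    #define RTE_MAX_MEMSEG_PER_TYPE 32768
    /* max memory per <NUMA node, page size> page table, in MB (64G) */
    #define RTE_MAX_MEM_MB_PER_TYPE 65536
    /* global cap on reserved VA, in MB -- the one you set to 8192 */
    #define RTE_MAX_MEM_MB          524288

Each page table is additionally split into segment lists with their own
per-list caps, but the three above are the ones that drive overall VSZ.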
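As for sizing a pool against that cap, here is a back-of-the-envelope
sketch. The parameters are purely illustrative (roughly what testpmd
uses by default), not a recommendation:

    #include <rte_mbuf.h>
    #include <rte_lcore.h>

    static struct rte_mempool *
    create_example_pool(void)
    {
        /* ~256K mbufs x ~2KB buffers (plus per-mbuf metadata) comes
         * to roughly half a gigabyte of hugepage memory, comfortably
         * within an 8G RTE_MAX_MEM_MB cap and in line with the
         * ~400MB-per-socket testpmd figure above. */
        return rte_pktmbuf_pool_create("example_mbuf_pool",
                262143,                    /* number of mbufs */
                256,                       /* per-lcore cache size */
                0,                         /* app private area size */
                RTE_MBUF_DEFAULT_BUF_SIZE, /* ~2K data room + headroom */
                rte_socket_id());
    }

So the practical question is simply whether the sum of all your pools
(and whatever else you allocate from hugepages) fits under the cap.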
> Regards,
> Siddarth
>
> On Tue, Feb 4, 2020 at 3:53 PM Burakov, Anatoly wrote:
>
>> On 30-Jan-20 8:51 AM, David Marchand wrote:
>>> On Thu, Jan 30, 2020 at 8:48 AM siddarth rai wrote:
>>>> I have been using DPDK 19.08 and I notice the process VSZ is huge.
>>>>
>>>> I tried running the test PMD. It takes 64G VSZ and if I use the
>>>> '--in-memory' option it takes up to 188G.
>>>>
>>>> Is there any way to disable allocation of such huge VSZ in DPDK?
>>>
>>> *Disclaimer* I don't know the arcana of the mem subsystem.
>>>
>>> I suppose this is due to the memory allocator in DPDK that reserves
>>> unused virtual space (for memory hotplug + multiprocess).
>>
>> Yes, that's correct. In order to guarantee that memory reservation
>> succeeds at all times, we need to reserve all possible memory in
>> advance. Otherwise we may end up in a situation where the primary
>> process has allocated a page, but the secondary can't map it because
>> the address space is already occupied by something else.
>>
>>> If this is the case, maybe we could do something to enhance the
>>> situation for applications that won't care about multiprocess.
>>> Like inform DPDK that the application won't use multiprocess and
>>> skip those reservations.
>>
>> You're welcome to try this, but I assure you, avoiding these
>> reservations is a lot of work, because you'd be adding yet another
>> path to an already overly complex allocator :)
>>
>>> Or another idea would be to limit those reservations to what is
>>> passed via --socket-limit.
>>>
>>> Anatoly?
>>
>> I have a patchset in the works that does this and was planning to
>> submit it for 19.08, but things got in the way and it's still sitting
>> there collecting bit rot. This may be reason enough to resurrect it
>> and finish it up :)
>>
>>> --
>>> David Marchand
>>
>> --
>> Thanks,
>> Anatoly

--
Thanks,
Anatoly
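P.S. To illustrate the up-front reservation described in the quoted
thread: it is conceptually just a large anonymous PROT_NONE mapping,
which consumes virtual address space but no physical memory, so a huge
VSZ with a small RSS is harmless. A minimal standalone sketch of the
idea (illustrative only, not the actual EAL code):

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Reserve 64G of virtual address space. PROT_NONE plus
         * MAP_ANONYMOUS means nothing is committed: VSZ grows by
         * 64G, RSS does not. */
        size_t len = 64ULL << 30;
        void *va = mmap(NULL, len, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (va == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        printf("reserved 64G at %p; compare VSZ vs RSS now\n", va);
        getchar(); /* pause: inspect with e.g. ps -o vsz,rss -p <pid> */

        /* DPDK later maps real hugepages into sub-ranges of such a
         * region, so primary and secondary processes can agree on
         * addresses up front. */
        munmap(va, len);
        return 0;
    }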