From: siddarth rai
Date: Fri, 31 Jan 2020 17:44:02 +0530
To: "Meunier, Julien (Nokia - FR/Paris-Saclay)"
Cc: David Marchand; "Burakov, Anatoly"; dev
Subject: Re: [dpdk-dev] Big spike in DPDK VSZ

Hi,

I have created a ticket for the same -
https://bugs.dpdk.org/show_bug.cgi?id=386

Regards,
Siddarth

On Thu, Jan 30, 2020 at 6:45 PM Meunier, Julien (Nokia - FR/Paris-Saclay) <julien.meunier@nokia.com> wrote:

> Hi,
>
> I have also noticed this behavior since DPDK 18.05. As David said, it is
> related to the virtual address space management in DPDK.
> Please check commit 66cc45e293ed ("mem: replace memseg with memseg
> lists"), which introduced this new memory management.
>
> If you use mlockall in your application, the whole virtual address space
> is locked, and if you dump PageTables in /proc/meminfo, you will see a
> huge memory usage on the kernel side.
> I am not an expert on the memory management topic, especially in the
> kernel, but what I observed is that mlockall also locks unused virtual
> memory space.
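[A minimal standalone C sketch, not part of Julien's mail, illustrating the
paragraph above: with MCL_CURRENT | MCL_FUTURE, even a large anonymous
mapping that is reserved but never touched gets faulted in and locked, which
is what shows up as VmLck in /proc/self/status and as PageTables growth in
/proc/meminfo. The 1 GiB size and the helper name are arbitrary choices for
illustration; locking this much needs CAP_IPC_LOCK or "ulimit -l unlimited".]

/* build: cc -o lock_demo lock_demo.c */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* Print matching lines of /proc/self/status, e.g. the "VmLck" entry. */
static void print_status_line(const char *key)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");

    if (f == NULL)
        return;
    while (fgets(line, sizeof(line), f) != NULL)
        if (strncmp(line, key, strlen(key)) == 0)
            fputs(line, stdout);
    fclose(f);
}

int main(void)
{
    /* Reserve 1 GiB of virtual space, but never write to it. */
    size_t len = 1UL << 30;
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (p == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    print_status_line("VmLck");     /* ~0 kB locked so far */

    /* The flags testpmd uses by default (disabled by --no-mlockall). */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    print_status_line("VmLck");     /* the untouched 1 GiB is now locked */

    munmap(p, len);
    return EXIT_SUCCESS;
}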
> For testpmd, you can pass the --no-mlockall flag on the command line.
>
> For your application, you can use the MCL_ONFAULT flag (kernel >= 4.4).
> From the mlockall(2) man page:
>
>     Mark all current (with MCL_CURRENT) or future (with MCL_FUTURE)
>     mappings to lock pages when they are faulted in. When used with
>     MCL_CURRENT, all present pages are locked, but mlockall() will not
>     fault in non-present pages. When used with MCL_FUTURE, all future
>     mappings will be marked to lock pages when they are faulted in, but
>     they will not be populated by the lock when the mapping is created.
>     MCL_ONFAULT must be used with either MCL_CURRENT or MCL_FUTURE or
>     both.
>
> These options will not reduce the VSZ, but at least they will not fault
> in and lock unused memory.
> Otherwise, you need to customize your DPDK .config to set RTE_MAX_MEM_MB
> and the related parameters for your specific application.
>
> ---
> Julien Meunier
>
> > -----Original Message-----
> > From: dev On Behalf Of siddarth rai
> > Sent: Thursday, January 30, 2020 11:48 AM
> > To: David Marchand
> > Cc: Burakov, Anatoly; dev
> > Subject: Re: [dpdk-dev] Big spike in DPDK VSZ
> >
> > Hi,
> >
> > I did some further experiments and found that version 18.02.2 does not
> > have the problem, but the 18.05.1 release does.
> >
> > I would really appreciate it if someone could point me to a patch in
> > the DPDK code that gets past this issue.
> > This is becoming a huge practical issue for me: on a multi-NUMA setup,
> > the VSZ goes above 400G and I can't get core files to debug crashes in
> > my app.
> >
> > Regards,
> > Siddarth
> >
> > On Thu, Jan 30, 2020 at 2:21 PM David Marchand wrote:
> >
> > > On Thu, Jan 30, 2020 at 8:48 AM siddarth rai wrote:
> > > > I have been using DPDK 19.08 and I notice the process VSZ is huge.
> > > >
> > > > I tried running testpmd. It takes 64G VSZ and if I use the
> > > > '--in-memory' option it takes up to 188G.
> > > >
> > > > Is there any way to disable allocation of such a huge VSZ in DPDK?
> > >
> > > *Disclaimer* I don't know the arcana of the mem subsystem.
> > >
> > > I suppose this is due to the memory allocator in dpdk that reserves
> > > unused virtual space (for memory hotplug + multiprocess).
> > >
> > > If this is the case, maybe we could do something to improve the
> > > situation for applications that don't care about multiprocess,
> > > like informing dpdk that the application won't use multiprocess and
> > > skipping those reservations.
> > >
> > > Another idea would be to limit those reservations to what is passed
> > > via --socket-limit.
> > >
> > > Anatoly?
> > >
> > > --
> > > David Marchand
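[Tying together the two suggestions at the top of Julien's mail
(--no-mlockall for testpmd, MCL_ONFAULT for your own application), here is a
hedged sketch, not from the thread, of how an application might request
on-fault locking and fall back on older kernels. The helper name is made up,
and the fallback MCL_ONFAULT value is the x86 one; check your architecture's
headers before relying on it.]

#include <errno.h>
#include <stdio.h>
#include <sys/mman.h>

#ifndef MCL_ONFAULT
#define MCL_ONFAULT 4   /* x86 value; may be missing from older libc headers */
#endif

/* Hypothetical helper: lock pages as they are faulted in, so the large
 * virtual areas DPDK reserves but never touches are not populated/locked. */
int lock_memory_on_fault(void)
{
    if (mlockall(MCL_CURRENT | MCL_FUTURE | MCL_ONFAULT) == 0)
        return 0;

    if (errno == EINVAL) {
        /* Kernel < 4.4 rejects MCL_ONFAULT; fall back to locking
         * everything, which brings back the PageTables growth above. */
        fprintf(stderr, "MCL_ONFAULT not supported, locking all mappings\n");
        return mlockall(MCL_CURRENT | MCL_FUTURE);
    }

    perror("mlockall");
    return -1;
}

[For testpmd itself, --no-mlockall is an application option, so it goes
after the EAL/application separator, e.g. "testpmd -l 0-1 -- --no-mlockall".
As Julien notes, neither approach shrinks the VSZ; trimming the reservation
itself means adjusting RTE_MAX_MEM_MB and the related build-time options.]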