From: siddarth rai <siddsr@gmail.com>
To: "Meunier, Julien (Nokia - FR/Paris-Saclay)" <julien.meunier@nokia.com>
Cc: David Marchand <david.marchand@redhat.com>,
"Burakov, Anatoly" <anatoly.burakov@intel.com>,
dev <dev@dpdk.org>
Subject: Re: [dpdk-dev] Big spike in DPDK VSZ
Date: Fri, 31 Jan 2020 17:44:02 +0530
Message-ID: <CAGxAMwBEmxk4hbZ8WQTVTsPFT96bCsQtZStdzdNsW08wr2NtxA@mail.gmail.com>
In-Reply-To: <VI1PR07MB547226FCAD0BBEEF80423EED85040@VI1PR07MB5472.eurprd07.prod.outlook.com>
Hi,
I have created a ticket for this issue -
https://bugs.dpdk.org/show_bug.cgi?id=386
Regards,
Siddarth
On Thu, Jan 30, 2020 at 6:45 PM Meunier, Julien (Nokia - FR/Paris-Saclay) <julien.meunier@nokia.com> wrote:
> Hi,
>
> I have also noticed this behavior since DPDK 18.05. As David said, it is
> related to virtual address space management in DPDK.
> Please check commit 66cc45e293ed ("mem: replace memseg with memseg
> lists"), which introduced this new memory management.
>
> If you use mlockall in your application, all virtual address space is
> locked, and if you look at PageTables in /proc/meminfo, you will see a
> huge memory usage on the kernel side.
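>
> For example, a quick way to check that kernel-side cost (plain shell,
> nothing DPDK-specific):
>
>     grep PageTables /proc/meminfo
>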
> I am not an expert on kernel memory management, but what I observed is
> that mlockall also locks unused virtual memory space.
>
> For testpmd, you can pass the --no-mlockall flag on its command line.
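>
> For example (the core list and binary path are illustrative; testpmd
> options go after the "--" separator):
>
>     ./testpmd -l 0-3 -n 4 -- -i --no-mlockall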
>
> For your application, you can use the MCL_ONFAULT flag (kernel >= 4.4).
> man mlockall::
>
> Mark all current (with MCL_CURRENT) or future (with MCL_FUTURE)
> mappings to lock pages when they are faulted in. When used with
> MCL_CURRENT, all present pages are locked, but mlockall() will not
> fault in non-present pages. When used with MCL_FUTURE, all future
> mappings will be marked to lock pages when they are faulted in, but
> they will not be populated by the lock when the mapping is created.
> MCL_ONFAULT must be used with either MCL_CURRENT or MCL_FUTURE or
> both.
>
> These options will not reduce the VSZ, but at least they will not
> allocate unused memory.
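>
> As a minimal sketch of that call (assuming Linux >= 4.4 and that the
> process may lock memory, i.e. CAP_IPC_LOCK or a suitable
> RLIMIT_MEMLOCK; the MCL_ONFAULT fallback define is only for older
> libc headers that do not expose it):
>
>     #include <sys/mman.h>   /* mlockall(), MCL_* flags */
>     #include <stdio.h>      /* perror() */
>
>     #ifndef MCL_ONFAULT
>     #define MCL_ONFAULT 4   /* Linux value, missing from old headers */
>     #endif
>
>     int main(void)
>     {
>         /* Lock current and future mappings, but populate them only
>          * as pages are actually faulted in; reserved-but-untouched
>          * virtual space is not populated. */
>         if (mlockall(MCL_CURRENT | MCL_FUTURE | MCL_ONFAULT) != 0) {
>             perror("mlockall");
>             return 1;
>         }
>         /* ... application init (e.g. rte_eal_init()) goes here ... */
>         return 0;
>     }
>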
> Otherwise, you need to customize your DPDK .config in order to set
> RTE_MAX_MEM_MB and related parameters for your specific application.
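>
> For reference, these are the relevant knobs in config/common_base;
> the values below are the upstream defaults as far as I remember, so
> double-check them in your tree:
>
>     CONFIG_RTE_MAX_MEMSEG_LISTS=64
>     CONFIG_RTE_MAX_MEMSEG_PER_LIST=8192
>     CONFIG_RTE_MAX_MEM_MB_PER_LIST=32768
>     CONFIG_RTE_MAX_MEMSEG_PER_TYPE=32768
>     CONFIG_RTE_MAX_MEM_MB_PER_TYPE=131072
>     CONFIG_RTE_MAX_MEM_MB=524288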
>
> ---
> Julien Meunier
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of siddarth rai
> > Sent: Thursday, January 30, 2020 11:48 AM
> > To: David Marchand <david.marchand@redhat.com>
> > Cc: Burakov, Anatoly <anatoly.burakov@intel.com>; dev <dev@dpdk.org>
> > Subject: Re: [dpdk-dev] Big spike in DPDK VSZ
> >
> > Hi,
> >
> > I did some further experiments and found that the 18.02.2 release
> > doesn't have the problem, but the 18.05.1 release does.
> >
> > I would really appreciate it if someone could help, or point to a
> > patch in the DPDK code that gets past this issue.
> > This is becoming a huge practical issue for me: on a multi-NUMA
> > setup, the VSZ goes above 400G and I can't get core files to debug
> > crashes in my app.
> >
> > Regards,
> > Siddarth
> >
> > On Thu, Jan 30, 2020 at 2:21 PM David Marchand <david.marchand@redhat.com> wrote:
> >
> > > On Thu, Jan 30, 2020 at 8:48 AM siddarth rai <siddsr@gmail.com> wrote:
> > > > I have been using DPDK 19.08 and I notice the process VSZ is huge.
> > > >
> > > > I tried running testpmd. It takes 64G of VSZ, and if I use the
> > > > '--in-memory' option, it takes up to 188G.
> > > >
> > > > Is there any way to disable the allocation of such a huge VSZ in DPDK?
> > >
> > > *Disclaimer* I don't know the arcana of the mem subsystem.
> > >
> > > I suppose this is due to the memory allocator in DPDK reserving
> > > unused virtual space (for memory hotplug + multiprocess).
> > >
> > > If this is the case, maybe we could do something to improve the
> > > situation for applications that don't care about multiprocess,
> > > like informing DPDK that the application won't use multiprocess
> > > and skipping those reservations.
> > >
> > > Or another idea would be to limit those reservations to what is passed
> > > via --socket-limit.
> > >
> > > Anatoly?
> > >
> > >
> > >
> > > --
> > > David Marchand
> > >
> > >
>