DPDK patches and discussions
From: "Burakov, Anatoly" <anatoly.burakov@intel.com>
To: siddarth rai <siddsr@gmail.com>
Cc: David Marchand <david.marchand@redhat.com>, dev <dev@dpdk.org>
Subject: Re: [dpdk-dev] Big spike in DPDK VSZ
Date: Tue, 4 Feb 2020 11:13:28 +0000
Message-ID: <66cc7e86-8a4e-b976-032b-52e2950fd243@intel.com>
In-Reply-To: <CAGxAMwB5Qyj1Yu8cMfBWAAyWqRr2U66v+cqSpkX8QzFtuxhocg@mail.gmail.com>

On 04-Feb-20 10:55 AM, siddarth rai wrote:
> Hi Anatoly,
> 
> I don't need a secondary process.

I understand that you don't; however, that doesn't change the fact that 
the code path expects that you do.

> 
> I tried out Julien's suggestion and set the 'RTE_MAX_MEM_MB' parameter 
> to 8192 (the original value was over 500K). This works as a cap.
> The virtual size dropped down to less than 8G, so this seems to be 
> working for me.
> 
> I have a few queries/concerns though.
> Is it safe to reduce RTE_MAX_MEM_MB to such a low value? Can I 
> reduce it further? What will be the impact of doing so? Will it limit 
> the maximum size of the mbuf pool which I create?

It depends on your use case. The maximum size of a mempool is limited as 
it is; the better question is where to place that limit. In my experience, 
testpmd mempools are typically around 400MB per socket, so an 8G upper 
limit should not interfere with testpmd very much. However, depending on 
what else is there and what kind of allocations you may do, it may have 
other effects.
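
For reference, RTE_MAX_MEM_MB is a build-time option. A sketch of how the 
relevant knobs appear in config/rte_config.h (the names are real config 
constants, but treat the values as illustrative defaults for this era and 
check your own tree):

    /* config/rte_config.h -- illustrative excerpt */
    #define RTE_MAX_MEMSEG_LISTS 128      /* number of page tables (memseg lists) */
    #define RTE_MAX_MEMSEG_PER_LIST 8192  /* max pages per page table */
    #define RTE_MAX_MEM_MB_PER_LIST 32768 /* max memory per page table, in MB */
    #define RTE_MAX_MEM_MB 524288         /* global cap; the 500K+ default above */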

Currently, the size of each internal per-NUMA-node, per-page-size page 
table is dictated by three constraints: the maximum amount of memory per 
page table (so that we don't attempt to reserve thousands of 1G pages), 
the maximum number of pages per page table (so that we aren't left with 
only a few hundred megabytes' worth of 2M pages), and the total maximum 
amount of memory (which places an upper limit on the sum of all page 
tables' memory amounts).
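
In rough C, the first two constraints combine like this (a simplified 
sketch of the sizing logic, not the actual EAL function; the parameter 
names mirror the build-time options above):

    #include <stdint.h>

    #define MB (1ULL << 20)

    /* How much memory one page table (memseg list) may hold for a given
     * page size: the lesser of the per-list memory cap and the per-list
     * page-count cap. The sum across all page tables is then further
     * capped by RTE_MAX_MEM_MB. */
    static uint64_t list_capacity(uint64_t page_sz,
                                  uint64_t max_mem_per_list_mb,
                                  uint64_t max_segs_per_list)
    {
            uint64_t by_mem  = max_mem_per_list_mb * MB;
            uint64_t by_segs = max_segs_per_list * page_sz;
            return by_mem < by_segs ? by_mem : by_segs;
    }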

You have lowered the last of these to 8G, which means that, depending on 
your system configuration, you will have at most 2G to 4G per page 
table. It is not possible to limit it further (for example, to skip 
reservation on certain nodes or for certain page sizes). Whether it will 
have an effect on your actual workload will depend on your use case.
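
To make the arithmetic concrete (assuming the global cap is split evenly 
across NUMA nodes and page sizes, which is a simplification):

    8192 MB / (2 nodes * 2 page sizes) = 2048 MB (2G) per page table
    8192 MB / (2 nodes * 1 page size)  = 4096 MB (4G) per page table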

> 
> Regards,
> Siddarth
> 
> On Tue, Feb 4, 2020 at 3:53 PM Burakov, Anatoly 
> <anatoly.burakov@intel.com> wrote:
> 
>     On 30-Jan-20 8:51 AM, David Marchand wrote:
>      > On Thu, Jan 30, 2020 at 8:48 AM siddarth rai
>      > <siddsr@gmail.com> wrote:
>      >> I have been using DPDK 19.08 and I notice the process VSZ is huge.
>      >>
>      >> I tried running testpmd. It takes 64G of VSZ, and if I use the
>      >> '--in-memory' option it takes up to 188G.
>      >>
>      >> Is there any way to disable allocation of such a huge VSZ in DPDK?
>      >
>      > *Disclaimer* I don't know the arcana of the mem subsystem.
>      >
>      > I suppose this is due to the memory allocator in DPDK reserving
>      > unused virtual space (for memory hotplug + multiprocess).
> 
>     Yes, that's correct. In order to guarantee that memory reservation
>     succeeds at all times, we need to reserve all possible memory in
>     advance. Otherwise we may end up in a situation where the primary
>     process has allocated a page, but the secondary can't map it
>     because the address space is already occupied by something else.
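
A minimal sketch of the kind of up-front reservation described above, 
assuming plain mmap() with PROT_NONE (the EAL's real helper is more 
involved than this):

    #include <stddef.h>
    #include <sys/mman.h>

    /* Reserve a contiguous virtual address range with no backing memory.
     * PROT_NONE + MAP_ANONYMOUS inflates VSZ but not RSS; hugepages are
     * only mapped into this range later, as they are actually allocated. */
    static void *reserve_va_space(size_t size)
    {
            void *addr = mmap(NULL, size, PROT_NONE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            return addr == MAP_FAILED ? NULL : addr;
    }
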
> 
>      >
>      > If this is the case, maybe we could do something to improve the
>      > situation for applications that don't care about multiprocess,
>      > like informing DPDK that the application won't use multiprocess
>      > and skipping those reservations.
> 
>     You're welcome to try this, but I assure you, avoiding these
>     reservations is a lot of work, because you'd be adding yet another
>     path to an already overly complex allocator :)
> 
>      >
>      > Or another idea would be to limit those reservations to what is
>      > passed via --socket-limit.
>      >
>      > Anatoly?
> 
>     I have a patchset in the works that does this and was planning to
>     submit it for 19.08, but things got in the way and it's still
>     sitting there collecting bit rot. This may be reason enough to
>     resurrect it and finish it up :)
> 
>      >
>      >
>      >
>      > --
>      > David Marchand
>      >
> 
> 
>     -- 
>     Thanks,
>     Anatoly
> 


-- 
Thanks,
Anatoly
