DPDK patches and discussions
From: siddarth rai <siddsr@gmail.com>
To: "Burakov, Anatoly" <anatoly.burakov@intel.com>
Cc: David Marchand <david.marchand@redhat.com>, dev <dev@dpdk.org>
Subject: Re: [dpdk-dev] Big spike in DPDK VSZ
Date: Tue, 4 Feb 2020 17:37:09 +0530	[thread overview]
Message-ID: <CAGxAMwAGv2GwdWV93TEcrXhgJ77RdMxm3vT7EOOv1Q549KOPuw@mail.gmail.com> (raw)
In-Reply-To: <CAGxAMwBJjmQhNoDXORXSdbCwG0j=b0jPx5ROF-_Raj=tq3Kwbw@mail.gmail.com>

Hi Anatoly,

You mentioned that the maximum size of a mempool is limited.
Can you tell me what the limit is and where it is specified?

Regards,
Siddarth

On Tue, Feb 4, 2020 at 5:27 PM siddarth rai <siddsr@gmail.com> wrote:

> Hi,
>
> Thanks for the clarification
>
> Regards,
> Siddarth
>
> On Tue, Feb 4, 2020 at 4:43 PM Burakov, Anatoly <anatoly.burakov@intel.com>
> wrote:
>
>> On 04-Feb-20 10:55 AM, siddarth rai wrote:
>> > Hi Anatoly,
>> >
>> > I don't need a secondary process.
>>
>> I understand that you don't; however, that doesn't negate the fact that
>> the code path expects that you do.
>>
>> >
>> > I tried out Julien's suggestion and set the 'RTE_MAX_MEM_MB' parameter
>> > to 8192 (the original value was over 500K). This works as a cap: the
>> > virtual size dropped to less than 8G, so this seems to be working for me.
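
For reference, RTE_MAX_MEM_MB is a build-time option rather than a runtime EAL
flag. A minimal sketch of where the cap lives, assuming a 19.08 tree (the exact
file and the default value may differ between versions, so check your own tree):

  /* config/rte_config.h; in the 19.08 make-based build it is generated from
   * CONFIG_RTE_MAX_MEM_MB in config/common_base. This is the total cap, in
   * megabytes, on the memory DPDK will ever reserve address space for. The
   * default (524288, i.e. 512G) is the "over 500K" mentioned above; lowering
   * it to 8192 caps the reservation at 8G. */
  #define RTE_MAX_MEM_MB 8192
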
>> >
>> > I have a few queries/concerns though.
>> > Is it safe to reduce RTE_MAX_MEM_MB to such a low value? Can I reduce it
>> > further? What will be the impact of doing so? Will it limit the maximum
>> > size of the mbuf pool I create?
>>
>> It depends on your use case. The maximum size of a mempool is limited as
>> it is; the better question is where to place that limit. In my experience,
>> testpmd mempools are typically around 400MB per socket, so an 8G upper
>> limit should not interfere with testpmd very much. However, depending on
>> what else is there and what kind of allocations you may do, it may have
>> other effects.
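
To put the 400MB figure in context, here is a rough sketch of how a packet
mbuf pool's footprint adds up; the mbuf count and sizes below are illustrative
assumptions, not testpmd's actual defaults:

  #include <rte_lcore.h>
  #include <rte_mbuf.h>
  #include <rte_mempool.h>

  /* Footprint is roughly n_mbufs * (sizeof(struct rte_mbuf) + priv size +
   * data room) plus per-object and ring overhead; ~170,000 mbufs of roughly
   * 2.2KB each lands in the ~400MB-per-socket ballpark mentioned above. */
  static struct rte_mempool *
  create_example_pool(void)
  {
          return rte_pktmbuf_pool_create(
                  "pkt_pool",                /* pool name */
                  170000,                    /* mbuf count (illustrative) */
                  256,                       /* per-lcore cache size */
                  0,                         /* per-mbuf private area */
                  RTE_MBUF_DEFAULT_BUF_SIZE, /* 2048B data room + headroom */
                  rte_socket_id());          /* NUMA node of the caller */
  }

Whatever the exact numbers, the pool has to fit, together with everything else
the application allocates, under the RTE_MAX_MEM_MB cap discussed here.
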
>>
>> Currently, the size of each internal per-NUMA node, per-page size page
>> table is dictated by three constraints: maximum amount of memory per
>> page table (so that we don't attempt to reserve thousands of 1G pages),
>> maximum number of pages per page table (so that we aren't left with a
>> few hundred megabytes' worth of 2M pages), and total maximum amount of
>> memory (which places an upper limit on the sum of all page tables'
>> memory amounts).
>>
>> You have lowered the latter to 8G which means that, depending on your
>> system configuration, you will have at most 2G to 4G per page table. It
>> is not possible to limit it further (for example, skip reservation on
>> certain nodes or certain page sizes). Whether it will have an effect on
>> your actual workload will depend on your use case.
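
The three constraints described above map to build-time constants alongside
RTE_MAX_MEM_MB. A sketch with what appear to be the 19.08 defaults (values
quoted from memory; verify them in your own tree before relying on them):

  /* Cap on how much memory a single internal page table (memseg list)
   * may describe. */
  #define RTE_MAX_MEM_MB_PER_LIST 32768

  /* Cap on how many pages a single memseg list may describe. */
  #define RTE_MAX_MEMSEG_PER_LIST 8192

  /* Global cap on the sum of all memseg lists; this is the value that was
   * lowered to 8192 in the experiment above. */
  #define RTE_MAX_MEM_MB 524288
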
>>
>> >
>> > Regards,
>> > Siddarth
>> >
>> > On Tue, Feb 4, 2020 at 3:53 PM Burakov, Anatoly
>> > <anatoly.burakov@intel.com> wrote:
>> >
>> >     On 30-Jan-20 8:51 AM, David Marchand wrote:
>> >      > On Thu, Jan 30, 2020 at 8:48 AM siddarth rai <siddsr@gmail.com> wrote:
>> >      >> I have been using DPDK 19.08 and I notice the process VSZ is huge.
>> >      >>
>> >      >> I tried running the test PMD. It takes 64G VSZ and if I use the
>> >      >> '--in-memory' option it takes up to 188G.
>> >      >>
>> >      >> Is there any way to disable allocation of such a huge VSZ in DPDK?
>> >      >
>> >      > *Disclaimer* I don't know the arcane details of the mem subsystem.
>> >      >
>> >      > I suppose this is due to the memory allocator in DPDK that
>> >      > reserves unused virtual space (for memory hotplug + multiprocess).
>> >
>> >     Yes, that's correct. In order to guarantee that memory reservation
>> >     succeeds at all times, we need to reserve all possible memory in
>> >     advance. Otherwise we may end up in a situation where the primary
>> >     process has allocated a page, but the secondary can't map it because
>> >     the address space is already occupied by something else.
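
As a simplified illustration of why this reservation shows up as VSZ but not
as actual memory consumption: reserving address space with an anonymous,
inaccessible mapping costs virtual address space only, and no physical pages
are used until something is mapped into the range. This is only a sketch of
the mechanism, not DPDK's actual reservation code:

  #include <stdio.h>
  #include <sys/mman.h>

  int main(void)
  {
          size_t len = (size_t)8 << 30;   /* reserve 8G of address space */

          /* PROT_NONE + MAP_ANONYMOUS: the kernel hands out a contiguous
           * virtual range but backs none of it, so VSZ grows by 8G while
           * RSS stays flat. Hugepages can later be remapped into the range
           * as they are actually allocated. */
          void *va = mmap(NULL, len, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (va == MAP_FAILED) {
                  perror("mmap");
                  return 1;
          }
          printf("reserved %zu bytes of address space at %p\n", len, va);
          munmap(va, len);
          return 0;
  }
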
>> >
>> >      >
>> >      > If this is the case, maybe we could do something to improve the
>> >      > situation for applications that don't care about multiprocess,
>> >      > like informing DPDK that the application won't use multiprocess
>> >      > and skipping those reservations.
>> >
>> >     You're welcome to try this, but I assure you, avoiding these
>> >     reservations is a lot of work, because you'd be adding yet another
>> >     path to an already overly complex allocator :)
>> >
>> >      >
>> >      > Or another idea would be to limit those reservations to what is
>> >      > passed via --socket-limit.
>> >      >
>> >      > Anatoly?
>> >
>> >     I have a patchset in the works that does this and was planning to
>> >     submit it to 19.08, but things got in the way and it's still sitting
>> >     there collecting bit rot. This may be reason enough to resurrect it
>> >     and finish it up :)
>> >
>> >      >
>> >      >
>> >      >
>> >      > --
>> >      > David Marchand
>> >      >
>> >
>> >
>> >     --
>> >     Thanks,
>> >     Anatoly
>> >
>>
>>
>> --
>> Thanks,
>> Anatoly
>>
>

Thread overview: 16+ messages
2020-01-30  7:48 siddarth rai
2020-01-30  8:51 ` David Marchand
2020-01-30 10:47   ` siddarth rai
2020-01-30 13:15     ` Meunier, Julien (Nokia - FR/Paris-Saclay)
2020-01-31 12:14       ` siddarth rai
2020-03-10 15:26     ` David Marchand
2020-02-04 10:23   ` Burakov, Anatoly
2020-02-04 10:55     ` siddarth rai
2020-02-04 11:13       ` Burakov, Anatoly
2020-02-04 11:57         ` siddarth rai
2020-02-04 12:07           ` siddarth rai [this message]
2020-02-04 16:18             ` Burakov, Anatoly
2020-02-11  8:11     ` David Marchand
2020-02-11 10:28       ` Burakov, Anatoly
2020-02-02  9:22 ` David Marchand
2020-02-04 10:20   ` Burakov, Anatoly
