From: "Burakov, Anatoly" <anatoly.burakov@intel.com>
To: siddarth rai <siddsr@gmail.com>
Cc: David Marchand <david.marchand@redhat.com>, dev <dev@dpdk.org>
Subject: Re: [dpdk-dev] Big spike in DPDK VSZ
Date: Tue, 4 Feb 2020 16:18:47 +0000 [thread overview]
Message-ID: <bdb6d0b8-b8fa-6a98-b1d7-cbf54d4c199e@intel.com> (raw)
In-Reply-To: <CAGxAMwAGv2GwdWV93TEcrXhgJ77RdMxm3vT7EOOv1Q549KOPuw@mail.gmail.com>
On 04-Feb-20 12:07 PM, siddarth rai wrote:
> Hi Anatoly,
>
> You mentioned that the maximum size of mempool is limited.
> Can you tell what is the limit and where is it specified?
>
> Regards,
> Siddarth
>
The mempool size itself isn't limited.
However, due to the nature of how the memory subsystem works, there is
always an upper limit to how much memory you can reserve (because once
you run out of pre-reserved address space, you effectively run out
of memory).
Given that the current hardcoded upper limit for reserved address space
is 128GB, this is effectively the limit you are referring to - you won't
be able to create a mempool larger than 128GB because there is not
enough reserved address space in which to put a bigger mempool.
The same applies to any other memory allocation, with the caveat that
while a mempool can be non-contiguous in terms of VA memory (i.e.
consist of several discontiguous VA areas), a single unit of memory
allocation (i.e. a call to rte_malloc or rte_memzone_create) can only be
as big as the largest contiguous chunk of address space.
Given that the current hardcoded limit is 16GB for 2M pages, and 32GB
for 1GB pages, each allocation can be at most 16GB or 32GB long,
depending on the page size from which you are allocating.
So, on a system with 1G and 2M pages, a mempool can only be as big as
128GB, and each individual chunk of memory in that mempool can only be
as big as 16GB for 2M pages, or 32GB for 1G pages.
These are big numbers, so in practice no one hits these limitations.
Again, this is just the price you have to pay for supporting dynamic
memory allocation in secondary processes. There is simply no other way
to guarantee that all shared memory will reside in the same address
space in all processes.
Notably, device hotplug doesn't provide such a guarantee, which is why
device hotplug (or even initialization) can fail in a secondary process.
However, for device hotplug I suppose this is acceptable (or the
community thinks it is). In the memory subsystem, I chose to be
conservative and to always guarantee correctness, at the cost of placing
an upper limit on memory allocations.
--
Thanks,
Anatoly
Thread overview: 16+ messages
2020-01-30 7:48 siddarth rai
2020-01-30 8:51 ` David Marchand
2020-01-30 10:47 ` siddarth rai
2020-01-30 13:15 ` Meunier, Julien (Nokia - FR/Paris-Saclay)
2020-01-31 12:14 ` siddarth rai
2020-03-10 15:26 ` David Marchand
2020-02-04 10:23 ` Burakov, Anatoly
2020-02-04 10:55 ` siddarth rai
2020-02-04 11:13 ` Burakov, Anatoly
2020-02-04 11:57 ` siddarth rai
2020-02-04 12:07 ` siddarth rai
2020-02-04 16:18 ` Burakov, Anatoly [this message]
2020-02-11 8:11 ` David Marchand
2020-02-11 10:28 ` Burakov, Anatoly
2020-02-02 9:22 ` David Marchand
2020-02-04 10:20 ` Burakov, Anatoly