From: Bruce Richardson <bruce.richardson@intel.com>
To: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Cc: dev@dpdk.org, Anatoly Burakov <anatoly.burakov@intel.com>,
Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
David Marchand <david.marchand@redhat.com>,
Thomas Monjalon <thomas@monjalon.net>,
Lior Margalit <lmargalit@nvidia.com>
Subject: Re: [PATCH v1 0/6] Fast restart with many hugepages
Date: Mon, 17 Jan 2022 16:40:45 +0000
Message-ID: <YeWcDZ8y3voGoS1t@bricha3-MOBL.ger.corp.intel.com>
In-Reply-To: <20220117080801.481568-1-dkozlyuk@nvidia.com>
On Mon, Jan 17, 2022 at 10:07:55AM +0200, Dmitry Kozlyuk wrote:
> This patchset is a new design and implementation of [1].
> Changes since RFC:
> * Fix bugs with -m and --single-file-segments.
> * Reject optimization of the number of mmap() calls (see below).
>
> # Problem Statement
>
> Large allocations that involve mapping new hugepages are slow.
> This is problematic, for example, in the following use case.
> A single-process application allocates ~1TB of mempools at startup.
> Sometimes the app needs to restart as quickly as possible.
> Allocating the hugepages anew takes as long as 15 seconds,
> while the new process could just pick up all the memory
> left by the old one (reinitializing the contents as needed).
>
> Almost all of the mmap(2) time spent in the kernel
> goes to clearing the memory, i.e. filling it with zeros.
> This is done when a file in hugetlbfs is mapped
> for the first time system-wide, i.e. when a hugepage is committed,
> to prevent data leaks from previous users of the same hugepage.
> For example, mapping 32 GB from a new file may take 2.16 seconds,
> while mapping the same pages again takes only 0.3 ms.
> Security put aside (e.g. when the environment is controlled),
> this effort is wasted for memory intended for DMA,
> because its content will be overwritten anyway.
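>
> For illustration, the effect can be reproduced outside of DPDK
> with a minimal standalone sketch (assuming a hugetlbfs mount at
> /mnt/huge with free 1 GB hugepages; path and sizes are illustrative):
>
>     /* Time the first and the second mapping of the same
>      * hugetlbfs file (sketch). */
>     #define _GNU_SOURCE
>     #include <fcntl.h>
>     #include <stdio.h>
>     #include <sys/mman.h>
>     #include <time.h>
>     #include <unistd.h>
>
>     static double map_seconds(int fd, size_t len)
>     {
>         struct timespec a, b;
>         void *p;
>
>         clock_gettime(CLOCK_MONOTONIC, &a);
>         /* MAP_POPULATE faults the pages in within mmap(),
>          * similar to what EAL does. */
>         p = mmap(NULL, len, PROT_READ | PROT_WRITE,
>                 MAP_SHARED | MAP_POPULATE, fd, 0);
>         clock_gettime(CLOCK_MONOTONIC, &b);
>         if (p == MAP_FAILED)
>             return -1.0;
>         munmap(p, len);
>         return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
>     }
>
>     int main(void)
>     {
>         size_t len = 1UL << 30; /* one 1 GB hugepage */
>         int fd = open("/mnt/huge/test", O_CREAT | O_RDWR, 0600);
>
>         if (fd < 0 || ftruncate(fd, len) < 0)
>             return 1;
>         /* First mapping commits and clears the hugepage. */
>         printf("first map:  %f s\n", map_seconds(fd, len));
>         /* Second mapping reuses the committed page: no clearing. */
>         printf("second map: %f s\n", map_seconds(fd, len));
>         close(fd);
>         unlink("/mnt/huge/test");
>         return 0;
>     }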
>
> Linux EAL explicitly removes hugetlbfs files at initialization
> and before mapping to force the kernel to clear the memory.
> This allows the memory allocator to clean memory only on freeing.
>
> # Solution
>
> Add a new mode allowing EAL to remap existing hugepage files.
> While it is intended to make restarts faster in the first place,
> it makes any startup faster except the cold one
> (with no existing files).
>
> It is the administrator who accepts security risks
> implied by reusing hugepages.
> The new mode is opt-in, and a warning is logged.
>
> The feature is Linux-only, as it is tied to mapping
> hugepages from files, which only Linux does.
> It is inherently incompatible with --in-memory;
> for --huge-unlink, see below.
>
> There is formally no breakage of API contract,
> but there is a behavior change in the new mode:
> rte_malloc*() and rte_memzone_reserve*() may return dirty memory
> (previously they were returning clean memory from free heap elements).
> Their contract has always explicitly allowed this,
> but still there may be users relying on the traditional behavior.
> Such users will need to fix their code to use the new mode.
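>
> For illustration, the kind of fix such users need (a sketch,
> with hypothetical variable names):
>
>     /* Before: implicitly relied on rte_malloc() returning
>      * zeroed memory, which the new mode does not guarantee: */
>     counters = rte_malloc("counters", size, 0);
>
>     /* After: request cleared memory explicitly; rte_zmalloc()
>      * clears the element only when it is marked dirty: */
>     counters = rte_zmalloc("counters", size, 0);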
>
> # Implementation
>
> ## User Interface
>
> There is an existing --huge-unlink switch in the same area
> that removes hugepage files before mapping them.
> It cannot be combined with the new mode,
> because the whole point is to keep hugepage files
> for fast future restarts.
> Extend the --huge-unlink option to represent only valid combinations:
>
> * --huge-unlink=existing OR no option (for compatibility):
> unlink files at initialization
> and before opening them as a precaution.
>
> * --huge-unlink=always OR just --huge-unlink (for compatibility):
> same as above + unlink created files before mapping.
>
> * --huge-unlink=never:
> the new mode: do not unlink hugepage files; reuse them.
>
> This option was always Linux-only, but it is kept as common
> in case there are users who expect it to be a no-op on other systems.
> (Adding a separate --huge-reuse option was also considered,
> but there is no obvious benefit and more combinations to test.)
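>
> For example (dpdk-testpmd used purely for illustration):
>
>     dpdk-testpmd --huge-unlink=existing ...  # same as no option
>     dpdk-testpmd --huge-unlink=always ...    # same as plain --huge-unlink
>     dpdk-testpmd --huge-unlink=never ...     # new mode: reuse hugepage files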
>
> ## EAL
>
> If a memseg is mapped dirty, it is marked with RTE_MEMSEG_FLAG_DIRTY
> so that the memory allocator may clear the memory if need be.
> See the description of patch 5/6 for details on how this is done
> in the different memory mapping modes.
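>
> For illustration, an application could count dirty segments
> with the existing memseg walk API (sketch):
>
>     #include <rte_memory.h>
>
>     static int
>     count_dirty(const struct rte_memseg_list *msl,
>             const struct rte_memseg *ms, void *arg)
>     {
>         unsigned int *dirty = arg;
>
>         (void)msl;
>         if (ms->flags & RTE_MEMSEG_FLAG_DIRTY)
>             (*dirty)++;
>         return 0;
>     }
>
>     /* After rte_eal_init(): */
>     unsigned int dirty = 0;
>     rte_memseg_walk(count_dirty, &dirty);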
>
> The memory manager tracks whether an element is clean or dirty.
> If rte_zmalloc*() allocates from a dirty element,
> the memory is cleared before handing it to the user.
> On freeing, the allocator joins adjacent free elements,
> but in the new mode it may not be feasible to clear the freed memory
> if the joined element is dirty (contains dirty parts).
> In any case, memory will be cleared only once,
> either on freeing or on allocation.
> See patch 3/6 for details.
> Patch 2/6 adds a benchmark to see how time is distributed
> between allocation and freeing in different modes.
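>
> The resulting policy, in simplified pseudo-C (helper and field
> names are made up; this is not the actual allocator code):
>
>     /* Allocation path: clear only when actually needed. */
>     if (zero_requested && elem->dirty)
>         memset(ptr, 0, size);
>
>     /* Free path: the joined element may only be cleared
>      * when none of its parts is dirty. */
>     elem = join_adjacent_free(elem);
>     if (!elem->dirty)
>         memset(elem_data(elem), 0, elem->size);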
>
> Besides clearing memory, each mmap() call takes some time.
> For example, 1024 calls for 1 TB may take ~300 ms.
> The time of one call mapping N hugepages is O(N),
> because inside the kernel hugepages are allocated one by one.
> Syscall overhead is negligible, even for a single page.
> Hence, it does not make sense to reduce the number of mmap() calls,
> which would essentially move the loop over pages into the kernel.
>
> [1]: http://inbox.dpdk.org/dev/20211011085644.2716490-3-dkozlyuk@nvidia.com/
>
Hi,
this seems really interesting, but in the absence of TB of memory being
used, is it easily possible to see the benefits of this work? I've been
playing with adding large memory allocations to the helloworld example
and checking the runtime. Allocating 1GB using malloc per thread shows
a small benefit (<0.5 second at most), and a fixed 10GB allocation done
with memzone_reserve at startup shows runtimes within the margin of
error when run with --huge-unlink=existing vs --huge-unlink=never. At
what size of memory footprint is this expected to make a clear
improvement?
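
The memzone test was essentially along these lines, added to the
helloworld example after rte_eal_init() (a sketch; the name is arbitrary):

    const struct rte_memzone *mz = rte_memzone_reserve("test",
            10ULL << 30 /* 10 GB */, SOCKET_ID_ANY, 0);
    if (mz == NULL)
        rte_panic("cannot reserve memzone\n");
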
Thanks,
/Bruce