From: "Mattias Rönnblom" <hofors@lysator.liu.se>
To: "Morten Brørup" <mb@smartsharesystems.com>,
dev@dpdk.org, "Mattias Rönnblom" <mattias.ronnblom@ericsson.com>
Cc: David Marchand <david.marchand@redhat.com>,
thomas@monjalon.net,
Bruce Richardson <bruce.richardson@intel.com>,
Stephen Hemminger <stephen@networkplumber.org>,
Chengwen Feng <fengchengwen@huawei.com>,
Konstantin Ananyev <konstantin.ananyev@huawei.com>
Subject: Re: [PATCH] config: limit lcore variable maximum size to 4k
Date: Mon, 11 Nov 2024 08:22:46 +0100
Message-ID: <be04aae3-5f31-4288-80f3-90c539b9e30f@lysator.liu.se>
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35E9F8B8@smartserver.smartshare.dk>
On 2024-11-09 00:52, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
>> Sent: Friday, 8 November 2024 23.23
>>
>> On 2024-11-08 20:53, Morten Brørup wrote:
>>>> From: Morten Brørup [mailto:mb@smartsharesystems.com]
>>>> Sent: Friday, 8 November 2024 19.35
>>>>
>>>>> From: David Marchand [mailto:david.marchand@redhat.com]
>>>>> Sent: Friday, 8 November 2024 19.18
>>>>>
>>>>> OVS locks all pages to avoid page faults while processing packets.
>>>
>>> It sounds smart, so I just took a look at how it does this. I'm not
>>> sure, but it seems like it only locks pages that are actually mapped
>>> (current and future).
>>>
>>
>> mlockall(MCL_CURRENT) will bring in the whole BSS, it seems, plus
>> all the rest: unused parts of the execution stacks, the data section,
>> and unused code (text) in the binary and in all the libraries it
>> links against.
>>
>> It makes a simple (e.g., a unit test) DPDK 24.07 program use ~33x
>> more resident memory. After lcore variables, the same MCL_CURRENT-ed
>> program is ~30% larger than before. So, a relatively modest increase.
>
> Thank you for testing this, Mattias.
> What are the absolute numbers, i.e. in KB, to get an idea of the magnitudes I should be looking for?
>
Hello world type program with static linking. Default DPDK config. x86_64.
DPDK version  MAX_LCORE_VAR  EAL params         mlock  RSS [MB]
22.11         -              --no-huge -m 1000  no       22
24.11         1048576        --no-huge -m 1000  no       22
24.11         1048576        --no-huge -m 1000  yes    1576
24.11         4096           --no-huge -m 1000  yes    1445
22.11         -              -                  yes     333*
24.11         1048576        -                  yes     542*
24.11         4096           -                  yes     411*

* Excluding huge pages.
If you are more selective about which libraries you bring in, the
footprint will be lower. How large a fraction is effectively
unavoidable, I don't know. The relative increase obviously depends on
how much memory the application otherwise uses; the hello world app
doesn't have any app-level state.
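
For anyone wanting to reproduce this kind of measurement: on Linux, RSS
can be sampled from within the process roughly as below. This is a
sketch of one way to do it, not necessarily how the table above was
produced.

/* Sketch: sample the process' resident set size (VmRSS) on Linux. */
#include <stdio.h>
#include <string.h>

long
vm_rss_kb(void)
{
	FILE *f = fopen("/proc/self/status", "r");
	char line[256];
	long kb = -1;

	if (f == NULL)
		return -1;

	while (fgets(line, sizeof(line), f) != NULL) {
		if (strncmp(line, "VmRSS:", 6) == 0) {
			sscanf(line + 6, "%ld", &kb);
			break;
		}
	}

	fclose(f);
	return kb;
}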
> I wonder why the footprint grows at all... Intuitively, the same variables should consume approximately the same amount of RAM, regardless of how they are allocated.
> Speculating...
lcore variables use malloc(), which in turn does not bring in memory
pages until they are actually needed. Much of the lcore buffer will be
unused and thus not resident. I covered this, including an example
calculation of the space savings, in an earlier thread. It may be in
the programmer's guide as well; I don't remember.
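
To illustrate the mechanism, here is a sketch (reusing the vm_rss_kb()
helper from the previous snippet): a large allocation adds next to
nothing to RSS until its pages are actually written to.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

long vm_rss_kb(void); /* helper sketched earlier */

int
main(void)
{
	/* Roughly the size of a worst-case lcore variable buffer. */
	size_t size = (size_t)128 * 1024 * 1024;
	char *buf = malloc(size);

	if (buf == NULL)
		return 1;

	printf("RSS after malloc: %ld kB\n", vm_rss_kb());

	/* Touch only a small slice; only those pages become resident. */
	memset(buf, 0, 64 * 1024);

	printf("RSS after touching 64 kB: %ld kB\n", vm_rss_kb());

	free(buf);
	return 0;
}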
> The lcore_states were allocated through rte_calloc() and thus used some space in the already allocated hugepages, so they didn't add more pages to the footprint. But they do when allocated and initialized as lcore variables.
> The first lcore variable allocated/initialized uses RTE_MAX_LCORE (128) pages of 4 KB each = 512 KB total. It seems unlikely that adding 512 KB increases the footprint by 30%.
>
mlockall() brings in all currently-untouched malloc()ed pages, growing
the set of resident pages.
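
A sketch demonstrating that effect (again reusing the vm_rss_kb()
helper): the untouched pages become resident the moment
mlockall(MCL_CURRENT) is called.

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

long vm_rss_kb(void); /* helper sketched earlier */

int
main(void)
{
	/* Allocate, but never touch, a large buffer... */
	char *buf = malloc((size_t)128 * 1024 * 1024);

	if (buf == NULL)
		return 1;

	printf("RSS before mlockall: %ld kB\n", vm_rss_kb());

	/* ...then fault in and pin every currently-mapped page. */
	if (mlockall(MCL_CURRENT) != 0)
		perror("mlockall");

	printf("RSS after mlockall: %ld kB\n", vm_rss_kb());

	free(buf);
	return 0;
}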
>>
>> The numbers are less drastic, obviously, for many real-world programs,
>> which have large packet pools and other memory hogs.
>
> Agree.
> However, it would be good to understand why switching to lcore variables has this effect on the footprint when using mlockall() like OVS.
>
Thread overview: 13+ messages
2024-11-08 18:17 David Marchand
2024-11-08 18:35 ` Morten Brørup
2024-11-08 19:53 ` Morten Brørup
2024-11-08 22:13 ` Thomas Monjalon
2024-11-08 22:34 ` Mattias Rönnblom
2024-11-08 23:11 ` Thomas Monjalon
2024-11-11 6:31 ` Mattias Rönnblom
2024-11-08 22:49 ` Morten Brørup
2024-11-08 22:23 ` Mattias Rönnblom
2024-11-08 23:52 ` Morten Brørup
2024-11-11 7:22 ` Mattias Rönnblom [this message]
2024-11-11 16:54 ` Stephen Hemminger
2024-11-08 22:02 ` Mattias Rönnblom