From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
To: "Umakiran Godavarthi (ugodavar)" <ugodavar@cisco.com>
Cc: "anatoly.burakov@intel.com" <anatoly.burakov@intel.com>,
"dev@dpdk.org" <dev@dpdk.org>,
"stephen@networkplumber.org" <stephen@networkplumber.org>
Subject: Re: DPDK 19.11.5 Legacy Memory Design Query
Date: Fri, 23 Sep 2022 16:10:57 +0300 [thread overview]
Message-ID: <20220923161057.0205e703@sovereign> (raw)
In-Reply-To: <SJ0PR11MB484605E55A5604175621DA6ADD519@SJ0PR11MB4846.namprd11.prod.outlook.com>
2022-09-23 12:12 (UTC+0000), Umakiran Godavarthi (ugodavar):
> [Uma]: Yes, I agree: if free_hp = 400 and nr_hp = 252, we expect DPDK to take only 252 and keep the remaining pages free in its heap.
> As you mentioned, we should just boot DPDK with 1 page and add the pages we want later. Are these the steps?
>
> 1. NR_HP = 1, FREE_HP = 1
> 2. EAL init (DPDK boots up with one 2 MB page)
> 3. What is the API for adding pages later on? (rte_extmem_*; can you please give the full API details and how to call it with arguments?)
Guide:
https://doc.dpdk.org/guides-19.11/prog_guide/env_abstraction_layer.html#support-for-externally-allocated-memory
I recommend reading the entire section about DPDK memory management
since you're going to use an uncommon API
and should understand what's going on.
API (the linked function and those following it):
http://doc.dpdk.org/api-19.11/rte__malloc_8h.html#a2295623c85ba41fe5bf7dce6bf0393d6
http://doc.dpdk.org/api-19.11/rte__memory_8h.html#a653510fb0c58bf63f54708677e3a2eba
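Roughly, the flow with the heap API is: reserve hugepage memory yourself,
create a named malloc heap, add the memory to that heap, and then allocate
from it. A minimal, untested sketch against the 19.11 headers (heap name,
page count, and mmap flags are placeholders; error handling is trimmed,
and these functions may still be marked experimental in 19.11):

#include <sys/mman.h>
#include <rte_malloc.h>

#define APP_PAGE_SZ (2UL * 1024 * 1024)	/* assuming 2 MB hugepages */
#define APP_N_PAGES 251			/* however many pages you add later */

static void *
app_add_external_memory(void)
{
	size_t len = APP_N_PAGES * APP_PAGE_SZ;
	void *va;

	/* Reserve hugepage-backed memory outside of EAL; any source works
	 * (hugetlbfs file, shm, ...), anonymous hugepages are used here
	 * only as an example. */
	va = mmap(NULL, len, PROT_READ | PROT_WRITE,
		  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (va == MAP_FAILED)
		return NULL;

	/* Create a named heap and hand the memory to it.
	 * iova_addrs == NULL means DPDK is not told the physical addresses;
	 * pass a real IOVA array if a device must DMA into this memory. */
	if (rte_malloc_heap_create("app_heap") != 0)
		return NULL;
	if (rte_malloc_heap_memory_add("app_heap", va, len,
			NULL, 0, APP_PAGE_SZ) != 0)
		return NULL;

	/* Allocations now come from the external heap via its socket id. */
	return rte_malloc_socket("example", 4096, 0,
			rte_malloc_heap_get_socket("app_heap"));
}

In a secondary process you would map the same region and attach to the heap
with rte_malloc_heap_memory_attach(). The rte_extmem_* functions from the
second link are the lighter-weight variant: they only register the memory
with DPDK (e.g. for DMA mapping) without creating a heap.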
> We can do 1, 2, 3, but there is a problem: once we reduce the pages to 1, the kernel will free the huge pages entirely.
>
> So is there a way not to touch NR_HP and FREE_HP, and just pass arguments to boot DPDK with only 1 page? Please let us know, and we will later add the pages we need to DPDK.
See --socket-mem EAL option:
http://doc.dpdk.org/guides-19.11/linux_gsg/linux_eal_parameters.html#id3
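For example (application name, core list, and amounts are illustrative;
with a 2 MB page size, "--socket-mem 2" should leave DPDK holding a single
2 MB page on socket 0 and release the rest back to the kernel pool):

#include <stdlib.h>
#include <rte_eal.h>
#include <rte_common.h>
#include <rte_debug.h>

int
main(void)
{
	/* Equivalent to: ./your_app -l 0-1 --legacy-mem --socket-mem 2 */
	char *eal_args[] = {
		"your_app",
		"-l", "0-1",
		"--legacy-mem",		/* keep the legacy memory layout */
		"--socket-mem", "2",	/* reserve only 2 MB on socket 0 */
	};

	if (rte_eal_init(RTE_DIM(eal_args), eal_args) < 0)
		rte_exit(EXIT_FAILURE, "cannot init EAL\n");

	/* ... rest of the application ... */
	return 0;
}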
> Why do you need legacy mode in the first place?
> Looks like you're painfully trying to achieve the same result
> that dynamic mode would give you automatically.
>
> [Uma]: Yes, we can't avoid the legacy memory design, as the secondary process is mapped page by page to the primary process, and the physical address space is the same for both processes. We have to stick to the legacy memory design for now.
Sorry, I still don't understand.
Virtual and physical addresses of DPDK memory are the same across processes
in both legacy and dynamic memory mode.
Thread overview: 11+ messages
2022-09-14 7:30 Umakiran Godavarthi (ugodavar)
2022-09-21 6:50 ` Umakiran Godavarthi (ugodavar)
2022-09-22 8:08 ` Umakiran Godavarthi (ugodavar)
2022-09-22 9:00 ` Dmitry Kozlyuk
2022-09-23 11:20 ` Umakiran Godavarthi (ugodavar)
2022-09-23 11:47 ` Dmitry Kozlyuk
2022-09-23 12:12 ` Umakiran Godavarthi (ugodavar)
2022-09-23 13:10 ` Dmitry Kozlyuk [this message]
2022-09-26 12:55 ` Umakiran Godavarthi (ugodavar)
2022-09-26 13:06 ` Umakiran Godavarthi (ugodavar)
2022-10-10 15:15 ` Dmitry Kozlyuk