From: "Umakiran Godavarthi (ugodavar)" <ugodavar@cisco.com>
To: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Cc: "anatoly.burakov@intel.com" <anatoly.burakov@intel.com>,
"dev@dpdk.org" <dev@dpdk.org>,
"stephen@networkplumber.org" <stephen@networkplumber.org>
Subject: Re: DPDK 19.11.5 Legacy Memory Design Query
Date: Mon, 26 Sep 2022 12:55:51 +0000 [thread overview]
Message-ID: <SJ0PR11MB48463D47B2E5FADE0405C120DD529@SJ0PR11MB4846.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20220923161057.0205e703@sovereign>
Thanks Dmitry for your suggestions.
I will try the following for DPDK pool creation; my logic for calculating the number of mbufs stays the same.
I saw this code in DPDK testpmd, where external heap memory is used:
case MP_ALLOC_XMEM_HUGE:
	{
		int heap_socket;
		bool huge = mp_alloc_type == MP_ALLOC_XMEM_HUGE;

		/* Map external (hugepage) memory and register it as a malloc heap. */
		if (setup_extmem(nb_mbuf, mbuf_seg_size, huge) < 0)
			rte_exit(EXIT_FAILURE, "Could not create external memory\n");

		/* The external heap is addressed through its own socket ID. */
		heap_socket = rte_malloc_heap_get_socket(EXTMEM_HEAP_NAME);
		if (heap_socket < 0)
			rte_exit(EXIT_FAILURE, "Could not get external memory socket ID\n");

		TESTPMD_LOG(INFO, "preferred mempool ops selected: %s\n",
			    rte_mbuf_best_mempool_ops());

		/* Create the mbuf pool on the external heap's socket. */
		rte_mp = rte_pktmbuf_pool_create(pool_name, nb_mbuf,
						 mb_mempool_cache, 0,
						 mbuf_seg_size, heap_socket);
		break;
	}
So I will do the same:

1. EAL init.
2. Calculate the number of mbufs our application needs.
3. Create the pool using MP_ALLOC_XMEM_HUGE (see the sketch after the allocation-type comment below).

Steps 1-3 should work and avoid the heap corruption issues, right?
For reference, testpmd supports three mempool allocation types:
/*
* Select mempool allocation type:
* - native: use regular DPDK memory
* - anon: use regular DPDK memory to create mempool, but populate using
* anonymous memory (may not be IOVA-contiguous)
* - xmem: use externally allocated hugepage memory
*/
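For concreteness, here is a minimal sketch of step 3, modeled on testpmd's setup_extmem() path and meant to run after rte_eal_init() (step 1) with the mbuf count from step 2. It is an untested illustration against the DPDK 19.11 API: the heap name, pool name, cache size, and the single anonymous 2 MB hugepage are my own assumptions, not testpmd code, and cleanup/error reporting is trimmed.

#include <sys/mman.h>

#include <rte_eal.h>
#include <rte_malloc.h>
#include <rte_mbuf.h>

#define EXT_HEAP_NAME "app_ext_heap"       /* illustrative heap name */
#define EXT_PAGE_SZ   (2UL * 1024 * 1024)  /* one 2 MB hugepage */

static struct rte_mempool *
create_xmem_pool(unsigned int nb_mbuf)
{
	void *va;
	int heap_socket;

	/* Create a named external heap. */
	if (rte_malloc_heap_create(EXT_HEAP_NAME) != 0)
		return NULL;

	/* Map one hugepage and add it to the heap;
	 * repeat this block page by page for more memory. */
	va = mmap(NULL, EXT_PAGE_SZ, PROT_READ | PROT_WRITE,
		  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (va == MAP_FAILED)
		return NULL;
	if (rte_malloc_heap_memory_add(EXT_HEAP_NAME, va, NULL,
				       1, EXT_PAGE_SZ) != 0)
		return NULL;

	/* The external heap is reached through its own socket ID. */
	heap_socket = rte_malloc_heap_get_socket(EXT_HEAP_NAME);
	if (heap_socket < 0)
		return NULL;

	/* Carve the mbuf pool out of the external heap. */
	return rte_pktmbuf_pool_create("ext_pool", nb_mbuf, 256, 0,
				       RTE_MBUF_DEFAULT_BUF_SIZE,
				       heap_socket);
}

Note that passing NULL as the IOVA table is only safe when IOVA addresses are not needed (e.g. IOVA-as-VA mode); otherwise build a per-page IOVA array the way testpmd's setup_extmem() does.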
Instead of freeing unused virtual memory the DPDK-native way, we just create a pool of type XMEM_HUGE and add pages to it one by one, like the testpmd code does.

Please confirm that steps 1-3 are good, and that we should boot DPDK with --socket-mem 2 so that DPDK itself takes only one 2 MB page natively at startup. Is that right?
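For reference, a hypothetical boot line under those assumptions (legacy mode, a single 2 MB page reserved for DPDK itself, everything else added later through the external heap):

	./my_app -l 0-1 --legacy-mem --socket-mem 2

The binary name and core list are placeholders; --socket-mem takes megabytes per socket, so 2 reserves one 2 MB page on socket 0.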
Thanks
Umakiran
From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Date: Friday, 23 September 2022 at 6:41 PM
To: Umakiran Godavarthi (ugodavar) <ugodavar@cisco.com>
Cc: anatoly.burakov@intel.com, dev@dpdk.org, stephen@networkplumber.org
Subject: Re: DPDK 19.11.5 Legacy Memory Design Query
2022-09-23 12:12 (UTC+0000), Umakiran Godavarthi (ugodavar):
> [Uma]: Yes, I agree: if free_hp = 400 and nr_hp = 252, we expect DPDK to take only 252 pages and keep the remaining pages free in its heap.
> As you have mentioned, just boot DPDK with 1 page and add the pages we want later. Are these the steps:
>
> 1. NR_HP = 1, FREE_HP = 1
> 2. EAL init (DPDK boots up with one 2 MB page)
> 3. What is the API for adding pages later on? (rte_extmem_*; can you please give the full API details and how to call it with arguments?)
Guide:
https://doc.dpdk.org/guides-19.11/prog_guide/env_abstraction_layer.html#support-for-externally-allocated-memory
I recommend reading the entire section about DPDK memory management
since you're going to use an uncommon API
and should understand what's going on.
API (the linked function and those following it):
http://doc.dpdk.org/api-19.11/rte__malloc_8h.html#a2295623c85ba41fe5bf7dce6bf0393d6
http://doc.dpdk.org/api-19.11/rte__memory_8h.html#a653510fb0c58bf63f54708677e3a2eba
> We can do 1, 2, 3, but there is a problem: once we reduce the page count to 1, the kernel will free the huge pages entirely.
>
> So is there a way not to touch NR_HP and FREE_HP, and just pass arguments to boot DPDK with only 1 page? Please let us know, and we will add the pages we need to DPDK later.
See --socket-mem EAL option:
http://doc.dpdk.org/guides-19.11/linux_gsg/linux_eal_parameters.html#id3
> Why do you need legacy mode in the first place?
> Looks like you're painfully trying to achieve the same result
> that dynamic mode would give you automatically.
>
> [Uma]: Yes, we can't avoid the legacy memory design, since the secondary process maps page by page to the primary process, and the physical address space is the same for both processes. We have to stick to the legacy memory design for now.
Sorry, I still don't understand.
Virtual and physical addresses of DPDK memory are the same across processes
in both legacy and dynamic memory mode.