From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
To: "Umakiran Godavarthi (ugodavar)" <ugodavar@cisco.com>
Cc: "anatoly.burakov@intel.com" <anatoly.burakov@intel.com>,
"dev@dpdk.org" <dev@dpdk.org>,
"stephen@networkplumber.org" <stephen@networkplumber.org>
Subject: Re: DPDK 19.11.5 Legacy Memory Design Query
Date: Mon, 10 Oct 2022 18:15:41 +0300 [thread overview]
Message-ID: <20221010181541.2fb4923e@sovereign> (raw)
In-Reply-To: <SJ0PR11MB4846BFEEFAA89FC9695F0097DD529@SJ0PR11MB4846.namprd11.prod.outlook.com>
Hi Umakiran,
Please quote what is needed and reply below the quotes.
2022-09-26 13:06 (UTC+0000), Umakiran Godavarthi (ugodavar):
> Hi Dmitry
>
> We know that if the application does the unmap itself, the DPDK native heaps are not cleaned up for the “native: use regular DPDK memory” memory type.
>
> Could the DPDK community please provide an API to clean up the heaps, given the VA and length to be freed? Something like the one below:
>
> https://doc.dpdk.org/api/rte__memory_8h.html#afb3c1f8be29fa15953cebdad3a9cd8eb
>
> int rte_extmem_unregister(void *va_addr, size_t len)
>
> Unregister external memory chunk with DPDK.
> Note:
>   Using this API is mutually exclusive with rte_malloc family of API's.
>   This API will not perform any DMA unmapping. It is expected that user will do that themselves.
>   Before calling this function, all other processes must call rte_extmem_detach to detach from the memory area.
> Parameters:
>   va_addr - start of virtual area to unregister
>   len     - length of virtual area to unregister
> Returns:
>   - 0 on success
>   - -1 in case of error, with rte_errno set to one of the following:
>       EINVAL - one of the parameters was invalid
>       ENOENT - memory chunk was not
> If we could get such an API for the native memory type, it would be good for us, instead of changing the design and going all the way to write code for a new memory type.
>
> Please advise: can we get an API to clean up DPDK NATIVE POOLS if VA_ADDR and LEN are given?
1. Let's clarify the terms first.
Malloc heaps store the hugepages that an application can allocate.
These are internal structures of the DPDK memory manager.
Correct, unmapping memory without maintaining these structures
leads to the issue you're trying to solve.
Pools are allocated on top of some kind of memory.
Note that "native", "xmem", etc. are specific to TestPMD app.
To DPDK, "native" means memory from the DPDK allocator
(for example, pool memory can also come from outside of DPDK).
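For reference, this is roughly how a chunk of memory coming from outside of DPDK
is registered and later unregistered with the existing rte_extmem_* API quoted
above. It is only a minimal sketch: the mmap flags and the single 2MB page are
illustrative assumptions, not part of this thread.

    #include <sys/mman.h>
    #include <rte_memory.h>

    #define EXT_LEN (2UL << 20)  /* one 2MB hugepage, assumed for the example */

    static int extmem_roundtrip(void)
    {
        /* The application owns and maps this memory itself. */
        void *va = mmap(NULL, EXT_LEN, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (va == MAP_FAILED)
            return -1;

        /* Make DPDK aware of the chunk (no IOVA table passed here). */
        if (rte_extmem_register(va, EXT_LEN, NULL, 1, EXT_LEN) != 0) {
            munmap(va, EXT_LEN);
            return -1;
        }

        /* ... use the memory, e.g. populate a mempool with it ... */

        /* Teardown in reverse order: unregister first, then unmap. */
        rte_extmem_unregister(va, EXT_LEN);
        munmap(va, EXT_LEN);
        return 0;
    }

Note that this path never touches the native malloc heaps, which is why the
quoted documentation calls it mutually exclusive with the rte_malloc family.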
2. DPDK 19.11 is LTS and will be EOL soon [1].
New APIs are only added upstream.
3. A new API needs rationale.
I still don't see why you need legacy mode in your case.
A new API also would not fit well with the idea of legacy mode:
a static memory layout.
In dynamic memory mode, it would be useless, because unneeded pages
are not allocated in the first place and are freed once no longer used.
[1]: https://core.dpdk.org/roadmap/#stable
> From: Umakiran Godavarthi (ugodavar) <ugodavar@cisco.com>
> Date: Monday, 26 September 2022 at 6:25 PM
> To: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Cc: anatoly.burakov@intel.com <anatoly.burakov@intel.com>, dev@dpdk.org <dev@dpdk.org>, stephen@networkplumber.org <stephen@networkplumber.org>
> Subject: Re: DPDK 19.11.5 Legacy Memory Design Query
> Thanks @Dmitry Kozlyuk for your suggestions.
>
> I will try the following for DPDK pool creation
>
> My logic for calculating the number of MBUFs remains the same.
>
> I saw this code in DPDK testpmd where external heap memory is used:
>
> case MP_ALLOC_XMEM_HUGE:
>     {
>         int heap_socket;
>         bool huge = mp_alloc_type == MP_ALLOC_XMEM_HUGE;
>
>         if (setup_extmem(nb_mbuf, mbuf_seg_size, huge) < 0)
>             rte_exit(EXIT_FAILURE, "Could not create external memory\n");
>
>         heap_socket = rte_malloc_heap_get_socket(EXTMEM_HEAP_NAME);
>         if (heap_socket < 0)
>             rte_exit(EXIT_FAILURE, "Could not get external memory socket ID\n");
>
>         TESTPMD_LOG(INFO, "preferred mempool ops selected: %s\n",
>                     rte_mbuf_best_mempool_ops());
>         rte_mp = rte_pktmbuf_pool_create(pool_name, nb_mbuf,
>                                          mb_mempool_cache, 0, mbuf_seg_size,
>                                          heap_socket);
>         break;
>     }
>
> So I will do the same
>
>
> 1. EAL init
> 2. Calculate the number of MBUFs we need for our application
> 3. Then create the pool using MP_ALLOC_XMEM_HUGE
>
> Steps 1, 2, 3 should work, right? That should avoid heap corruption issues, right?
Yes, this snippet (and relevant functions there) is for your case.
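For completeness, here is a rough sketch of the heap setup that setup_extmem()
performs before the quoted snippet runs. The heap name, page size, and mmap
flags below are simplified assumptions; the real code lives in
app/test-pmd/testpmd.c.

    #include <sys/mman.h>
    #include <rte_malloc.h>

    #define EXT_HEAP_NAME "extmem"     /* placeholder; testpmd defines its own EXTMEM_HEAP_NAME */
    #define EXT_PAGE_SZ   (2UL << 20)  /* one 2MB hugepage, assumed */

    static int setup_extmem_sketch(void)
    {
        void *va;

        /* Create a named malloc heap owned by the application. */
        if (rte_malloc_heap_create(EXT_HEAP_NAME) != 0)
            return -1;

        /* Map a hugepage outside of DPDK... */
        va = mmap(NULL, EXT_PAGE_SZ, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (va == MAP_FAILED)
            return -1;

        /* ...and attach it to the heap, page by page (no IOVA table here). */
        if (rte_malloc_heap_memory_add(EXT_HEAP_NAME, va, EXT_PAGE_SZ,
                                       NULL, 1, EXT_PAGE_SZ) != 0) {
            munmap(va, EXT_PAGE_SZ);
            return -1;
        }

        /* The quoted snippet then fetches the heap's socket ID and
         * creates the mbuf pool on that socket. */
        return rte_malloc_heap_get_socket(EXT_HEAP_NAME);
    }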
>
> We have a total of 3 types of pool creation:
>
> /*
> * Select mempool allocation type:
> * - native: use regular DPDK memory
> * - anon: use regular DPDK memory to create mempool, but populate using
> * anonymous memory (may not be IOVA-contiguous)
> * - xmem: use externally allocated hugepage memory
> */
>
> Instead of freeing unused virtual memory the DPDK native way, we just create a pool of type XMEM_HUGE and add pages to it one by one, like the testpmd code does.
>
> Please let me know whether steps 1, 2, 3 are good. We would need to boot DPDK with socket mem,
>
> --socket-mem 2048, so that DPDK takes only 1 page natively and boots up, right?
--socket-mem is in megabytes, so it's --socket-mem 2
and 2MB hugepages must be available.
Maybe even --no-huge will suit your case,
since you effectively allocate all hugepages yourself.
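If it helps, a minimal sketch of the corresponding EAL bootstrap; the program
name and core list are placeholders, and the comments only restate the options
discussed above.

    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_common.h>
    #include <rte_debug.h>

    int main(void)
    {
        char *eal_argv[] = {
            "app", "-l", "0-1",
            "--socket-mem", "2",  /* megabytes, i.e. a single 2MB hugepage */
            /* or "--no-huge" instead, if the app maps all hugepages itself */
        };

        if (rte_eal_init(RTE_DIM(eal_argv), eal_argv) < 0)
            rte_exit(EXIT_FAILURE, "Cannot init EAL\n");

        /* ... then create the external heap and the mbuf pool as above ... */
        return 0;
    }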