From: "Burakov, Anatoly" <anatoly.burakov@intel.com>
To: Bao-Long Tran <tranbaolong@niometrics.com>,
olivier.matz@6wind.com, arybchenko@solarflare.com
Cc: dev@dpdk.org, users@dpdk.org
Subject: Re: [dpdk-users] Memory requirement calculation to create mempools with external memory
Date: Fri, 10 Apr 2020 13:13:57 +0100 [thread overview]
Message-ID: <034f6201-b381-98fd-0637-31f6e2f253e5@intel.com> (raw)
In-Reply-To: <0B83F781-A4DB-4775-95CA-E6C2A6838D72@niometrics.com>
On 04-Apr-20 5:27 AM, Bao-Long Tran wrote:
> Hi,
>
> My goal is to create mbuf pools using external memory with rte_malloc_heap_*().
> The initial implementation goes like this:
> - Create an empty heap rte_malloc_heap_create()
> - Calculate memory required
> - Provision and add the calculated single memory chunk to
> the heap rte_malloc_heap_memory_add()
> - Create mbuf pool from the heap
>
> The problematic step is the calculation step. AFAIK there's no generic way to
> know how much memory is required to create a particular mempool. The example
> I found in testpmd (see below) is quite pessimistic, as stated, which can lead
> to up to 1GB of overprovisioning if you're using 1GB hugepages.
>
> I wonder under what circumstances can I make this memory requirement
> calculation deterministic. I.e., let's say I have total control over the
> underlying hugepages, including page size, iova, alignment, contiguousness
> (both iova and va), etc, how can I know exactly how much memory to allocate?
>
> Anatoly, I don't quite understand your comment in the code below about the
> hdr_mem provision in particular. Why do we need to give hdr_mem special
> treatment? Further explanation is much appreciated.
Hi,
When you're creating a mempool, you're not just creating space for mbufs
- you're also reserving space for mempool headers and mempool chunk
areas. The way those are calculated is, IIRC, specific to the mempool
driver (a default implementation is provided), so without knowing the
exact layout of memory and how much of it those structures will use
(i.e. how many headers it will take, how big the mbuf is, etc.), and
without knowing which specific mempool driver will be used, it is
impossible to predict exact mempool memory usage.
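(As a side note, the per-object part of that overhead can be queried up
front with rte_mempool_calc_obj_size() - a minimal sketch, assuming
flags == 0 and using an illustrative elt_size variable:

        /* sketch: per-object overhead for a given element size;
         * header/trailer sizes depend on build config and flags */
        struct rte_mempool_objsz objsz;
        uint32_t total = rte_mempool_calc_obj_size(elt_size, 0, &objsz);
        /* total == objsz.header_size + objsz.elt_size + objsz.trailer_size */

The per-chunk headers and the mempool structure itself come on top of that.)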
The memchunks will come from main memory even if you're reserving
from external memory. This is an implementation detail that has to do
with the fact that memchunks are allocated using a regular rte_zmalloc()
call. Perhaps we should change that and make the rte_zmalloc() call on
the same socket the mempool sits on, but I don't know the mempool code
well enough to understand the implications of such a change.
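To illustrate the flow you've described, here is a rough, untested sketch of
setting up the heap and creating the pool on it. The names
(create_extmem_pool, ext_va, iovas) are illustrative, and it assumes you have
already mapped the hugepages and resolved their IOVAs:

#include <rte_malloc.h>
#include <rte_mbuf.h>

/* sketch: create a malloc heap backed by externally allocated hugepages
 * and build an mbuf pool on top of it; error unwinding trimmed for brevity */
static struct rte_mempool *
create_extmem_pool(const char *heap_name, void *ext_va, rte_iova_t *iovas,
                unsigned int n_pages, size_t pgsz, unsigned int nb_mbufs)
{
        int socket_id;

        if (rte_malloc_heap_create(heap_name) != 0)
                return NULL;

        /* register the external memory chunk with the new heap */
        if (rte_malloc_heap_memory_add(heap_name, ext_va, iovas,
                        n_pages, pgsz) != 0)
                return NULL;

        /* the heap is assigned a synthetic socket id - allocate from it */
        socket_id = rte_malloc_heap_get_socket(heap_name);
        if (socket_id < 0)
                return NULL;

        return rte_pktmbuf_pool_create("ext_mbuf_pool", nb_mbufs,
                        256 /* cache */, 0 /* priv size */,
                        RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
}

If a device is going to DMA into this memory, it will typically also need to
be mapped for DMA (e.g. with rte_dev_dma_map()) after it has been added to
the heap.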
>
> Thanks,
> BL
>
> /* extremely pessimistic estimation of memory required to create a mempool */
> static int
> calc_mem_size(uint32_t nb_mbufs, uint32_t mbuf_sz, size_t pgsz, size_t *out)
> {
>         unsigned int n_pages, mbuf_per_pg, leftover;
>         uint64_t total_mem, mbuf_mem, obj_sz;
>
>         /* there is no good way to predict how much space the mempool will
>          * occupy because it will allocate chunks on the fly, and some of those
>          * will come from default DPDK memory while some will come from our
>          * external memory, so just assume 128MB will be enough for everyone.
>          */
>         uint64_t hdr_mem = 128 << 20;
>
>         /* account for possible non-contiguousness */
>         obj_sz = rte_mempool_calc_obj_size(mbuf_sz, 0, NULL);
>         if (obj_sz > pgsz) {
>                 TESTPMD_LOG(ERR, "Object size is bigger than page size\n");
>                 return -1;
>         }
>
>         mbuf_per_pg = pgsz / obj_sz;
>         leftover = (nb_mbufs % mbuf_per_pg) > 0;
>         n_pages = (nb_mbufs / mbuf_per_pg) + leftover;
>
>         mbuf_mem = n_pages * pgsz;
>
>         total_mem = RTE_ALIGN(hdr_mem + mbuf_mem, pgsz);
>
>         if (total_mem > SIZE_MAX) {
>                 TESTPMD_LOG(ERR, "Memory size too big\n");
>                 return -1;
>         }
>         *out = (size_t)total_mem;
>
>         return 0;
> }
>
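For reference, the element size passed into a helper like the above would be
the full per-mbuf footprint, not just the data room. A rough usage sketch, with
the mbuf size composition and the 2MB page size being illustrative assumptions:

        size_t pool_mem;
        /* assumed footprint: mbuf header plus default data room (incl. headroom) */
        uint32_t mbuf_sz = sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE;

        if (calc_mem_size(65536, mbuf_sz, RTE_PGSIZE_2M, &pool_mem) == 0)
                printf("reserve %zu bytes for the pool\n", pool_mem);

That estimate, rounded up to the page size, is what you would then back with
hugepages and hand to rte_malloc_heap_memory_add().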
--
Thanks,
Anatoly