DPDK patches and discussions
From: "Burakov, Anatoly" <anatoly.burakov@intel.com>
To: jianmingfan <jianmingfan@126.com>, dev@dpdk.org
Cc: Jianming Fan <fanjianming@jd.com>
Subject: Re: [dpdk-dev] [PATCH v2] mem: accelerate dpdk program startup by reuse page from page cache
Date: Fri, 9 Nov 2018 12:20:59 +0000
Message-ID: <6a7fcafc-5ac7-417a-fea6-2fd33b8b6c90@intel.com>
In-Reply-To: <20181109092338.30097-1-jianmingfan@126.com>

On 09-Nov-18 9:23 AM, jianmingfan wrote:
> --- fix coding style of the previous patch
> 
> During process startup, dpdk invokes clear_hugedir() to unlink all
> hugepage files under /dev/hugepages. Then, in map_all_hugepages(),
> it invokes mmap to allocate and zero all the huge pages configured
> in /sys/kernel/mm/hugepages/xxx/nr_hugepages.
> 
> This makes process startup extremely slow when a large amount of
> hugepage memory is configured.
> 
> In our use case, we usually configure as much as 200GB of hugepages in
> our router. It takes more than 50s to clear the pages each time a dpdk
> process starts up.
> 
> To address this issue, users can turn on the --reuse-map switch. With
> it, dpdk will check the validity of the existing page cache under
> /dev/hugepages. If valid, the cache is reused rather than deleted,
> so that the OS doesn't need to zero the pages again.
> 
> However, many callers, e.g. rte_kni_alloc, rely on the OS zero-page
> behavior. To keep things working, I add a memset during
> malloc_heap_alloc(). This makes sense for the following reasons:
> 1) users often configure far more hugepage memory than the program
> actually uses. In our router, 200GB is configured, but less than 2GB
> is actually used.
> 2) dpdk users don't call heap allocation on the performance-critical
> path; they allocate memory during process bootup.
> 
> Signed-off-by: Jianming Fan <fanjianming@jd.com>
> ---
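
The clearing step described above amounts to walking the hugepage mount
and unlinking every file in it. A rough sketch of that idea (illustrative
only, not the actual clear_hugedir() source, whose real implementation is
more careful, e.g. about files still in use by other processes; the helper
name below is made up):

/*
 * Illustrative sketch: unlink every hugepage file under the given
 * mount point, so the kernel hands out freshly zeroed pages on the
 * next mmap().
 */
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

static void clear_hugepage_files(const char *hugedir)
{
    DIR *dir = opendir(hugedir);
    struct dirent *ent;
    char path[PATH_MAX];

    if (dir == NULL)
        return;
    while ((ent = readdir(dir)) != NULL) {
        if (ent->d_name[0] == '.')
            continue; /* skip "." and ".." */
        snprintf(path, sizeof(path), "%s/%s", hugedir, ent->d_name);
        unlink(path); /* drop this file's page cache */
    }
    closedir(dir);
}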

I believe this issue is better solved by actually cleaning up all of the
memory that DPDK leaves behind. We already have the rte_eal_cleanup()
call, which will deallocate any EAL-allocated memory that has been
reserved, and an exiting application should free any memory it was using
so that the memory subsystem can release it back to the system, removing
the need to clean hugepages at startup.
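
A minimal sketch of the shutdown order being suggested here (error
handling trimmed; not taken from any particular application):

#include <rte_eal.h>

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return -1;

    /* ... application work ... */

    /* deallocate EAL-reserved memory on the way out, as described above */
    rte_eal_cleanup();
    return 0;
}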

If your application does not, for example, free its mempools on exit, it 
should :) Chances are, the problem will go away. The only circumstance 
where this may not work is if you preallocated your memory using the 
-m/--socket-mem flag.
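
A sketch of what "free your mempools on exit" can look like (the pool
name and sizes below are made-up example values):

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

int main(int argc, char **argv)
{
    struct rte_mempool *mp;

    if (rte_eal_init(argc, argv) < 0)
        return -1;

    mp = rte_pktmbuf_pool_create("example_pool", 8192, 256, 0,
            RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (mp == NULL)
        return -1;

    /* ... use the pool ... */

    rte_mempool_free(mp); /* give the memory back to the DPDK heap */
    rte_eal_cleanup();    /* then let EAL release it to the system */
    return 0;
}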

-- 
Thanks,
Anatoly

Thread overview: 8+ messages
2018-11-09  7:58 [dpdk-dev] [PATCH] " jianmingfan
2018-11-09  9:23 ` [dpdk-dev] [PATCH v2] " jianmingfan
2018-11-09 12:20   ` Burakov, Anatoly [this message]
2018-11-09 14:03     ` Burakov, Anatoly
2018-11-09 16:21       ` Stephen Hemminger
2018-11-11  2:19       ` [dpdk-dev] Re: " 范建明
2018-11-12  9:04         ` Burakov, Anatoly
2018-11-11  2:22       ` [dpdk-dev] " 建明
