DPDK patches and discussions
From: 王志克 <wangzhike@jd.com>
To: "Tan, Jianfeng" <jianfeng.tan@intel.com>,
	"users@dpdk.org" <users@dpdk.org>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] long initialization of rte_eal_hugepage_init
Date: Wed, 6 Sep 2017 06:02:47 +0000	[thread overview]
Message-ID: <6DAF063A35010343823807B082E5681F1A72FE0F@mbx05.360buyAD.local> (raw)
In-Reply-To: <ED26CBA2FAD1BF48A8719AEF02201E36512B1638@SHSMSX103.ccr.corp.intel.com>

Do you mean "pagesize" when you say "size" option? I have specified the pagesize as 1G.
Also, I already use "--socket-mem" to specify that the application only needs 1G per NUMA node.

The problem is that map_all_hugepages() maps all free huge pages and only then selects the ones it needs. If the host has 500 free huge pages (each 1G) and the application only needs 1G per NUMA socket, mapping all of them is unreasonable.

My use case is OVS+DPDK. OVS+DPDK only needs 2G, and other applications (QEMU/VMs) use the remaining huge pages.

Br,
Wang Zhike


-----Original Message-----
From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com] 
Sent: Wednesday, September 06, 2017 12:36 PM
To: 王志克; users@dpdk.org; dev@dpdk.org
Subject: RE: long initialization of rte_eal_hugepage_init



> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of 王志克
> Sent: Wednesday, September 6, 2017 11:25 AM
> To: users@dpdk.org; dev@dpdk.org
> Subject: [dpdk-users] long initialization of rte_eal_hugepage_init
> 
> Hi All,
> 
> I observed that rte_eal_hugepage_init() takes quite a long time when there
> are lots of huge pages. For example, with 500 1G huge pages it takes about
> 2 minutes. That is too long, especially for the application-restart case.
> 
> If the application only needs a limited number of huge pages while the host
> has lots of them, the algorithm is not efficient. For example, we only need
> 1G of memory from each socket.
> 
> What is the proposal from the DPDK community? Any solution?

You can mount hugetlbfs with the "size" option and use the "--socket-mem" option in DPDK to restrict the memory to be used.
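For illustration, the suggested setup might look like the following. The mount point, sizes, and the testpmd invocation are hypothetical placeholders, not part of the original message; the idea is that the "size" mount option caps the mount, so EAL's hugepage scan only ever sees that much:

```shell
# Hypothetical example: give DPDK its own hugetlbfs mount capped at 2G
# of 1G pages, leaving the remaining hugepages for QEMU/VMs.
mkdir -p /mnt/huge_dpdk
mount -t hugetlbfs -o pagesize=1G,size=2G nodev /mnt/huge_dpdk

# Point EAL at that mount and take 1G per NUMA socket.
testpmd -l 0-1 -n 4 --huge-dir /mnt/huge_dpdk --socket-mem 1024,1024
```

Since the mount holds at most 2G, rte_eal_hugepage_init() has at most two 1G pages to map, regardless of how many free hugepages the host has overall.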

Thanks,
Jianfeng

> 
> Note: I tried DPDK version 16.11.
> 
> Br,
> Wang Zhike


Thread overview: 10+ messages
2017-09-06  3:24 王志克
2017-09-06  4:24 ` Stephen Hemminger
2017-09-06  6:45   ` 王志克
2017-09-06  4:35 ` Pavan Nikhilesh Bhagavatula
2017-09-06  7:37   ` Sergio Gonzalez Monroy
2017-09-06  8:59     ` 王志克
2017-09-06  4:36 ` Tan, Jianfeng
2017-09-06  6:02   ` 王志克 [this message]
2017-09-06  7:17     ` Tan, Jianfeng
2017-09-06  8:58       ` 王志克
