From: Thomas Monjalon <thomas.monjalon@6wind.com>
To: "Gonzalez Monroy, Sergio" <sergio.gonzalez.monroy@intel.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] libhugetlbfs
Date: Thu, 23 Jul 2015 10:12:21 +0200
Message-ID: <1504831.JexCQJ5PJA@xps13>
In-Reply-To: <55B09913.8040100@intel.com>

2015-07-23 08:34, Gonzalez Monroy, Sergio:
> On 22/07/2015 11:40, Thomas Monjalon wrote:
> > Sergio,
> >
> > As the maintainer of memory allocation, would you consider using
> > libhugetlbfs in DPDK for Linux?
> > It may simplify a part of our memory allocator and avoid some potential
> > bugs which would be already fixed in the dedicated lib.
> I did have a look at it a couple of months ago and I thought there were
> a few issues:
> - get_hugepage_region/get_huge_pages only allocate default-size huge pages
> (you can set a different default huge page size with environment
> variables, but there is no support for multiple sizes), plus we have no
> guarantee of physically contiguous pages.
Speaking of that, we don't always need contiguous pages.
Maybe we should take that into account when reserving memory.
Some flags such as DMA (locked physical pages that are not swappable) and
CONTIGUOUS could be considered.
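
For reference, a minimal sketch of the allocation entry points being
discussed, assuming <hugetlbfs.h> from libhugetlbfs (link with
-lhugetlbfs); the default page size can be overridden with the
HUGETLB_DEFAULT_PAGE_SIZE environment variable:

#include <stdio.h>
#include <hugetlbfs.h>

int main(void)
{
    size_t len = 4 * 1024 * 1024;

    /* Backed by default-size huge pages only: there is no per-call
     * page-size parameter, and no guarantee that the backing pages
     * are physically contiguous. */
    void *region = get_hugepage_region(len, GHR_DEFAULT);
    if (region == NULL) {
        perror("get_hugepage_region");
        return 1;
    }
    /* ... use region ... */
    free_hugepage_region(region);
    return 0;
}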
> - That leaves us with hugetlbfs_unlinked_fd/hugetlbfs_unlinked_fd_for_size.
> These APIs wouldn't simplify the current code much, just the allocation
> of the pages themselves (i.e. creating a file in the hugetlbfs mount).
> Then there is the issue with multi-process: because they return a file
> descriptor while unlinking the file, we would need some sort of
> inter-process communication to pass the descriptors to secondary processes.
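
For what it's worth, passing such a descriptor to a secondary process
would boil down to the standard SCM_RIGHTS mechanism over a Unix domain
socket; a minimal sketch (plain POSIX, not an existing DPDK or
libhugetlbfs API):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send an open file descriptor over a connected AF_UNIX socket. */
int send_fd(int sock, int fd)
{
    char dummy = 0;
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    char cbuf[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
    };
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;  /* ancillary data carries the fd */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}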
> - Not a big deal, but AFAIK it is not possible to have multiple mount
> points for the same hugepage size, and even if you do,
> hugetlbfs_find_path_for_size always returns the same path (i.e. the
> first one found in the list).
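
A minimal sketch of that limitation, assuming the libhugetlbfs prototype
const char *hugetlbfs_find_path_for_size(long page_size):

#include <stdio.h>
#include <hugetlbfs.h>

int main(void)
{
    /* Resolves to a single mount point per page size (the first one
     * found), even if several hugetlbfs mounts of that size exist. */
    const char *path = hugetlbfs_find_path_for_size(2 * 1024 * 1024);
    printf("2MB hugetlbfs mount: %s\n", path ? path : "(none)");
    return 0;
}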
> - We still need to parse /proc/self/pagemap to get the physical
> addresses of mapped hugepages.
>
> I guess that if we were to push for a new API such as
> hugetlbfs_fd_for_size, we could use it for the hugepage allocation, but
> we would still have to parse /proc/self/pagemap to get the physical
> addresses and then order those hugepages.
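
For reference, the pagemap lookup in question is roughly the following
(a sketch based on the kernel's Documentation/vm/pagemap.txt: one 64-bit
entry per base page, bit 63 = page present, bits 0-54 = page frame
number):

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

uint64_t virt_to_phys(const void *vaddr)
{
    long page_size = sysconf(_SC_PAGESIZE);
    uint64_t vpn = (uintptr_t)vaddr / page_size;
    uint64_t entry = 0;
    FILE *f = fopen("/proc/self/pagemap", "rb");

    if (f == NULL)
        return 0;
    /* Entries are indexed by virtual page number. */
    if (fseek(f, (long)(vpn * sizeof(entry)), SEEK_SET) != 0 ||
        fread(&entry, sizeof(entry), 1, f) != 1) {
        fclose(f);
        return 0;
    }
    fclose(f);
    if (!(entry & (1ULL << 63)))  /* bit 63: page present in RAM */
        return 0;
    /* bits 0-54: page frame number */
    return (entry & ((1ULL << 55) - 1)) * page_size
           + (uintptr_t)vaddr % page_size;
}

int main(void)
{
    int x = 0;
    printf("phys(&x) = 0x%llx\n", (unsigned long long)virt_to_phys(&x));
    return 0;
}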
>
> Thoughts?
Why not extend the API and push our code into this lib?
It would allow us to share the maintenance.
The same move could be done for libpciaccess.