From: "Stojaczyk, DariuszX" <dariuszx.stojaczyk@intel.com>
To: "dev@dpdk.org" <dev@dpdk.org>,
"Tan, Jianfeng" <jianfeng.tan@intel.com>,
Maxime Coquelin <maxime.coquelin@redhat.com>,
"Burakov, Anatoly" <anatoly.burakov@intel.com>,
Yuanhan Liu <yliu@fridaylinux.org>
Cc: "Harris, James R" <james.r.harris@intel.com>,
Thomas Monjalon <thomas@monjalon.net>
Subject: [dpdk-dev] virtio with 2MB hugepages - bringing back single file segments
Date: Thu, 1 Mar 2018 22:40:36 +0000
Message-ID: <FBE7E039FA50BF47A673AD0BD3CD56A83759772C@HASMSX105.ger.corp.intel.com>
Hi,
I'm trying to make a vhost-user initiator built on DPDK work with 2MB hugepages. In the initiator we have to share all memory with the host process so that it can perform DMA. DPDK currently enforces one descriptor per hugepage, and the DPDK vhost-user implementation imposes an artificial limit on the number of shared descriptors (currently 8). Because of that, all DPDK vhost-user initiators are practically limited to 1GB hugepages at the moment. We could always raise that artificial limit, but then we are bounded by sendmsg() itself, which on Linux accepts no more than 253 descriptors per message. Still, could we increase the vhost-user implementation limit to - say - 128, and bring back "single file segments" [1]?
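For reference, the descriptor-passing path I'm talking about is plain SCM_RIGHTS over the vhost-user unix socket. Below is a rough sketch (not the actual DPDK code; the socket and fd array are assumed to come from elsewhere) of sending N descriptors in one message, which is where the 253-per-message ceiling comes from:

/*
 * Sketch only, not DPDK code: pass `nfds` already-open hugepage
 * descriptors to the backend over a connected AF_UNIX socket in a
 * single sendmsg() call.  The kernel-internal SCM_MAX_FD limit
 * (253 on Linux) caps how many descriptors fit in one message.
 */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

#define MAX_FDS_PER_MSG 253  /* kernel SCM_MAX_FD, not exported to userspace */

static int send_mem_fds(int sock, const int *fds, int nfds)
{
    char dummy = 0;
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    /* union keeps the control buffer correctly aligned for cmsghdr */
    union {
        char buf[CMSG_SPACE(MAX_FDS_PER_MSG * sizeof(int))];
        struct cmsghdr align;
    } u;
    struct msghdr msg = {
        .msg_iov = &iov,
        .msg_iovlen = 1,
        .msg_control = u.buf,
        .msg_controllen = CMSG_SPACE(nfds * sizeof(int)),
    };
    struct cmsghdr *cmsg;

    if (nfds <= 0 || nfds > MAX_FDS_PER_MSG)
        return -1;

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(nfds * sizeof(int));
    memcpy(CMSG_DATA(cmsg), fds, nfds * sizeof(int));

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}

With one file per 2MB page, an initiator with a few GB of memory blows through that limit very quickly; with 1GB pages it barely matters.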
Could I send a patch series that does this? The single file segments code would go through a cleanup first - at the very least it would be exposed via a runtime option rather than #ifdefs.
I know there's an ongoing rework of the memory allocator in DPDK [2] and it includes similar single file segments functionality. However, it will probably take quite some time before it is merged, and even then the new functionality would only be available in the *new* allocator - the old one is kept unchanged, and it could use single file segments as well.
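Just to illustrate what single file segments buys us, here is a rough sketch (placeholder path and sizes, not the actual DPDK layout) of backing an entire segment with one hugetlbfs file, so only one descriptor has to be shared no matter how many 2MB pages the segment spans:

/*
 * Sketch of the single file segments idea (path and sizes are
 * placeholders): back a whole memory segment with one hugetlbfs
 * file instead of one file per 2MB page, so a single descriptor
 * covers every page in the segment.
 */
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

#define HUGEPAGE_SZ (2UL * 1024 * 1024)   /* 2MB pages */
#define NB_PAGES    512                   /* e.g. a 1GB segment */

static void *map_single_file_segment(int *out_fd)
{
    size_t len = NB_PAGES * HUGEPAGE_SZ;
    /* one file for the whole segment - hypothetical hugetlbfs path */
    int fd = open("/dev/hugepages/seg0", O_CREAT | O_RDWR, 0600);

    if (fd < 0)
        return MAP_FAILED;
    if (ftruncate(fd, len) < 0) {
        close(fd);
        return MAP_FAILED;
    }
    /* MAP_SHARED so the backend sees the same pages through this one fd */
    void *va = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (va == MAP_FAILED) {
        close(fd);
        return MAP_FAILED;
    }
    *out_fd = fd;   /* the only descriptor that needs to be shared */
    return va;
}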
Regards,
D.
[1] http://dpdk.org/dev/patchwork/patch/16042/
[2] http://dpdk.org/ml/archives/dev/2017-December/084302.html
Thread overview: 5+ messages
2018-03-01 22:40 Stojaczyk, DariuszX [this message]
2018-03-02 2:36 ` Tan, Jianfeng
2018-03-02 7:33 ` Maxime Coquelin
2018-03-02 9:03 ` Stojaczyk, DariuszX
2018-03-02 10:00 ` Maxime Coquelin