DPDK patches and discussions
From: John McNamara <john.mcnamara@intel.com>
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH v3] doc: Update doc for vhost sample
Date: Fri, 27 Mar 2015 12:21:43 +0000	[thread overview]
Message-ID: <1427458903-18639-2-git-send-email-john.mcnamara@intel.com> (raw)
In-Reply-To: <1427458903-18639-1-git-send-email-john.mcnamara@intel.com>

From: Ouyang Changchun <changchun.ouyang@intel.com>

Add documentation updates for the vhost sample app.

Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/sample_app_ug/vhost.rst | 55 ++++++++++++++++++++++++++++++++------
 1 file changed, 47 insertions(+), 8 deletions(-)

diff --git a/doc/guides/sample_app_ug/vhost.rst b/doc/guides/sample_app_ug/vhost.rst
index 4a6d434..7254578 100644
--- a/doc/guides/sample_app_ug/vhost.rst
+++ b/doc/guides/sample_app_ug/vhost.rst
@@ -1,3 +1,4 @@
+
 ..  BSD LICENSE
     Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
     All rights reserved.
@@ -640,19 +641,57 @@ To call the QEMU wrapper automatically from libvirt, the following configuration
 Common Issues
 ~~~~~~~~~~~~~
 
-**QEMU failing to allocate memory on hugetlbfs.**
+*   QEMU failing to allocate memory on hugetlbfs::
 
-file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
+       file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
 
-When running QEMU the above error implies that it has failed to allocate memory for the Virtual Machine on the hugetlbfs.
-This is typically due to insufficient hugepages being free to support the allocation request.
-The number of free hugepages can be checked as follows:
+    When running QEMU, the above error indicates that it has failed to allocate memory for the Virtual Machine on
+    the hugetlbfs. This is typically due to insufficient hugepages being free to support the allocation request.
+    The number of free hugepages can be checked as follows:
 
-.. code-block:: console
+    .. code-block:: console
+
+        user@target:cat /sys/kernel/mm/hugepages/hugepages-<pagesize>/nr_hugepages
+
+    The command above indicates how many hugepages are free to support QEMU's allocation request.
+
+*   User space VHOST works properly with a guest that uses 2M huge pages:
+
+    The guest may use either 2M or 1G huge pages; user space VHOST works properly in both cases.
+
+*   User space VHOST will not work with QEMU without the '-mem-prealloc' option:
+
+    The current implementation works properly only when the guest memory is pre-allocated, so it is required to
+    use a QEMU version (e.g. 1.6) that supports '-mem-prealloc'. The '-mem-prealloc' option must be
+    specified explicitly on the QEMU command line.
+
+*   User space VHOST will not work with a QEMU version that lacks shared memory mapping:
+
+    Shared memory mapping is mandatory for user space VHOST to work properly with the guest, as user space VHOST
+    needs to access the shared memory of the guest to receive and transmit packets. It is important to make sure
+    the QEMU version used supports shared memory mapping.
+
+*   When a VM is created via libvirt "virsh create", qemu-wrap.py spawns a new process to run "qemu-kvm". This impacts
+    "virsh destroy", which kills only the process running "qemu-wrap.py" without actually destroying the VM (it leaves
+    the "qemu-kvm" process running):
+
+    The following patch fixes this issue:
+        http://dpdk.org/ml/archives/dev/2014-June/003607.html
+
+*   In an Ubuntu environment, QEMU fails to start a new guest normally with user space VHOST because huge pages cannot
+    be allocated for the new guest:
+
+    The solution for this issue is to add "-boot c" to the QEMU command line to make sure the huge pages are
+    allocated properly; the guest should then start up normally.
+
+    Use "cat /proc/meminfo" to check whether the values of HugePages_Total and HugePages_Free change after the
+    guest starts up.
 
-    user@target:cat /sys/kernel/mm/hugepages/hugepages-<pagesize> / nr_hugepages
+*   Logging message: "eventfd_link: module verification failed: signature and/or required key missing - tainting kernel":
 
-The command above indicates how many hugepages are free to support QEMU's allocation request.
+    Ignore the above logging message. It occurs because of the new eventfd_link module, which is not a standard
+    Linux module but is necessary for the current (CUSE-based) user space VHOST implementation to communicate with
+    the guest.
 
 Running DPDK in the Virtual Machine
 -----------------------------------
-- 
1.8.1.4
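As an aside for readers trying the hugepage checks described above: a minimal, self-contained sketch of extracting the HugePages_Free counter from /proc/meminfo-style output. The helper name is hypothetical, not part of the patch:

```shell
# Hypothetical helper: extract the HugePages_Free count from
# meminfo-formatted text on stdin (second field of the matching line).
hugepages_free() {
  awk '/^HugePages_Free:/ {print $2}'
}

# On a real system:  hugepages_free < /proc/meminfo
# Demo with sample input:
printf 'HugePages_Total:     512\nHugePages_Free:      384\n' | hugepages_free
# prints: 384
```

The same awk pattern works directly against /proc/meminfo on any Linux host to confirm hugepages were actually reserved before launching a guest.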

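The '-mem-prealloc' and '-boot c' requirements discussed in the patch can be combined into a single guest launch line. This is only an illustrative sketch, not a command from the patch; the memory size, hugetlbfs mount point, and image name are placeholder assumptions:

```shell
# Hypothetical QEMU launch: back guest RAM with hugetlbfs ('-mem-path'),
# pre-allocate it ('-mem-prealloc') as user space VHOST requires, and
# force booting from disk ('-boot c') as the Ubuntu workaround suggests.
# guest.img is a placeholder image name.
qemu-system-x86_64 \
    -m 1024 \
    -mem-path /dev/hugepages \
    -mem-prealloc \
    -boot c \
    -drive file=guest.img
```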

Thread overview: 7+ messages
2015-03-02  8:57 [dpdk-dev] [PATCH] " Ouyang Changchun
2015-03-03  1:50 ` [dpdk-dev] [PATCH v2] " Ouyang Changchun
2015-03-27 12:21 ` [dpdk-dev] [PATCH v3] " John McNamara
2015-03-27 12:21   ` John McNamara [this message]
2015-03-27 13:20 ` [dpdk-dev] [PATCH v4] doc: " John McNamara
2015-03-27 13:55   ` Butler, Siobhan A
2015-03-31  1:14     ` Thomas Monjalon
