DPDK patches and discussions
From: Maciej Grochowski <maciej.grochowski@codilime.com>
To: dev@dpdk.org
Subject: [dpdk-dev] Issues with example/vhost with running VM
Date: Wed, 13 May 2015 17:00:07 +0200
Message-ID: <CALPCkO_oLvyrt13zcuXeCp+gULKiPyX7Lyxcp6o371JzSkcbCw@mail.gmail.com>

Hello, I am trying to set up a VM-to-VM benchmark on my Ubuntu (14.04) based
platform.

I compiled DPDK to run the vhost example app with the following flags:

CONFIG_RTE_LIBRTE_VHOST=y
CONFIG_RTE_LIBRTE_VHOST_USER=y
CONFIG_RTE_LIBRTE_VHOST_DEBUG=n
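
For reference, I set these by editing config/common_linuxapp and rebuilding.
A rough sketch of my build steps (assuming the standard
x86_64-native-linuxapp-gcc target; exact paths may differ on your tree):

# after setting the flags above in config/common_linuxapp:
make install T=x86_64-native-linuxapp-gcc
# build the example against the freshly built SDK
make -C examples/vhost RTE_SDK=$(pwd) RTE_TARGET=x86_64-native-linuxapp-gcc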


Then I ran the vhost app as described in the documentation:

./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- -p 0x1
--dev-basename usvhost
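
For completeness, I reserved and mounted the hugepages beforehand, roughly
like this (a sketch; the sysfs path assumes 2 MB pages, and the page count
matches the numbers further down):

# assumes 2 MB hugepages; 2479 matches HugePages_Total below
echo 2479 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge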

Then I tried to start a KVM VM:

kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm1 -cpu host \
  -smp 2 -mem-path /mnt/huge -mem-prealloc \
  -hda /home/ubuntu/qemu/debian_squeeze2_amd64.qcow2 -m 4096 \
  -chardev socket,id=char1,path=/home/ubuntu/dpdk/examples/vhost/usvhost \
  -netdev type=vhost-user,id=hostnet1,chardev=char1 \
  -device virtio-net-pci,netdev=hostnet1,id=net1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
  -chardev socket,id=char2,path=/home/ubuntu/dpdk/examples/vhost/usvhost \
  -netdev type=vhost-user,id=hostnet2,chardev=char2 \
  -device virtio-net-pci,netdev=hostnet2,id=net2,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off

But this gives me an error:

qemu-system-x86_64: -netdev type=vhost-user,id=hostnet1,chardev=char1:
chardev "char1" went up
qemu-system-x86_64: unable to map backing store for hugepages: Cannot
allocate memory
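
If I follow the accounting correctly, -m 4096 together with -mem-path means
qemu has to back all 4096 MB of guest RAM with hugepages, i.e. it needs
4096 MB / 2 MB = 2048 free pages, and as shown below there are none left
once vhost-switch is running.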


In the vhost app logs I can see:

VHOST_DATA: Procesing on Core 1 started
VHOST_DATA: Procesing on Core 2 started
VHOST_DATA: Procesing on Core 3 started
VHOST_CONFIG: socket created, fd:25
VHOST_CONFIG: bind to usvhost
VHOST_CONFIG: new virtio connection is 26
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:27
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:28
VHOST_CONFIG: recvmsg failed
VHOST_CONFIG: vhost peer closed


So this looks like a hugepage memory problem; the "recvmsg failed" and
"vhost peer closed" lines above presumably just reflect qemu exiting after
the failed allocation.
On my machine the hugepage size is 2048 kB, and I can reserve 2479 pages.

Before I run vhost, "cat /proc/meminfo | grep Huge" shows:

AnonHugePages:      4096 kB
HugePages_Total:    2479
HugePages_Free:     2479
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

And while vhost is running:

AnonHugePages:      4096 kB
HugePages_Total:    2479
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

So it seems there are no free hugepages left for my VM. And this appears to
be independent of how much I reserve: whether I reserve 1k, 2k, or 2.5k
pages, example/vhost always grabs all of the memory.
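
One workaround I plan to try (a sketch, assuming the EAL -m option in this
DPDK version caps the application's hugepage allocation as documented) is to
limit how much memory vhost-switch takes, leaving the rest for qemu:

# cap EAL hugepage allocation to 512 MB (-m is in megabytes)
./build/app/vhost-switch -c f -n 4 -m 512 --huge-dir /mnt/huge -- -p 0x1 \
  --dev-basename usvhost

With the app capped at 512 MB (256 pages), roughly 2479 - 256 = 2223 pages
should stay free, which covers the 2048 pages the VM needs. Whether 512 MB
is enough for the app's mbuf pools is something I would still have to
verify; reducing the VM's -m would be the other knob.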

Any help will be greatly appreciated.
