DPDK patches and discussions
* [dpdk-dev]  Issues with example/vhost with running VM
@ 2015-05-13 15:00 Maciej Grochowski
  2015-05-13 17:53 ` Xie, Huawei
  0 siblings, 1 reply; 3+ messages in thread
From: Maciej Grochowski @ 2015-05-13 15:00 UTC (permalink / raw)
  To: dev

Hello, I am trying to create a VM-to-VM benchmark on my Ubuntu 14.04 based
platform.

I compiled DPDK to run the vhost example app with the following flags:

CONFIG_RTE_LIBRTE_VHOST=y
CONFIG_RTE_LIBRTE_VHOST_USER=y
CONFIG_RTE_LIBRTE_VHOST_DEBUG=n


Then I ran the vhost app as described in the documentation:

./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- -p 0x1
--dev-basename usvhost

Then I tried to start a KVM VM:

kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm1 -cpu host
-smp 2 -mem-path /mnt/huge -mem-prealloc \
-hda /home/ubuntu/qemu/debian_squeeze2_amd64.qcow2 -m 4096  \
-chardev socket,id=char1,path=/home/ubuntu/dpdk/examples/vhost/usvhost \
-netdev type=vhost-user,id=hostnet1,chardev=char1 \
-device virtio-net-pci,netdev=hostnet1,id=net1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
-chardev socket,id=char2,path=/home/ubuntu/dpdk/examples/vhost/usvhost \
-netdev type=vhost-user,id=hostnet2,chardev=char2 \
-device virtio-net-pci,netdev=hostnet2,id=net2,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off

but this gives me an error:

qemu-system-x86_64: -netdev type=vhost-user,id=hostnet1,chardev=char1:
chardev "char1" went up
qemu-system-x86_64: unable to map backing store for hugepages: Cannot
allocate memory


In the vhost app logs I can see:

VHOST_DATA: Procesing on Core 1 started
VHOST_DATA: Procesing on Core 2 started
VHOST_DATA: Procesing on Core 3 started
VHOST_CONFIG: socket created, fd:25
VHOST_CONFIG: bind to usvhost
VHOST_CONFIG: new virtio connection is 26
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:27
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:28
VHOST_CONFIG: recvmsg failed
VHOST_CONFIG: vhost peer closed


So this looks like a hugepage memory problem.
My machine uses 2048 kB hugepages, and I can allocate 2479 of them.

Before I run vhost, "cat /proc/meminfo | grep Huge" shows:

AnonHugePages:      4096 kB
HugePages_Total:    2479
HugePages_Free:     2479
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

and while running vhost:

AnonHugePages:      4096 kB
HugePages_Total:    2479
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

So it looks like there are no free hugepages left for my VM. And this seems
to be independent of how much memory I reserve: whether I reserve 1k, 2k, or
2.5k pages, example/vhost always takes all of it.
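As a sanity check, the arithmetic alone seems to explain the failure; here is a small sketch using only the numbers reported above (the page size, page count, and the guest's -m value):

```shell
# Back-of-envelope hugepage budget, using the numbers from this report.
page_kb=2048      # Hugepagesize
total_pages=2479  # HugePages_Total
vm_mem_mb=4096    # the guest's -m 4096

vm_pages=$(( vm_mem_mb * 1024 / page_kb ))
echo "guest alone needs $vm_pages hugepages"                 # 2048
echo "left over if vhost took none: $(( total_pages - vm_pages ))"
```

So the guest alone would need 2048 of the 2479 pages; if vhost-switch pins all 2479 first (HugePages_Free drops to 0 above), QEMU's hugepage mmap has nothing left and fails exactly as shown.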

Any help will be greatly appreciated


* Re: [dpdk-dev] Issues with example/vhost with running VM
  2015-05-13 15:00 [dpdk-dev] Issues with example/vhost with running VM Maciej Grochowski
@ 2015-05-13 17:53 ` Xie, Huawei
  2015-05-14 14:16   ` Maciej Grochowski
  0 siblings, 1 reply; 3+ messages in thread
From: Xie, Huawei @ 2015-05-13 17:53 UTC (permalink / raw)
  To: Maciej Grochowski, dev

Try --socket-mem or -m 2048 to limit the vhost switch's memory consumption. Note that the vswitch requires several GB of memory due to an issue in the example, so try allocating more huge pages.
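In concrete terms, that advice can be sketched as follows (the sizes are illustrative, not verified against this exact setup, and the commented commands need root):

```shell
# Size the hugepage pool for both the switch and one guest, with 2 MB pages.
switch_mb=2048   # cap vhost-switch's memory with --socket-mem (or -m)
guest_mb=1024    # the guest's -m value
page_kb=2048

pages=$(( (switch_mb + guest_mb) * 1024 / page_kb ))
echo "need at least $pages hugepages (plus headroom)"

# Then, as root (illustrative commands):
#   echo 2048 > /proc/sys/vm/nr_hugepages
#   mount -t hugetlbfs nodev /mnt/huge
#   ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge \
#       --socket-mem $switch_mb -- -p 0x1 --dev-basename usvhost
```

The point is to leave the switch a fixed slice of the pool so the guest's file-backed hugepage allocation can still succeed.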



* Re: [dpdk-dev] Issues with example/vhost with running VM
  2015-05-13 17:53 ` Xie, Huawei
@ 2015-05-14 14:16   ` Maciej Grochowski
  0 siblings, 0 replies; 3+ messages in thread
From: Maciej Grochowski @ 2015-05-14 14:16 UTC (permalink / raw)
  To: Xie, Huawei; +Cc: dev

Thank you, Xie, for the reply.

When I run vhost with -m 2048 it doesn't start; with -m 3000, and the VM
with 1024, I get a segfault (I counted the hugepages and had only 3 free):

VHOST_CONFIG: bind to usvhost
VHOST_CONFIG: new virtio connection is 26
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: new virtio connection is 27
VHOST_CONFIG: new device, handle is 1
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:28
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:29
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:30
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:31
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:32
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:28
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: mapped region 0 fd:29 to 0x2aaaaac00000 sz:0xa0000 off:0x0
VHOST_CONFIG: mapped region 1 fd:33 to 0xffffffffffffffff sz:0x50000000
off:0xc0000
VHOST_CONFIG: mmap qemu guest failed.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
Segmentation fault (core dumped)

While testing different configurations I finally got a working setup with -m 3584:

./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge --socket-mem 3712 -- -p 0x1 --dev-basename usvhost --vm2vm 1 --stats 9

so I can run vhost-switch with 3584 * 2048 kB hugepages.

With this vhost-user setup I ran two KVM machines with the following parameters:

kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm1 -cpu host
-smp 2
-hda /home/ubuntu/qemu/debian_squeeze2_amd64.qcow2 -m 1024 -mem-path
/mnt/huge -mem-prealloc
-chardev socket,id=char1,path=/home/ubuntu/dpdk/examples/vhost/usvhost
-netdev type=vhost-user,id=hostnet1,chardev=char1
-device virtio-net-pci,netdev=hostnet1,id=net1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
-chardev socket,id=char2,path=/home/ubuntu/dpdk/examples/vhost/usvhost
-netdev type=vhost-user,id=hostnet2,chardev=char2
-device virtio-net-pci,netdev=hostnet2,id=net2,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off

With this I got:

...
VHOST_CONFIG: mapped region 0 fd:31 to 0x2aaabae00000 sz:0xa0000 off:0x0
VHOST_CONFIG: mapped region 1 fd:37 to 0x2aaabb000000 sz:0x10000000
off:0xc0000
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:38
VHOST_CONFIG: virtio isn't ready for processing.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:39
VHOST_CONFIG: virtio is now ready for processing.
VHOST_DATA: (1) Device has been added to data core 2


So everything is looking OK. Thank you, Xie.

But I found another issue; maybe it is something trivial. Using the options
--vm2vm 1 (or 2) --stats 9, it seems that there is no VM-to-VM connectivity.
I set the IPs manually for eth0 and eth1:

on VM 1:
ifconfig eth0 192.168.0.100 netmask 255.255.255.0 up
ifconfig eth1 192.168.1.101 netmask 255.255.255.0 up

on VM 2:
ifconfig eth0 192.168.1.200 netmask 255.255.255.0 up
ifconfig eth1 192.168.0.202 netmask 255.255.255.0 up

And I can't ping between any of the IPs/interfaces; moreover, the stats show:

Device statistics ====================================
Statistics for device 0 ------------------------------
TX total:               0
TX dropped:             0
TX successful:          0
RX total:               0
RX dropped:             0
RX successful:          0
Statistics for device 1 ------------------------------
TX total:               0
TX dropped:             0
TX successful:          0
RX total:               0
RX dropped:             0
RX successful:          0
Statistics for device 2 ------------------------------
TX total:               0
TX dropped:             0
TX successful:          0
RX total:               0
RX dropped:             0
RX successful:          0
Statistics for device 3 ------------------------------
TX total:               0
TX dropped:             0
TX successful:          0
RX total:               0
RX dropped:             0
RX successful:          0
======================================================

So it seems that no packet ever leaves my VMs; the ARP table is also empty
on each VM.

Do you have any idea what could be wrong in this VM-to-VM configuration?
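Two things that may be worth checking, though both are assumptions on my part and not verified on this setup. First, in the addressing above, VM1's eth0 (192.168.0.100) pairs with VM2's eth1 (192.168.0.202), and VM1's eth1 (192.168.1.101) with VM2's eth0 (192.168.1.200), so pings must stay within those pairs. Second, since the sample switch forwards on learned MAC addresses, ARP broadcasts may never reach the peer VM, in which case static ARP entries in each guest are worth trying. A sketch with placeholder MACs (substitute each peer NIC's real MAC from ifconfig):

```shell
# On VM1: reach VM2's eth1, which shares the 192.168.0.0/24 subnet with eth0.
arp -s 192.168.0.202 52:54:00:00:00:02   # placeholder: VM2 eth1's real MAC
ping -c 3 192.168.0.202

# On VM2: the reverse entry for VM1's eth0.
arp -s 192.168.0.100 52:54:00:00:00:01   # placeholder: VM1 eth0's real MAC
```

If the pings start working with static ARP in place, the problem is broadcast handling in the switch rather than the data path itself.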


On Wed, May 13, 2015 at 7:53 PM, Xie, Huawei <huawei.xie@intel.com> wrote:

> Try --socket-mem or -m 2048 to limit the vhost switch's memory
> consumption, note that vswitch requires several GB memory due to some issue
> in the example, so try allocating more huges pages.

