From: "Gaohaifeng (A)" <gaohaifeng.gao@huawei.com>
To: Maciej Grochowski <maciej.grochowski@codilime.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: [dpdk-dev] FW: Vhost user no connection vm2vm
Date: Thu, 21 May 2015 09:12:27 +0000 [thread overview]
Message-ID: <47CEA9C0E570484FBF22EF0D7EBCE5B534AC4253@szxema505-mbs.china.huawei.com> (raw)
In-Reply-To: <CALPCkO9Y1dYbLJeFvHwuy12U=4pniRD5ckO=+oRaawVaqu1V=Q@mail.gmail.com>
Hi Maciej,
Did you solve your problem? I have hit the same problem as in your case, and I found that avail_idx (in the rte_vhost_dequeue_burst function) is always zero, although I do send packets in the VM.
Thanks.
> Hello, I have a strange issue with the examples/vhost app.
>
> I compiled DPDK to run the vhost example app with the following flags:
>
> CONFIG_RTE_LIBRTE_VHOST=y
> CONFIG_RTE_LIBRTE_VHOST_USER=y
> CONFIG_RTE_LIBRTE_VHOST_DEBUG=n
>
> then I ran the vhost app as per the documentation:
>
> ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge --socket-mem 3712 \
>     -- -p 0x1 --dev-basename usvhost --vm2vm 1 --stats 9
>
> I use this odd --socket-mem 3712 value because of the physical memory
> limit on the device. With this vhost-user socket I run two KVM machines
> with the following parameters:
>
> kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm1 \
>     -cpu host -smp 2 -hda /home/ubuntu/qemu/debian_squeeze2_amd64.qcow2 \
>     -m 1024 -mem-path /mnt/huge -mem-prealloc \
>     -chardev socket,id=char1,path=/home/ubuntu/dpdk/examples/vhost/usvhost \
>     -netdev type=vhost-user,id=hostnet1,chardev=char1 \
>     -device virtio-net-pci,netdev=hostnet1,id=net1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
>     -chardev socket,id=char2,path=/home/ubuntu/dpdk/examples/vhost/usvhost \
>     -netdev type=vhost-user,id=hostnet2,chardev=char2 \
>     -device virtio-net-pci,netdev=hostnet2,id=net2,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
>
> After starting KVM, virtio initializes correctly (logs from the vhost app below):
>
> VHOST_CONFIG: mapped region 0 fd:31 to 0x2aaabae00000 sz:0xa0000 off:0x0
> VHOST_CONFIG: mapped region 1 fd:37 to 0x2aaabb000000 sz:0x10000000 off:0xc0000
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
> VHOST_CONFIG: vring kick idx:0 file:38
> VHOST_CONFIG: virtio isn't ready for processing.
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
> VHOST_CONFIG: vring kick idx:1 file:39
> VHOST_CONFIG: virtio is now ready for processing.
> VHOST_DATA: (1) Device has been added to data core 2
>
> So everything looks good.
>
> Maybe it is something trivial, but with the options --vm2vm 1 (or 2)
> and --stats 9 there seems to be no VM-to-VM connectivity. I set the IP
> addresses manually for eth0 and eth1:
>
> on VM 1:
> ifconfig eth0 192.168.0.100 netmask 255.255.255.0 up
> ifconfig eth1 192.168.1.101 netmask 255.255.255.0 up
>
> on VM 2:
> ifconfig eth0 192.168.1.200 netmask 255.255.255.0 up
> ifconfig eth1 192.168.0.202 netmask 255.255.255.0 up
>
> I noticed that the vhost app uses unidirectional RX/TX queues, so I
> tried to ping from VM1 to VM2 over both interfaces:
>
> ping -I eth0 192.168.1.200
> ping -I eth1 192.168.1.200
> ping -I eth0 192.168.0.202
> ping -I eth1 192.168.0.202
>
> On VM2, running tcpdump on both interfaces, I did not see any ICMP
> requests or other traffic.
>
> I cannot ping between any of the IPs/interfaces; moreover, the stats show:
>
> Device statistics ====================================
> Statistics for device 0 ------------------------------
> TX total: 0
> TX dropped: 0
> TX successful: 0
> RX total: 0
> RX dropped: 0
> RX successful: 0
> Statistics for device 1 ------------------------------
> TX total: 0
> TX dropped: 0
> TX successful: 0
> RX total: 0
> RX dropped: 0
> RX successful: 0
> Statistics for device 2 ------------------------------
> TX total: 0
> TX dropped: 0
> TX successful: 0
> RX total: 0
> RX dropped: 0
> RX successful: 0
> Statistics for device 3 ------------------------------
> TX total: 0
> TX dropped: 0
> TX successful: 0
> RX total: 0
> RX dropped: 0
> RX successful: 0
> ======================================================
>
> So it seems that no packet ever leaves the VM; the ARP table is also
> empty on each VM.
Thread overview: 13+ messages
2015-05-15 10:15 [dpdk-dev] " Maciej Grochowski
2015-05-17 14:41 ` Ouyang, Changchun
2015-05-21 9:12 ` Gaohaifeng (A) [this message]
2015-05-22 8:05 ` [dpdk-dev] FW: " Maciej Grochowski
2015-05-22 8:26 ` Ouyang, Changchun
2015-05-22 8:54 ` [dpdk-dev] " Gaohaifeng (A)
2015-05-22 9:28 ` Maciej Grochowski
2015-05-22 9:58 ` Tetsuya Mukawa
2015-05-22 10:04 ` Maciej Grochowski
2015-05-22 10:54 ` Andriy Berestovskyy
2015-05-22 17:59 ` Maciej Grochowski
2015-05-25 4:15 ` Gaohaifeng (A)
2015-05-22 9:27 ` [dpdk-dev] FW: " Luke Gorrie