From: Maciej Grochowski
To: dev@dpdk.org
Date: Fri, 15 May 2015 12:15:17 +0200
Subject: [dpdk-dev] Vhost user no connection vm2vm

Hello,

I have a strange issue with the examples/vhost app.

I compiled DPDK to run the vhost example app with the following flags:

CONFIG_RTE_LIBRTE_VHOST=y
CONFIG_RTE_LIBRTE_VHOST_USER=y
CONFIG_RTE_LIBRTE_VHOST_DEBUG=n

Then I ran the vhost app as described in the documentation:

./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge --socket-mem 3712 \
    -- -p 0x1 --dev-basename usvhost --vm2vm 1 --stats 9

(I use the unusual value --socket-mem 3712 because of the physical memory limit on the device.)

With this vhost-user socket I started two KVM machines with the following parameters:

kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm1 -cpu host -smp 2 \
    -hda /home/ubuntu/qemu/debian_squeeze2_amd64.qcow2 -m 1024 -mem-path /mnt/huge -mem-prealloc \
    -chardev socket,id=char1,path=/home/ubuntu/dpdk/examples/vhost/usvhost \
    -netdev type=vhost-user,id=hostnet1,chardev=char1 \
    -device virtio-net-pci,netdev=hostnet1,id=net1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
    -chardev socket,id=char2,path=/home/ubuntu/dpdk/examples/vhost/usvhost \
    -netdev type=vhost-user,id=hostnet2,chardev=char2 \
    -device virtio-net-pci,netdev=hostnet2,id=net2,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
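For reference, a quick way to double-check on the host that the hugepage mount and the vhost-user socket used above are in place (illustrative commands only; the paths are taken from the command lines above):

mount | grep /mnt/huge                              # hugetlbfs must be mounted here for --huge-dir / -mem-path
ls -l /home/ubuntu/dpdk/examples/vhost/usvhost      # socket created by vhost-switch (--dev-basename usvhost)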
After starting KVM, virtio initializes correctly (logs from the vhost app below):

...
VHOST_CONFIG: mapped region 0 fd:31 to 0x2aaabae00000 sz:0xa0000 off:0x0
VHOST_CONFIG: mapped region 1 fd:37 to 0x2aaabb000000 sz:0x10000000 off:0xc0000
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:38
VHOST_CONFIG: virtio isn't ready for processing.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:39
VHOST_CONFIG: virtio is now ready for processing.
VHOST_DATA: (1) Device has been added to data core 2

So everything looks good.

Maybe it is something trivial, but with the options --vm2vm 1 (or 2) and --stats 9 it seems that there is no VM-to-VM connectivity.

I set the IP addresses manually for eth0 and eth1:

on VM 1:
ifconfig eth0 192.168.0.100 netmask 255.255.255.0 up
ifconfig eth1 192.168.1.101 netmask 255.255.255.0 up

on VM 2:
ifconfig eth0 192.168.1.200 netmask 255.255.255.0 up
ifconfig eth1 192.168.0.202 netmask 255.255.255.0 up

I noticed that the vhost app uses one-directional rx/tx queues, so I tried to ping from VM1 to VM2 using both interfaces:

ping -I eth0 192.168.1.200
ping -I eth1 192.168.1.200
ping -I eth0 192.168.0.202
ping -I eth1 192.168.0.202

Running tcpdump on both interfaces on VM2, I did not see any ICMP requests or any other traffic.

I cannot ping between any IP/interface combination, and moreover the stats show:

Device statistics ====================================
Statistics for device 0 ------------------------------
TX total:      0
TX dropped:    0
TX successful: 0
RX total:      0
RX dropped:    0
RX successful: 0
Statistics for device 1 ------------------------------
TX total:      0
TX dropped:    0
TX successful: 0
RX total:      0
RX dropped:    0
RX successful: 0
Statistics for device 2 ------------------------------
TX total:      0
TX dropped:    0
TX successful: 0
RX total:      0
RX dropped:    0
RX successful: 0
Statistics for device 3 ------------------------------
TX total:      0
TX dropped:    0
TX successful: 0
RX total:      0
RX dropped:    0
RX successful: 0
======================================================

So it seems that no packet leaves my VMs. The ARP table is also empty on each VM.

ifconfig -a shows that no packets crossed eth0 or eth1, which I used for the pings; everything goes through the local loopback instead:

eth0      Link encap:Ethernet  HWaddr 52:54:00:12:34:56
          inet addr:192.168.0.200  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe12:3456/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth1      Link encap:Ethernet  HWaddr 52:54:00:12:34:57
          inet addr:192.168.1.202  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe12:3457/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:53 errors:0 dropped:0 overruns:0 frame:0
          TX packets:53 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5936 (5.7 KiB)  TX bytes:5936 (5.7 KiB)

Do you have any idea what could be wrong with the VM-to-VM configuration? Any help would be appreciated.
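PS: For completeness, this is roughly how I captured on the VM2 side and checked the ARP tables while the pings were running on VM1 (a sketch of the commands, not a verbatim transcript; interface names as above):

# on VM2, one capture per interface, while pinging from VM1
tcpdump -n -e -i eth0
tcpdump -n -e -i eth1

# on both VMs
arp -n    # comes back empty on each guest

Neither capture shows a single ARP or ICMP frame arriving.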