From: Yinan
To: dts@dpdk.org
Cc: Wang Yinan
Date: Fri, 21 Feb 2020 03:21:27 +0000
Message-Id: <20200221032127.113879-1-yinan.wang@intel.com>
X-Mailer: git-send-email 2.17.1
Subject: [dts] [PATCH v1] test_plans: move cases to pvp_vhost_user_reconnect_test_plan.rst

From: Wang Yinan

Signed-off-by: Wang Yinan
---
 ...ed_ring_vhost_user_reconnect_test_plan.rst | 387 ------------------
 1 file changed, 387 deletions(-)
 delete mode 100644 test_plans/pvp_packed_ring_vhost_user_reconnect_test_plan.rst

diff --git a/test_plans/pvp_packed_ring_vhost_user_reconnect_test_plan.rst b/test_plans/pvp_packed_ring_vhost_user_reconnect_test_plan.rst
deleted file mode 100644
index eca8abe..0000000
--- a/test_plans/pvp_packed_ring_vhost_user_reconnect_test_plan.rst
+++ /dev/null
@@ -1,387 +0,0 @@

.. Copyright (c) <2019>, Intel Corporation
   All rights reserved.

   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions
   are met:

   - Redistributions of source code must retain the above copyright
     notice, this list of conditions and the following disclaimer.

   - Redistributions in binary form must reproduce the above copyright
     notice, this list of conditions and the following disclaimer in
     the documentation and/or other materials provided with the
     distribution.

   - Neither the name of Intel Corporation nor the names of its
     contributors may be used to endorse or promote products derived
     from this software without specific prior written permission.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
   OF THE POSSIBILITY OF SUCH DAMAGE.
===================================
Packed ring pvp reconnect test plan
===================================

Description
===========

Vhost-user uses Unix domain sockets for passing messages. This means the DPDK vhost-user implementation has two options:

* DPDK vhost-user acts as the server:
  DPDK creates a Unix domain socket server file and listens for connections from the frontend.
  Note that this is the default mode, and the only mode before DPDK v16.07.

* DPDK vhost-user acts as the client:
  Unlike server mode, this mode doesn't create the socket file; it just tries to connect to the server (which is then responsible for creating the file).
  When the DPDK vhost-user application restarts, it tries to connect to the server again. This is how the "reconnect" feature works.
  When DPDK vhost-user restarts after a normal or abnormal exit (such as a crash), client mode allows DPDK to establish the connection again. Note
  that QEMU v4.2.0 or above is required for this reconnect feature.
  Also, when DPDK vhost-user acts as the client, it keeps trying to reconnect to the server (QEMU) until it succeeds. This is useful in two cases:

  * When QEMU is not started yet.
  * When QEMU restarts (for example, due to a guest OS reboot).
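All of the test cases below assume that hugepages are mounted at /mnt/huge and that the port under test is bound to igb_uio before testpmd is launched. A minimal host setup sketch; the hugepage count, the igb_uio module path, and the PCI address 0000:18:00.0 are placeholders for the local environment::

    # reserve 2 MB hugepages and mount them where the commands below expect them
    echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    mkdir -p /mnt/huge
    mount -t hugetlbfs nodev /mnt/huge

    # load igb_uio and bind the NIC under test to it
    modprobe uio
    insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
    ./usertools/dpdk-devbind.py --status
    ./usertools/dpdk-devbind.py --bind=igb_uio 0000:18:00.0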
Test Case 1: vhost-user/virtio-pmd pvp reconnect from vhost-user
================================================================
Flow: TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG

1. Bind one port to igb_uio, then launch vhost in client mode with the commands below::

    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
    testpmd>set fwd mac
    testpmd>start

2. Start the VM with one virtio device, and set QEMU as the server::

    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
    -vnc :10

3. In the VM, bind the virtio device to igb_uio and run testpmd::

    ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

4. Send packets from the packet generator and check that packets are RX/TX'd by virtio-pmd::

    testpmd>show port stats all

5. On the host, quit vhost-user, then re-launch it with the command below::

    testpmd>quit
    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
    testpmd>set fwd mac
    testpmd>start

6. Check that the reconnection works: keep sending packets from the packet generator and check that packets are still RX/TX'd by virtio-pmd (a socket-level check is also sketched after Test Case 3)::

    testpmd>show port stats all

Test Case 2: vhost-user/virtio-pmd pvp reconnect from VM
========================================================
Flow: TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG

1. Bind one port to igb_uio, then launch vhost in client mode with the commands below::

    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
    testpmd>set fwd mac
    testpmd>start

2. Start the VM with one virtio device, and set QEMU as the server::

    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
    -vnc :10

3. In the VM, bind the virtio device to igb_uio and run testpmd::

    ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

4. Send packets from the packet generator and check that packets are RX/TX'd by virtio-pmd::

    testpmd>show port stats all

5. Reboot the VM, rerun steps 2-4, and check that the reconnection is established.

Test Case 3: vhost-user/virtio-pmd pvp reconnect stability test
===============================================================
Flow: TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG

The steps are the same as Test Case 1, except steps 5 and 6:

5. Quit and re-launch vhost-user; repeat 5-8 times, checking each time that the reconnect works and the traffic continues.

6. Reboot and re-launch the VM; repeat 3-5 times, checking each time that the reconnect works and the traffic continues.
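In addition to the port statistics, the reconnect checks in the test cases above can be confirmed at the socket level on the host. A sketch, assuming the vhost-net socket path used in the commands above::

    # QEMU, acting as the server, should stay listening on the socket
    ss -xl | grep vhost-net

    # after vhost-user (the client) is re-launched, an established
    # connection on the same path should reappear
    ss -x | grep vhost-net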
Test Case 4: vhost-user/virtio-pmd pvp with multi VMs reconnect from vhost-user
===============================================================================

1. Bind one port to igb_uio, then launch vhost with the command below::

    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

2. Launch VM1 and VM2::

    qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 12 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
    -vnc :10

    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 12 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6003-:22 \
    -chardev socket,id=char0,path=./vhost-net1,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
    -vnc :11

3. In VM1, bind virtio1 to igb_uio and run testpmd (an in-guest binding sketch follows this test case)::

    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

4. In VM2, bind virtio2 to igb_uio and run testpmd::

    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

5. Send packets from the packet generator and check that packets are RX/TX'd by the two virtio-pmds in the two VMs::

    testpmd>show port stats all

6. On the host, quit vhost-user, then re-launch it with the command below::

    testpmd>quit
    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

7. Check that the reconnection works: keep sending packets from the packet generator and check that packets are still RX/TX'd by the two virtio-pmds in the two VMs::

    testpmd>show port stats all
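Steps 3 and 4 above assume the virtio device inside each VM has already been bound to igb_uio. A minimal in-guest sketch; the guest PCI address 00:04.0 and the module path are placeholders::

    # locate the virtio-net device inside the VM
    lspci | grep -i virtio

    # bind it to igb_uio so the guest testpmd can drive it
    modprobe uio
    insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
    ./usertools/dpdk-devbind.py --bind=igb_uio 00:04.0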
Test Case 5: vhost-user/virtio-pmd pvp with multi VMs reconnect from VMs
========================================================================

1. Bind one port to igb_uio, then launch vhost with the command below::

    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

2. Launch VM1 and VM2::

    qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 2 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
    -vnc :10

    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6003-:22 \
    -chardev socket,id=char0,path=./vhost-net1,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
    -vnc :11

3. In VM1, bind virtio1 to igb_uio and run testpmd::

    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

4. In VM2, bind virtio2 to igb_uio and run testpmd::

    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

5. Send packets from the packet generator and check that packets are RX/TX'd by the two virtio-pmds in the two VMs::

    testpmd>show port stats all

6. Reboot the two VMs and rerun steps 2-5.

7. Check that the reconnection works: keep sending packets from the packet generator and check that packets are still RX/TX'd by the two virtio-pmds in the two VMs::

    testpmd>show port stats all

Test Case 6: vhost-user/virtio-pmd pvp with multi VMs reconnect stability test
==============================================================================

The steps are the same as Test Case 4, except steps 6 and 7:

6. Quit and re-launch vhost-user; repeat 5-8 times, checking each time that the reconnect works and the traffic continues.

7. Reboot and re-launch the VMs; repeat 3-5 times, checking each time that the reconnect works and the traffic continues.
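The stability cases (Test Case 3 and Test Case 6) repeat the quit/re-launch cycle many times, which is easier to do non-interactively. A sketch of how the vhost-user side of the loop could be scripted, using testpmd's --forward-mode and --auto-start run-time options; the pkill pattern and the sleep intervals are placeholders to tune for the local environment::

    for i in $(seq 1 8); do
        # stop the current vhost-user instance (an abnormal exit also exercises reconnect)
        pkill -f "testpmd.*eth_vhost0" && sleep 5
        # re-launch in client mode; it reconnects to the still-running QEMU server
        ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem \
            --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' \
            -- --nb-cores=1 --forward-mode=mac --auto-start &
        sleep 10
        # query the packet generator here and verify RX/TX resumed before the next iteration
    done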
Test Case 7: vhost-user/virtio-net VM2VM reconnect from vhost-user
==================================================================
Flow: Virtio-net1 --> Vhost-user --> Virtio-net2

1. Launch vhost with the commands below, enabling client mode and TSO::

    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>start

2. Launch VM1 and VM2::

    qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 2 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
    -vnc :10

    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6003-:22 \
    -chardev socket,id=char0,path=./vhost-net1,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
    -vnc :11

3. Set the virtio device IPs and add static ARP entries on the two VMs::

    VM1: ifconfig ens4 1.1.1.2
    VM2: ifconfig ens4 1.1.1.3
    VM1: arp -s 1.1.1.3 52:54:00:00:00:02
    VM2: arp -s 1.1.1.2 52:54:00:00:00:01

4. Run iperf between VM1 and VM2, and check the TSO-enabled performance for 1 minute (a quick in-guest TSO check is sketched after this test case)::

    VM1: iperf -s -i 1 -t 60
    VM2: iperf -c 1.1.1.2 -t 60 -i 1

5. Kill vhost-user, then re-launch it::

    testpmd>quit
    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>start

6. Rerun step 4, and make sure vhost-user reconnects to the VMs and the iperf traffic continues.
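Step 4 above relies on TSO being negotiated on the guest's virtio interface; before starting iperf, this can be confirmed from inside each VM (ens4 is the interface name used in the steps above)::

    ethtool -k ens4 | grep tcp-segmentation-offload
    # expected output: tcp-segmentation-offload: on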
Test Case 8: vhost-user/virtio-net VM2VM reconnect from VMs
===========================================================
Flow: Virtio-net1 --> Vhost-user --> Virtio-net2

1. Launch vhost with the commands below, enabling client mode and TSO::

    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>start

2. Launch VM1 and VM2::

    qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 2 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
    -vnc :10

    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6003-:22 \
    -chardev socket,id=char0,path=./vhost-net1,server \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
    -vnc :11

3. Set the virtio device IPs and add static ARP entries on the two VMs::

    VM1: ifconfig ens4 1.1.1.2
    VM2: ifconfig ens4 1.1.1.3
    VM1: arp -s 1.1.1.3 52:54:00:00:00:02
    VM2: arp -s 1.1.1.2 52:54:00:00:00:01

4. Run iperf between VM1 and VM2, and check the TSO-enabled performance for 1 minute::

    VM1: iperf -s -i 1 -t 60
    VM2: iperf -c 1.1.1.2 -t 60 -i 1

5. Reboot VM1 and VM2, rerun steps 2-4, and make sure vhost-user reconnects to the VMs and the iperf traffic continues.

Test Case 9: vhost-user/virtio-net VM2VM reconnect stability test
=================================================================
Flow: Virtio-net1 --> Vhost-user --> Virtio-net2

The steps are the same as Test Case 7, except steps 5 and 6:

5. Quit and re-launch vhost-user; repeat 5-8 times, checking each time that the reconnect works and the traffic continues.

6. Reboot and re-launch the two VMs; repeat 3-5 times, checking each time that the reconnect works and the traffic continues.
--
2.17.1