From: Yinan
To: dts@dpdk.org
Cc: Wang Yinan
Date: Fri, 21 Feb 2020 03:17:21 +0000
Message-Id: <20200221031721.113758-1-yinan.wang@intel.com>
X-Mailer: git-send-email 2.17.1
Subject: [dts] [PATCH v1] test_plans/pvp_vhost_user_reconnect: merge packed ring cases in same test plan

From: Wang Yinan

Signed-off-by: Wang Yinan
---
 .../pvp_vhost_user_reconnect_test_plan.rst    | 377 +++++++++++++++++-
 1 file changed, 356 insertions(+), 21 deletions(-)

diff --git a/test_plans/pvp_vhost_user_reconnect_test_plan.rst b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
index 9cc1ddc..bea5397 100644
--- a/test_plans/pvp_vhost_user_reconnect_test_plan.rst
+++ b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
@@ -46,15 +46,17 @@ Vhost-user uses Unix domain sockets for passing messages. This means the DPDK vh
 * DPDK vhost-user acts as the client:
   Unlike the server mode, this mode doesn't create the socket file;it just tries to connect to the server (which responses to create the file instead).
   When the DPDK vhost-user application restarts, DPDK vhost-user will try to connect to the server again. This is how the "reconnect" feature works.
 
-  When DPDK vhost-user restarts from an normal or abnormal exit (such as a crash), the client mode allows DPDK to establish the connection again. Note
-  that QEMU version v2.7 or above is required for this reconnect feature.
-  Also, when DPDK vhost-user acts as the client, it will keep trying to reconnect to the server (QEMU) until it succeeds. This is useful in two cases:
+  When DPDK vhost-user restarts from a normal or abnormal exit (such as a crash), the client mode allows DPDK to establish the connection again.
+  Also, when DPDK vhost-user acts as the client, it will keep trying to reconnect to the server (QEMU) until it succeeds.
+  This is useful in two cases:
 
   * When QEMU is not started yet.
   * When QEMU restarts (for example due to a guest OS reboot).
 
-Test Case1: vhost-user/virtio-pmd pvp reconnect from vhost-user
-===============================================================
+Note that QEMU version v2.7 or above is required for split ring cases, and QEMU version v4.2.0 or above is required for packed ring cases.
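+
+For reference, every case in this plan pairs a DPDK vhost-user port in client mode with a QEMU vhost-user netdev in server mode on the same socket path; the fragments below are copied from the commands used in the cases that follow (socket path and MAC address are examples only)::
+
+    # DPDK side: vhost-user client on socket "vhost-net"
+    --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1'
+
+    # QEMU side: vhost-user server on the same socket
+    -chardev socket,id=char0,path=./vhost-net,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024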
+
+Test Case1: vhost-user/virtio-pmd pvp split ring reconnect from vhost-user
+==========================================================================
 Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
 1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
 
     ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
     testpmd>set fwd mac
     testpmd>start
 
@@ -98,8 +100,8 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
     testpmd>show port stats all
 
-Test Case2: vhost-user/virtio-pmd pvp reconnect from VM
-=======================================================
+Test Case2: vhost-user/virtio-pmd pvp split ring reconnect from VM
+==================================================================
 Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
 1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
 
@@ -134,8 +136,8 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
 5. Reboot the VM, rerun step2-step4, check the reconnection can be established.
 
-Test Case3: vhost-user/virtio-pmd pvp reconnect stability test
-==============================================================
+Test Case3: vhost-user/virtio-pmd pvp split ring reconnect stability test
+=========================================================================
 Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
 Similar as Test Case1, all steps are similar except step 5, 6.
 
@@ -144,8 +146,8 @@ Similar as Test Case1, all steps are similar except step 5, 6.
 
 6. Reboot VM, then re-launch VM, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue.
 
-Test Case 4: vhost-user/virtio-pmd pvp with multi VMs reconnect from vhost-user
-===============================================================================
+Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from vhost-user
+==========================================================================================
 
 1. Bind one port to igb_uio, launch the vhost by below command::
 
@@ -206,8 +208,8 @@ Test Case 4: vhost-user/virtio-pmd pvp with multi VMs reconnect from vhost-user
 
     testpmd>show port stats all
 
-Test Case 5: vhost-user/virtio-pmd pvp with multi VMs reconnect from VMs
-========================================================================
+Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from VMs
+===================================================================================
 
 1. Bind one port to igb_uio, launch the vhost by below command::
 
@@ -263,8 +265,8 @@ Test Case 5: vhost-user/virtio-pmd pvp with multi VMs reconnect from VMs
 
     testpmd>show port stats all
 
-Test Case 6: vhost-user/virtio-pmd pvp with multi VMs reconnect stability test
-==============================================================================
+Test Case 6: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect stability test
+=========================================================================================
 
 Similar as Test Case 4, all steps are similar except step 6, 7.
 
@@ -272,8 +274,8 @@ Similar as Test Case 4, all steps are similar except step 6, 7.
 
 7. Reboot VMs, then re-launch VMs, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue.
 
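+Note: the "bind one port to igb_uio" steps in the cases above and below are typically done with the dpdk-devbind tool from the DPDK source tree, for example (the PCI address below is only an illustration and must match the port under test)::
+
+    ./usertools/dpdk-devbind.py --status
+    ./usertools/dpdk-devbind.py --bind=igb_uio 0000:18:00.0
+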
-Test Case 7: vhost-user/virtio-net VM2VM reconnect from vhost-user
-==================================================================
+Test Case 7: vhost-user/virtio-net VM2VM split ring reconnect from vhost-user
+=============================================================================
 Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
 
 1. Launch the vhost by below commands, enable the client mode and tso::
 
@@ -327,8 +329,8 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
 
 7. Rerun step5, ensure the vhost-user can reconnect to VM again, and the iperf traffic can be continue.
 
-Test Case 8: vhost-user/virtio-net VM2VM reconnect from VMs
-===========================================================
+Test Case 8: vhost-user/virtio-net VM2VM split ring reconnect from VMs
+======================================================================
 Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
 
 1. Launch the vhost by below commands, enable the client mode and tso::
 
@@ -376,8 +378,341 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
 
 6. Reboot VM1 and VM2, rerun step3-step5, ensure the vhost-user can reconnect to VM again, and the iperf traffic can be continue.
 
-Test Case 9: vhost-user/virtio-net VM2VM reconnect stability test
-=================================================================
+Test Case 9: vhost-user/virtio-net VM2VM split ring reconnect stability test
+============================================================================
+Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
+
+Similar to Test Case 7, all steps are similar except steps 6 and 7.
+
+6. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue.
+
+7. Reboot two VMs, then re-launch VMs, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue.
+
+Test Case10: vhost-user/virtio-pmd pvp packed ring reconnect from vhost-user
+============================================================================
+Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
+
+1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
+
+    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    testpmd>set fwd mac
+    testpmd>start
+
+2. Start VM with 1 virtio device, and set the qemu as server mode::
+
+    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :10
+
+3. On VM, bind virtio net to igb_uio and run testpmd::
+
+    ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets by packet generator, check if packets can be RX/TX with virtio-pmd::
+
+    testpmd>show port stats all
+
+5. On host, quit vhost-user, then re-launch the vhost-user with below command::
+
+    testpmd>quit
+    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    testpmd>set fwd mac
+    testpmd>start
+
+6. Check if the reconnection can work, still send packets by packet generator, check if packets can be RX/TX with virtio-pmd::
+
+    testpmd>show port stats all
+
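+The "send packets by packet generator" steps in the PVP cases assume an external traffic generator connected to the NIC under test; if a software generator is acceptable, a minimal Scapy invocation like the one below can be used instead (the interface name and destination MAC are placeholders to adapt to the setup)::
+
+    python3 -c "from scapy.all import Ether, IP, UDP, sendp; sendp(Ether(dst='52:54:00:00:00:01')/IP()/UDP()/('x'*60), iface='ens801f0', count=100)"
+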
+Test Case11: vhost-user/virtio-pmd pvp packed ring reconnect from VM
+====================================================================
+Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
+
+1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
+
+    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    testpmd>set fwd mac
+    testpmd>start
+
+2. Start VM with 1 virtio device, and set the qemu as server mode::
+
+    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :10
+
+3. On VM, bind virtio net to igb_uio and run testpmd::
+
+    ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets by packet generator, check if packets can be RX/TX with virtio-pmd::
+
+    testpmd>show port stats all
+
+5. Reboot the VM, rerun step2-step4, check that the reconnection can be established.
+
+Test Case12: vhost-user/virtio-pmd pvp packed ring reconnect stability test
+===========================================================================
+Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
+
+Similar to Test Case 10, all steps are similar except steps 5 and 6.
+
+5. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue.
+
+6. Reboot VM, then re-launch VM, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue.
+
+Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect from vhost-user
+============================================================================================
+
+1. Bind one port to igb_uio, launch the vhost by below command::
+
+    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+2. Launch VM1 and VM2::
+
+    qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 2 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :10
+
+    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 12 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6001-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :11
+
+3. On VM1, bind virtio1 to igb_uio and run testpmd::
+
+    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. On VM2, bind virtio2 to igb_uio and run testpmd::
+
+    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+5. Send packets by packet generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
+
+    testpmd>show port stats all
+
+6. On host, quit vhost-user, then re-launch the vhost-user with below command::
+
+    testpmd>quit
+    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+7. Check if the reconnection can work, still send packets by packet generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
+
+    testpmd>show port stats all
+
+Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect from VMs
+=====================================================================================
+
+1. Bind one port to igb_uio, launch the vhost by below command::
+
+    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+2. Launch VM1 and VM2::
+
+    qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 2 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :10
+
+    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 12 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6001-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :11
+
+3. On VM1, bind virtio1 to igb_uio and run testpmd::
+
+    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. On VM2, bind virtio2 to igb_uio and run testpmd::
+
+    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+5. Send packets by packet generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
+
+    testpmd>show port stats all
+
+6. Reboot the two VMs, rerun step2-step5.
+
+7. Check if the reconnection can work, still send packets by packet generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
+
+    testpmd>show port stats all
+
+Test Case 15: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect stability test
+===========================================================================================
+
+Similar to Test Case 13, all steps are similar except steps 6 and 7.
+
+6. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue.
+
+7. Reboot VMs, then re-launch VMs, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue.
+
+Test Case 16: vhost-user/virtio-net VM2VM packed ring reconnect from vhost-user
+===============================================================================
+Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
+
+1. Launch the vhost by below commands, enable the client mode and tso::
+
+    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>start
+
+3. Launch VM1 and VM2::
+
+    qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 2 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :10
+
+    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 12 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6001-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :11
+
+4. Set virtio device IP and run arp protocol on two VMs::
+
+    VM1: ifconfig ens4 1.1.1.2
+    VM2: ifconfig ens4 1.1.1.3
+    VM1: arp -s 1.1.1.3 52:54:00:00:00:02
+    VM2: arp -s 1.1.1.2 52:54:00:00:00:01
+
+5. Run iperf on VM1 and VM2, check the tso enabled performance for 1 min::
+
+    VM1: iperf -s -i 1 -t 60
+    VM2: iperf -c 1.1.1.2 -t 60 -i 1
+
+6. Kill the vhost-user, then re-launch the vhost-user::
+
+    testpmd>quit
+    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>start
+
+7. Rerun step5, ensure the vhost-user can reconnect to VM again, and the iperf traffic can continue.
+
+Test Case 17: vhost-user/virtio-net VM2VM packed ring reconnect from VMs
+========================================================================
+Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
+
+1. Launch the vhost by below commands, enable the client mode and tso::
+
+    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>start
+
+3. Launch VM1 and VM2::
+
+    qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 2 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :10
+
+    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 12 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6001-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :11
+
+4. Set virtio device IP and run arp protocol on two VMs::
+
+    VM1: ifconfig ens4 1.1.1.2
+    VM2: ifconfig ens4 1.1.1.3
+    VM1: arp -s 1.1.1.3 52:54:00:00:00:02
+    VM2: arp -s 1.1.1.2 52:54:00:00:00:01
+
+5. Run iperf on VM1 and VM2, check the tso enabled performance for 1 min::
+
+    VM1: iperf -s -i 1 -t 60
+    VM2: iperf -c 1.1.1.2 -t 60 -i 1
+
+6. Reboot VM1 and VM2, rerun step3-step5, ensure the vhost-user can reconnect to VM again, and the iperf traffic can continue.
+
+Test Case 18: vhost-user/virtio-net VM2VM packed ring reconnect stability test
+==============================================================================
 Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
 
 Similar as Test Case 7, all steps are similar except step 6.
-- 
2.17.1