From: Yinan <yinan.wang@intel.com>
To: dts@dpdk.org
Cc: Wang Yinan <yinan.wang@intel.com>
Subject: [dts] [PATCH v1] test_plans/pvp_vhost_user_reconnect: merge packed ring cases in same test plan
Date: Fri, 21 Feb 2020 03:17:21 +0000
Message-ID: <20200221031721.113758-1-yinan.wang@intel.com>

From: Wang Yinan <yinan.wang@intel.com>

Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
 .../pvp_vhost_user_reconnect_test_plan.rst    | 377 +++++++++++++++++-
 1 file changed, 356 insertions(+), 21 deletions(-)

diff --git a/test_plans/pvp_vhost_user_reconnect_test_plan.rst b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
index 9cc1ddc..bea5397 100644
--- a/test_plans/pvp_vhost_user_reconnect_test_plan.rst
+++ b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
@@ -46,15 +46,17 @@ Vhost-user uses Unix domain sockets for passing messages. This means the DPDK vh
 * DPDK vhost-user acts as the client:
  Unlike the server mode, this mode doesn't create the socket file; it just tries to connect to the server (which is responsible for creating the file instead).
   When the DPDK vhost-user application restarts, DPDK vhost-user will try to connect to the server again. This is how the "reconnect" feature works.
-  When DPDK vhost-user restarts from an normal or abnormal exit (such as a crash), the client mode allows DPDK to establish the connection again. Note
-  that QEMU version v2.7 or above is required for this reconnect feature.
-  Also, when DPDK vhost-user acts as the client, it will keep trying to reconnect to the server (QEMU) until it succeeds. This is useful in two cases:
+  When DPDK vhost-user restarts from a normal or abnormal exit (such as a crash), the client mode allows DPDK to establish the connection again.
+  Also, when DPDK vhost-user acts as the client, it will keep trying to reconnect to the server (QEMU) until it succeeds. 
+  This is useful in two cases:
 
     * When QEMU is not started yet.
     * When QEMU restarts (for example due to a guest OS reboot).
 
-Test Case1: vhost-user/virtio-pmd pvp reconnect from vhost-user
-===============================================================
+Note that QEMU version v2.7 or above is required for split ring cases, and QEMU version v4.2.0 or above is required for packed ring cases.
+
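+A quick, optional sanity check of the QEMU version on the host before running the cases::
+
+    qemu-system-x86_64 --version
+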
+Test Case1: vhost-user/virtio-pmd pvp split ring reconnect from vhost-user
+==========================================================================
 Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
 1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
@@ -98,8 +100,8 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
     testpmd>show port stats all
 
-Test Case2: vhost-user/virtio-pmd pvp reconnect from VM
-=======================================================
+Test Case2: vhost-user/virtio-pmd pvp split ring reconnect from VM
+==================================================================
 Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
 1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
@@ -134,8 +136,8 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
 5. Reboot the VM, rerun step2-step4, check the reconnection can be established.
 
-Test Case3: vhost-user/virtio-pmd pvp reconnect stability test
-==============================================================
+Test Case3: vhost-user/virtio-pmd pvp split ring reconnect stability test
+=========================================================================
 Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
 Similar as Test Case1, all steps are similar except step 5, 6.
@@ -144,8 +146,8 @@ Similar as Test Case1, all steps are similar except step 5, 6.
 
 6. Reboot VM, then re-launch VM, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue.
 
-Test Case 4: vhost-user/virtio-pmd pvp with multi VMs reconnect from vhost-user
-===============================================================================
+Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from vhost-user
+==========================================================================================
 
 1. Bind one port to igb_uio, launch the vhost by below command::
 
@@ -206,8 +208,8 @@ Test Case 4: vhost-user/virtio-pmd pvp with multi VMs reconnect from vhost-user
 
     testpmd>show port stats all
 
-Test Case 5: vhost-user/virtio-pmd pvp with multi VMs reconnect from VMs
-========================================================================
+Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from VMs
+===================================================================================
 
 1. Bind one port to igb_uio, launch the vhost by below command::
 
@@ -263,8 +265,8 @@ Test Case 5: vhost-user/virtio-pmd pvp with multi VMs reconnect from VMs
 
     testpmd>show port stats all
 
-Test Case 6: vhost-user/virtio-pmd pvp with multi VMs reconnect stability test
-==============================================================================
+Test Case 6: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect stability test
+=========================================================================================
 
 Similar as Test Case 4, all steps are similar except step 6, 7.
 
@@ -272,8 +274,8 @@ Similar as Test Case 4, all steps are similar except step 6, 7.
 
 7. Reboot VMs, then re-launch VMs, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue.
 
-Test Case 7: vhost-user/virtio-net VM2VM reconnect from vhost-user
-==================================================================
+Test Case 7: vhost-user/virtio-net VM2VM split ring reconnect from vhost-user
+=============================================================================
 Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
 
 1. Launch the vhost by below commands, enable the client mode and tso::
@@ -327,8 +329,8 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
 
 7. Rerun step5, ensure the vhost-user can reconnect to VM again, and the iperf traffic can be continue.
 
-Test Case 8: vhost-user/virtio-net VM2VM reconnect from VMs
-===========================================================
+Test Case 8: vhost-user/virtio-net VM2VM split ring reconnect from VMs
+======================================================================
 Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
 
 1. Launch the vhost by below commands, enable the client mode and tso::
@@ -376,8 +378,341 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
 
 6. Reboot VM1 and VM2, rerun step3-step5, ensure the vhost-user can reconnect to VM again, and the iperf traffic can be continue.
 
-Test Case 9: vhost-user/virtio-net VM2VM reconnect stability test
-=================================================================
+Test Case 9: vhost-user/virtio-net VM2VM split ring reconnect stability test
+============================================================================
+Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
+
+Similar to Test Case 7, all steps are the same except steps 6 and 7.
+
+6. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue.
+
+7. Reboot two VMs, then re-launch VMs, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue.
+
+Test Case10: vhost-user/virtio-pmd pvp packed ring reconnect from vhost-user
+===========================================================================
+Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
+
+1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
+
+    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    testpmd>set fwd mac
+    testpmd>start
+
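+For reference, the port can be bound with the dpdk-devbind tool shipped with DPDK; the PCI address and the igb_uio module path below are only examples and depend on the DUT::
+
+    modprobe uio
+    insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+    ./usertools/dpdk-devbind.py --status
+    ./usertools/dpdk-devbind.py -b igb_uio 0000:18:00.0
+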
+2. Start the VM with one virtio device, launching QEMU in server mode::
+
+    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :10
+
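+The QEMU command above assumes 2MB hugepages are already mounted at /mnt/huge on the host; if not, a setup along these lines can be used (the page count is only an example)::
+
+    echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    mkdir -p /mnt/huge
+    mount -t hugetlbfs nodev /mnt/huge
+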
+3. On VM, bind virtio net to igb_uio and run testpmd::
+
+    ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
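+Inside the VM, the virtio-net device has to be bound to igb_uio before testpmd can use it; a possible sequence (the module path and the virtio PCI address are assumptions, check them with dpdk-devbind.py --status)::
+
+    modprobe uio
+    insmod ./igb_uio.ko
+    ./usertools/dpdk-devbind.py --status
+    ./usertools/dpdk-devbind.py -b igb_uio 00:04.0
+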
+4. Send packets by packet generator, check if packets can be RX/TX with virtio-pmd::
+
+    testpmd>show port stats all
+
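+If a hardware traffic generator is not available, a scapy one-liner like the one below can inject packets from the tester instead (the interface name is an assumption, and scapy must be installed; the destination MAC reuses the virtio device MAC from step 2)::
+
+    python3 -c "from scapy.all import Ether, IP, UDP, Raw, sendp; sendp(Ether(dst='52:54:00:00:00:01')/IP()/UDP()/Raw('x'*64), iface='enp24s0f0', count=1000)"
+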
+5. On host, quit vhost-user, then re-launch the vhost-user with below command::
+
+    testpmd>quit
+    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    testpmd>set fwd mac
+    testpmd>start
+
+6. Check if the reconnection can work, still send packets by packet generator, check if packets can be RX/TX with virtio-pmd::
+
+    testpmd>show port stats all
+
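+An optional way to double-check that the relaunched vhost-user client has re-established its connection is to look for the unix socket on the host (socket path as configured in step 1)::
+
+    ss -x | grep vhost-net
+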
+Test Case11: vhost-user/virtio-pmd pvp packed ring reconnect from VM
+====================================================================
+Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
+
+1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
+
+    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    testpmd>set fwd mac
+    testpmd>start
+
+2. Start the VM with one virtio device, launching QEMU in server mode::
+
+    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :10
+
+3. On VM, bind virtio net to igb_uio and run testpmd::
+
+    ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets by packet generator, check if packets can be RX/TX with virtio-pmd::
+
+    testpmd>show port stats all
+
+5. Reboot the VM, rerun step2-step4, and check that the reconnection can be established.
+
+Test Case12: vhost-user/virtio-pmd pvp packed ring reconnect stability test
+===========================================================================
+Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
+
+Similar to Test Case 10, all steps are the same except steps 5 and 6.
+
+5. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue.
+
+6. Reboot VM, then re-launch VM, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue.
+
+Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect from vhost-user
+============================================================================================
+
+1. Bind one port to igb_uio, launch the vhost by below command::
+
+    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+2. Launch VM1 and VM2::
+
+    qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 2 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :10
+
+    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 12 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6001-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :11
+
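+Because vhost runs in client mode here, the ./vhost-net and ./vhost-net1 socket files are created by QEMU (the server side); once both VMs are up, their presence can be verified with::
+
+    ls -l ./vhost-net ./vhost-net1
+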
+3. On VM1, bind virtio1 to igb_uio and run testpmd::
+
+    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. On VM2, bind virtio2 to igb_uio and run testpmd::
+
+    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+5. Send packets by packet generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
+
+    testpmd>show port stats all
+
+6. On host, quit vhost-user, then re-launch the vhost-user with below command::
+
+    testpmd>quit
+    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+7. Check if the reconnection can work, still send packets by packet generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
+
+    testpmd>show port stats all
+
+Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect from VMs
+=====================================================================================
+
+1. Bind one port to igb_uio, launch the vhost by below command::
+
+    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+2. Launch VM1 and VM2::
+
+    qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 2 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :10
+
+    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 12 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6001-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :11
+
+3. On VM1, bind virtio1 to igb_uio and run testpmd::
+
+    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. On VM2, bind virtio2 to igb_uio and run testpmd::
+
+    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+5. Send packets by packet generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
+
+    testpmd>show port stats all
+
+6. Reboot the two VMs, rerun step2-step5.
+
+7. Check if the reconnection can work, still send packets by packet generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
+
+    testpmd>show port stats all
+
+Test Case 15: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect stability test
+===========================================================================================
+
+Similar to Test Case 13, all steps are the same except steps 6 and 7.
+
+6. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue.
+
+7. Reboot VMs, then re-launch VMs, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue.
+
+Test Case 16: vhost-user/virtio-net VM2VM packed ring reconnect from vhost-user
+===============================================================================
+Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
+
+1. Launch the vhost by below commands, enable the client mode and tso::
+
+    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>start
+
+3. Launch VM1 and VM2::
+
+    qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 2 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :10
+
+    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 12 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6001-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :11
+
+4. Set the virtio device IP addresses and add static ARP entries on the two VMs::
+
+    VM1: ifconfig ens4 1.1.1.2
+    VM2: ifconfig ens4 1.1.1.3
+    VM1: arp -s 1.1.1.3 52:54:00:00:00:02
+    VM2: arp -s 1.1.1.2 52:54:00:00:00:01
+
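+Optionally, verify basic connectivity between the two VMs before starting iperf (interface names and IPs as configured above)::
+
+    VM1: ping -c 3 1.1.1.3
+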
+5. Run iperf between VM1 and VM2, and check the TSO-enabled throughput for 1 minute::
+
+    VM1: iperf -s -i 1 -t 60
+    VM2: iperf -c 1.1.1.2 -t 60 -i 1
+
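+To confirm that TSO is actually enabled on the virtio interface in the guests, the offload state can be checked (interface name ens4 as assumed above)::
+
+    ethtool -k ens4 | grep tcp-segmentation-offload
+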
+6. Kill the vhost-user, then re-launch the vhost-user::
+
+    testpmd>quit
+    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>start
+
+7. Rerun step5, ensure that vhost-user can reconnect to the VMs again, and that the iperf traffic can continue.
+
+Test Case 17: vhost-user/virtio-net VM2VM packed ring reconnect from VMs
+========================================================================
+Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
+
+1. Launch the vhost by below commands, enable the client mode and tso::
+
+    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>start
+
+3. Launch VM1 and VM2::
+
+    qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 2 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :10
+
+    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 12 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6001-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :11
+
+4. Set the virtio device IP addresses and add static ARP entries on the two VMs::
+
+    VM1: ifconfig ens4 1.1.1.2
+    VM2: ifconfig ens4 1.1.1.3
+    VM1: arp -s 1.1.1.3 52:54:00:00:00:02
+    VM2: arp -s 1.1.1.2 52:54:00:00:00:01
+
+5. Run iperf between VM1 and VM2, and check the TSO-enabled throughput for 1 minute::
+
+    VM1: iperf -s -i 1 -t 60
+    VM2: iperf -c 1.1.1.2 -t 60 -i 1
+
+6. Reboot VM1 and VM2, rerun step3-step5, ensure that vhost-user can reconnect to the VMs again, and that the iperf traffic can continue.
+
+Test Case 18: vhost-user/virtio-net VM2VM packed ring reconnect stability test
+==============================================================================
 Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
 
Similar to Test Case 16, all steps are the same except step 6.
-- 
2.17.1

