* [dts][PATCH V1 1/5] test_plans/vm2vm_virtio_net_perf_test_plan: delete CBDMA test case
@ 2022-04-06  9:09 Wei Ling
  0 siblings, 0 replies; 2+ messages in thread
From: Wei Ling @ 2022-04-06  9:09 UTC (permalink / raw)
  To: dts; +Cc: Wei Ling

Following commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path),
delete the CBDMA-related cases from test_plans/vm2vm_virtio_net_perf_test_plan.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 .../vm2vm_virtio_net_perf_test_plan.rst       | 720 ++----------------
 1 file changed, 84 insertions(+), 636 deletions(-)

diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
index 6e679b5b..9787b658 100644
--- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
@@ -44,88 +44,62 @@ in the UDP/IP stack with vm2vm split ring and packed ring vhost-user/virtio-net
 and packed ring vhost-user/virtio-net mergeable and non-mergeable path.
 3. Check Vhost tx offload function by verifying the TSO/cksum in the TCP/IP stack with vm2vm split ring and
 packed ring vhost-user/virtio-net mergeable path with CBDMA channel.
-4. Multi-queues number dynamic change in vm2vm vhost-user/virtio-net with split ring and packed ring when vhost enqueue operation with multi-CBDMA channels.
+4. Multi-queues number dynamic change in vm2vm vhost-user/virtio-net with split ring and packed ring when vhost
+enqueue operation with multi-CBDMA channels.
+
 Note: 
 1.For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > v5.1.
-2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version > LTS 4.2.1, dut to old qemu exist reconnect issue when multi-queues test.
+2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version > LTS 4.2.1,
+due to a reconnect issue with multi-queues in older qemu.
 3.For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage.
 
-Test flow
-=========
-
-Virtio-net <-> Vhost <-> Testpmd <-> Vhost <-> Virtio-net
-
-Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
-=========================================================================
-
-1. Launch the Vhost sample on socket 0 by below commands::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
-    -- -i --nb-cores=2 --txd=1024 --rxd=1024
-    testpmd>start
-
-2. Launch VM1 and VM2 on socket 1::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+For more about the dpdk-testpmd sample, please refer to the DPDK documents:
+https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
 
-    taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+For the virtio-user vdev parameters, you can refer to the DPDK documents:
+https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
 
-3. On VM1, set virtio device IP and run arp protocol::
+Prerequisites
+=============
 
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
+Topology
+--------
+      Test flow: Virtio-net-->Vhost-->Testpmd-->Vhost-->Virtio-net
 
-4. On VM2, set virtio device IP and run arp protocol::
+Hardware
+--------
+      Supported NICs: ALL
 
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
+Software
+--------
+      Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
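+      iperf: used inside the VMs for the TCP/UDP throughput checks
+      qemu: qemu-system-x86_64 > 4.2.0 (see the notes above)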
 
-5. Check the iperf performance with different packet size between two VMs by below commands::
+General set up
+--------------
+1. Compile DPDK::
 
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+      # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+      # ninja -C <dpdk build dir> -j 110
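+
+2. Allocate hugepages for the vhost and QEMU memory backend (a minimal sketch;
+   the 2MB page size and count are assumptions, size them to the VMs launched
+   below, which back their 4G guest memory with mem-path=/mnt/huge)::
+
+      # echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+      # mkdir -p /mnt/huge
+      # mount -t hugetlbfs nodev /mnt/huge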
 
-6. Check 2VMs can receive and send big packets to each other::
+Test case
+=========
 
-    testpmd>show port xstats all
-    Port 0 should have tx packets above 1522
-    Port 1 should have rx packets above 1522
+Common steps
+------------
 
-Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp traffic
-======================================================================================
+Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
+-------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test the split ring path and check TCP traffic throughput between 2 VMs.
 
-1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command::
+1. Launch the Vhost sample on socket 0 by below commands::
 
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:80:04.0]' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@0000:80:04.1]' \
-    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024
-    testpmd>vhost enable tx all
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
+    -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
 
-2. Launch VM1 and VM2::
+2. Launch VM1 and VM2 on socket 1::
 
     taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -136,7 +110,8 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
 
     taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -147,7 +122,8 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
@@ -159,7 +135,7 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t
     ifconfig ens5 1.1.1.8
     arp -s 1.1.1.2 52:54:00:00:00:01
 
-5. Check the iperf performance between two VMs by below commands::
+5. Check the iperf performance with different packet sizes between two VMs by below commands::
 
     Under VM1, run: `iperf -s -i 1`
     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
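+
+    To cover different packet sizes, vary iperf's buffer length option on the
+    client side (a sketch; the lengths chosen here are assumptions)::
+
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60 -l 1k`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60 -l 8k`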
@@ -170,18 +146,16 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
 
-7. Check throughput and compare with case1, CBDMA enable performance should larger than w/o CBDMA performance when cross socket.
-
-Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic
-=========================================================================
+Test Case 2: VM2VM split ring vhost-user/virtio-net test with udp traffic
+-------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test the split ring path and check UDP traffic throughput between 2 VMs.
 
 1. Launch the Vhost sample by below commands::
 
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
     --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
 
 2. Launch VM1 and VM2::
@@ -195,7 +169,8 @@ Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -206,7 +181,8 @@ Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
@@ -229,13 +205,13 @@ Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
 
-Test Case 4: Check split ring virtio-net device capability
-==========================================================
+Test Case 3: Check split ring virtio-net device capability
+----------------------------------------------------------
+This case uses testpmd and QEMU to check the split ring virtio-net device capability in 2 VMs.
 
 1. Launch the Vhost sample by below commands::
 
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
     --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
     -- -i --nb-cores=2 --txd=1024 --rxd=1024
@@ -252,7 +228,8 @@ Test Case 4: Check split ring virtio-net device capability
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -263,7 +240,8 @@ Test Case 4: Check split ring virtio-net device capability
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
 
 3. Check UFO and TSO offload status on for the Virtio-net driver on VM1 and VM2::
 
@@ -279,247 +257,13 @@ Test Case 4: Check split ring virtio-net device capability
     tx-tcp-ecn-segmentation: on
     tx-tcp6-segmentation: on
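+
+    These flags can be read back inside each VM with ethtool's offload dump
+    (a sketch; the interface name ens5 follows the earlier steps)::
+
+        ethtool -k ens5 | grep -E 'udp-fragmentation|segmentation'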
 
-Test Case 5: VM2VM split ring vhost-user/virtio-net mergeable 8 queues CBDMA enable test with large packet payload valid check
-==============================================================================================================================
-
-1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
-    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>vhost enable tx all
-    testpmd>start
-
-2. Launch VM1 and VM2 using qemu::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0,server \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
-
-    taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1,server \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-6. Check the iperf performance between two VMs by below commands::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-7. Quit and relaunch vhost w/ diff CBDMA channels::
-
-     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-     --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
-     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
-     --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-     testpmd>vhost enable tx all
-     testpmd>start
-
-8. Rerun step 5-6.
-
-9. Quit and relaunch vhost w/ iova=pa::
-
-     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-     --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
-     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
-     --iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-     testpmd>vhost enable tx all
-     testpmd>start
-
-10. Rerun step 5-6.
-
-11. Quit and relaunch vhost w/o CBDMA channels::
-
-     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-     --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' \
-     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \
-     -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
-     testpmd>vhost enable tx all
-     testpmd>start
-
-12. On VM1, set virtio device::
-
-      ethtool -L ens5 combined 4
-
-13. On VM2, set virtio device::
-
-      ethtool -L ens5 combined 4
-
-14. Scp 1MB file form VM1 to VM2::
-
-      Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-15. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher::
-
-      Under VM1, run: `iperf -s -i 1`
-      Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-16. Quit and relaunch vhost with 1 queues::
-
-     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-     --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' \
-     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \
-     -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
-     testpmd>vhost enable tx all
-     testpmd>start
-
-17. On VM1, set virtio device::
-
-      ethtool -L ens5 combined 1
-
-18. On VM2, set virtio device::
-
-      ethtool -L ens5 combined 1
-
-19. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::
-
-      Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-20. Check the iperf performance, ensure queue0 can work from vhost side::
-
-      Under VM1, run: `iperf -s -i 1`
-      Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-Test Case 6: VM2VM split ring vhost-user/virtio-net non-mergeable 8 queues CBDMA enable test with large packet payload valid check
-==================================================================================================================================
-
-1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
-    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>vhost enable tx all
-    testpmd>start
-
-2. Launch VM1 and VM2 using qemu::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0,server \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
-
-    taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1,server \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-6. Check the iperf performance between two VMs by below commands::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-7. Quit and relaunch vhost ports w/o CBDMA channels::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \
-    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>vhost enable tx all
-    testpmd>start
-
-8. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-9. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-10. Quit and relaunch vhost ports with 1 queues::
-
-     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-     --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \
-     -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
-     testpmd>vhost enable tx all
-     testpmd>start
-
-11. On VM1, set virtio device::
-
-      ethtool -L ens5 combined 1
-
-12. On VM2, set virtio device::
-
-      ethtool -L ens5 combined 1
-
-13. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::
-
-     Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-14. Check the iperf performance, ensure queue0 can work from vhost side::
-
-     Under VM1, run: `iperf -s -i 1`
-     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
-==========================================================================
+Test Case 4: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
+--------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test the packed ring path and check TCP traffic throughput between 2 VMs.
 
 1. Launch the Vhost sample by below commands::
 
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
     --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
     -- -i --nb-cores=2 --txd=1024 --rxd=1024
@@ -536,7 +280,8 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,\
+    mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -547,7 +292,8 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,\
+    mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
@@ -570,73 +316,13 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
 
-Test Case 8: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic
-=======================================================================================
-
-1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@0000:00:04.1]' \
-    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024
-    testpmd>vhost enable tx all
-    testpmd>start
-
-2. Launch VM1 and VM2 on socket 1 with qemu::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
-
-    taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocol::
-
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocol::
-
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Check the iperf performance between two VMs by below commands::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-6. Check 2VMs can receive and send big packets to each other::
-
-    testpmd>show port xstats all
-    Port 0 should have tx packets above 1522
-    Port 1 should have rx packets above 1522
-
-7. Check throughput and compare with case6, CBDMA enable performance should larger than w/o CBDMA performance when cross socket.
-
-Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
-==========================================================================
+Test Case 5: VM2VM packed ring vhost-user/virtio-net test with udp traffic
+--------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test the packed ring path and check UDP traffic throughput between 2 VMs.
 
 1. Launch the Vhost sample by below commands::
 
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
     --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
     -- -i --nb-cores=2 --txd=1024 --rxd=1024
@@ -653,7 +339,8 @@ Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -664,7 +351,8 @@ Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
@@ -687,13 +375,13 @@ Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
 
-Test Case 10: Check packed ring virtio-net device capability
-============================================================
+Test Case 6: Check packed ring virtio-net device capability
+-----------------------------------------------------------
+This case uses testpmd and QEMU to check the packed ring virtio-net device capability in 2 VMs.
 
 1. Launch the Vhost sample by below commands::
 
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
     --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
     -- -i --nb-cores=2 --txd=1024 --rxd=1024
@@ -710,7 +398,8 @@ Test Case 10: Check packed ring virtio-net device capability
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -721,7 +410,8 @@ Test Case 10: Check packed ring virtio-net device capability
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,host_ufo=on,guest_ufo=on,guest_ecn=on,packed=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,host_ufo=on,guest_ufo=on,guest_ecn=on,packed=on -vnc :12
 
 3. Check UFO and TSO offload status on for the Virtio-net driver on VM1 and VM2::
 
@@ -736,245 +426,3 @@ Test Case 10: Check packed ring virtio-net device capability
     tx-tcp-segmentation: on
     tx-tcp-ecn-segmentation: on
     tx-tcp6-segmentation: on
-
-Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test with large packet payload valid check
-=====================================================================================================================
-
-1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
-    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>vhost enable tx all
-    testpmd>start
-
-2. Launch VM1 and VM2 with qemu::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
-
-    taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-6. Check the iperf performance between two VMs by below commands::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-7. Rerun step 5-6 five times.
-
-Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check
-=========================================================================================================================
-
-1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
-    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>vhost enable tx all
-    testpmd>start
-
-2. Launch VM1 and VM2::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
-
-    taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-6. Check the iperf performance between two VMs by below commands::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-7. Rerun step 5-6 five times.
-
-Test Case 13: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic when set iova=pa
-=========================================================================================================
-
-1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@0000:00:04.1]' \
-    --iova=pa -- -i --nb-cores=2 --txd=1024 --rxd=1024
-    testpmd>vhost enable tx all
-    testpmd>start
-
-2. Launch VM1 and VM2 on socket 1 with qemu::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
-
-    taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocol::
-
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocol::
-
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-6. Check the iperf performance between two VMs by below commands::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-7. Check 2VMs can receive and send big packets to each other::
-
-    testpmd>show port xstats all
-    Port 0 should have tx packets above 1522
-    Port 1 should have rx packets above 1522
-
-Test Case 14: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable and PA mode test with large packet payload valid check
-=================================================================================================================================
-
-1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
-    --iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>vhost enable tx all
-    testpmd>start
-
-2. Launch VM1 and VM2 with qemu::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
-
-    taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-6. Check the iperf performance between two VMs by below commands::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-7. Rerun step 5-6 five times.
-- 
2.25.1


^ permalink raw reply	[flat|nested] 2+ messages in thread

* [dts][PATCH V1 1/5] test_plans/vm2vm_virtio_net_perf_test_plan: delete CBDMA test case
@ 2022-04-06  8:21 Wei Ling
  0 siblings, 0 replies; 2+ messages in thread
From: Wei Ling @ 2022-04-06  8:21 UTC (permalink / raw)
  To: dts; +Cc: Wei Ling

As commit 53d3f4778c(vhost: integrate dmadev in asynchronous data-path),
delete cbdma related case form test_plan/vm2vm_virtio_net_perf_test_plan.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 .../vm2vm_virtio_net_perf_test_plan.rst       | 720 ++----------------
 1 file changed, 84 insertions(+), 636 deletions(-)

diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
index 6e679b5b..9787b658 100644
--- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
@@ -44,88 +44,62 @@ in the UDP/IP stack with vm2vm split ring and packed ring vhost-user/virtio-net
 and packed ring vhost-user/virtio-net mergeable and non-mergeable path.
 3. Check Vhost tx offload function by verifying the TSO/cksum in the TCP/IP stack with vm2vm split ring and
 packed ring vhost-user/virtio-net mergeable path with CBDMA channel.
-4. Multi-queues number dynamic change in vm2vm vhost-user/virtio-net with split ring and packed ring when vhost enqueue operation with multi-CBDMA channels.
+4. Multi-queues number dynamic change in vm2vm vhost-user/virtio-net with split ring and packed ring when vhost
+enqueue operation with multi-CBDMA channels.
+
 Note: 
 1.For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > v5.1.
-2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version > LTS 4.2.1, dut to old qemu exist reconnect issue when multi-queues test.
+2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version > LTS 4.2.1,
+DUT to old qemu exist reconnect issue when multi-queues test.
 3.For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage.
 
-Test flow
-=========
-
-Virtio-net <-> Vhost <-> Testpmd <-> Vhost <-> Virtio-net
-
-Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
-=========================================================================
-
-1. Launch the Vhost sample on socket 0 by below commands::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
-    -- -i --nb-cores=2 --txd=1024 --rxd=1024
-    testpmd>start
-
-2. Launch VM1 and VM2 on socket 1::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+For more about dpdk-testpmd sample, please refer to the DPDK docments:
+https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
 
-    taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+For virtio-user vdev parameter, you can refer to the DPDK docments:
+https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
 
-3. On VM1, set virtio device IP and run arp protocol::
+Prerequisites
+=============
 
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
+Topology
+--------
+      Test flow: Virtio-net-->Vhost-->Testpmd-->Vhost-->Virtio-net
 
-4. On VM2, set virtio device IP and run arp protocol::
+Hardware
+--------
+      Supportted NICs: ALL
 
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
+Software
+--------
+      Trex:http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
 
-5. Check the iperf performance with different packet size between two VMs by below commands::
+General set up
+--------------
+1. Compile DPDK::
 
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+      # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=<dpdk build dir>
+      # ninja -C <dpdk build dir> -j 110
 
-6. Check 2VMs can receive and send big packets to each other::
+Test case
+=========
 
-    testpmd>show port xstats all
-    Port 0 should have tx packets above 1522
-    Port 1 should have rx packets above 1522
+Common steps
+------------
 
-Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp traffic
-======================================================================================
+Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
+-------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test the split ring path and measure TCP traffic throughput between 2 VMs.
 
-1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command::
+1. Launch the Vhost sample on socket 0 by below commands::
 
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:80:04.0]' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@0000:80:04.1]' \
-    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024
-    testpmd>vhost enable tx all
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
+    -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
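+
+Once testpmd is up, the two vhost-user sockets should exist on the host; a
+quick hedged check (the paths follow the iface= values above)::
+
+    # ls -l ./vhost-net0 ./vhost-net1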
 
-2. Launch VM1 and VM2::
+2. Launch VM1 and VM2 on socket 1::
 
     taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -136,7 +110,8 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
 
     taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -147,7 +122,8 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
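+
+Each guest is reachable from the host through the hostfwd rules above; a hedged
+login example, assuming SSH access is enabled in the guest images::
+
+    VM1: ssh -p 6002 root@127.0.0.1
+    VM2: ssh -p 6003 root@127.0.0.1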
 
 3. On VM1, set virtio device IP and run arp protocol::
 
@@ -159,7 +135,7 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t
     ifconfig ens5 1.1.1.8
     arp -s 1.1.1.2 52:54:00:00:00:01
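+
+Before step 5, the static ARP path can be sanity-checked; a hedged example
+using the addresses configured above::
+
+    Under VM2, run: `ping -c 3 1.1.1.2`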
 
-5. Check the iperf performance between two VMs by below commands::
+5. Check the iperf performance with different packet sizes between the two VMs using the commands below::
 
     Under VM1, run: `iperf -s -i 1`
     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
@@ -170,18 +146,16 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
 
-7. Check throughput and compare with case1, CBDMA enable performance should larger than w/o CBDMA performance when cross socket.
-
-Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic
-=========================================================================
+Test Case 2: VM2VM split ring vhost-user/virtio-net test with udp traffic
+-------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test the split ring path and measure UDP traffic throughput between 2 VMs.
 
 1. Launch the Vhost sample by below commands::
 
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
     --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
 
 2. Launch VM1 and VM2::
@@ -195,7 +169,8 @@ Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -206,7 +181,8 @@ Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
@@ -229,13 +205,13 @@ Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
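+
+Note: in this UDP case the iperf pair runs in UDP mode; a hedged sketch of the
+client/server commands, assuming the same 1.1.1.x addresses as the TCP case
+(bandwidth and duration are illustrative)::
+
+    Under VM1, run: `iperf -s -u -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -u -b 1G -i 1 -t 60`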
 
-Test Case 4: Check split ring virtio-net device capability
-==========================================================
+Test Case 3: Check split ring virtio-net device capability
+----------------------------------------------------------
+This case uses testpmd and QEMU to check the split ring virtio-net device capability in 2 VMs.
 
 1. Launch the Vhost sample by below commands::
 
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
     --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
     -- -i --nb-cores=2 --txd=1024 --rxd=1024
@@ -252,7 +228,8 @@ Test Case 4: Check split ring virtio-net device capability
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -263,7 +240,8 @@ Test Case 4: Check split ring virtio-net device capability
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
 
 3. Check UFO and TSO offload status on for the Virtio-net driver on VM1 and VM2::
 
@@ -279,247 +257,13 @@ Test Case 4: Check split ring virtio-net device capability
     tx-tcp-ecn-segmentation: on
     tx-tcp6-segmentation: on
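+
+A hedged way to collect the status above inside each guest, assuming the virtio
+interface is named ens5::
+
+    ethtool -k ens5 | grep -E 'ufo|segmentation'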
 
-Test Case 5: VM2VM split ring vhost-user/virtio-net mergeable 8 queues CBDMA enable test with large packet payload valid check
-==============================================================================================================================
-
-1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
-    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>vhost enable tx all
-    testpmd>start
-
-2. Launch VM1 and VM2 using qemu::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0,server \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
-
-    taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1,server \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-6. Check the iperf performance between two VMs by below commands::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-7. Quit and relaunch vhost w/ diff CBDMA channels::
-
-     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-     --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
-     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
-     --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-     testpmd>vhost enable tx all
-     testpmd>start
-
-8. Rerun step 5-6.
-
-9. Quit and relaunch vhost w/ iova=pa::
-
-     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-     --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
-     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
-     --iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-     testpmd>vhost enable tx all
-     testpmd>start
-
-10. Rerun step 5-6.
-
-11. Quit and relaunch vhost w/o CBDMA channels::
-
-     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-     --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' \
-     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \
-     -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
-     testpmd>vhost enable tx all
-     testpmd>start
-
-12. On VM1, set virtio device::
-
-      ethtool -L ens5 combined 4
-
-13. On VM2, set virtio device::
-
-      ethtool -L ens5 combined 4
-
-14. Scp 1MB file form VM1 to VM2::
-
-      Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-15. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher::
-
-      Under VM1, run: `iperf -s -i 1`
-      Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-16. Quit and relaunch vhost with 1 queues::
-
-     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-     --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' \
-     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \
-     -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
-     testpmd>vhost enable tx all
-     testpmd>start
-
-17. On VM1, set virtio device::
-
-      ethtool -L ens5 combined 1
-
-18. On VM2, set virtio device::
-
-      ethtool -L ens5 combined 1
-
-19. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::
-
-      Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-20. Check the iperf performance, ensure queue0 can work from vhost side::
-
-      Under VM1, run: `iperf -s -i 1`
-      Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-Test Case 6: VM2VM split ring vhost-user/virtio-net non-mergeable 8 queues CBDMA enable test with large packet payload valid check
-==================================================================================================================================
-
-1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
-    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>vhost enable tx all
-    testpmd>start
-
-2. Launch VM1 and VM2 using qemu::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0,server \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
-
-    taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1,server \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-6. Check the iperf performance between two VMs by below commands::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-7. Quit and relaunch vhost ports w/o CBDMA channels::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \
-    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>vhost enable tx all
-    testpmd>start
-
-8. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-9. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-10. Quit and relaunch vhost ports with 1 queues::
-
-     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-     --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \
-     -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
-     testpmd>vhost enable tx all
-     testpmd>start
-
-11. On VM1, set virtio device::
-
-      ethtool -L ens5 combined 1
-
-12. On VM2, set virtio device::
-
-      ethtool -L ens5 combined 1
-
-13. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::
-
-     Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-14. Check the iperf performance, ensure queue0 can work from vhost side::
-
-     Under VM1, run: `iperf -s -i 1`
-     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
-==========================================================================
+Test Case 4: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
+--------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test the packed ring path and measure TCP traffic throughput between 2 VMs.
 
 1. Launch the Vhost sample by below commands::
 
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
     --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
     -- -i --nb-cores=2 --txd=1024 --rxd=1024
@@ -536,7 +280,8 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,\
+    mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -547,7 +292,8 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,\
+    mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
@@ -570,73 +316,13 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
 
-Test Case 8: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic
-=======================================================================================
-
-1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@0000:00:04.1]' \
-    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024
-    testpmd>vhost enable tx all
-    testpmd>start
-
-2. Launch VM1 and VM2 on socket 1 with qemu::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
-
-    taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocol::
-
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocol::
-
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Check the iperf performance between two VMs by below commands::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-6. Check 2VMs can receive and send big packets to each other::
-
-    testpmd>show port xstats all
-    Port 0 should have tx packets above 1522
-    Port 1 should have rx packets above 1522
-
-7. Check throughput and compare with case6, CBDMA enable performance should larger than w/o CBDMA performance when cross socket.
-
-Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
-==========================================================================
+Test Case 5: VM2VM packed ring vhost-user/virtio-net test with udp traffic
+--------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test the packed ring path and measure UDP traffic throughput between 2 VMs.
 
 1. Launch the Vhost sample by below commands::
 
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
     --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
     -- -i --nb-cores=2 --txd=1024 --rxd=1024
@@ -653,7 +339,8 @@ Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -664,7 +351,8 @@ Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
@@ -687,13 +375,13 @@ Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522
 
-Test Case 10: Check packed ring virtio-net device capability
-============================================================
+Test Case 6: Check packed ring virtio-net device capability
+-----------------------------------------------------------
+This case uses testpmd and QEMU to check the packed ring virtio-net device capability in 2 VMs.
 
 1. Launch the Vhost sample by below commands::
 
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
     --vdev 'net_vhost1,iface=vhost-net1,queues=1' \
     -- -i --nb-cores=2 --txd=1024 --rxd=1024
@@ -710,7 +398,8 @@ Test Case 10: Check packed ring virtio-net device capability
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -721,7 +410,8 @@ Test Case 10: Check packed ring virtio-net device capability
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,host_ufo=on,guest_ufo=on,guest_ecn=on,packed=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,host_ufo=on,guest_ufo=on,guest_ecn=on,packed=on -vnc :12
 
 3. Check UFO and TSO offload status on for the Virtio-net driver on VM1 and VM2::
 
@@ -736,245 +426,3 @@ Test Case 10: Check packed ring virtio-net device capability
     tx-tcp-segmentation: on
     tx-tcp-ecn-segmentation: on
     tx-tcp6-segmentation: on
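+
+Offloads can also be toggled at runtime to confirm the driver honours the
+setting; a hedged example, again assuming the interface is named ens5::
+
+    ethtool -K ens5 tso off
+    ethtool -k ens5 | grep tx-tcp-segmentation
+    ethtool -K ens5 tso on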
-
-Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test with large packet payload valid check
-=====================================================================================================================
-
-1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
-    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>vhost enable tx all
-    testpmd>start
-
-2. Launch VM1 and VM2 with qemu::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
-
-    taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-6. Check the iperf performance between two VMs by below commands::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-7. Rerun step 5-6 five times.
-
-Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check
-=========================================================================================================================
-
-1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
-    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>vhost enable tx all
-    testpmd>start
-
-2. Launch VM1 and VM2::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
-
-    taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-6. Check the iperf performance between two VMs by below commands::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-7. Rerun step 5-6 five times.
-
-Test Case 13: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic when set iova=pa
-=========================================================================================================
-
-1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@0000:00:04.1]' \
-    --iova=pa -- -i --nb-cores=2 --txd=1024 --rxd=1024
-    testpmd>vhost enable tx all
-    testpmd>start
-
-2. Launch VM1 and VM2 on socket 1 with qemu::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
-
-    taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocol::
-
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocol::
-
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-6. Check the iperf performance between two VMs by below commands::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-7. Check 2VMs can receive and send big packets to each other::
-
-    testpmd>show port xstats all
-    Port 0 should have tx packets above 1522
-    Port 1 should have rx packets above 1522
-
-Test Case 14: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable and PA mode test with large packet payload valid check
-=================================================================================================================================
-
-1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command::
-
-    rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
-    --iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>vhost enable tx all
-    testpmd>start
-
-2. Launch VM1 and VM2 with qemu::
-
-    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
-
-    taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
-
-3. On VM1, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
-
-4. On VM2, set virtio device IP and run arp protocol::
-
-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
-
-5. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-6. Check the iperf performance between two VMs by below commands::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-7. Rerun step 5-6 five times.
-- 
2.25.1

