test suite reviews and discussions
* [dts][PATCH V2 0/3] add vhost_async_robust_cbdma
@ 2023-03-28  1:58 Wei Ling
  2023-03-28  1:58 ` [dts][PATCH V2 1/3] test_plans/index: add vhost_async_robust_cbdma_test_plan Wei Ling
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Wei Ling @ 2023-03-28  1:58 UTC (permalink / raw)
  To: dts; +Cc: Wei Ling

Add a new test plan and test suite for testing the robustness of the
Vhost asynchronous data path with the CBDMA driver.

Wei Ling (3):
  test_plans/index: add vhost_async_robust_cbdma_test_plan
  test_plans/vhost_async_robust_cbdma: add new testplan
  tests/vhost_async_robust_cbdma: add new testsuite

 test_plans/index.rst                          |   1 +
 .../vhost_async_robust_cbdma_test_plan.rst    | 281 +++++++
 tests/TestSuite_vhost_async_robust_cbdma.py   | 696 ++++++++++++++++++
 3 files changed, 978 insertions(+)
 create mode 100644 test_plans/vhost_async_robust_cbdma_test_plan.rst
 create mode 100644 tests/TestSuite_vhost_async_robust_cbdma.py

-- 
2.25.1



* [dts][PATCH V2 1/3] test_plans/index: add vhost_async_robust_cbdma_test_plan
  2023-03-28  1:58 [dts][PATCH V2 0/3] add vhost_async_robust_cbdma Wei Ling
@ 2023-03-28  1:58 ` Wei Ling
  2023-03-28  1:58 ` [dts][PATCH V2 2/3] test_plans/vhost_async_robust_cbdma: add new testplan Wei Ling
  2023-03-28  1:58 ` [dts][PATCH V2 3/3] tests/vhost_async_robust_cbdma: add new testsuite Wei Ling
  2 siblings, 0 replies; 6+ messages in thread
From: Wei Ling @ 2023-03-28  1:58 UTC (permalink / raw)
  To: dts; +Cc: Wei Ling

Add the new vhost_async_robust_cbdma_test_plan to test_plans/index.rst.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 test_plans/index.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index 0770a935..cc5c43fe 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -155,6 +155,7 @@ The following are the test plans for the DPDK DTS automated test system.
     speed_capabilities_test_plan
     vhost_cbdma_test_plan
     vhost_dsa_test_plan
+    vhost_async_robust_cbdma_test_plan
     vhost_user_interrupt_test_plan
     vhost_user_interrupt_cbdma_test_plan
     sriov_kvm_test_plan
-- 
2.25.1



* [dts][PATCH V2 2/3] test_plans/vhost_async_robust_cbdma: add new testplan
  2023-03-28  1:58 [dts][PATCH V2 0/3] add vhost_async_robust_cbdma Wei Ling
  2023-03-28  1:58 ` [dts][PATCH V2 1/3] test_plans/index: add vhost_async_robust_cbdma_test_plan Wei Ling
@ 2023-03-28  1:58 ` Wei Ling
  2023-03-28  1:58 ` [dts][PATCH V2 3/3] tests/vhost_async_robust_cbdma: add new testsuite Wei Ling
  2 siblings, 0 replies; 6+ messages in thread
From: Wei Ling @ 2023-03-28  1:58 UTC (permalink / raw)
  To: dts; +Cc: Wei Ling

Add a new test plan for testing the robustness of the Vhost asynchronous data path with the CBDMA driver.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 .../vhost_async_robust_cbdma_test_plan.rst    | 281 ++++++++++++++++++
 1 file changed, 281 insertions(+)
 create mode 100644 test_plans/vhost_async_robust_cbdma_test_plan.rst

diff --git a/test_plans/vhost_async_robust_cbdma_test_plan.rst b/test_plans/vhost_async_robust_cbdma_test_plan.rst
new file mode 100644
index 00000000..6e17ed79
--- /dev/null
+++ b/test_plans/vhost_async_robust_cbdma_test_plan.rst
@@ -0,0 +1,281 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2023 Intel Corporation
+
+=================================================
+vhost async data-path robust with cbdma test plan
+=================================================
+
+Description
+===========
+
+This document provides the test plan for testing the robustness of the Vhost
+asynchronous data path with the CBDMA driver.
+
+CBDMA is a kind of DMA engine. The Vhost asynchronous data path leverages DMA
+devices to offload memory copies from the CPU, and it is implemented in an
+asynchronous way. As a result, large packet copies can be accelerated by the
+DMA engine, and vhost can free up CPU cycles for higher-level functions.
+
+The asynchronous data path is enabled per tx/rx queue, and users need
+to specify the DMA device used by each tx/rx queue. Each tx/rx queue
+can use only one DMA device, but one DMA device can be shared
+among multiple tx/rx queues of different vhostpmd ports.
+
+Two PMD parameters are added:
+
+- dmas: specify the DMA device used by a tx/rx queue
+  (default: no queue enables the asynchronous data path)
+- dma-ring-size: DMA ring size (default: 4096)
+
+Here is an example:
+--vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00:01.0;rxq0@0000:00:01.1],dma-ring-size=4096'
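+
+Since one DMA device can be shared among tx/rx queues of different vhostpmd
+ports, a configuration like the following is also valid (a minimal sketch;
+the PCI addresses are placeholders)::
+
+    --vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00:01.0;rxq0@0000:00:01.0]' \
+    --vdev 'eth_vhost1,iface=./s1,dmas=[txq0@0000:00:01.0;rxq0@0000:00:01.0]'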
+
+Test case
+=========
+
+Common steps
+------------
+1. Bind 1 NIC port and CBDMA devices to vfio-pci::
+
+    <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
+    <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>
+
+    For example, bind 1 NIC port and 2 CBDMA devices::
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:18.0
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0 0000:00:04.1
+
+2. Send imix packets [64,1518] to NIC by traffic generator::
+
+    The TCP imix packets include packet sizes [64, 128, 256, 512, 1024, 1518], and the packet format is as follows.
+    +-------------+-------------+-------------+-------------+
+    | MAC         | MAC         | IPV4        | IPV4        |
+    | Src address | Dst address | Src address | Dst address |
+    |-------------|-------------|-------------|-------------|
+    | Random MAC  | Virtio mac  | Random IP   | Random IP   |
+    +-------------+-------------+-------------+-------------+
+    All the packets in this test plan use the Virtio mac: 00:11:22:33:44:10.
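+
+    For reference, one such stream could be built with a minimal Scapy sketch
+    (the payload size and output path are illustrative; the suite itself uses
+    the DTS packet framework):
+
+        from scapy.all import IP, TCP, Ether, RandIP, RandMAC, Raw, wrpcap
+
+        virtio_mac = "00:11:22:33:44:10"
+        frame_size = 64
+        # eth(14) + ip(20) + tcp(20) headers plus the 4-byte FCS on the wire
+        payload = b"\x01" * (frame_size - 58)
+        pkt = (
+            Ether(src=RandMAC(), dst=virtio_mac)
+            / IP(src=RandIP(), dst=RandIP())
+            / TCP()
+            / Raw(payload)
+        )
+        wrpcap("/tmp/imix_64.pcap", [pkt])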
+
+Test Case 1: PVP virtio-user quit test
+--------------------------------------
+This case is designed to test if virtio-user can quit normally regardless of whether the back-end stops sending packets.
+
+1. Bind 1 NIC port and 1 CBDMA device to vfio-pci as common step 1.
+
+2. Launch vhost by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+	-a 0000:18:00.1 -a 0000:00:04.0 \
+	--vdev 'net_vhost0,iface=./vhost_net0,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
+	--iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+	testpmd> set fwd mac
+	testpmd> start
+
+3. Launch virtio-user with inorder mergeable path::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=1,queues=1 \
+	-- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+	testpmd> set fwd csum
+	testpmd> start
+
+4. Send TCP imix packets [64,1518] from packet generator as common step 2.
+
+5. Quit virtio-user and relaunch virtio-user as step 3 while sending packets from packet generator.
+
+6. Stop vhost port, then quit virtio-user and relaunch virtio-user as step 3 while sending packets from packet generator.
+
+7. Stop sending packets from packet generator, then quit virtio-user and vhost.
+
+Test Case 2: PVP vhost-user quit test
+-------------------------------------
+This case is designed to test if vhost-user can quit normally regardless of whether the back-end stops sending packets.
+
+1. Bind 1 NIC port and 1 CBDMA device to vfio-pci as common step 1.
+
+2. Launch vhost by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+	-a 0000:18:00.1 -a 0000:00:04.0 \
+	--vdev 'net_vhost0,iface=./vhost_net0,queues=1,client=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
+	--iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+	testpmd> set fwd mac
+	testpmd> start
+
+3. Launch virtio-user with inorder mergeable path::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=1,queues=1,server=1 \
+	-- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+	testpmd> set fwd csum
+	testpmd> start
+
+4. Send TCP imix packets [64,1518] from packet generator as common step 2.
+
+5. Quit vhost-user and relaunch vhost-user as step 2 while sending packets from packet generator.
+
+6. Stop sending packets from packet generator, then quit vhost-user and virtio-user.
+
+Test Case 3: PVP vhost async test with redundant device parameters
+------------------------------------------------------------------
+This case is designed to test if vhostpmd can work normally when more DMA devices are bound and passed to the application than are used in the dmas parameter.
+
+1. Bind 1 NIC port and 4 CBDMA devices to vfio-pci as common step 1.
+
+2. Launch vhost by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+	-a 0000:18:00.1 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+	--vdev 'net_vhost0,iface=./vhost_net0,queues=1,client=1,dmas=[txq0@0000:00:04.1;rxq0@0000:00:04.1]' \
+	--iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+	testpmd> set fwd mac
+	testpmd> start
+
+3. Launch virtio-user with inorder mergeable path::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=1,queues=1,server=1 \
+	-- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+	testpmd> set fwd csum
+	testpmd> start
+
+4. Send imix packets [64,1518] from packet generator as common step 2, and check the throughput.
+
+Test Case 4: Loopback vhost async test with each queue using 2 DMA devices
+--------------------------------------------------------------------------
+Since each tx/rx queue can use only one DMA device, this case is designed to test if vhostpmd can work normally when each queue is configured with 2 DMA devices.
+
+1. Bind 3 CBDMA devices to vfio-pci as common step 1.
+
+2. Launch vhost by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \
+	--vdev 'net_vhost0,iface=./vhost_net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq0@0000:00:04.1;rxq0@0000:00:04.1;rxq0@0000:00:04.2]' \
+	--iova=va -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd> set fwd mac
+
+3. Launch virtio-user with inorder mergeable path::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=1,queues=2,server=1 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd> set fwd csum
+	testpmd> start
+
+4. Send packets from vhost-user testpmd, check the throughput::
+
+	testpmd>set txpkts 1024
+	testpmd>start tx_first 32
+	testpmd>show port stats all
+
+Test Case 5: Loopback vhost async test with dmas parameters out of order
+------------------------------------------------------------------------
+This case is designed to test if vhostpmd can work normally when the dmas parameters are out of order.
+
+1. Bind 2 CBDMA devices to vfio-pci as common step 1.
+
+2. Launch vhost by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+	-a 0000:00:04.0 -a 0000:00:04.1 \
+	--vdev 'net_vhost0,iface=./vhost_net0,queues=4,client=1,dmas=[rxq3@0000:00:04.1;txq0@0000:00:04.0;rxq1@0000:00:04.0;txq2@0000:00:04.1]' \
+	--iova=va -- -i --nb-cores=1 --txq=4 --rxq=4 --txd=1024 --rxd=1024
+	testpmd> set fwd mac
+
+3. Launch virtio-user with inorder mergeable path::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=1,queues=4,server=1 \
+	-- -i --nb-cores=1 --txq=4 --rxq=4 --txd=1024 --rxd=1024
+	testpmd> set fwd csum
+	testpmd> start
+
+4. Send packets from vhost-user testpmd, check the throughput::
+
+	testpmd>set txpkts 1024
+	testpmd>start tx_first 32
+	testpmd>show port stats all
+
+Test Case 6: VM2VM split and packed ring mergeable path with cbdma enable and server mode
+-----------------------------------------------------------------------------------------
+This case tests that split and packed rings with CBDMA can work normally when the front-end changes from virtio-net to virtio-pmd.
+
+1. Bind 16 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch the testpmd with 2 vhost ports by below commands::
+
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
+	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.1;rxq2@0000:00:04.2;rxq3@0000:00:04.3;rxq4@0000:00:04.4;rxq5@0000:00:04.5;rxq6@0000:00:04.6;rxq7@0000:00:04.7]' \
+	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.1;rxq2@0000:80:04.2;rxq3@0000:80:04.3;rxq4@0000:80:04.4;rxq5@0000:80:04.5;rxq6@0000:80:04.6;rxq7@0000:80:04.7]' \
+	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+	testpmd> start
+
+3. Launch VM1 and VM2::
+
+	taskset -c 6-16 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \
+	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
+	-chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+	-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+	-monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+	-chardev socket,id=char0,path=./vhost-net0,server \
+	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+	taskset -c 17-27 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \
+	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
+	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+	-chardev socket,id=char0,path=./vhost-net1,server \
+	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
+
+4. On VM1, set the virtio device IP and add a static ARP entry::
+
+	ethtool -L ens5 combined 8
+	ifconfig ens5 1.1.1.2
+	arp -s 1.1.1.8 52:54:00:00:00:02
+
+5. On VM2, set the virtio device IP and add a static ARP entry::
+
+	ethtool -L ens5 combined 8
+	ifconfig ens5 1.1.1.8
+	arp -s 1.1.1.2 52:54:00:00:00:01
+
+6. Scp a 1MB file from VM1 to VM2::
+
+	Under VM1, run: `scp <xxx> root@1.1.1.8:/`, where <xxx> is the file name
+
+7. Check the iperf performance between two VMs by below commands::
+
+	Under VM1, run: `iperf -s -i 1`
+	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+
+8. On VM1 and VM2, bind virtio device with vfio-pci driver::
+
+	modprobe vfio
+	modprobe vfio-pci
+	echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
+	./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0
+
+9. Launch testpmd in VM1::
+
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd> set fwd mac
+	testpmd> start
+
+10. Launch testpmd in VM2 and send imix pkts, check that imix packets can be looped between the two VMs for 1 minute::
+
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd> set fwd mac
+	testpmd> set txpkts 64,256,512
+	testpmd> start tx_first 32
+	testpmd> show port stats all
+
+11. Rerun step 4-10.
\ No newline at end of file
-- 
2.25.1



* [dts][PATCH V2 3/3] tests/vhost_async_robust_cbdma: add new testsuite
  2023-03-28  1:58 [dts][PATCH V2 0/3] add vhost_async_robust_cbdma Wei Ling
  2023-03-28  1:58 ` [dts][PATCH V2 1/3] test_plans/index: add vhost_async_robust_cbdma_test_plan Wei Ling
  2023-03-28  1:58 ` [dts][PATCH V2 2/3] test_plans/vhost_async_robust_cbdma: add new testplan Wei Ling
@ 2023-03-28  1:58 ` Wei Ling
  2023-03-31  3:56   ` He, Xingguang
  2023-04-11  8:48   ` lijuan.tu
  2 siblings, 2 replies; 6+ messages in thread
From: Wei Ling @ 2023-03-28  1:58 UTC (permalink / raw)
  To: dts; +Cc: Wei Ling

Add a new test suite for testing the robustness of the Vhost asynchronous data path with the CBDMA driver.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 tests/TestSuite_vhost_async_robust_cbdma.py | 696 ++++++++++++++++++++
 1 file changed, 696 insertions(+)
 create mode 100644 tests/TestSuite_vhost_async_robust_cbdma.py

diff --git a/tests/TestSuite_vhost_async_robust_cbdma.py b/tests/TestSuite_vhost_async_robust_cbdma.py
new file mode 100644
index 00000000..fda7cdfe
--- /dev/null
+++ b/tests/TestSuite_vhost_async_robust_cbdma.py
@@ -0,0 +1,696 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 Intel Corporation
+#
+
+import _thread
+import re
+import time
+
+import framework.utils as utils
+from framework.packet import Packet
+from framework.pktgen import PacketGeneratorHelper
+from framework.pmd_output import PmdOutput
+from framework.settings import HEADER_SIZE
+from framework.test_case import TestCase
+from framework.virt_common import VM
+
+from .virtio_common import basic_common as BC
+from .virtio_common import cbdma_common as CC
+
+
+class TestVhostAsyncRobustCbdma(TestCase):
+    def set_up_all(self):
+        """
+        Run at the start of each test suite.
+        """
+        self.dut_ports = self.dut.get_ports()
+        self.verify(len(self.dut_ports) >= 1, "Insufficient ports for testing")
+        self.vm_num = 2
+        self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
+        self.core_list = self.dut.get_core_list("all", self.ports_socket)
+        self.vhost_user_core = self.core_list[0:5]
+        self.virtio_user0_core = self.core_list[6:11]
+        self.out_path = "/tmp"
+        out = self.tester.send_expect("ls -d %s" % self.out_path, "# ")
+        if "No such file or directory" in out:
+            self.tester.send_expect("mkdir -p %s" % self.out_path, "# ")
+        self.base_dir = self.dut.base_dir.replace("~", "/root")
+        # create an instance to set stream field setting
+        self.pktgen_helper = PacketGeneratorHelper()
+        self.vhost_user = self.dut.new_session(suite="vhost-user")
+        self.virtio_user0 = self.dut.new_session(suite="virtio-user0")
+        self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user)
+        self.virtio_user0_pmd = PmdOutput(self.dut, self.virtio_user0)
+        self.path = self.dut.apps_name["test-pmd"]
+        self.testpmd_name = self.path.split("/")[-1]
+        self.virtio_mac = "00:11:22:33:44:10"
+        self.headers_size = HEADER_SIZE["eth"] + HEADER_SIZE["ip"] + HEADER_SIZE["tcp"]
+        self.CC = CC(self)
+        self.BC = BC(self)
+
+    def set_up(self):
+        """
+        Run before each test case.
+        """
+        self.flag = None
+        self.vm_dut = []
+        self.vm = []
+        self.dut.send_expect("rm -rf ./vhost-net*", "#")
+        self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#")
+        self.dut.send_expect("killall -s INT qemu-system-x86_64", "#")
+        self.CC.bind_all_cbdma_to_kernel()
+
+    @property
+    def check_2M_env(self):
+        out = self.dut.send_expect(
+            "cat /proc/meminfo |grep Hugepagesize|awk '{print($2)}'", "# "
+        )
+        return out == "2048"
+
+    def start_vhost_user_testpmd(
+        self,
+        cores,
+        param="",
+        eal_param="",
+        ports="",
+        set_fwd_mode=True,
+        exec_start=True,
+    ):
+        """
+        launch the vhost-user testpmd
+        """
+        self.vhost_user_pmd.start_testpmd(
+            cores=cores,
+            eal_param=eal_param,
+            param=param,
+            ports=ports,
+            prefix="vhost-user",
+            fixed_prefix=True,
+        )
+        if set_fwd_mode:
+            self.vhost_user_pmd.execute_cmd("set fwd mac")
+        if exec_start:
+            self.vhost_user_pmd.execute_cmd("start")
+
+    def start_virtio_user0_testpmd(self, cores, eal_param="", param=""):
+        """
+        launch the virtio-user testpmd connected to vhost-net0
+        """
+        if self.check_2M_env:
+            eal_param += " --single-file-segments"
+        self.virtio_user0_pmd.start_testpmd(
+            cores=cores,
+            eal_param=eal_param,
+            param=param,
+            no_pci=True,
+            prefix="virtio-user0",
+            fixed_prefix=True,
+        )
+        self.virtio_user0_pmd.execute_cmd("set fwd csum")
+        self.virtio_user0_pmd.execute_cmd("start")
+
+    def start_to_send_packets(self, duration):
+        """
+        Send imix packets with the packet generator and record the throughput in self.flag
+        """
+        frame_sizes = [64, 128, 256, 512, 1024, 1518]
+        tgenInput = []
+        for frame_size in frame_sizes:
+            payload_size = frame_size - self.headers_size
+            port = self.tester.get_local_port(self.dut_ports[0])
+            fields_config = {
+                "ip": {
+                    "src": {"action": "random"},
+                },
+            }
+            pkt = Packet()
+            pkt.assign_layers(["ether", "ipv4", "tcp", "raw"])
+            pkt.config_layers(
+                [
+                    ("ether", {"dst": "%s" % self.virtio_mac}),
+                    ("ipv4", {"src": "1.1.1.1"}),
+                    ("raw", {"payload": ["01"] * int("%d" % payload_size)}),
+                ]
+            )
+            pkt.save_pcapfile(
+                self.tester,
+                "%s/%s_%s.pcap" % (self.out_path, self.suite_name, frame_size),
+            )
+            tgenInput.append(
+                (
+                    port,
+                    port,
+                    "%s/%s_%s.pcap" % (self.out_path, self.suite_name, frame_size),
+                )
+            )
+
+        self.tester.pktgen.clear_streams()
+        streams = self.pktgen_helper.prepare_stream_from_tginput(
+            tgenInput, 100, fields_config, self.tester.pktgen
+        )
+        traffic_opt = {"delay": 5, "duration": duration, "rate": 100}
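+        # measure_throughput() returns (bps, pps); keep pps in self.flag so the
+        # main thread can both check the rate and detect that the stream finished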
+        _, self.flag = self.tester.pktgen.measure_throughput(
+            stream_ids=streams, options=traffic_opt
+        )
+
+    def calculate_avg_throughput(self, pmd, reg="Tx-pps"):
+        """
+        calculate the average throughput
+        """
+        results = 0.0
+        pmd.execute_cmd("show port stats 0", "testpmd>", 60)
+        time.sleep(5)
+        pmd.execute_cmd("show port stats 0", "testpmd>", 60)
+        for _ in range(10):
+            out = pmd.execute_cmd("show port stats 0", "testpmd>", 60)
+            time.sleep(5)
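+            # parse e.g. "Tx-pps:     12345678" out of the port stats output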
+            lines = re.search("%s:\s*(\d*)" % reg, out)
+            result = lines.group(1)
+            results += float(result)
+        Mpps = results / (1000000 * 10)
+        self.logger.info("vhost-user testpmd port 0 Tx-pps: %s" % Mpps)
+        self.verify(Mpps > 0, "port can not receive packets")
+        return Mpps
+
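+    # Editorial sketch (hypothetical helper, not part of the DTS framework):
+    # the relaunch checks below repeat the same 5% tolerance comparison, which
+    # could be factored out like this.
+    def verify_throughput_within_tolerance(
+        self, result_throughput, expected_throughput, tolerance=0.05
+    ):
+        gap = expected_throughput * -tolerance
+        delta = result_throughput - expected_throughput
+        self.logger.info("Accepted tolerance (Mpps): %f" % gap)
+        self.logger.info("Throughput difference (Mpps): %f" % delta)
+        self.verify(
+            result_throughput > expected_throughput + gap,
+            "result_throughput: %s is less than the expected_throughput: %s"
+            % (result_throughput, expected_throughput),
+        )
+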
+    def check_packets_after_relaunch_virtio_user_testpmd(
+        self, duration, cores, eal_param="", param=""
+    ):
+        # ixia sends packets for a time equal to the duration
+        start_time = time.time()
+        _thread.start_new_thread(self.start_to_send_packets, (duration,))
+        # wait for ixia to begin sending packets
+        time.sleep(10)
+        if time.time() - start_time > duration:
+            self.logger.error(
+                "The ixia has stop to send packets, please change the delay time of ixia"
+            )
+            return False
+        # get the throughput as the expected value before relaunching the virtio-user0 testpmd
+        expected_throughput = self.calculate_avg_throughput(
+            pmd=self.vhost_user_pmd, reg="Tx-pps"
+        )
+        # quit and relaunch virtio-user0 testpmd
+        self.logger.info(
+            "quit and relaunch virtio-user0 testpmd during the pktgen sending packets"
+        )
+        self.virtio_user0_pmd.quit()
+        self.start_virtio_user0_testpmd(cores=cores, eal_param=eal_param, param=param)
+        result_throughput = self.calculate_avg_throughput(
+            pmd=self.vhost_user_pmd, reg="Tx-pps"
+        )
+        # delta value and accepted tolerance in percentage
+        delta = result_throughput - expected_throughput
+        gap = expected_throughput * -0.05
+        delta = float(delta)
+        gap = float(gap)
+        self.logger.info("Accept tolerance are (Mpps) %f" % gap)
+        self.logger.info("Throughput Difference are (Mpps) %f" % delta)
+        self.verify(
+            (result_throughput > expected_throughput + gap),
+            "result_throughput: %s is less than the expected_throughput: %s"
+            % (result_throughput, result_throughput),
+        )
+        # stop vhost-user port then quit and relaunch virtio-user0 testpmd
+        self.logger.info(
+            "stop vhost-user port then quit and relaunch virtio-user0 testpmd during the pktgen sending packets"
+        )
+        self.vhost_user_pmd.execute_cmd("stop")
+        self.virtio_user0_pmd.quit()
+        self.start_virtio_user0_testpmd(cores=cores, eal_param=eal_param, param=param)
+        self.vhost_user_pmd.execute_cmd("start")
+        # delta value and accepted tolerance in percentage
+        result_throughput = self.calculate_avg_throughput(
+            pmd=self.vhost_user_pmd, reg="Tx-pps"
+        )
+        delta = result_throughput - expected_throughput
+        gap = expected_throughput * -0.05
+        delta = float(delta)
+        gap = float(gap)
+        self.logger.info("Accept tolerance are (Mpps) %f" % gap)
+        self.logger.info("Throughput Difference are (Mpps) %f" % delta)
+        self.verify(
+            (result_throughput > expected_throughput + gap),
+            "result_throughput: %s is less than the expected_throughput: %s"
+            % (result_throughput, result_throughput),
+        )
+        # wait ixia thread exit
+        self.logger.info("wait the thread of ixia to exit")
+        while self.flag is None:
+            time.sleep(5)
+        return True
+
+    def check_packets_after_relaunch_vhost_user_testpmd(
+        self, duration, cores, eal_param="", param="", ports=""
+    ):
+        # ixia sends packets for a time equal to the duration
+        start_time = time.time()
+        _thread.start_new_thread(self.start_to_send_packets, (duration,))
+        # wait for ixia to begin sending packets
+        time.sleep(10)
+        if time.time() - start_time > duration:
+            self.logger.error(
+                "The ixia has stop to send packets, please change the delay time of ixia"
+            )
+            return False
+        # get the throughput as the expected value before relaunching the vhost-user testpmd
+        expected_throughput = self.calculate_avg_throughput(
+            pmd=self.vhost_user_pmd, reg="Tx-pps"
+        )
+        # quit and relaunch vhost-user testpmd
+        self.logger.info(
+            "quit and relaunch vhost-user testpmd during the pktgen sending packets"
+        )
+        self.vhost_user_pmd.quit()
+        self.start_vhost_user_testpmd(
+            cores=cores, eal_param=eal_param, param=param, ports=ports
+        )
+
+        result_throughput = self.calculate_avg_throughput(
+            pmd=self.vhost_user_pmd, reg="Tx-pps"
+        )
+        # delta value and accepted tolerance in percentage
+        delta = result_throughput - expected_throughput
+        gap = expected_throughput * -0.05
+        delta = float(delta)
+        gap = float(gap)
+        self.logger.info("Accept tolerance are (Mpps) %f" % gap)
+        self.logger.info("Throughput Difference are (Mpps) %f" % delta)
+        self.verify(
+            (result_throughput > expected_throughput + gap),
+            "result_throughput: %s is less than the expected_throughput: %s"
+            % (result_throughput, result_throughput),
+        )
+        # wait ixia thread exit
+        self.logger.info("wait the thread of ixia to exit")
+        while self.flag is None:
+            time.sleep(5)
+        return True
+
+    def start_vms(self):
+        """
+        start two VMs, each with one virtio device
+        """
+        for i in range(self.vm_num):
+            vm_dut = None
+            vm_info = VM(self.dut, "vm%d" % i, "vhost_sample")
+            vm_params = {}
+            vm_params["driver"] = "vhost-user"
+            vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i + ",server"
+            vm_params["opt_queue"] = self.queues
+            vm_params["opt_mac"] = "52:54:00:00:00:0%d" % (i + 1)
+            if i == 0:
+                vm_params[
+                    "opt_settings"
+                ] = "disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on"
+            else:
+                vm_params[
+                    "opt_settings"
+                ] = "disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on"
+            vm_info.set_vm_device(**vm_params)
+            try:
+                vm_dut = vm_info.start(bind_dev=False)
+                if vm_dut is None:
+                    raise Exception("Set up VM ENV failed")
+            except Exception as e:
+                print(utils.RED("Failure for %s" % str(e)))
+            self.verify(vm_dut is not None, "start vm failed")
+            self.vm_dut.append(vm_dut)
+            self.vm.append(vm_info)
+
+    def bind_dpdk_driver_in_2_vms(self):
+        for i in range(self.vm_num):
+            self.vm_dut[i].send_expect("modprobe vfio", "#")
+            self.vm_dut[i].send_expect("modprobe vfio-pci", "#")
+            self.vm_dut[i].send_expect(
+                "./usertools/dpdk-devbind.py --force --bind=vfio-pci %s"
+                % self.vm_dut[i].ports_info[0]["pci"],
+                "#",
+            )
+
+    def quit_testpmd_in_2_vms(self):
+        for i in range(self.vm_num):
+            self.vm_dut[i].send_expect("quit", "#")
+
+    def bind_kernel_driver_in_2_vms(self):
+        for i in range(self.vm_num):
+            self.vm_dut[i].send_expect(
+                "./usertools/dpdk-devbind.py --force --bind=virtio-pci %s"
+                % self.vm_dut[i].ports_info[0]["pci"],
+                "#",
+            )
+
+    def start_testpmd_in_vm(self, pmd):
+        """
+        launch the testpmd in vm
+        """
+        self.vm_cores = [1, 2]
+        param = "--tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024"
+        pmd.start_testpmd(cores=self.vm_cores, param=param)
+
+    def send_packets_from_vhost(self):
+        self.vhost_user_pmd.execute_cmd("set txpkts 1024")
+        self.vhost_user_pmd.execute_cmd("start tx_first 32")
+
+    def test_perf_pvp_virtio_user_quit(self):
+        """
+        Test Case 1: PVP virtio-user quit test
+        """
+        cdbmas = self.CC.bind_cbdma_to_dpdk(
+            cbdma_number=1, driver_name="vfio-pci", socket=self.ports_socket
+        )
+        dmas = "txq0@%s;rxq0@%s" % (cdbmas[0], cdbmas[0])
+        vhost_eal_param = (
+            "--vdev 'net_vhost0,iface=./vhost-net0,queues=1,dmas=[%s]' --iova=va" % dmas
+        )
+        vhost_param = "--nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024"
+        ports = cdbmas
+        ports.append(self.dut.ports_info[0]["pci"])
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_user_core,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+        )
+
+        virtio0_eal_param = f"--vdev=net_virtio_user0,mac={self.virtio_mac},path=./vhost-net0,mrg_rxbuf=1,in_order=1,queues=1"
+        virtio0_param = "--nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024"
+        self.start_virtio_user0_testpmd(
+            cores=self.virtio_user0_core,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        res = self.check_packets_after_relaunch_virtio_user_testpmd(
+            duration=180,
+            cores=self.virtio_user0_core,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+        self.verify(res is True, "Should increase the wait time of ixia")
+        self.quit_all_testpmd()
+
+    def test_perf_pvp_vhost_user_quit(self):
+        """
+        Test Case 2: PVP vhost-user quit test
+        """
+        cdbmas = self.CC.bind_cbdma_to_dpdk(
+            cbdma_number=1, driver_name="vfio-pci", socket=self.ports_socket
+        )
+        dmas = "txq0@%s;rxq0@%s" % (cdbmas[0], cdbmas[0])
+        vhost_eal_param = (
+            "--vdev 'net_vhost0,iface=./vhost-net0,queues=1,client=1,dmas=[%s]' --iova=va"
+            % dmas
+        )
+        vhost_param = "--nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024"
+        ports = cdbmas
+        ports.append(self.dut.ports_info[0]["pci"])
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_user_core,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+        )
+
+        virtio0_eal_param = f"--vdev=net_virtio_user0,mac={self.virtio_mac},path=./vhost-net0,mrg_rxbuf=1,in_order=1,queues=1,server=1"
+        virtio0_param = "--nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024"
+        self.start_virtio_user0_testpmd(
+            cores=self.virtio_user0_core,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        res = self.check_packets_after_relaunch_vhost_user_testpmd(
+            duration=180,
+            cores=self.vhost_user_core,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+        )
+        self.verify(res is True, "Should increase the wait time of ixia")
+        self.quit_all_testpmd()
+
+    def test_perf_pvp_vhost_async_test_with_redundant_device_parameters(self):
+        """
+        Test Case 3: PVP vhost async test with redundant device parameters
+        """
+        cdbmas = self.CC.bind_cbdma_to_dpdk(
+            cbdma_number=4, driver_name="vfio-pci", socket=self.ports_socket
+        )
+        dmas = "txq0@%s;rxq0@%s" % (cdbmas[1], cdbmas[1])
+        vhost_eal_param = (
+            "--vdev 'net_vhost0,iface=./vhost-net0,queues=1,client=1,dmas=[%s]' --iova=va"
+            % dmas
+        )
+        vhost_param = "--nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024"
+        ports = cdbmas
+        ports.append(self.dut.ports_info[0]["pci"])
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_user_core,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+        )
+
+        virtio0_eal_param = f"--vdev=net_virtio_user0,mac={self.virtio_mac},path=./vhost-net0,mrg_rxbuf=1,in_order=1,queues=1,server=1"
+        virtio0_param = "--nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024"
+        self.start_virtio_user0_testpmd(
+            cores=self.virtio_user0_core,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.start_to_send_packets(duration=60)
+        Mpps = self.flag / 1000000
+        self.verify(Mpps > 0, "pktgen can't receive packets from vhost-user")
+        self.quit_all_testpmd()
+
+    def test_loopback_vhost_async_test_with_each_queue_using_2_dma_devices(self):
+        """
+        Test Case 4: Loopback vhost async test with each queue using 2 DMA devices
+        """
+        cdbmas = self.CC.bind_cbdma_to_dpdk(
+            cbdma_number=3, driver_name="vfio-pci", socket=self.ports_socket
+        )
+        dmas = "txq0@%s;txq0@%s;rxq0@%s;rxq0@%s" % (
+            cdbmas[0],
+            cdbmas[1],
+            cdbmas[1],
+            cdbmas[2],
+        )
+        vhost_eal_param = (
+            "--vdev 'net_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[%s]' --iova=va"
+            % dmas
+        )
+        vhost_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024"
+        ports = cdbmas
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_user_core,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+            exec_start=False,
+        )
+
+        virtio0_eal_param = f"--vdev=net_virtio_user0,mac={self.virtio_mac},path=./vhost-net0,mrg_rxbuf=1,in_order=1,queues=2,server=1"
+        virtio0_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024"
+        self.start_virtio_user0_testpmd(
+            cores=self.virtio_user0_core,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+        self.send_packets_from_vhost()
+        self.calculate_avg_throughput(pmd=self.vhost_user_pmd, reg="Tx-pps")
+        self.quit_all_testpmd()
+
+    def test_loopback_vhost_async_test_with_dmas_parameters_out_of_order(self):
+        """
+        Test Case 5: Loopback vhost async test with dmas parameters out of order
+        """
+        cdbmas = self.CC.bind_cbdma_to_dpdk(
+            cbdma_number=2, driver_name="vfio-pci", socket=self.ports_socket
+        )
+        dmas = "rxq3@%s;txq0@%s;rxq1@%s;txq2@%s" % (
+            cdbmas[1],
+            cdbmas[0],
+            cdbmas[0],
+            cdbmas[1],
+        )
+        vhost_eal_param = (
+            "--vdev 'net_vhost0,iface=./vhost-net0,queues=4,client=1,dmas=[%s]' --iova=va"
+            % dmas
+        )
+        vhost_param = "--nb-cores=1 --txq=4 --rxq=4 --txd=1024 --rxd=1024"
+        ports = cdbmas
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_user_core,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+            exec_start=False,
+        )
+
+        virtio0_eal_param = f"--vdev=net_virtio_user0,mac={self.virtio_mac},path=./vhost-net0,mrg_rxbuf=1,in_order=1,queues=4,server=1"
+        virtio0_param = "--nb-cores=1 --txq=4 --rxq=4 --txd=1024 --rxd=1024"
+        self.start_virtio_user0_testpmd(
+            cores=self.virtio_user0_core,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+        self.send_packets_from_vhost()
+        self.calculate_avg_throughput(pmd=self.vhost_user_pmd, reg="Tx-pps")
+        self.quit_all_testpmd()
+
+    def test_vm2vm_split_and_packed_ring_mergeable_path_with_cbdma_enable_and_server_mode(
+        self,
+    ):
+        """
+        Test Case 6: VM2VM split and packed ring mergeable path with cbdma enable and server mode
+        """
+        cdbmas = self.CC.bind_cbdma_to_dpdk(
+            cbdma_number=16, driver_name="vfio-pci", socket=-1
+        )
+        dmas1 = (
+            "txq0@%s;"
+            "txq1@%s;"
+            "txq2@%s;"
+            "txq3@%s;"
+            "txq4@%s;"
+            "txq5@%s;"
+            "rxq2@%s;"
+            "rxq3@%s;"
+            "rxq4@%s;"
+            "rxq5@%s;"
+            "rxq6@%s;"
+            "rxq7@%s"
+            % (
+                cdbmas[0],
+                cdbmas[1],
+                cdbmas[2],
+                cdbmas[3],
+                cdbmas[4],
+                cdbmas[1],
+                cdbmas[2],
+                cdbmas[3],
+                cdbmas[4],
+                cdbmas[5],
+                cdbmas[6],
+                cdbmas[7],
+            )
+        )
+        dmas2 = (
+            "txq0@%s;"
+            "txq1@%s;"
+            "txq2@%s;"
+            "txq3@%s;"
+            "txq4@%s;"
+            "txq5@%s;"
+            "rxq2@%s;"
+            "rxq3@%s;"
+            "rxq4@%s;"
+            "rxq5@%s;"
+            "rxq6@%s;"
+            "rxq7@%s"
+            % (
+                cdbmas[8],
+                cdbmas[9],
+                cdbmas[11],
+                cdbmas[12],
+                cdbmas[13],
+                cdbmas[9],
+                cdbmas[10],
+                cdbmas[11],
+                cdbmas[12],
+                cdbmas[13],
+                cdbmas[14],
+                cdbmas[15],
+            )
+        )
+
+        vhost_eal_param = (
+            "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[%s]' "
+            % dmas1
+            + "--vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[%s]'"
+            % dmas2
+        )
+        vhost_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024"
+        ports = cdbmas
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_user_core,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+            set_fwd_mode=False,
+            exec_start=True,
+        )
+        self.queues = 8
+        self.start_vms()
+        self.BC.config_2_vms_combined(combined=self.queues)
+        self.BC.config_2_vms_ip()
+        self.BC.check_ping_between_2_vms()
+        self.BC.check_scp_file_between_2_vms(file_size=10)
+        self.BC.run_iperf_test_between_2_vms()
+        self.BC.check_iperf_result_between_2_vms()
+        self.bind_dpdk_driver_in_2_vms()
+        self.vm0_pmd = PmdOutput(self.vm_dut[0])
+        self.start_testpmd_in_vm(self.vm0_pmd)
+        self.vm0_pmd.execute_cmd("set fwd mac")
+        self.vm0_pmd.execute_cmd("start")
+
+        self.vm1_pmd = PmdOutput(self.vm_dut[1])
+        self.start_testpmd_in_vm(self.vm1_pmd)
+        self.vm1_pmd.execute_cmd("set fwd mac")
+        self.vm1_pmd.execute_cmd("set txpkts 64,256,512")
+        self.vm1_pmd.execute_cmd("start tx_first 32")
+        self.calculate_avg_throughput(pmd=self.vm1_pmd, reg="Rx-pps")
+
+        self.quit_testpmd_in_2_vms()
+        self.bind_kernel_driver_in_2_vms()
+
+        self.BC.config_2_vms_combined(combined=self.queues)
+        self.BC.config_2_vms_ip()
+        self.BC.check_ping_between_2_vms()
+        self.BC.check_scp_file_between_2_vms(file_size=10)
+        self.BC.run_iperf_test_between_2_vms()
+        self.BC.check_iperf_result_between_2_vms()
+        self.bind_dpdk_driver_in_2_vms()
+        self.vm0_pmd = PmdOutput(self.vm_dut[0])
+        self.start_testpmd_in_vm(self.vm0_pmd)
+        self.vm0_pmd.execute_cmd("set fwd mac")
+        self.vm0_pmd.execute_cmd("start")
+        self.vm1_pmd = PmdOutput(self.vm_dut[1])
+        self.start_testpmd_in_vm(self.vm1_pmd)
+        self.vm1_pmd.execute_cmd("set fwd mac")
+        self.vm1_pmd.execute_cmd("set txpkts 64,256,512")
+        self.vm1_pmd.execute_cmd("start tx_first 32")
+        self.calculate_avg_throughput(pmd=self.vm1_pmd, reg="Rx-pps")
+
+        self.quit_testpmd_in_2_vms()
+        self.stop_all_vms()
+        self.vhost_user_pmd.quit()
+
+    def stop_all_vms(self):
+        for i in range(len(self.vm)):
+            self.vm[i].stop()
+
+    def quit_all_testpmd(self):
+        self.virtio_user0_pmd.quit()
+        self.vhost_user_pmd.quit()
+
+    def close_all_session(self):
+        """
+        close all sessions of vhost and virtio
+        """
+        self.dut.close_session(self.vhost_user)
+        self.dut.close_session(self.virtio_user0)
+
+    def tear_down(self):
+        self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#")
+        self.dut.send_expect("killall -s INT qemu-system-x86_64", "#")
+        self.CC.bind_all_cbdma_to_kernel()
+
+    def tear_down_all(self):
+        self.close_all_session()
-- 
2.25.1



* RE: [dts][PATCH V2 3/3] tests/vhost_async_robust_cbdma: add new testsuite
  2023-03-28  1:58 ` [dts][PATCH V2 3/3] tests/vhost_async_robust_cbdma: add new testsuite Wei Ling
@ 2023-03-31  3:56   ` He, Xingguang
  2023-04-11  8:48   ` lijuan.tu
  1 sibling, 0 replies; 6+ messages in thread
From: He, Xingguang @ 2023-03-31  3:56 UTC (permalink / raw)
  To: Ling, WeiX, dts; +Cc: Ling, WeiX

> -----Original Message-----
> From: Wei Ling <weix.ling@intel.com>
> Sent: Tuesday, March 28, 2023 9:58 AM
> To: dts@dpdk.org
> Cc: Ling, WeiX <weix.ling@intel.com>
> Subject: [dts][PATCH V2 3/3] tests/vhost_async_robust_cbdma: add new
> testsuite
> 
> Add new testsuite for testing Vhost asynchronous data path robust with
> CBDMA driver.
> 
> Signed-off-by: Wei Ling <weix.ling@intel.com>
> ---

Acked-by: Xingguang He<xingguang.he@intel.com>


* [dts][PATCH V2 3/3] tests/vhost_async_robust_cbdma: add new testsuite
  2023-03-28  1:58 ` [dts][PATCH V2 3/3] tests/vhost_async_robust_cbdma: add new testsuite Wei Ling
  2023-03-31  3:56   ` He, Xingguang
@ 2023-04-11  8:48   ` lijuan.tu
  1 sibling, 0 replies; 6+ messages in thread
From: lijuan.tu @ 2023-04-11  8:48 UTC (permalink / raw)
  To: dts, Wei Ling; +Cc: Wei Ling

On Tue, 28 Mar 2023 09:58:20 +0800, Wei Ling <weix.ling@intel.com> wrote:
> Add new testsuite for testing Vhost asynchronous data path robust with CBDMA driver.
> 
> Signed-off-by: Wei Ling <weix.ling@intel.com>


Series applied, thanks
