test suite reviews and discussions
* [dts][PATCH V1 0/1] modify test plan to test vhost async
@ 2022-09-06 11:22 Xingguang He
  2022-09-06 11:22 ` [dts][PATCH V1 1/1] test_plans/pvp_vhost_dsa_test_plan: modify test plan to test vhost async dequeue Xingguang He
  0 siblings, 1 reply; 3+ messages in thread
From: Xingguang He @ 2022-09-06 11:22 UTC (permalink / raw)
  To: dts; +Cc: Xingguang He

Since DPDK-22.07, vhost async dequeue is supported in both split and
packed ring, so modify pvp_vhost_dsa_test_plan to test the vhost async
dequeue feature. This suite has not been automated yet.

Xingguang He (1):
  test_plans/pvp_vhost_dsa_test_plan: modify test plan to test vhost
    async dequeue

 test_plans/pvp_vhost_dsa_test_plan.rst | 2411 ++++++------------------
 1 file changed, 570 insertions(+), 1841 deletions(-)

-- 
2.25.1



* [dts][PATCH V1 1/1] test_plans/pvp_vhost_dsa_test_plan: modify test plan to test vhost async dequeue
  2022-09-06 11:22 [dts][PATCH V1 0/1] modify test plan to test vhost async Xingguang He
@ 2022-09-06 11:22 ` Xingguang He
  2022-10-10  3:09   ` lijuan.tu
  0 siblings, 1 reply; 3+ messages in thread
From: Xingguang He @ 2022-09-06 11:22 UTC (permalink / raw)
  To: dts; +Cc: Xingguang He

Since DPDK-22.07, vhost async dequeue is supported in both split and
packed ring, so modify pvp_vhost_dsa_test_plan to test the vhost async
dequeue feature.

Signed-off-by: Xingguang He <xingguang.he@intel.com>
---
 test_plans/pvp_vhost_dsa_test_plan.rst | 2411 ++++++------------------
 1 file changed, 570 insertions(+), 1841 deletions(-)

diff --git a/test_plans/pvp_vhost_dsa_test_plan.rst b/test_plans/pvp_vhost_dsa_test_plan.rst
index 71fb3a29..74205b79 100644
--- a/test_plans/pvp_vhost_dsa_test_plan.rst
+++ b/test_plans/pvp_vhost_dsa_test_plan.rst
@@ -29,11 +29,11 @@ IOMMU impact:
 If iommu is off, idxd can work with iova=pa.
 If iommu is on, the kernel dsa driver can only work with iova=va by programming the IOMMU; iova=pa can't be used (fwd does not work because the packet payload is wrong).
 
-Note: DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd, and the suite has not yet been automated.
+.. note::
+
+	A DPDK local patch for the vhost pmd is needed when testing the vhost asynchronous data path with testpmd.
 
 Prerequisites
 =============
-
 Topology
 --------
 	Test flow: TG-->NIC-->Vhost-user-->Virtio-user-->Vhost-user-->NIC-->TG
@@ -50,19 +50,19 @@ General set up
 --------------
 1. Compile DPDK::
 
-	# CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=<dpdk build dir>
+	# CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
 	# ninja -C <dpdk build dir> -j 110
 	For example,
-	CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=x86_64-native-linuxapp-gcc
+	CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
 	ninja -C x86_64-native-linuxapp-gcc -j 110
 
-2. Get the PCI device ID and DSA device ID of DUT, for example, 0000:4f:00.1 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
+2. Get the PCI device ID and DSA device ID of DUT, for example, 0000:6a:00.0 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
 
 	<dpdk dir># ./usertools/dpdk-devbind.py -s
-	
+
 	Network devices using kernel driver
 	===================================
-	0000:4f:00.1 'Ethernet Controller E810-C for QSFP 1592' drv=ice unused=vfio-pci
+	0000:6a:00.0 'Ethernet Controller E810-C for QSFP 1592' drv=ice unused=vfio-pci
 
 	DMA devices using kernel driver
 	===============================
@@ -103,13 +103,13 @@ Common steps
 
 3. Bind DSA devices to kernel idxd driver, and configure Work Queue (WQ)::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd <numDevices * 2>
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q <numWq>
+	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd <numDevices>
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q <numWq>
 
 .. note::
 
 	It is better to reset the WQ before operating DSA devices that are bound to the idxd driver:
-	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset <numDevices * 2>
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset <numDevices>
 	You can check it by 'ls /dev/dsa'
 	numDevices: number of devices, where 0<=numDevices<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0
 	numWq: Number of workqueues per DSA endpoint, where 1<=numWq<=8
@@ -117,47 +117,48 @@ Common steps
 	For example, bind 2 DMA devices to idxd driver and configure WQ:
 
 	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 2
-	Check WQ by 'ls /dev/dsa' and can find "wq0.0 wq2.0 wq2.1 wq2.2 wq2.3"
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
+	Check WQ by 'ls /dev/dsa' and can find "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3"
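As a quick cross-check, the entries that 'ls /dev/dsa' should show for a given WQ configuration can be predicted with a small sketch (an assumption for illustration only: entries are named wq<device>.<wq>, matching the example above; this helper is not part of the test plan or of DPDK):

```python
# Sketch only (hypothetical helper): predict the /dev/dsa entries that
# 'ls /dev/dsa' should show, assuming entries are named wq<device>.<wq>.
def expected_wq_entries(wq_config):
    """wq_config maps a DSA device index to its configured number of WQs."""
    return [f"wq{dev}.{i}"
            for dev, num_wq in sorted(wq_config.items())
            for i in range(num_wq)]

# 1 WQ on device 0 and 4 WQs on device 1, as configured above:
print(" ".join(expected_wq_entries({0: 1, 1: 4})))
# -> wq0.0 wq1.0 wq1.1 wq1.2 wq1.3
```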
 
-4. Send imix packets [64,1518] to NIC by traffic generator::
+4. Send TCP imix packets [64,1518] to the NIC by the traffic generator::
 
 	The imix packets include packet size [64, 128, 256, 512, 1024, 1518], and the format of packet is as follows.
 	+-------------+-------------+-------------+-------------+
-	| MAC		 | MAC		 | IPV4		| IPV4		|
+	| MAC         | MAC         | IPV4        | IPV4        |
 	| Src address | Dst address | Src address | Dst address |
 	|-------------|-------------|-------------|-------------|
 	| Random MAC  | Virtio mac  | Random IP   | Random IP   |
 	+-------------+-------------+-------------+-------------+
 	All the packets in this test plan use the Virtio mac: 00:11:22:33:44:10.
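To illustrate the packet format above, here is a minimal generator-side sketch of how the per-packet imix parameters (random source MAC/IP, fixed virtio destination MAC) could be produced; all function names here are hypothetical and not part of the test plan or of any traffic generator:

```python
import random

IMIX_SIZES = [64, 128, 256, 512, 1024, 1518]  # frame sizes from the table above
VIRTIO_MAC = "00:11:22:33:44:10"              # fixed virtio destination MAC

def random_mac():
    # Locally administered, unicast source MAC (first octet 0x02)
    return ":".join(f"{o:02x}" for o in
                    [0x02] + [random.randint(0, 255) for _ in range(5)])

def random_ip():
    return ".".join(str(random.randint(1, 254)) for _ in range(4))

def imix_stream(count):
    """Yield (size, src_mac, dst_mac, src_ip, dst_ip) for each imix packet."""
    for _ in range(count):
        yield (random.choice(IMIX_SIZES), random_mac(), VIRTIO_MAC,
               random_ip(), random_ip())

for pkt in imix_stream(3):
    print(pkt)
```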
 
-Test Case 1: PVP split ring all path vhost enqueue operations with 1:1 mapping between vrings and dsa dpdk driver channels
----------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with 1 core and 1 queue
-when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels is 1:1.
-Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
+Test Case 1: PVP split ring all path multi-queues vhost async operation with 1:1 mapping between vrings and dsa dpdk driver channels
+-------------------------------------------------------------------------------------------------------------------------------------
+This case tests that split ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations with the dsa dpdk driver
+and the mapping between vrings and virtual channels is 1:1. Both iova as VA and PA modes have been tested.
 
-1. Bind one dsa device(f6:01.0) and one nic port(4f:00.1) to vfio-pci like common step 1-2.
+1. Bind 2 dsa devices (f1:01.0, f6:01.0) and one nic port (6a:00.0) to vfio-pci like common step 1-2::
+
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0 6a:00.0
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore3@0000:f6:01.0-q0]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]' \
+	--iova=va -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore3@0000:f1:01.0-q0,lcore3@0000:f1:01.0-q1,lcore3@0000:f1:01.0-q2,lcore3@0000:f1:01.0-q3]
 	testpmd>set fwd mac
 	testpmd>start
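The --lcore-dma value above assigns each DSA work queue to the lcore that services it. A minimal parsing sketch (a hypothetical helper, not part of testpmd) showing how such a value decomposes into (lcore, DSA address, queue) tuples:

```python
import re

def parse_lcore_dma(value):
    """Parse an --lcore-dma value like '[lcore3@0000:f1:01.0-q0,...]'
    into a list of (lcore, dsa_bdf, queue) tuples."""
    pairs = []
    for entry in value.strip("[]").split(","):
        m = re.fullmatch(r"lcore(\d+)@([0-9a-fA-F:.]+)-q(\d+)", entry)
        if not m:
            raise ValueError(f"bad --lcore-dma entry: {entry!r}")
        pairs.append((int(m.group(1)), m.group(2), int(m.group(3))))
    return pairs

mapping = parse_lcore_dma(
    "[lcore3@0000:f1:01.0-q0,lcore3@0000:f1:01.0-q1,"
    "lcore3@0000:f1:01.0-q2,lcore3@0000:f1:01.0-q3]")
# Four channels of DSA device 0000:f1:01.0, all serviced by lcore 3.
print(mapping)
```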
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=2 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send TCP imix packets [64,1518] from the packet generator, check that the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -172,87 +173,74 @@ Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 
 7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=2 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=2 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
-	-- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=2 \
+	-- -i --enable-hw-vlan-strip --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,vectorized=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=2,vectorized=1 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
-11. Quit all testpmd and relaunch vhost by below command::
+11. Quit all testpmd and relaunch vhost with pa mode by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0,max_queues=8 \
-	--vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=1,dmas=[txq0]' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q1]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]' \
+	--iova=pa -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore3@0000:f1:01.0-q0,lcore3@0000:f1:01.0-q1,lcore4@0000:f6:01.0-q0,lcore4@0000:f6:01.0-q1]
 	testpmd>set fwd mac
 	testpmd>start
 
 12. Rerun step 3-6.
 
-13. Quit all testpmd and relaunch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
-	--iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore3@0000:f6:01.0-q3]
-	testpmd>set fwd csum
-	testpmd>start
-
-14. Rerun step 3-10 with csum fwd.
-
-Test Case 2: PVP split ring all path multi-queues vhost async enqueue with 1:1 mapping between vrings and dsa dpdk driver channels
------------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with multi-queues
-when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels is 1:1.
-Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested. 
+Test Case 2: PVP split ring all path multi-queues vhost async operations with M to 1 mapping between vrings and dsa dpdk driver channels
+-----------------------------------------------------------------------------------------------------------------------------------------
+This case tests that split ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations with the dsa dpdk driver
+and the mapping between vrings and virtual channels is M:1. Both iova as VA and PA modes have been tested.
 
-1. Bind 8 dsa device(6a:01.0-f6:01.0) and one nic port(4f:00.1) to vfio-pci like common step 1-2::
+1. Bind 2 dsa devices and one nic port to vfio-pci like common step 1-2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:4f:00.1
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0 f1:01.0 f6:01.0
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:f6:01.0-q0,lcore12@0000:f6:01.0-q1,lcore13@0000:f6:01.0-q2,lcore14@0000:f6:01.0-q3,lcore15@0000:f6:01.0-q4,lcore16@0000:f6:01.0-q5,lcore17@0000:f6:01.0-q6,lcore18@0000:f6:01.0-q7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore12@0000:f1:01.0-q1,lcore13@0000:f1:01.0-q2,lcore14@0000:f1:01.0-q3]
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send TCP imix packets [64,1518] from the packet generator, check that the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -267,171 +255,181 @@ Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 
 7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8 \
-	-- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	-- -i --enable-hw-vlan-strip --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,vectorized=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-11. Quit all testpmd and relaunch vhost with diff channel by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0,max_queues=1 -a 0000:6f:01.0,max_queues=1 \
-	-a 0000:74:01.0,max_queues=1 -a 0000:79:01.0,max_queues=1 -a 0000:e7:01.0,max_queues=1 -a 0000:ec:01.0,max_queues=1 -a 0000:f1:01.0,max_queues=1 -a 0000:f6:01.0,max_queues=1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6f:01.0-q0,lcore13@0000:74:01.0-q0,lcore14@0000:79:01.0-q0,lcore15@0000:e7:01.0-q0,lcore16@0000:ec:01.0-q0,lcore17@0000:f1:01.0-q0,lcore18@0000:f6:01.0-q0]
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
-12. Rerun step 3-6 with csum fwd.
-
-13. Quit all testpmd and relaunch vhost with pa mode by below command::
+11. Quit all testpmd and relaunch vhost with pa mode by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore16@0000:ec:01.0-q5,lcore17@0000:f1:01.0-q6,lcore18@0000:f6:01.0-q7]
-	testpmd>set fwd csum
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=1 -a 0000:f6:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=pa -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore12@0000:f6:01.0-q0,lcore13@0000:f6:01.0-q1,lcore14@0000:f6:01.0-q2]
+	testpmd>set fwd mac
 	testpmd>start
 
-14. Rerun step 3-6 with csum fwd.
+12. Rerun step 7.
 
-Test Case 3: PVP split ring all path multi-queues vhost enqueue operations with M to 1 mapping between vrings and CBDMA virtual channels
------------------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with multi-queues
-when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels is M:1.
-Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested. 
+Test Case 3: PVP split ring dynamic queues vhost async operation with dsa dpdk driver channels
+-------------------------------------------------------------------------------------------------
+This case tests whether the vhost-user async operation with the dsa dpdk driver can work normally when the queue number of the split ring changes dynamically. Both iova as VA and PA modes have been tested.
 
-1. Bind 1 dsa device and one nic port to vfio-pci like comon step 1-2::
+1. Bind 2 dsa devices and 1 NIC port to vfio-pci like common step 1-2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1 f6:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:00.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:f1:01.0-q0]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \
+	--iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore12@0000:f1:01.0-q1,lcore13@0000:f1:01.0-q2,lcore14@0000:f1:01.0-q3]
 	testpmd>set fwd mac
 	testpmd>start
 
-3. Launch virtio-user with inorder mergeable path::
+3. Launch virtio-user by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-9 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
+	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+	testpmd>set fwd csum
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
-
-	testpmd>show port stats all
+4. Send TCP imix packets [64,1518] from the packet generator with random IPs, check that the performance can reach the target.
 
 5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
 
 	testpmd>stop
 
-6. Restart vhost port and send imix packets again, then check the throuhput can get expected data::
+6. Quit and relaunch vhost without dsa::
 
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
+	--iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+	testpmd>set fwd mac
 	testpmd>start
-	testpmd>show port stats all
 
-7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+7. Rerun step 4-5.
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+8. Quit and relaunch vhost by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[rxq0;rxq1;rxq2;rxq3]' \
+	--iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore12@0000:f1:01.0-q1,lcore13@0000:f1:01.0-q2,lcore14@0000:f1:01.0-q3]
 	testpmd>set fwd mac
 	testpmd>start
 
-8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+9. Rerun step 4-5.
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+10. Quit and relaunch vhost with different channel mapping by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=8 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore11@0000:f1:01.0-q1,lcore12@0000:f1:01.0-q1,lcore12@0000:f1:01.0-q2,lcore12@0000:f1:01.0-q3,lcore12@0000:f1:01.0-q4,lcore12@0000:f1:01.0-q5,lcore12@0000:f1:01.0-q6,lcore12@0000:f1:01.0-q7]
 	testpmd>set fwd mac
 	testpmd>start
 
-9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+11. Rerun step 4-5.
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8 \
-	-- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+12. Quit and relaunch vhost with different channel mapping by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore12@0000:f1:01.0-q0,lcore12@0000:f1:01.0-q1,lcore12@0000:f1:01.0-q2,lcore13@0000:f1:01.0-q3,lcore13@0000:f6:01.0-q0,lcore13@0000:f6:01.0-q1,lcore13@0000:f6:01.0-q2,lcore14@0000:f6:01.0-q0,lcore14@0000:f6:01.0-q1,lcore14@0000:f6:01.0-q2,lcore14@0000:f6:01.0-q3]
 	testpmd>set fwd mac
 	testpmd>start
 
-10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
+13. Rerun step 4-5.
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,vectorized=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+14. Quit and relaunch vhost with different channel mapping by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
+	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore12@0000:f1:01.0-q1,lcore13@0000:f1:01.0-q2,lcore14@0000:f1:01.0-q3]
 	testpmd>set fwd mac
 	testpmd>start
 
-11. Quit all testpmd and relaunch vhost with pa mode by below command::
+15. Rerun step 4-5, then start vhost port.
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=pa -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:f1:01.0-q3]
+16. Relaunch virtio-user by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-9 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net0,mrg_rxbuf=0,in_order=0,queues=8,server=1 \
+	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
 	testpmd>set fwd csum
 	testpmd>start
 
-12. Rerun step 6 with csum fwd.
+17. Rerun step 4-5.
 
-Test Case 4: PVP split ring all path multi-queues vhost enqueue operations with 1 to N mapping between vrings and CBDMA virtual channels
------------------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with multi-queues
-when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels is 1:N.
-Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
+18. Quit and relaunch vhost with iova=pa by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;rxq1;rxq2;rxq3]' \
+	--iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=4 --rxq=4 \
+	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore12@0000:f1:01.0-q0,lcore13@0000:f1:01.0-q1,lcore14@0000:f1:01.0-q1,lcore15@0000:f1:01.0-q1]
+	testpmd>set fwd mac
+	testpmd>start
+
+19. Rerun step 4-5.
+
+Test Case 4: PVP packed ring all path multi-queues vhost async operation with 1:1 mapping between vrings and dsa dpdk driver channels
+----------------------------------------------------------------------------------------------------------------------------------------
+This case tests that packed ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations with the dsa dpdk driver
+and the mapping between vrings and virtual channels is 1:1. Both iova as VA and PA modes have been tested.
 
-1. Bind 8 dsa device and one nic port to vfio-pci like cmmon step 1-2::
+1. Bind 2 dsa devices (f1:01.0, f6:01.0) and one nic port (6a:00.0) to vfio-pci like common step 1-2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b 4f:00.1
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0 6a:00.0
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:f6:01.0-q0,lcore11@0000:f6:01.0-q1,lcore11@0000:f6:01.0-q2,lcore11@0000:f6:01.0-q3,lcore11@0000:f6:01.0-q4,lcore11@0000:f6:01.0-q5,lcore11@0000:f6:01.0-q6,lcore11@0000:f6:01.0-q7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]' \
+	--iova=va -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore3@0000:f1:01.0-q0,lcore3@0000:f1:01.0-q1,lcore3@0000:f1:01.0-q2,lcore3@0000:f1:01.0-q3]
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=4 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send tcp imix packets [64,1518] from packet generator, check that the throughput can reach the expected data::
 
 	testpmd>show port stats all
 
@@ -446,87 +444,82 @@ Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 
 7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=4 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=4 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
-	-- -i --enable-hw-vlan-strip --nb-cores=1  --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=4 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,vectorized=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=4,vectorized=1 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
-11. Quit all testpmd and relaunch vhost with diff channel by below command::
+11. Relaunch virtio-user with vector_rx path and a ring size that is not a power of 2, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0,max_queues=1 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:6f:01.0-q1,lcore11@0000:74:01.0-q2,lcore11@0000:79:01.0-q3,lcore11@0000:e7:01.0-q4,lcore11@0000:ec:01.0-q5,lcore11@0000:f1:01.0-q6,lcore11@0000:f6:01.0-q7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=4,vectorized=1,queue_size=1025 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1025 --rxd=1025
 	testpmd>set fwd csum
 	testpmd>start
 
-12. Rerun step 3-10 with csum fwd.
-
-13. Quit all testpmd and relaunch vhost with pa mode by below command::
+12. Quit all testpmd and relaunch vhost with pa mode by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0],dma_ring_size=2048' \
-	--iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:6f:01.0-q1,lcore11@0000:74:01.0-q2,lcore11@0000:79:01.0-q3,lcore11@0000:e7:01.0-q4,lcore11@0000:ec:01.0-q5,lcore11@0000:f1:01.0-q6,lcore11@0000:f6:01.0-q7]
-	testpmd>set fwd csum
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=2 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]' \
+	--iova=pa -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore3@0000:f1:01.0-q0,lcore3@0000:f1:01.0-q1,lcore4@0000:f6:01.0-q0,lcore4@0000:f6:01.0-q1]
+	testpmd>set fwd mac
 	testpmd>start
 
-14. Rerun step 8 with csum fwd.
+13. Rerun step 3-6.
 
-Test Case 5: PVP split ring all path multi-queues vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels
------------------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with multi-queues
-when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels is M:N.
-Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
+Test Case 5: PVP packed ring all path multi-queues vhost async operation with M:1 mapping between vrings and dsa dpdk driver channels
+----------------------------------------------------------------------------------------------------------------------------------------
+This case tests that packed ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver
+and the mapping between vrings and virtual channels is M:1. Both iova as VA and PA mode have been tested.
 
-1. Bind 8 dsa device and one nic port to vfio-pci like common step 1-2::
+1. Bind 2 dsa devices and one nic port to vfio-pci like common step 1-2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0	
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0 f1:01.0 f6:01.0
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:f6:01.0-q0,lcore11@0000:f6:01.0-q1,lcore11@0000:f6:01.0-q2,lcore11@0000:f6:01.0-q3,lcore11@0000:f6:01.0-q4,lcore11@0000:f6:01.0-q5,lcore11@0000:f6:01.0-q6,lcore11@0000:f6:01.0-q7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-12 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore12@0000:f1:01.0-q1,lcore13@0000:f1:01.0-q2,lcore14@0000:f1:01.0-q3]
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8 \
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send tcp imix packets [64,1518] from packet generator, check the throughput can get expected data::
 
 	testpmd>show port stats all
 
@@ -541,197 +534,195 @@ Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 
 7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8 \
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8 \
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8 \
-	-- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=8 \
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,vectorized=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-11. Quit all testpmd and relaunch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:f6:01.0-q0,lcore11@0000:f6:01.0-q1,lcore11@0000:f6:01.0-q2,lcore11@0000:f6:01.0-q3,lcore11@0000:f6:01.0-q4,lcore11@0000:f6:01.0-q5,lcore11@0000:f6:01.0-q6,lcore11@0000:f6:01.0-q7]
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8,vectorized=1 \
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
-12. Rerun step 3-6.
-
-13. Quit all testpmd and relaunch vhost with diff channel by below command::
+11. Relaunch virtio-user with vector_rx path and a ring size that is not a power of 2, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:6f:01.0-q1,lcore11@0000:74:01.0-q2,lcore11@0000:79:01.0-q3,lcore11@0000:e7:01.0-q4,lcore11@0000:ec:01.0-q5,lcore11@0000:f1:01.0-q6,lcore11@0000:f6:01.0-q7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8,vectorized=1,queue_size=1025 \
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1025 --rxd=1025
 	testpmd>set fwd csum
 	testpmd>start
 
-14. Rerun step 7 with csum.
-
-15. Quit all testpmd and relaunch vhost with pa mode by below command::
+12. Quit all testpmd and relaunch vhost with pa mode by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=pa -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:f6:01.0-q0,lcore11@0000:f6:01.0-q1,lcore11@0000:f6:01.0-q2,lcore11@0000:f6:01.0-q3,lcore11@0000:f6:01.0-q4,lcore11@0000:f6:01.0-q5,lcore11@0000:f6:01.0-q6,lcore11@0000:f6:01.0-q7]
-	testpmd>set fwd csum
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=1 -a 0000:f6:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=pa -- -i --nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore12@0000:f6:01.0-q0,lcore12@0000:f6:01.0-q1,lcore12@0000:f6:01.0-q2,lcore12@0000:f6:01.0-q3]
+	testpmd>set fwd mac
 	testpmd>start
 
-16. Rerun step 9 with csum fwd.
+13. Rerun step 7.
 
-Test Case 6: PVP split ring dynamic queues vhost async operation with dsa dpdk driver channels
+Test Case 6: PVP packed ring dynamic queues vhost async operation with dsa dpdk driver channels
 -------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring when vhost uses the asynchronous enqueue operations
-with dsa dpdk driver and if the vhost-user can work well when the queue number dynamic change. Both iova as VA and PA mode have beed tested.
+This case tests whether the vhost-user async operation with dsa dpdk driver can work normally when the queue number of the packed ring changes dynamically. Both iova as VA and PA mode have been tested.
 
-1. Bind 8 dsa devices and 1 NIC port to vfio-pci like common step 1-2::
+1. Bind 2 dsa devices and 1 NIC port to vfio-pci like common step 1-2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:4f:00.1
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:00.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \
+	--iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore12@0000:f1:01.0-q1,lcore13@0000:f1:01.0-q2,lcore14@0000:f1:01.0-q3]
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
-4. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target.
+4. Send tcp imix packets [64,1518] from packet generator with random ip, check that performance can reach the target.
 
-5. Stop vhost port, check vhost RX and TX direction both exist packtes in 2 queues from vhost log.
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
 
-6. Quit and relaunch vhost with 1:1 mapping between vrings and dsa virtual channels::
+	testpmd>stop
+
+6. Quit and relaunch vhost without dsa::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0,max_queues=4 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \
-	--iova=va -- -i --nb-cores=8 --txd=1024 --rxd=1024 --txq=4 --rxq=4 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q1,lcore13@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q3]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
+	--iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
 	testpmd>set fwd mac
 	testpmd>start
 
-7. Rerun step 4.
-
-8. Stop vhost port, check vhost RX and TX direction both exist packtes in 4 queues from vhost log.
+7. Rerun step 4-5.
 
-9. Quit and relaunch vhost with M:N(1:N;M<N) mapping between vrings and dsa virtual channels::
+8. Quit and relaunch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18  -a 0000:4f:00.1 -a 0000:6a:01.0,max_queues=8 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq6;txq7]' \
-	--iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:6a:01.0-q7,lcore12@0000:6a:01.0-q1,lcore12@0000:6a:01.0-q2,lcore12@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q2,lcore13@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q3,lcore14@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q0,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q3,lcore15@0000:6a:01.0-q4,lcore15@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q6,lcore15@0000:6a:01.0-q7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[rxq0;rxq1;rxq2;rxq3]' \
+	--iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore12@0000:f1:01.0-q1,lcore13@0000:f1:01.0-q2,lcore14@0000:f1:01.0-q3]
 	testpmd>set fwd mac
 	testpmd>start
 
-10. Rerun step 4.
+9. Rerun step 4-5.
 
-11. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.
-
-12. Quit and relaunch vhost with diff M:N(1:N;M<N) mapping between vrings and dsa virtual channels::
+10. Quit and relaunch vhost with diff channel by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18  -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
-	--iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:f6:01.0-q7,lcore12@0000:6f:01.0-q1,lcore12@0000:74:01.0-q2,lcore12@0000:79:01.0-q3,lcore13@0000:74:01.0-q2,lcore13@0000:79:01.0-q3,lcore13@0000:e7:01.0-q4,lcore14@0000:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore14@0000:e7:01.0-q4,lcore14@0000:ec:01.0-q5,lcore15@0000:6a:01.0-q0,lcore15@0000:6f:01.0-q1,lcore15@0000:74:01.0-q2,lcore15@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore15@0000:ec:01.0-q5,lcore15@0000:f1:01.0-q6,lcore15@0000:f6:01.0-q7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=8 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore11@0000:f1:01.0-q1,lcore12@0000:f1:01.0-q1,lcore12@0000:f1:01.0-q2,lcore12@0000:f1:01.0-q3,lcore12@0000:f1:01.0-q4,lcore12@0000:f1:01.0-q5,lcore12@0000:f1:01.0-q6,lcore12@0000:f1:01.0-q7]
 	testpmd>set fwd mac
 	testpmd>start
 
-13. Rerun step 10-11.
+11. Rerun step 4-5.
 
-14. Quit and relaunch vhost with diff M:N(M:1;M>N) mapping between vrings and dsa virtual channels::
+12. Quit and relaunch vhost with diff channel by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0,max_queues=4 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
-	--iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6a:01.0-q1,lcore13@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q1,lcore14@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore12@0000:f1:01.0-q0,lcore12@0000:f1:01.0-q1,lcore12@0000:f1:01.0-q2,lcore13@0000:f1:01.0-q3,lcore13@0000:f6:01.0-q0,lcore13@0000:f6:01.0-q1,lcore13@0000:f6:01.0-q2,lcore14@0000:f6:01.0-q0,lcore14@0000:f6:01.0-q1,lcore14@0000:f6:01.0-q2,lcore14@0000:f6:01.0-q3]
 	testpmd>set fwd mac
 	testpmd>start
 
-15. Rerun step 10-11.
+13. Rerun step 4-5.
 
-16. Quit and relaunch vhost with diff M:N(M:1;M>N) mapping between vrings and dsa virtual channels::
+14. Quit and relaunch vhost with diff channel by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0,max_queus=3 -a 0000:6f:01.0,max_queus=3 -a 0000:74:01.0,max_queus=3 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
-	--iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:6f:01.0-q1,lcore14@0000:74:01.0-q2,lcore15@0000:6f:01.0-q1,lcore15@0000:74:01.0-q2]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
+	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore12@0000:f1:01.0-q1,lcore13@0000:f1:01.0-q2,lcore14@0000:f1:01.0-q3]
 	testpmd>set fwd mac
 	testpmd>start
 
-17. Rerun step 10-11.
+15. Rerun step 4-5, then start vhost port.
+
+16. Relaunch virtio-user by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-9 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=8,server=1 \
+	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+	testpmd>set fwd csum
+	testpmd>start
+
+17. Rerun step 4-5.
 
 18. Quit and relaunch vhost with iova=pa by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0,max_queus=4 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
-	--iova=pa -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6a:01.0-q1,lcore13@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q1,lcore14@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;rxq1;rxq2;rxq3]' \
+	--iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=4 --rxq=4 \
+	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore12@0000:f1:01.0-q0,lcore13@0000:f1:01.0-q1,lcore14@0000:f1:01.0-q1,lcore15@0000:f1:01.0-q1]
 	testpmd>set fwd mac
 	testpmd>start
 
-19. Rerun step 10-11.
+19. Rerun step 4-5.
+
+Test Case 7: PVP split ring all path multi-queues vhost async operation with 1:1 mapping between vrings and dsa kernel driver channels
+--------------------------------------------------------------------------------------------------------------------------------------
+This case tests that split ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations with dsa kernel driver
+and the mapping between vrings and virtual channels is 1:1.
 
-Test Case 7: PVP packed ring all path vhost enqueue operations with 1:1 mapping between vrings and dsa dpdk driver channels
-----------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path with 1 core and 
-1 queue when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels 
-is 1:1. Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
+1. Bind 1 dsa device to idxd driver and one nic port to vfio-pci like common step 1 and 3::
 
-1. Bind one dsa device and one nic port to vfio-pci like common step 1-2::
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1 f6:01.0
+	#ls /dev/dsa, check wq configuration, reset if it exists
+	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
+	ls /dev/dsa #check wq configured successfully
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore3@0000:f6:01.0-q0]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --file-prefix=vhost -a 0000:6a:00.0 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]' \
+	--iova=va -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore11@wq0.2,lcore11@wq0.3]
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=2 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send tcp imix packets [64,1518] from packet generator, check the throughput can get expected data::
 
 	testpmd>show port stats all
 
@@ -746,81 +737,66 @@ is 1:1. Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 
 7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=2 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=2 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=2 \
+	-- -i --enable-hw-vlan-strip --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1,vectorized=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-11. Relaunch virtio-user with vector_rx path and ring size is not power of 2, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1,vectorized=1,,queue_size=1025 \
-	-- -i --nb-cores=1 --txd=1025 --rxd=1025
-	testpmd>set fwd mac
-	testpmd>start
-
-12. Quit all testpmd and relaunch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
-	--iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore3@0000:f6:01.0-q3]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=2 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
-13. Rerun step 11 with csum fwd.
+Test Case 8: PVP split ring all path multi-queues vhost async operation with M:1 mapping between vrings and dsa kernel driver channels
+--------------------------------------------------------------------------------------------------------------------------------------
+This case tests that split ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations with the dsa kernel driver
+and the mapping between vrings and virtual channels is M:1.
 
-Test Case 8: PVP packed ring all path mulit-queues vhost async enqueue operation with 1:1 mapping between vrings and dsa dpdk driver channels
------------------------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path with multi-queues
-when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels is 1:1.
-Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
+1. Bind 1 dsa device to idxd driver and one NIC port to vfio-pci like common steps 1 and 3::
 
-1. Bind 8 dsa devices and one nic port to vfio-pci like common step 1-2::
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+	ls /dev/dsa #check wq configuration, reset if any exist
+	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0
+	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+	ls /dev/dsa #check wq configuration succeeded
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:f6:01.0-q0,lcore12@0000:f6:01.0-q1,lcore13@0000:f6:01.0-q2,lcore14@0000:f6:01.0-q3,lcore15@0000:f6:01.0-q4,lcore16@0000:f6:01.0-q5,lcore17@0000:f6:01.0-q6,lcore18@0000:f6:01.0-q7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3]
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8 \
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
@@ -838,1179 +814,171 @@ Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 
 7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8 \
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8 \
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8 \
+	-- -i --enable-hw-vlan-strip --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8,vectorized=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=8 \
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
-11. Relaunch virtio-user with vector_rx path and ring size is not power of 2, then repeat step 4-6::
+Test Case 9: PVP split ring dynamic queues vhost async operation with dsa kernel driver channels
+--------------------------------------------------------------------------------------------------
+This case tests if the vhost-user async operation with the dsa kernel driver can work normally when the queue number of the split ring changes dynamically.
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8,vectorized=1,queue_size=1025 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
-	testpmd>set fwd mac
-	testpmd>start
+1. Bind 2 dsa devices to idxd driver and 1 NIC port to vfio-pci like common steps 1 and 3::
 
-12. Quit all testpmd and relaunch vhost with diff channel by below command::
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore16@0000:ec:01.0-q5,lcore17@0000:f1:01.0-q6,lcore18@0000:f6:01.0-q7]
-	testpmd>set fwd csum
-	testpmd>start
+	ls /dev/dsa #check wq configuration, reset if any exist
+	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
+	ls /dev/dsa #check wq configuration succeeded
 
-13. Rerun step 11 with csum fwd.
+2. Launch vhost by below command::
 
-14. Quit all testpmd and relaunch vhost with pa mode by below command::
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:6a:00.0 \
+	--vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \
+	--iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3]
+	testpmd>set fwd mac
+	testpmd>start
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore16@0000:ec:01.0-q5,lcore17@0000:f1:01.0-q6,lcore18@0000:f6:01.0-q7]
-	testpmd>set fwd csum
-	testpmd>start
-
-15. Rerun step 3-6 with csum fwd.
-
-Test Case 9: PVP packed ring all path mulit-queues vhost async enqueue operation with M:1 mapping between vrings and dsa dpdk driver channels
-----------------------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path with multi-queues
-when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels is M:1.
-Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
-
-1. Bind 1 dsa device and one nic port to vfio-pci like common step 1-2::
-
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1 f1:01.0
-
-2. Launch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f1:01.0,max_queues=1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:f1:01.0-q0]
-	testpmd>set fwd mac
-	testpmd>start
-
-3. Launch virtio-user with inorder mergeable path::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
-
-	testpmd>show port stats all
-
-5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
-
-	testpmd>stop
-
-6. Restart vhost port and send imix packets again, then check the throuhput can get expected data::
-
-	testpmd>start
-	testpmd>show port stats all
-
-7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8,vectorized=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-11. Relaunch virtio-user with vector_rx path and ring size is not power of 2, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8,vectorized=1,queue_size=1025 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
-	testpmd>set fwd mac
-	testpmd>start
-
-12. Quit all testpmd and relaunch vhost with pa mode by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f1:01.0,max_queues=8 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=pa -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:f1:01.0-q3]
-	testpmd>set fwd csum
-	testpmd>start
-
-13. Rerun step 7 with csum fwd.
-
-Test Case 10: PVP packed ring all path mulit-queues vhost async enqueue operation with 1:N mapping between vrings and dsa dpdk driver channels
------------------------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path with multi-queues
-when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels is 1:N.
-Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
-
-1. Bind 8 dsa devices and one nic port to vfio-pci like common step 1-2::
-
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-
-2. Launch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:f6:01.0-q0,lcore11@0000:f6:01.0-q1,lcore11@0000:f6:01.0-q2,lcore11@0000:f6:01.0-q3,lcore11@0000:f6:01.0-q4,lcore11@0000:f6:01.0-q5,lcore11@0000:f6:01.0-q6,lcore11@0000:f6:01.0-q7]
-	testpmd>set fwd mac
-	testpmd>start
-
-3. Launch virtio-user with inorder mergeable path::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
-
-	testpmd>show port stats all
-
-5. Stop vhost port, check vhost RX and TX direction both exist packtes from vhost log::
-
-	testpmd>stop
-
-6. restart vhost port and send imix pkts again, check get same throuhput as above::
-
-	testpmd>start
-	testpmd>show port stats all
-
-7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1,vectorized=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-11. Relaunch virtio-user with vector_rx path and ring size is not power of 2, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1,vectorized=1,queue_size=1025 \
-	-- -i --nb-cores=1 --txd=1025 --rxd=1025
-	testpmd>set fwd mac
-	testpmd>start
-
-12. Quit all testpmd and relaunch vhost with diff channel by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore16@0000:ec:01.0-q5,lcore17@0000:f1:01.0-q6,lcore18@0000:f6:01.0-q7]
-	testpmd>set fwd csum
-	testpmd>start
-
-13. Rerun step 11 with csum fwd.
-
-14. Quit all testpmd and relaunch vhost with pa mode by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0],dma_ring_size=2048' \
-	--iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore16@0000:ec:01.0-q5,lcore17@0000:f1:01.0-q6,lcore18@0000:f6:01.0-q7]
-	testpmd>set fwd csum
-	testpmd>start
-
-15. Rerun step 8 with csum fwd.
-
-Test Case 11: PVP packed ring all path mulit-queues vhost async enqueue operation with 1:N mapping between vrings and dsa dpdk driver channels
------------------------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path with multi-queues
-when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels is M:N.
-Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
-
-1. Bind 8 dsa devices and one nic port to vfio-pci like common step 1-2::
-
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-
-2. Launch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:f6:01.0-q0,lcore11@0000:f6:01.0-q1,lcore11@0000:f6:01.0-q2,lcore11@0000:f6:01.0-q3,lcore11@0000:f6:01.0-q4,lcore11@0000:f6:01.0-q5,lcore11@0000:f6:01.0-q6,lcore11@0000:f6:01.0-q7]
-	testpmd>set fwd mac
-	testpmd>start
-
-3. Launch virtio-user with inorder mergeable path::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
-
-	testpmd>show port stats all
-
-5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
-
-	testpmd>stop
-
-6. Restart vhost port and send imix packets again, then check the throuhput can get expected data::
-
-	testpmd>start
-	testpmd>show port stats all
-
-7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,queues=8,in_order=1,packed_vq=1,vectorized=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-11. Relaunch virtio-user with vector_rx path and ring size is not power of 2, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,queues=8,in_order=1,packed_vq=1,vectorized=1,queue_size=1025 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
-	testpmd>set fwd mac
-	testpmd>start
-
-12. Quit all testpmd and relaunch vhost with diff channel by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:f6:01.0-q0,lcore11@0000:f6:01.0-q1,lcore11@0000:f6:01.0-q2,lcore11@0000:f6:01.0-q3,lcore11@0000:f6:01.0-q4,lcore11@0000:f6:01.0-q5,lcore11@0000:f6:01.0-q6,lcore11@0000:f6:01.0-q7]
-	testpmd>set fwd mac
-	testpmd>start
-
-13. Rerun step 11.
-
-14. Quit all testpmd and relaunch vhost with diff channel by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore16@0000:ec:01.0-q5,lcore17@0000:f1:01.0-q6,lcore18@0000:f6:01.0-q7]
-	testpmd>set fwd csum
-	testpmd>start
-
-15. Rerun step 11 with csum fwd.
-
-16. Quit all testpmd and relaunch vhost with pa mode by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:f6:01.0,max_queues=8 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=pa -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:f6:01.0-q0,lcore11@0000:f6:01.0-q1,lcore11@0000:f6:01.0-q2,lcore11@0000:f6:01.0-q3,lcore11@0000:f6:01.0-q4,lcore11@0000:f6:01.0-q5,lcore11@0000:f6:01.0-q6,lcore11@0000:f6:01.0-q7]
-	testpmd>set fwd csum
-	testpmd>start
-
-17. Rerun step 9 with csum fwd.
-
-Test Case 12: PVP packed ring dynamic queues vhost async operation with  dsa dpdk driver channels
----------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring when vhost uses the asynchronous enqueue operations
-with dsa dpdk driver and if the vhost-user can work well when the queue number dynamic change. Both iova as VA and PA mode have beed tested.
-
-1. Bind 8 dsa devices and 1 NIC port to vfio-pci like common step 1-2::
-
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:4f:00.1
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-
-2. Launch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
-	testpmd>set fwd mac
-	testpmd>start
-
-3. Launch virtio-user by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-	testpmd>set fwd mac
-	testpmd>start
-
-4. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target.
-
-5. Stop vhost port, check vhost RX and TX direction both exist packtes in 2 queues from vhost log.
-
-6. Quit and relaunch vhost with 1:1 mapping between vrings and dsa virtual channels::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \
-	--iova=va -- -i --nb-cores=8 --txd=1024 --rxd=1024 --txq=4 --rxq=4 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q1,lcore13@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q3]
-	testpmd>set fwd mac
-	testpmd>start
-
-7. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target.
-
-8. Stop vhost port, check vhost RX and TX direction both exist packtes in 4 queues from vhost log.
-
-9. Quit and relaunch vhost with M:N(1:N;M<N) mapping between vrings and dsa virtual channels::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18  -a 0000:4f:00.1 -a 0000:6a:01.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq6;txq7]' \
-	--iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:6a:01.0-q7,lcore12@0000:6a:01.0-q1,lcore12@0000:6a:01.0-q2,lcore12@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q2,lcore13@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q3,lcore14@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q0,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q3,lcore15@0000:6a:01.0-q4,lcore15@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q6,lcore15@0000:6a:01.0-q7]
-	testpmd>set fwd mac
-	testpmd>start
-
-10. Send imix packets [64,1518] from packet generator with random ip, check performance can get target.
-
-11. Stop vhost port, check vhost RX and TX direction both have packets in 8 queues from vhost log.
-
-12. Quit and relaunch vhost with diff M:N(1:N;M<N) mapping between vrings and dsa virtual channels::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18  -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
-	--iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:f6:01.0-q7,lcore12@0000:6f:01.0-q1,lcore12@0000:74:01.0-q2,lcore12@0000:79:01.0-q3,lcore13@0000:74:01.0-q2,lcore13@0000:79:01.0-q3,lcore13@0000:e7:01.0-q4,lcore14@0000:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore14@0000:e7:01.0-q4,lcore14@0000:ec:01.0-q5,lcore15@0000:6a:01.0-q0,lcore15@0000:6f:01.0-q1,lcore15@0000:74:01.0-q2,lcore15@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore15@0000:ec:01.0-q5,lcore15@0000:f1:01.0-q6,lcore15@0000:f6:01.0-q7]
-	testpmd>set fwd mac
-	testpmd>start
-
-13. Rerun step 10-11.
-
-14. Quit and relaunch vhost with diff M:N(M:1;M>N) mapping between vrings and dsa virtual channels::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
-	--iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6a:01.0-q1,lcore13@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q1,lcore14@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2]
-	testpmd>set fwd mac
-	testpmd>start
-
-15. Rerun step 10-11.
-
-16. Quit and relaunch vhost with diff M:N(M:1;M>N) mapping between vrings and dsa virtual channels::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
-	--iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:6f:01.0-q1,lcore14@0000:74:01.0-q2,lcore15@0000:6f:01.0-q1,lcore15@0000:74:01.0-q2]
-	testpmd>set fwd mac
-	testpmd>start
-
-17. Rerun step 10-11.
-
-18. Quit and relaunch vhost with iova=pa by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
-	--iova=pa -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6a:01.0-q1,lcore13@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q1,lcore14@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2]
-	testpmd>set fwd mac
-	testpmd>start
-
-19. Rerun step 10-11.
-
-Test Case 13: PVP split ring all path vhost enqueue operations with 1:1 mapping between vrings and dsa kernel driver channels
-------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with 1 core and 1 queue
-when vhost uses the asynchronous enqueue operations with dsa kernel driver and the mapping between vrings and dsa virtual channels is 1:1.
-Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
-
-1. Bind one dsa device to idxd driver and one nic port to vfio-pci like common step 1 and 3::
-
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
-
-	ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py --reset 0
-
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
-	ls /dev/dsa #check wq configure success
-
-2. Launch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore3@wq0.0]
-	testpmd>set fwd mac
-	testpmd>start
-
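The `--lcore-dma` lists used throughout these steps pair forwarding lcores with DSA channels (`lcoreN@wqX.Y` for work queues under the kernel idxd driver, `lcoreN@<BDF>-qZ` under vfio-pci). A small parsing helper (hypothetical, for sanity-checking test-plan commands only, not part of DPDK):

```python
import re

def parse_lcore_dma(arg: str):
    """Parse a testpmd --lcore-dma argument such as
    "[lcore11@wq0.0,lcore12@wq2.1]" into {lcore: [channels]}."""
    mapping = {}
    for entry in arg.strip("[]").split(","):
        lcore, _, chan = entry.partition("@")
        m = re.fullmatch(r"lcore(\d+)", lcore)
        if m is None or not chan:
            raise ValueError(f"bad entry: {entry!r}")
        mapping.setdefault(int(m.group(1)), []).append(chan)
    return mapping
```

This makes the mapping ratio easy to verify: one lcore mapped to several channels gives the 1:N cases, several lcores sharing one channel gives M:1.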
-3. Launch virtio-user with inorder mergeable path::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
-
-	testpmd>show port stats all
-
-5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
-
-	testpmd>stop
-
-6. Restart vhost port and send imix packets again, then check the throughput can get expected data::
-
-	testpmd>start
-	testpmd>show port stats all
-
-7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
-	-- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,vectorized=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-11. Quit all testpmd and relaunch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore3@wq0.1]
-	testpmd>set fwd csum
-	testpmd>start
-
-12. Rerun step 3-6 with csum fwd.
-
-Test Case 14: PVP split ring all path multi-queues vhost async enqueue with 1:1 mapping between vrings and dsa kernel driver channels
---------------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with multi-queues
-when vhost uses the asynchronous enqueue operations with dsa kernel driver and the mapping between vrings and dsa virtual channels is 1:1.
-Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
-
-1. Bind 3 dsa devices to idxd driver and one nic port to vfio-pci like common step 1 and 3::
-
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
-
-	ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 2
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 4
-	ls /dev/dsa #check wq configure success
-
-2. Launch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3,lcore15@wq0.4,lcore16@wq0.5,lcore17@wq0.6,lcore18@wq0.7]
-	testpmd>set fwd mac
-	testpmd>start
-
-3. Launch virtio-user with inorder mergeable path::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
-
-	testpmd>show port stats all
-
-5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
-
-	testpmd>stop
-
-6. Restart vhost port and send imix packets again, then check the throughput can get expected data::
-
-	testpmd>start
-	testpmd>show port stats all
-
-7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8 \
-	-- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,vectorized=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-11. Quit all testpmd and relaunch vhost with diff channel by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq2.0,lcore12@wq2.1,lcore13@wq2.2,lcore14@wq2.3,lcore15@wq4.0,lcore16@wq4.1,lcore17@wq4.2,lcore18@wq4.3]
-	testpmd>set fwd csum
-	testpmd>start
-
-12. Rerun step 10 with csum fwd.
-
-Test Case 15: PVP split ring all path multi-queues vhost async enqueue with M:1 mapping between vrings and dsa kernel driver channels
---------------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with multi-queues
-when vhost uses the asynchronous enqueue operations with dsa kernel driver and the mapping between vrings and dsa virtual channels is M:1.
-Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
-
-1. Bind 1 dsa device to idxd driver and one nic port to vfio-pci like common step 1 and 3::
-
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
-
-	ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
-	ls /dev/dsa #check wq configure success
-
-2. Launch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq0.0]
-	testpmd>set fwd mac
-	testpmd>start
-
-3. Launch virtio-user with inorder mergeable path::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
-
-	testpmd>show port stats all
-
-5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
-
-	testpmd>stop
-
-6. Restart vhost port and send imix packets again, then check the throughput can get expected data::
-
-	testpmd>start
-	testpmd>show port stats all
-
-7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8 \
-	-- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,vectorized=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-11. Quit all testpmd and relaunch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq0.1]
-	testpmd>set fwd csum
-	testpmd>start
-
-12. Rerun step 10 with csum fwd.
-
-Test Case 16: PVP split ring all path multi-queues vhost async enqueue with 1:N mapping between vrings and dsa kernel driver channels
---------------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with multi-queues
-when vhost uses the asynchronous enqueue operations with dsa kernel driver and the mapping between vrings and dsa virtual channels is 1:N.
-Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
-
-1. Bind 3 dsa devices to idxd driver and one nic port to vfio-pci like common step 1 and 3::
-
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
-
-	ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
-	ls /dev/dsa #check wq configure success
-
-2. Launch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore11@wq0.2,lcore11@wq0.3,lcore11@wq0.4,lcore11@wq0.5,lcore11@wq0.6,lcore11@wq0.7]
-	testpmd>set fwd mac
-	testpmd>start
-
-3. Launch virtio-user with inorder mergeable path::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
-
-	testpmd>show port stats all
-
-5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
-
-	testpmd>stop
-
-6. Restart vhost port and send imix packets again, then check the throughput can get expected data::
-
-	testpmd>start
-	testpmd>show port stats all
-
-7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
-	-- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,vectorized=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-11. Quit all testpmd and relaunch vhost with diff channel by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq2.0,lcore11@wq2.1,lcore11@wq2.2,lcore11@wq2.3,lcore11@wq4.4,lcore11@wq4.5,lcore11@wq4.6,lcore11@wq4.7]
-	testpmd>set fwd csum
-	testpmd>start
-
-12. Rerun step 3-6 with csum fwd.
-
-Test Case 17: PVP split ring all path multi-queues vhost async enqueue with M:N mapping between vrings and dsa kernel driver channels
---------------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with multi-queues
-when vhost uses the asynchronous enqueue operations with dsa kernel driver and the mapping between vrings and dsa virtual channels is M:N.
-Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
-
-1. Bind 8 dsa devices to idxd driver and one nic port to vfio-pci like common step 1 and 3::
-
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
-
-	ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 8
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 10
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 12
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 14
-	ls /dev/dsa #check wq configure success
-
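The repeated `ls /dev/dsa #check wq configure success` steps can be scripted. A sketch that counts configured work queues per device, assuming the kernel idxd driver exposes them as `/dev/dsa/wqX.Y` nodes (illustrative only; returns an empty dict when no DSA device is present):

```python
import os
import re
from collections import Counter

def dsa_wq_counts(dev_dir="/dev/dsa"):
    """Count configured work queues per DSA device by scanning
    dev_dir for wq<dev>.<queue> nodes."""
    counts = Counter()
    if not os.path.isdir(dev_dir):
        return {}
    for name in os.listdir(dev_dir):
        m = re.fullmatch(r"wq(\d+)\.(\d+)", name)
        if m:
            counts[int(m.group(1))] += 1
    return dict(counts)
```

After the configuration above, device 0 through device 14 would each be expected to report 8 work queues.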
-2. Launch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore11@wq0.2,lcore11@wq0.3,lcore11@wq0.4,lcore11@wq0.5,lcore11@wq0.6,lcore11@wq0.7]
-	testpmd>set fwd mac
-	testpmd>start
-
-3. Launch virtio-user with inorder mergeable path::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
-
-	testpmd>show port stats all
-
-5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
-
-	testpmd>stop
-
-6. Restart vhost port and send imix packets again, then check the throughput can get expected data::
-
-	testpmd>start
-	testpmd>show port stats all
-
-7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8 \
-	-- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,vectorized=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-11. Quit all testpmd and relaunch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore11@wq0.2,lcore11@wq0.3,lcore11@wq0.4,lcore11@wq0.5,lcore11@wq0.6,lcore11@wq0.7]
-	testpmd>set fwd mac
-	testpmd>start
-
-12. Rerun step 3-6.
-
-13. Quit all testpmd and relaunch vhost with diff channel by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq0.0,lcore11@wq2.1,lcore11@wq4.2,lcore11@wq6.3,lcore11@wq8.4,lcore11@wq10.5,lcore11@wq12.6,lcore11@wq14.7]
-	testpmd>set fwd csum
-	testpmd>start
-
-14. Rerun step 11 with csum fwd.
-
-Test Case 18: PVP split ring dynamic queues vhost async operation with dsa kernel driver channels
---------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring when vhost uses the asynchronous enqueue operations
-with dsa kernel driver and if the vhost-user can work well when the queue number dynamic change.
-
-1. Bind 8 dsa devices to idxd driver and 1 NIC port to vfio-pci like common step 1 and 3::
-
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
-
-	ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 8
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 10
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 12
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 14
-	ls /dev/dsa #check wq configure success
-
-2. Launch vhost by below command::
-
-	 <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:4f:00.1 \
-	 --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
-	 --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
-	 testpmd>set fwd mac
-	 testpmd>start
-
-3. Launch virtio-user by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-	testpmd>set fwd mac
-	testpmd>start
-
-4. Send imix packets [64,1518] from packet generator with random ip, check performance can get target.
-
-5. Stop vhost port, check vhost RX and TX direction both have packets in 2 queues from vhost log.
-
-6. Quit and relaunch vhost with 1:1 mapping between vrings and dsa virtual channels::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0 -a 0000:6f:01.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \
-	--iova=va -- -i --nb-cores=8 --txd=1024 --rxd=1024 --txq=4 --rxq=4 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq2.1,lcore13@wq4.2,lcore14@wq6.3]
-	testpmd>set fwd mac
-	testpmd>start
-
-7. Rerun step 3.
-
-8. Stop vhost port, check vhost RX and TX direction both have packets in 4 queues from vhost log.
-
-9. Quit and relaunch vhost with M:N(1:N;M<N) mapping between vrings and dsa virtual channels::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18  -a 0000:4f:00.1 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq6;txq7]' \
-	--iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@wq0.0,lcore11@wq0.7,lcore12@wq0.1,lcore12@wq0.2,lcore12@wq0.3,lcore13@wq0.2,lcore13@wq0.3,lcore13@wq0.4,lcore14@wq0.2,lcore14@wq0.3,lcore14@wq0.4,lcore14@wq0.5,lcore15@wq0.0,lcore15@wq0.1,lcore15@wq0.2,lcore15@wq0.3,lcore15@wq0.4,lcore15@wq0.5,lcore15@wq0.6,lcore15@wq0.7]
-	testpmd>set fwd mac
-	testpmd>start
-
-10. Rerun step 3.
-
-11. Stop vhost port, check vhost RX and TX direction both have packets in 8 queues from vhost log.
-
-12. Quit and relaunch vhost with diff M:N(1:N;M<N) mapping between vrings and dsa virtual channels::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18  -a 0000:4f:00.1 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
-	--iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@wq0.0,lcore11@wq14.7,lcore12@wq2.1,lcore12@wq4.2,lcore12@wq6.3,lcore13@wq4.2,lcore13@wq6.3,lcore13@wq8.4,lcore14@wq4.2,lcore14@wq6.3,lcore14@wq8.4,lcore14@wq10.5,lcore15@wq0.0,lcore15@wq2.1,lcore15@wq4.2,lcore15@wq6.3,lcore15@wq8.4,lcore15@wq10.5,lcore15@wq12.6,lcore15@wq14.7]
-	testpmd>set fwd mac
-	testpmd>start
-
-13. Rerun step 10-11.
-
-14. Quit and relaunch vhost with diff M:N(M:1;M>N) mapping between vrings and dsa virtual channels::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 -a 0000:6a:01.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
-	--iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq0.1,lcore13@wq0.2,lcore14@wq0.1,lcore14@wq0.2,lcore15@wq0.1,lcore15@wq0.2]
-	testpmd>set fwd mac
-	testpmd>start
-
-15. Rerun step 10-11.
-
-16. Quit and relaunch vhost with diff M:N(M:1;M>N) mapping between vrings and dsa virtual channels::
+3. Launch virtio-user by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
-	--iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq2.1,lcore13@wq4.2,lcore14@wq2.1,lcore14@wq4.2,lcore15@wq2.1,lcore15@wq4.2]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-9 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
+	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
 	testpmd>set fwd mac
 	testpmd>start
 
-17. Rerun step 10-11.
-
-Test Case 19: PVP packed ring all path vhost enqueue operations with 1:1 mapping between vrings and dsa kernel driver channels
-------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path with 1 core and 1 queue
-when vhost uses the asynchronous enqueue operations with dsa kernel driver and the mapping between vrings and dsa virtual channels is 1:1.
-Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
-
-1. Bind one dsa device to idxd driver and one nic port to vfio-pci like common step 1 and 3::
-
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+4. Send tcp imix packets [64,1518] from packet generator with random ip, check the performance can reach the expected target.
 
-	#ls /dev/dsa,check wq configure, reset if exist
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py --reset 0
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
-	ls /dev/dsa #check wq configure success
+	testpmd>stop
 
-2. Launch vhost by below command::
+6. Quit and relaunch vhost without dsa::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore3@wq0.0]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
+	--iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
 	testpmd>set fwd mac
 	testpmd>start
 
-3. Launch virtio-user with inorder mergeable path::
+7. Rerun step 4-5.
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
+8. Quit and relaunch vhost with diff channel by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[rxq0;rxq1;rxq2;rxq3]' \
+	--iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3]
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
-
-	testpmd>show port stats all
-
-5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
-
-	testpmd>stop
-
-6. Restart vhost port and send imix packets again, then check the throuhput can get expected data::
-
-	testpmd>start
-	testpmd>show port stats all
+9. Rerun step 4-5.
 
-7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+10. Quit and relaunch vhost with diff channel by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore12@wq0.1,lcore12@wq0.2,lcore12@wq0.3,lcore12@wq0.4,lcore12@wq0.5,lcore12@wq0.6,lcore12@wq0.7]
 	testpmd>set fwd mac
 	testpmd>start
 
-8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
+11. Rerun step 4-5.
 
-9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+12. Quit and relaunch vhost with diff channel by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore12@wq0.1,lcore12@wq0.2,lcore13@wq0.3,lcore13@wq1.0,lcore13@wq1.1,lcore13@wq1.2,lcore14@wq1.0,lcore14@wq1.1,lcore14@wq1.2,lcore14@wq1.3]
 	testpmd>set fwd mac
 	testpmd>start
 
-10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1,vectorized=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
+13. Rerun step 4-5.
 
-11. Relaunch virtio-user with vector_rx path and ring size is not power of 2, then repeat step 4-6::
+14. Quit and relaunch vhost with diff channel by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1,vectorized=1,,queue_size=1025 \
-	-- -i --nb-cores=1 --txd=1025 --rxd=1025
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=4 --rxq=4 \
+	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3]
 	testpmd>set fwd mac
 	testpmd>start
 
-12. Quit all testpmd and relaunch vhost by below command::
+15. Rerun step 4-5, then start vhost port.
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore3@wq0.1]
-	testpmd>set fwd csum
+16. Relaunch virtio-user by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net0,mrg_rxbuf=0,in_order=0,queues=8,server=1 \
+	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+	testpmd>set fwd mac
 	testpmd>start
 
-13. Rerun step 3-6 with csum fwd.
+17. Rerun step 4-5.
 
-Test Case 20: PVP packed ring all path multi-queues vhost async enqueue with 1:1 mapping between vrings and dsa kernel driver channels
----------------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path with multi-queues
-when vhost uses the asynchronous enqueue operations with dsa kernel driver and the mapping between vrings and dsa virtual channels is 1:1.
-Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
+Test Case 10: PVP packed ring all path multi-queues vhost async operation with 1:1 mapping between vrings and dsa kernel driver channels
+-------------------------------------------------------------------------------------------------------------------------------------------
+This case tests that packed ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations with the dsa kernel driver and the mapping between vrings and virtual channels is 1:1.
 
-1. Bind 3 dsa device to idxd driver and one nic port to vfio-pci like common step 1 and 3::
+1. Bind 2 dsa devices to idxd driver and one nic port to vfio-pci like common step 1 and 3::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
 
 	#ls /dev/dsa,check wq configure, reset if exist
 	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
 	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 2
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 4
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 1
 	ls /dev/dsa #check wq configure success
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3,lcore15@wq0.4,lcore16@wq0.5,lcore17@wq0.6,lcore18@wq0.7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --file-prefix=vhost -a 0000:6a:00.0 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]' \
+	--iova=va -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore11@wq1.0,lcore11@wq1.1]
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=2 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send tcp imix packets [64,1518] from packet generator, check the throughput can get expected data::
 
 	testpmd>show port stats all
 
@@ -2025,89 +993,76 @@ Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
 
 7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=2 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=2 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=2 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1,vectorized=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=2,packed_vq=1,vectorized=1 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 11. Relaunch virtio-user with vector_rx path and ring size is not power of 2, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1,vectorized=1,queue_size=1025 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
-	testpmd>set fwd mac
-	testpmd>start
-
-12. Quit all testpmd and relaunch vhost with diff channel by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq2.0,lcore12@wq2.1,lcore13@wq2.2,lcore14@wq2.3,lcore15@wq4.0,lcore16@wq4.1,lcore17@wq4.2,lcore18@wq4.3]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=2,packed_vq=1,vectorized=1,queue_size=1025 \
+	-- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1025 --rxd=1025
 	testpmd>set fwd csum
 	testpmd>start
 
-13. Rerun step 11 with csum fwd.
-
-Test Case 21: PVP packed ring all path multi-queues vhost async enqueue with M:1 mapping between vrings and dsa kernel driver channels
----------------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path with multi-queues
-when vhost uses the asynchronous enqueue operations with dsa kernel driver and the mapping between vrings and dsa virtual channels is M:1.
-Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
+Test Case 11: PVP packed ring all path multi-queues vhost async operation with M:1 mapping between vrings and dsa kernel driver channels
+------------------------------------------------------------------------------------------------------------------------------------------
+This case tests that packed ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations with the dsa kernel driver and the mapping between vrings and virtual channels is M:1.
 
 1. Bind 1 dsa device to idxd driver and one nic port to vfio-pci like common step 1 and 3::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
 
 	ls /dev/dsa #check wq configure, reset if exist
 	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0
 	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
 	ls /dev/dsa #check wq configure success
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq0.0]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3]
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send tcp imix packets [64,1518] from packet generator, check the throughput can get expected data::
 
 	testpmd>show port stats all
 
@@ -2122,393 +1077,167 @@ Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
 
 7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1,vectorized=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 11. Relaunch virtio-user with vector_rx path and ring size is not power of 2, then repeat step 4-6::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1,vectorized=1,queue_size=1025 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
-	testpmd>set fwd mac
-	testpmd>start
-
-12. Quit all testpmd and relaunch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq0.1]
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1025 --rxd=1025
 	testpmd>set fwd csum
 	testpmd>start
 
-13. Rerun step 7 with csum fwd.
-
-Test Case 22: PVP packed ring all path multi-queues vhost async enqueue with 1:N mapping between vrings and dsa kernel driver channels
----------------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path with multi-queues
-when vhost uses the asynchronous enqueue operations with dsa kernel driver and the mapping between vrings and dsa virtual channels is 1:N.
-Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
+Test Case 12: PVP packed ring dynamic queues vhost async operation with dsa kernel driver channels
+---------------------------------------------------------------------------------------------------
+This case tests if the vhost-user async operation with the dsa kernel driver can work normally when the queue number of the packed ring changes dynamically.
 
-1. Bind 3 dsa device to idxd driver and one nic port to vfio-pci like common step 1 and 3::
+1. Bind 2 dsa devices to idxd driver and 1 NIC port to vfio-pci like common step 1 and 3::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
 
 	ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
 	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
+	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
 	ls /dev/dsa #check wq configure success
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore11@wq0.2,lcore11@wq0.3,lcore11@wq0.4,lcore11@wq0.5,lcore11@wq0.6,lcore11@wq0.7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:6a:00.0 \
+	--vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \
+	--iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3]
 	testpmd>set fwd mac
 	testpmd>start
 
-3. Launch virtio-user with inorder mergeable path::
+3. Launch virtio-user by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
+	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
-
-	testpmd>show port stats all
+4. Send tcp imix packets [64,1518] from packet generator with random ip, check the performance can reach the expected target.
 
 5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
 
 	testpmd>stop
 
-6. Restart vhost port and send imix packets again, then check the throuhput can get expected data::
-
-	testpmd>start
-	testpmd>show port stats all
-
-7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1,vectorized=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
-
-11. Relaunch virtio-user with vector_rx path and ring size is not power of 2, then repeat step 4-6::
+6. Quit and relaunch vhost without dsa::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1,vectorized=1,queue_size=1025 \
-	-- -i --nb-cores=1 --txd=1025 --rxd=1025
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
+	--iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
 	testpmd>set fwd mac
 	testpmd>start
 
-12. Quit all testpmd and relaunch vhost with diff channel by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq2.0,lcore11@wq2.1,lcore11@wq2.2,lcore11@wq2.3,lcore11@wq4.4,lcore11@wq4.5,lcore11@wq4.6,lcore11@wq4.7]
-	testpmd>set fwd csum
-	testpmd>start
-
-13. Rerun step 8 with csum fwd.
+7. Rerun step 4-5.
 
-Test Case 23: PVP packed ring all path multi-queues vhost async enqueue with M:N mapping between vrings and dsa kernel driver channels
----------------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path with multi-queues
-when vhost uses the asynchronous enqueue operations with dsa kernel driver and the mapping between vrings and dsa virtual channels is M:N.
-Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
+8. Quit and relaunch vhost with diff channel by below command::
 
-1. Bind 8 dsa device to idxd driver and one nic port to vfio-pci like common step 1 and 3::
-
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
-
-	ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 8
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 10
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 12
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 14
-	ls /dev/dsa #check wq configure success
-
-2. Launch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore11@wq0.2,lcore11@wq0.3,lcore11@wq0.4,lcore11@wq0.5,lcore11@wq0.6,lcore11@wq0.7]
-	testpmd>set fwd mac
-	testpmd>start
-
-3. Launch virtio-user with inorder mergeable path::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[rxq0;rxq1;rxq2;rxq3]' \
+	--iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3]
 	testpmd>set fwd mac
 	testpmd>start
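The ``--lcore-dma`` list above binds each forwarding lcore to one or more DSA channels: ``wqX.Y`` names for work queues under the kernel idxd driver, and ``<BDF>-qN`` names for queues on devices bound to vfio-pci. The helper below is a hypothetical sketch (not a DPDK tool) that splits such a list into per-lcore assignments, which can help sanity-check a long mapping before launching testpmd:

```shell
# Illustrative helper (not part of DPDK): split a --lcore-dma value into
# "lcore -> DMA channel" pairs. Channels are kernel-driver WQs (wqX.Y)
# or dpdk-driver queues (<BDF>-qN).
parse_lcore_dma() {
    echo "$1" | sed 's/^\[//; s/\]$//' | tr ',' '\n' |
    while IFS='@' read -r lcore chan; do
        echo "$lcore -> $chan"
    done
}

# Example: mixed kernel-driver and dpdk-driver channels
parse_lcore_dma '[lcore11@wq0.0,lcore11@wq0.1,lcore12@0000:6a:01.0-q0]'
```

Note an lcore may appear several times (one line per channel it drives), which is how the M:N mappings in this plan are expressed.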
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
-
-	testpmd>show port stats all
-
-5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
-
-	testpmd>stop
-
-6. Restart vhost port and send imix packets again, then check the throuhput can get expected data::
-
-	testpmd>start
-	testpmd>show port stats all
-
-7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
+9. Rerun step 4-5.
 
-8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+10. Quit and relaunch vhost with diff channel by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore12@wq0.1,lcore12@wq0.2,lcore12@wq0.3,lcore12@wq0.4,lcore12@wq0.5,lcore12@wq0.6,lcore12@wq0.7]
 	testpmd>set fwd mac
 	testpmd>start
 
-9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=8 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
-	testpmd>start
+11. Rerun step 4-5.
 
-10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
+12. Quit and relaunch vhost with diff channel by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1,vectorized=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore12@wq0.1,lcore12@wq0.2,lcore13@wq0.3,lcore13@wq1.0,lcore13@wq1.1,lcore13@wq1.2,lcore14@wq1.0,lcore14@wq1.1,lcore14@wq1.2,lcore14@wq1.3]
 	testpmd>set fwd mac
 	testpmd>start
 
-11. Relaunch virtio-user with vector_rx path and ring size is not power of 2, then repeat step 4-6::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1,vectorized=1,queue_size=1025 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025
-	testpmd>set fwd mac
-	testpmd>start
+13. Rerun step 4-5.
 
-12. Quit all testpmd and relaunch vhost by below command::
+14. Quit and relaunch vhost with diff channel by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore11@wq0.2,lcore11@wq0.3,lcore11@wq0.4,lcore11@wq0.5,lcore11@wq0.6,lcore11@wq0.7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=4 --rxq=4 \
+	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3]
 	testpmd>set fwd mac
 	testpmd>start
 
-13. Rerun step 3-6.
+15. Rerun step 4-5, then start vhost port.
 
-14. Quit all testpmd and relaunch vhost with diff channel by below command::
+16. Quit and relaunch virtio-user::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq0.0,lcore11@wq2.1,lcore11@wq4.2,lcore11@wq6.3,lcore11@wq8.4,lcore11@wq10.5,lcore11@wq12.6,lcore11@wq14.7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8,server=1 \
+	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
 	testpmd>set fwd csum
 	testpmd>start
 
-15. Rerun step 3-6 with csum fwd.
-
-Test Case 24: PVP packed ring dynamic queues vhost async operation with dsa kernel driver channels
----------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring when vhost uses the asynchronous enqueue
-operations with dsa kernel driver and if the vhost-user can work well when the queue number dynamic change.
-
-1. Bind 8 dsa device to idxd driver and 1 NIC port to vfio-pci like common step 1 and 3::
-
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
-
-	ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 8
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 10
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 12
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 14
-	ls /dev/dsa #check wq configure success
-
-2. Launch vhost by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:4f:00.1 \
-	 --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
-	 --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
-	 testpmd>set fwd mac
-	 testpmd>start
-
-3. Launch virtio-user by below command::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-	testpmd>set fwd mac
-	testpmd>start
-
-4. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target.
-
-5. Stop vhost port, check vhost RX and TX direction both exist packtes in 2 queues from vhost log.
-
-6. Quit and relaunch vhost with 1:1 mapping between vrings and dsa virtual channels::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \
-	--iova=va -- -i --nb-cores=8 --txd=1024 --rxd=1024 --txq=4 --rxq=4 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq2.1,lcore13@wq4.2,lcore14@wq6.3]
-	testpmd>set fwd mac
-	testpmd>start
-
-7. Rerun step 3.
+17. Rerun step 4-5.
 
-8. Stop vhost port, check vhost RX and TX direction both exist packtes in 4 queues from vhost log.
-
-9. Quit and relaunch vhost with M:N(1:N;M<N) mapping between vrings and dsa virtual channels::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18  -a 0000:4f:00.1 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq6;txq7]' \
-	--iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@wq0.0,lcore11@wq0.7,lcore12@wq0.1,lcore12@wq0.2,lcore12@wq0.3,lcore13@wq0.2,lcore13@wq0.3,lcore13@wq0.4,lcore14@wq0.2,lcore14@wq0.3,lcore14@wq0.4,lcore14@wq0.5,lcore15@wq0.0,lcore15@wq0.1,lcore15@wq0.2,lcore15@wq0.3,lcore15@wq0.4,lcore15@wq0.5,lcore15@wq0.6,lcore15@wq0.7]
-	testpmd>set fwd mac
-	testpmd>start
-
-10. Rerun step 3.
-
-11. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.
-
-12. Quit and relaunch vhost with diff M:N(1:N;M<N) mapping between vrings and dsa virtual channels::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18  -a 0000:4f:00.1 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
-	--iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@wq0.0,lcore11@wq14.7,lcore12@wq2.1,lcore12@wq4.2,lcore12@wq6.3,lcore13@wq4.2,lcore13@wq6.3,lcore13@wq8.4,lcore14@wq4.2,lcore14@wq6.3,lcore14@wq8.4,lcore14@wq10.5,lcore15@wq0.0,lcore15@wq2.1,lcore15@wq4.2,lcore15@wq6.3,lcore15@wq8.4,lcore15@wq10.5,lcore15@wq12.6,lcore15@wq14.7]
-	testpmd>set fwd mac
-	testpmd>start
-
-13. rerun step 10-11.
-
-14. Quit and relaunch vhost with diff M:N(M:1;M>N) mapping between vrings and dsa virtual channels::::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
-	--iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq0.1,lcore13@wq0.2,lcore14@wq0.1,lcore14@wq0.2,lcore15@wq0.1,lcore15@wq0.2]
-	testpmd>set fwd mac
-	testpmd>start
-
-15. Rerun step 10-11.
-
-16. Quit and relaunch vhost with diff M:N(M:1;M>N) mapping between vrings and dsa virtual channels::::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:4f:00.1 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
-	--iova=va -- -i --nb-cores=5 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq2.1,lcore13@wq4.2,lcore14@wq2.1,lcore14@wq4.2,lcore15@wq2.1,lcore15@wq4.2]
-	testpmd>set fwd mac
-	testpmd>start
-
-17. Rerun step 10-11.
-
-Test Case 25: PVP split and packed ring dynamic queues vhost async operation with dsa dpdk and kernel driver channels
+Test Case 13: PVP split and packed ring dynamic queues vhost async operation with dsa dpdk and kernel driver channels
 -----------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test split ring and packed ring when vhost uses the asynchronous enqueue
-operations with both dsa dpdk driver and dsa kernel driver and if the vhost-user can work well when the queue number dynamic change.
-Both iova as VA mode have beed tested.
+This case tests whether vhost-user async operation with the dsa kernel driver and dsa dpdk driver can work normally when the queue number of split ring and packed ring changes dynamically.
 
 1. Bind 2 dsa device to idxd driver, 2 dsa device and 1 NIC port to vfio-pci like common step 1-3::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
 
 	ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 e7:01.0 ec:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 f1:01.0 f6:01.0
 	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0
 	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
-	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 2
+	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
 	ls /dev/dsa #check wq configure success
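The work-queue names matter for the ``--lcore-dma`` mappings used in the following steps. A sketch only, assuming the ``wq<device>.<queue>`` naming convention shown above: it prints the ``/dev/dsa`` entries the two ``dpdk_idxd_cfg.py`` commands are expected to create, without touching real hardware:

```shell
# Sketch only: print the /dev/dsa work-queue names the two
# dpdk_idxd_cfg.py commands above should create (assuming the
# wq<device>.<queue> naming convention). Does not query hardware.
expected_wqs() {
    dev=$1
    nq=$2
    q=0
    while [ "$q" -lt "$nq" ]; do
        echo "wq${dev}.${q}"
        q=$((q + 1))
    done
}

expected_wqs 0 2    # dpdk_idxd_cfg.py -q 2 0 -> wq0.0, wq0.1
expected_wqs 1 4    # dpdk_idxd_cfg.py -q 4 1 -> wq1.0 .. wq1.3
```

Comparing this against the real ``ls /dev/dsa`` output is a quick way to confirm the WQ configuration succeeded.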
 
 2. Launch vhost::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0;txq1]' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2 --lcore-dma=[lcore3@wq0.0,lcore3@wq2.0]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1]' \
+	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2 --lcore-dma=[lcore11@wq0.0,lcore11@wq1.0]
 	testpmd>set fwd mac
 	testpmd>start
 
@@ -2517,18 +1246,18 @@ Both iova as VA mode have beed tested.
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
 	-- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-	testpmd>set fwd mac
+	testpmd>set fwd csum
 	testpmd>start
 
-4. Send imix packets from packet generator with random ip, check perforamnce can get target.
+4. Send TCP imix packets from packet generator with random ip, check performance can reach the target.
 
 5. Stop vhost port, check vhost RX and TX direction both exist packets in 2 queues from vhost log.
 
 6. Quit and relaunch vhost as below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=4 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas=[txq0;txq1;txq2;txq3]' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=4 --rxq=4 --lcore-dma=[lcore3@0000:e7:01.0-q0,lcore3@0000:e7:01.0-q1,lcore3@0000:ec:01.0-q2,lcore3@0000:ec:01.0-q3]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \
+	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=4 --rxq=4 --lcore-dma=[lcore11@0000:f1:01.0-q0,lcore11@0000:f1:01.0-q1,lcore11@0000:f6:01.0-q2,lcore11@0000:f6:01.0-q3]
 	testpmd>set fwd mac
 	testpmd>start
 
@@ -2538,9 +1267,9 @@ Both iova as VA mode have beed tested.
 
 9. Quit and relaunch vhost as below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=4 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5]' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --lcore-dma=[lcore3@wq0.0,lcore3@wq2.0,lcore3@wq2.2,lcore3@0000:e7:01.0-q0,lcore3@0000:e7:01.0-q1,lcore3@0000:ec:01.0-q3]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --lcore-dma=[lcore11@wq0.0,lcore11@wq1.0,lcore12@wq1.2,lcore12@0000:f1:01.0-q0,lcore13@0000:f1:01.0-q1,lcore14@0000:f6:01.0-q3]
 	testpmd>set fwd mac
 	testpmd>start
 
@@ -2548,24 +1277,24 @@ Both iova as VA mode have beed tested.
 
 11. Stop vhost port, check vhost RX and TX direction both exist packets in 8 queues from vhost log.
 
-12. Quit and relaunch vhost with diff cahnnels as below command::
+12. Quit and relaunch virtio-user with packed ring mergeable in-order path by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:4f:00.1 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=4 \
-	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --lcore-dma=[lcore3@wq0.0,lcore3@wq0.1,lcore3@wq2.1,lcore3@wq2.0,lcore3@0000:e7:01.0-q1,lcore3@0000:ec:01.0-q3]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
+	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
 	testpmd>set fwd mac
 	testpmd>start
 
-13. Send imix packets from packet generator with random ip, check perforamnce can get target.
-
-14. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.
+13. Start vhost port and rerun steps 10-11.
 
-15. Quit and relaunch virtio-user with packed ring mergeable in-order path by below command::
+14. Quit and relaunch vhost with diff channels by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
-	-- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore12@wq1.1,lcore13@wq1.0,lcore14@0000:f1:01.0-q1,lcore14@0000:f6:01.0-q3]
 	testpmd>set fwd mac
 	testpmd>start
 
-16. Rerun steps 10-11.
\ No newline at end of file
+15. Send imix packets from packet generator with random ip, check performance can reach the target.
+
+16. Stop vhost port, check vhost RX and TX direction both exist packets in 8 queues from vhost log.
-- 
2.25.1



* [dts][PATCH V1 1/1] test_plans/pvp_vhost_dsa_test_plan: modify test plan to test vhost async dequeue
  2022-09-06 11:22 ` [dts][PATCH V1 1/1] test_plans/pvp_vhost_dsa_test_plan: modify test plan to test vhost async dequeue Xingguang He
@ 2022-10-10  3:09   ` lijuan.tu
  0 siblings, 0 replies; 3+ messages in thread
From: lijuan.tu @ 2022-10-10  3:09 UTC (permalink / raw)
  To: dts, Xingguang He; +Cc: Xingguang He

On Tue,  6 Sep 2022 11:22:19 +0000, Xingguang He <xingguang.he@intel.com> wrote:
> From DPDK-22.07, vhost async dequeue is supported in both split and
> packed ring, so modify pvp_vhost_dsa_test_plan to test vhost async
> dequeue feature.
> 
> Signed-off-by: Xingguang He <xingguang.he@intel.com>


Applied, thanks

