From: Xingguang He <xingguang.he@intel.com>
To: dts@dpdk.org
Cc: Xingguang He <xingguang.he@intel.com>
Subject: [dts][PATCH V1 1/1] test_plans/loopback_virtio_user_server_mode_dsa_test_plan: modify test plan to test vhost async dequeue
Date: Tue,  6 Sep 2022 11:18:24 +0000
Message-ID: <20220906111824.1135920-2-xingguang.he@intel.com>
In-Reply-To: <20220906111824.1135920-1-xingguang.he@intel.com>

From DPDK-22.07, vhost async dequeue is supported in both split and
packed ring, so modify loopback_virtio_user_server_mode_dsa_test_plan to
test vhost async dequeue feature.
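
For reference, enqueue and dequeue offload are both enabled by listing a
queue's txq and rxq entries in the vdev dmas list; a minimal sketch mirroring
the commands in this plan (the PCI address and lcore numbers are illustrative):

	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:e7:01.0,max_queues=1 \
	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \
	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --lcore-dma=[lcore3@0000:e7:01.0-q0]

Here txq0 enables asynchronous enqueue and rxq0 enables asynchronous dequeue on queue 0.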

Signed-off-by: Xingguang He <xingguang.he@intel.com>
---
 ..._virtio_user_server_mode_dsa_test_plan.rst | 315 +++++++++---------
 1 file changed, 159 insertions(+), 156 deletions(-)

diff --git a/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst b/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst
index 8e5bdf3a..a96ce539 100644
--- a/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst
+++ b/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst
@@ -10,7 +10,7 @@ Description
 
 Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and it is implemented in an asynchronous way.
 In addition, vhost supports M:N mapping between vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA
-channels and one DMA channel can be shared by multiple vrings at the same time. Vhost enqueue operation with CBDMA channels is supported
+channels and one DMA channel can be shared by multiple vrings at the same time. Vhost enqueue and dequeue operations with DSA channels are supported
 in both split and packed ring.
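
As a concrete illustration of the M:N mapping, the --lcore-dma lists used in this plan can both share one DMA queue among several lcores and give one lcore several DMA queues; a sketch drawn from the commands below (the device address and lcore numbers are illustrative):

	--lcore-dma=[lcore11@0000:e7:01.0-q0,lcore12@0000:e7:01.0-q0,lcore13@0000:e7:01.0-q1,lcore13@0000:e7:01.0-q2]

Here DMA queue q0 is shared by lcore11 and lcore12, while lcore13 drives two queues, q1 and q2.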
 
 This document provides the test plan for testing the following features when Vhost-user using asynchronous data path with
@@ -31,6 +31,8 @@ Note:
 exceed the IOMMU's max capability; it is better to use 1G guest hugepages.
 2. A local DPDK patch for the vhost PMD is needed when testing the vhost asynchronous data path with testpmd, and this suite has not yet been automated.
 
+Prerequisites
+=============
 Topology
 --------
 	Test flow: Vhost-user <-> Virtio-user
@@ -39,8 +41,11 @@ General set up
 --------------
 1. Compile DPDK::
 
-	# CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=<dpdk build dir>
+	# CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
 	# ninja -C <dpdk build dir> -j 110
+	For example:
+	CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+	ninja -C x86_64-native-linuxapp-gcc -j 110
 
 2. Get the PCI device ID and DSA device IDs of the DUT; for example, 0000:4f:00.1 is a PCI device ID and 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
 
@@ -82,13 +87,13 @@ Common steps
 
 2. Bind DSA devices to kernel idxd driver, and configure Work Queue (WQ)::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd <numDevices * 2>
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q <numWq>
+	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd <numDevices>
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q <numWq>
 
 .. note::
 
 	It is better to reset the WQs when you need to operate DSA devices that are bound to the idxd driver:
-	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset <numDevices * 2>
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset <numDevices>
 	You can check this with 'ls /dev/dsa'.
 	numDevices: number of devices, where 0<=numDevices<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0
 	numWq: Number of workqueues per DSA endpoint, where 1<=numWq<=8
@@ -96,24 +101,24 @@ Common steps
 	For example, bind 2 DMA devices to idxd driver and configure WQ:
 
 	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 2
-	Check WQ by 'ls /dev/dsa' and can find "wq0.0 wq2.0 wq2.1 wq2.2 wq2.3"
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
+	Check the WQs with 'ls /dev/dsa'; you should find "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3"
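
If the WQs need to be changed later, a reset-then-reconfigure sketch assembled from the commands above (DSA device number 0 is illustrative):

	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset 0
	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
	<dpdk dir># ls /dev/dsa   # expect wq0.0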
 
-Test Case 1: loopback split ring server mode large chain packets stress test with dsa dpdk driver
+Test Case 1: Loopback split ring server mode large chain packets stress test with dsa dpdk driver
 ---------------------------------------------------------------------------------------------------
 This is a stress test case about forwarding large chain packets in loopback vhost-user/virtio-user split ring with server mode
-when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
+when vhost uses the asynchronous enqueue and dequeue operations with the dsa dpdk driver. Both IOVA as VA and PA modes are tested.
 
 1. Bind 1 DSA device to vfio-pci as in common step 1::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f6:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0
 
 2. Launch vhost::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:f6:01.0,max_queues=1 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],client=1' \
-	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=45535 --lcore-dma=[lcore3@0000:f6:01.0-q0]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:e7:01.0,max_queues=1 \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \
+	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=45535 --lcore-dma=[lcore3@0000:e7:01.0-q0]
 
 3. Launch virtio-user and start testpmd::
 
@@ -130,30 +135,30 @@ when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both i
 
 5. Stop and quit vhost testpmd and relaunch vhost in PA mode with the command below::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:f6:01.0,max_queues=4 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],client=1' \
-	--iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=45535 --lcore-dma=[lcore3@0000:f6:01.0-q0]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:e7:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \
+	--iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=45535 --lcore-dma=[lcore3@0000:e7:01.0-q0]
 
-6. rerun step 4.
+6. Rerun step 4.
 
-Test Case 2: loopback packed ring server mode large chain packets stress test with dsa dpdk driver
+Test Case 2: Loopback packed ring server mode large chain packets stress test with dsa dpdk driver
 ----------------------------------------------------------------------------------------------------
 This is a stress test case about forwarding large chain packets in loopback vhost-user/virtio-user packed ring with server mode
-when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
+when vhost uses the asynchronous enqueue and dequeue operations with the dsa dpdk driver. Both IOVA as VA and PA modes are tested.
 
 1. Bind 1 DSA port to vfio-pci as in common step 1::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f6:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0
 
 2. Launch vhost::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:6f:01.0,max_queues=1 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],client=1' \
-	--iova=va -- -i --nb-cores=1 --mbuf-size=45535 --lcore-dma=[lcore3@0000:6f:01.0-q0]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:f1:01.0,max_queues=1 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \
+	--iova=va -- -i --nb-cores=1 --mbuf-size=45535 --lcore-dma=[lcore3@0000:f1:01.0-q0]
 
 3. Launch virtio-user and start testpmd::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4  --file-prefix=testpmd0 --no-pci  \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4  --file-prefix=testpmd0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,packed_vq=1,queue_size=2048,server=1 \
 	-- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1
 	testpmd>start
@@ -166,35 +171,36 @@ when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both i
 
 5. Stop and quit vhost testpmd and relaunch vhost in PA mode with the command below::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:6f:01.0,max_queues=1 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],client=1' \
-	--iova=va -- -i --nb-cores=1 --mbuf-size=45535 --lcore-dma=[lcore3@0000:6f:01.0-q0]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:f1:01.0,max_queues=1 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \
+	--iova=pa -- -i --nb-cores=1 --mbuf-size=45535 --lcore-dma=[lcore3@0000:f1:01.0-q0]
 
-6. rerun step 3.
+6. Rerun step 4.
 
-Test Case 3: loopback split ring all path server mode and multi-queues payload check with dsa dpdk driver
+Test Case 3: Loopback split ring all path server mode and multi-queues payload check with dsa dpdk driver
 -----------------------------------------------------------------------------------------------------------
 This case checks that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user split ring
-all path multi-queues with server mode when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
+all path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with the dsa dpdk driver.
+Both IOVA as VA and PA modes are tested.
 
 1. Bind 3 DSA ports to vfio-pci as in common step 1::
 
 	ls /dev/dsa # check the WQ configuration; reset if any WQs exist
-	./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0
-	./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0
+	./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0
+	./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0
 
 2. Launch vhost::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:6a:01.0,max_queues=4 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
-	--iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6a:01.0-q1,lcore13@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q1,lcore14@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@0000:e7:01.0-q0,lcore12@0000:e7:01.0-q0,lcore13@0000:e7:01.0-q1,lcore13@0000:e7:01.0-q2,lcore14@0000:e7:01.0-q1,lcore14@0000:e7:01.0-q2]
 
 3. Launch virtio-user with split ring mergeable inorder path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	-- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -219,9 +225,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 8. Quit and relaunch virtio with split ring mergeable path as below::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	-- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -229,9 +235,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 10. Quit and relaunch virtio with split ring non-mergeable path as below::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,server=1 \
-	-- -i --enable-hw-vlan-strip --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	-- -i --enable-hw-vlan-strip --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -252,9 +258,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 15. Quit and relaunch virtio with split ring inorder non-mergeable path as below::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	-- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -262,9 +268,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 17. Quit and relaunch virtio with split ring vectorized path as below::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	-- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -272,45 +278,45 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 19. Quit and relaunch vhost with different DMA channels::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
 	--iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6f:01.0-q1,lcore13@0000:74:01.0-q2,lcore14@0000:6f:01.0-q1,lcore14@0000:74:01.0-q2,lcore15@0000:6f:01.0-q1,lcore15@0000:74:01.0-q2]
+	--lcore-dma=[lcore11@0000:e7:01.0-q0,lcore12@0000:e7:01.0-q0,lcore13@0000:ec:01.0-q1,lcore13@0000:f1:01.0-q2,lcore14@0000:ec:01.0-q1,lcore14@0000:f1:01.0-q2,lcore15@0000:ec:01.0-q1,lcore15@0000:f1:01.0-q2]
 
 20. Rerun steps 11-14.
 
 21. Quit and relaunch vhost with iova=pa::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:6a:01.0 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
 	--iova=pa -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore12@0000:6a:01.0-q0,lcore13@0000:6a:01.0-q1,lcore13@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q1,lcore14@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2]
+	--lcore-dma=[lcore11@0000:e7:01.0-q0,lcore12@0000:e7:01.0-q0,lcore13@0000:e7:01.0-q1,lcore13@0000:e7:01.0-q2,lcore14@0000:e7:01.0-q1,lcore14@0000:e7:01.0-q2,lcore15@0000:e7:01.0-q1,lcore15@0000:e7:01.0-q2]
 
 22. Rerun steps 11-14.
 
-Test Case 4: loopback packed ring all path server mode and multi-queues payload check with dsa dpdk driver
+Test Case 4: Loopback packed ring all path server mode and multi-queues payload check with dsa dpdk driver
 ------------------------------------------------------------------------------------------------------------
 This case checks that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring
-all path multi-queues with server mode when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
+all path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with the dsa dpdk driver. Both IOVA as VA and PA modes are tested.
 
-1. bind 8 dsa port to vfio-pci like common step 1::
+1. Bind 2 DSA ports to vfio-pci as in common step 1::
 
 	ls /dev/dsa # check the WQ configuration; reset if any WQs exist
-	./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-	./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+	./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0
+	./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0
 
 2. Launch vhost::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:6a:01.0 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
-	--iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:6a:01.0-q7,lcore12@0000:6a:01.0-q1,lcore12@0000:6a:01.0-q2,lcore12@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q2,lcore13@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q3,lcore14@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q0,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q3,lcore15@0000:6a:01.0-q4,lcore15@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q6,lcore15@0000:6a:01.0-q7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@0000:e7:01.0-q0,lcore12@0000:e7:01.0-q1,lcore13@0000:e7:01.0-q2,lcore14@0000:e7:01.0-q3]
 
 3. Launch virtio-user with packed ring mergeable inorder path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	-- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -335,9 +341,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 8. Quit and relaunch virtio with packed ring mergeable path as below::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	-- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -345,9 +351,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 10. Quit and relaunch virtio with packed ring non-mergeable path as below::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	-- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -368,9 +374,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 15. Quit and relaunch virtio with packed ring inorder non-mergeable path as below::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	-- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -378,9 +384,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 17. Quit and relaunch virtio with packed ring vectorized path as below::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	-- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -388,9 +394,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 19. Quit and relaunch virtio with packed ring vectorized path and a ring size that is not a power of 2 as below::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queue_size=1025,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1025 --rxd=1025
+	-- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1025 --rxd=1025
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -398,39 +404,39 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 21. Quit and relaunch vhost with different DMA channels::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 -a 0000:79:01.0 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
-	--iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:f6:01.0-q7,lcore12@0000:6f:01.0-q1,lcore12@0000:74:01.0-q2,lcore12@0000:79:01.0-q3,lcore13@0000:74:01.0-q2,lcore13@0000:79:01.0-q3,lcore13@0000:e7:01.0-q4,lcore14@0000:74:01.0-q2,lcore14@0000:79:01.0-q3,lcore14@0000:e7:01.0-q4,lcore14@0000:ec:01.0-q5,lcore15@0000:6a:01.0-q0,lcore15@0000:6f:01.0-q1,lcore15@0000:74:01.0-q2,lcore15@0000:79:01.0-q3,lcore15@0000:e7:01.0-q4,lcore15@0000:ec:01.0-q5,lcore15@0000:f1:01.0-q6,lcore15@0000:f6:01.0-q7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=2 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@0000:e7:01.0-q0,lcore11@0000:ec:01.0-q1]
 
 22. Rerun steps 11-14.
 
 23. Quit and relaunch vhost with iova=pa::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:6a:01.0 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
-	--iova=pa -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:6a:01.0-q0,lcore11@0000:6a:01.0-q7,lcore12@0000:6a:01.0-q1,lcore12@0000:6a:01.0-q2,lcore12@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q2,lcore13@0000:6a:01.0-q3,lcore13@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q2,lcore14@0000:6a:01.0-q3,lcore14@0000:6a:01.0-q4,lcore14@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q0,lcore15@0000:6a:01.0-q1,lcore15@0000:6a:01.0-q2,lcore15@0000:6a:01.0-q3,lcore15@0000:6a:01.0-q4,lcore15@0000:6a:01.0-q5,lcore15@0000:6a:01.0-q6,lcore15@0000:6a:01.0-q7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=pa -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@0000:e7:01.0-q1,lcore11@0000:e7:01.0-q3]
 
-24. Rerun steps 3-6.
+24. Rerun steps 11-14.
 
-Test Case 5: loopback split ring server mode large chain packets stress test with dsa kernel driver
+Test Case 5: Loopback split ring server mode large chain packets stress test with dsa kernel driver
 ---------------------------------------------------------------------------------------------------
 This is a stress test case about forwarding large chain packets in loopback vhost-user/virtio-user split ring with server mode
-when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+when vhost uses the asynchronous enqueue and dequeue operations with the dsa kernel driver.
 
 1. Bind 1 DSA device to idxd as in common step 2::
 
 	ls /dev/dsa # check the WQ configuration; reset if any WQs exist
 	./usertools/dpdk-devbind.py -u 6a:01.0
 	./usertools/dpdk-devbind.py -b idxd 6a:01.0
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
 	ls /dev/dsa # check that the WQ configuration succeeded
 
 2. Launch vhost::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],client=1' \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --no-pci \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \
 	--iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=45535 --lcore-dma=[lcore3@wq0.2]
 
 3. Launch virtio-user and start testpmd::
@@ -446,23 +452,23 @@ when vhost uses the asynchronous enqueue operations with dsa kernel driver.
 	testpmd>start tx_first 32
 	testpmd>show port stats all
 
-Test Case 6: loopback packed ring server mode large chain packets stress test with dsa kernel driver
+Test Case 6: Loopback packed ring server mode large chain packets stress test with dsa kernel driver
 -----------------------------------------------------------------------------------------------------
 This is a stress test case about forwarding large chain packets in loopback vhost-user/virtio-user packed ring with server mode
-when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+when vhost uses the asynchronous enqueue and dequeue operations with the dsa kernel driver.
 
 1. Bind 1 DSA port to idxd as in common step 2::
 
 	ls /dev/dsa # check the WQ configuration; reset if any WQs exist
 	./usertools/dpdk-devbind.py -u 6a:01.0
 	./usertools/dpdk-devbind.py -b idxd 6a:01.0
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
 	ls /dev/dsa # check that the WQ configuration succeeded
 
 2. Launch vhost::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],client=1' \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 --no-pci \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \
 	--iova=va -- -i --nb-cores=1 --mbuf-size=45535 --lcore-dma=[lcore3@wq0.0]
 
 3. Launch virtio-user and start testpmd::
@@ -478,25 +484,24 @@ when vhost uses the asynchronous enqueue operations with dsa kernel driver.
 	testpmd>start tx_first 32
 	testpmd>show port stats all
 
-Test Case 7: loopback split ring all path server mode and multi-queues payload check with dsa kernel driver
+Test Case 7: Loopback split ring all path server mode and multi-queues payload check with dsa kernel driver
 -------------------------------------------------------------------------------------------------------------
 This case checks that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user split ring
-all path multi-queues with server mode when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+all path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with the dsa kernel driver.
 
-1. bind 3 dsa port to idxd like common step 2::
+1. Bind 2 DSA ports to idxd as in common step 2::
 
 	ls /dev/dsa # check the WQ configuration; reset if any WQs exist
-	./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0
-	./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
+	./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+	./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
 	ls /dev/dsa # check that the WQ configuration succeeded
 
 2. Launch vhost::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
 	--iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
 	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq0.1,lcore13@wq0.2,lcore14@wq0.1,lcore14@wq0.2,lcore15@wq0.1,lcore15@wq0.2]
 
@@ -520,6 +525,7 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 	testpmd> set txpkts 64,64,64,2000,2000,2000
 	testpmd> set burst 1
 	testpmd> start tx_first 1
+	testpmd> show port stats all
 	testpmd> stop
 
 6. Quit pdump; check in the pcap file that all packet lengths are 6192 bytes and that the payloads of the received packets are identical.
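
A quick sketch of this length check (the pcap path is hypothetical; it depends on how pdump was launched in step 4):

	# with -e, tcpdump prints each frame's length; a single unique value confirms all packets are 6192 bytes
	tcpdump -nn -e -r /tmp/pdump-virtio-rx.pcap 2>/dev/null | grep -o 'length [0-9]*' | sort -u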
@@ -552,6 +558,7 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 	testpmd> set txpkts 64,128,256,512
 	testpmd> set burst 1
 	testpmd> start tx_first 1
+	testpmd> show port stats all
 	testpmd> stop
 
 13. Quit pdump; check in the pcap file that all packet lengths are 960 bytes and that the payloads of the received packets are identical.
@@ -580,45 +587,39 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 19. Quit and relaunch vhost with different DMA channels::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:6a:01.0 -a 0000:6f:01.0 -a 0000:74:01.0 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
 	--iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq2.1,lcore13@wq4.2,lcore14@wq2.1,lcore14@wq4.2,lcore15@wq2.1,lcore15@wq4.2]
+	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq0.1,lcore13@wq1.0,lcore14@wq0.1,lcore14@wq1.0,lcore15@wq0.1,lcore15@wq1.0]
 
 20. Rerun steps 11-14.
 
-Test Case 8: loopback packed ring all path server mode and multi-queues payload check with dsa kernel driver
+Test Case 8: Loopback packed ring all path server mode and multi-queues payload check with dsa kernel driver
 -------------------------------------------------------------------------------------------------------------
 This case checks that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring
-all path multi-queues with server mode when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+all path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with the dsa kernel driver.
 
 1. Bind 2 DSA ports to idxd as in common step 2::
 
 	ls /dev/dsa # check the WQ configuration; reset if any WQs exist
-	./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-	./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 8
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 10
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 12
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 14
+	./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+	./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
 	ls /dev/dsa # check that the WQ configuration succeeded
 
 2. Launch vhost::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
-	--iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq0.0,lcore11@wq0.7,lcore12@wq0.1,lcore12@wq0.2,lcore12@wq0.3,lcore13@wq0.2,lcore13@wq0.3,lcore13@wq0.4,lcore14@wq0.2,lcore14@wq0.3,lcore14@wq0.4,lcore14@wq0.5,lcore15@wq0.0,lcore15@wq0.1,lcore15@wq0.2,lcore15@wq0.3,lcore15@wq0.4,lcore15@wq0.5,lcore15@wq0.6,lcore15@wq0.7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3]
 
 3. Launch virtio-user with packed ring mergeable inorder path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	-- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -634,6 +635,7 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 	testpmd> set txpkts 64,64,64,2000,2000,2000
 	testpmd> set burst 1
 	testpmd> start tx_first 1
+	testpmd> show port stats all
 	testpmd> stop
 
 6. Quit pdump; check in the pcap file that all packet lengths are 6192 bytes and that the payloads of the received packets are identical.
@@ -642,9 +644,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 8. Quit and relaunch virtio with packed ring mergeable path as below::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	-- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -652,9 +654,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 10. Quit and relaunch virtio with packed ring non-mergeable path as below::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	-- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -666,6 +668,7 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 	testpmd> set txpkts 64,128,256,512
 	testpmd> set burst 1
 	testpmd> start tx_first 1
+	testpmd> show port stats all
 	testpmd> stop
 
 13. Quit pdump; check in the pcap file that all packet lengths are 960 bytes and that the payloads of the received packets are identical.
@@ -674,9 +677,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 15. Quit and relaunch virtio with packed ring inorder non-mergeable path as below::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	-- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -684,9 +687,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 17. Quit and relaunch virtio with packed ring vectorized path as below::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	-- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -694,9 +697,9 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 19. Quit and relaunch virtio with packed ring vectorized path and a ring size that is not a power of 2 as below::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queue_size=1025,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1025 --rxd=1025
+	-- -i --nb-cores=2 --rxq=8 --txq=8 --txd=1025 --rxd=1025
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -704,36 +707,34 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue
 
 21. Quit and relaunch vhost with different DMA channels::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
-	--iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@wq0.0,lcore11@wq14.7,lcore12@wq2.1,lcore12@wq4.2,lcore12@wq6.3,lcore13@wq4.2,lcore13@wq6.3,lcore13@wq8.4,lcore14@wq4.2,lcore14@wq6.3,lcore14@wq8.4,lcore14@wq10.5,lcore15@wq0.0,lcore15@wq2.1,lcore15@wq4.2,lcore15@wq6.3,lcore15@wq8.4,lcore15@wq10.5,lcore15@wq12.6,lcore15@wq14.7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
+	--lcore-dma=[lcore11@wq0.0,lcore11@wq1.0,lcore12@wq0.1,lcore12@wq1.1,lcore13@wq0.2,lcore13@wq1.2,lcore14@wq0.3,lcore14@wq1.3]
 
 22. Rerun steps 3-6.
 
-Test Case 9: loopback split and packed ring server mode multi-queues and mergeable path payload check with dsa dpdk and kernel driver
+Test Case 9: Loopback split and packed ring server mode multi-queues and mergeable path payload check with dsa dpdk and kernel driver
 --------------------------------------------------------------------------------------------------------------------------------------
 This case checks that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user split and packed ring
-multi-queues with server mode when vhost uses the asynchronous enqueue operations with dsa kernel driver.
+multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with both the dsa dpdk and kernel drivers.
 
-1. bind 4 dsa device to idxd and 2 dsa device to vfio-pci like common step 1-2::
+1. Bind 2 DSA devices to idxd and 2 DSA devices to vfio-pci as in common steps 1-2::
 
 	ls /dev/dsa # check the WQ configuration; reset if any WQs exist
-	./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0
-	./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0
-	./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 2
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 4
-	<dpdk dir># ./<dpdk build dir>drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 6
+	./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 f1:01.0 f6:01.0
+	./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+	./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
 	ls /dev/dsa # check that the WQ configuration succeeded
 
 2. Launch vhost::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 --file-prefix=vhost -a 0000:e7:01.0,max_queues=4 -a 0000:ec:01.0,max_queues=4 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
 	--iova=va -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore3@wq0.0,lcore3@wq2.0,lcore3@wq4.0,lcore3@wq6.0,lcore3@0000:e7:01.0-q0,lcore3@0000:e7:01.0-q2,lcore3@0000:ec:01.0-q3]
+	--lcore-dma=[lcore3@wq0.0,lcore3@wq0.1,lcore3@wq1.0,lcore3@wq1.1,lcore3@0000:f1:01.0-q0,lcore3@0000:f1:01.0-q2,lcore3@0000:f6:01.0-q3]
 
 3. Launch virtio-user with split ring mergeable inorder path::
 
@@ -751,10 +752,12 @@ multi-queues with server mode when vhost uses the asynchronous enqueue operation
 
 5. Send large packets from vhost; check that loopback performance meets expectations and that each queue receives packets::
 
-	 testpmd>set fwd csum
-	 testpmd>set txpkts 64,64,64,2000,2000,2000
-	 testpmd>set burst 1
-	 testpmd>start tx_first 1
+	testpmd>set fwd csum
+	testpmd>set txpkts 64,64,64,2000,2000,2000
+	testpmd>set burst 1
+	testpmd>start tx_first 1
+	testpmd>show port stats all
+	testpmd>stop
 
 6. Quit pdump and check that all packet lengths in the pcap file are 6192 bytes and that the payload of all packets is identical.
 
@@ -788,4 +791,4 @@ multi-queues with server mode when vhost uses the asynchronous enqueue operation
 	testpmd>set fwd csum
 	testpmd>start
 
-13. Stop vhost and rerun step 4-7.
\ No newline at end of file
+13. Stop vhost and rerun steps 4-7.
-- 
2.25.1

