* [dts][PATCH V3 0/2] fix basic_4k_pages_dsa testplan and
@ 2023-03-27  5:56 Wei Ling
  2023-03-27  5:56 ` [dts][PATCH V3 1/2] test_plans/basic_4k_pages_dsa: modify the dmas parameter Wei Ling
  2023-03-27  5:56 ` [dts][PATCH V3 2/2] tests/basic_4k_pages_dsa: fix start QEMU with wrong command issue Wei Ling
  0 siblings, 2 replies; 4+ messages in thread
From: Wei Ling @ 2023-03-27  5:56 UTC (permalink / raw)
  To: dts; +Cc: Wei Ling

1. Modify the dmas parameter in the test plan, following the DPDK local
patch changes.
2. Fix the issue of starting QEMU with a wrong command in the test suite.
3. Switch to the new API functions for creating WQs and binding DSA
devices to the DPDK driver in the test suite.

Wei Ling (2):
  test_plans/basic_4k_pages_dsa: modify the dmas parameter
  tests/basic_4k_pages_dsa: fix start QEMU with wrong command issue

 test_plans/basic_4k_pages_dsa_test_plan.rst | 376 +++++++++-----------
 tests/TestSuite_basic_4k_pages_dsa.py       | 365 +++++++++----------
 2 files changed, 352 insertions(+), 389 deletions(-)

-- 
2.25.1



* [dts][PATCH V3 1/2] test_plans/basic_4k_pages_dsa: modify the dmas parameter
  2023-03-27  5:56 [dts][PATCH V3 0/2] fix basic_4k_pages_dsa testplan and Wei Ling
@ 2023-03-27  5:56 ` Wei Ling
  2023-03-27  5:56 ` [dts][PATCH V3 2/2] tests/basic_4k_pages_dsa: fix start QEMU with wrong command issue Wei Ling
  1 sibling, 0 replies; 4+ messages in thread
From: Wei Ling @ 2023-03-27  5:56 UTC (permalink / raw)
  To: dts; +Cc: Wei Ling

Modify the dmas parameter in the test plan, following the DPDK local
patch changes.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 test_plans/basic_4k_pages_dsa_test_plan.rst | 376 +++++++++-----------
 1 file changed, 178 insertions(+), 198 deletions(-)

diff --git a/test_plans/basic_4k_pages_dsa_test_plan.rst b/test_plans/basic_4k_pages_dsa_test_plan.rst
index eeea25d8..3e656519 100644
--- a/test_plans/basic_4k_pages_dsa_test_plan.rst
+++ b/test_plans/basic_4k_pages_dsa_test_plan.rst
@@ -23,9 +23,9 @@ and packed ring vhost-user/virtio-net mergeable and non-mergeable path.
 
 .. note::
 
-   1.When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may
-   exceed IOMMU's max capability, better to use 1G guest hugepage.
-   2.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd.
+	1. When DMA devices are bound to the vfio driver, VA mode is the default and recommended. For PA mode, page-by-page
+	mapping may exceed the IOMMU's max capability, so it is better to use 1G guest hugepages.
+	2. A DPDK local patch for the vhost pmd is needed when testing the Vhost asynchronous data path with testpmd.
 
 Prerequisites
 =============
@@ -36,30 +36,30 @@ General set up
 
 2. Compile DPDK::
 
-	# CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
-	# ninja -C <dpdk build dir> -j 110
+	CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+	ninja -C <dpdk build dir> -j 110
 	For example,
 	CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
 	ninja -C x86_64-native-linuxapp-gcc -j 110
 
-3. Get the PCI device of DUT, for example, 0000:6a:00.0 is NIC port, 0000:6a:01.0 - 0000:f6:01.0 are DSA devices::
+3. Get the PCI devices of the DUT. For example, 0000:29:00.0 is the NIC port and 0000:6a:01.0 - 0000:f6:01.0 are DSA devices::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -s
+	./usertools/dpdk-devbind.py -s
 
 	Network devices using kernel driver
 	===================================
-	0000:6a:00.0 'Ethernet Controller E810-C for QSFP 1592' drv=ice unused=vfio-pci
-
-	4DMA devices using kernel driver
-	4===============================
-	40000:6a:01.0 'Device 0b25' drv=idxd unused=vfio-pci
-	40000:6f:01.0 'Device 0b25' drv=idxd unused=vfio-pci
-	40000:74:01.0 'Device 0b25' drv=idxd unused=vfio-pci
-	40000:79:01.0 'Device 0b25' drv=idxd unused=vfio-pci
-	40000:e7:01.0 'Device 0b25' drv=idxd unused=vfio-pci
-	40000:ec:01.0 'Device 0b25' drv=idxd unused=vfio-pci
-	40000:f1:01.0 'Device 0b25' drv=idxd unused=vfio-pci
-	40000:f6:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+	0000:29:00.0 'Ethernet Controller E810-C for QSFP 1592' drv=ice unused=vfio-pci
+
+	DMA devices using kernel driver
+	===============================
+	0000:6a:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+	0000:6f:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+	0000:74:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+	0000:79:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+	0000:e7:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+	0000:ec:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+	0000:f1:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+	0000:f6:01.0 'Device 0b25' drv=idxd unused=vfio-pci
 
 4. Prepare tmpfs with 4K-pages::
 
@@ -75,41 +75,37 @@ Common steps
 ------------
 1. Bind 1 NIC port to vfio-pci::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <nic_pci>
-	For example:
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:00.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0
 
 2. Bind DSA devices to the DPDK vfio-pci driver::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <dsa_pci>
-	For example, bind 2 DSA devices to vfio-pci driver:
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:01.0 0000:6f:01.0
 
 .. note::
 
 	One DPDK DSA device can create at most 8 WQs. Below is an example, where DPDK will create one and
-	eight WQ for DSA deivce 0000:e7:01.0 and 0000:ec:01.0. The value of “max_queues” is 1~8:
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:e7:01.0,max_queues=1 -- -i
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:ec:01.0,max_queues=8 -- -i
+	eight WQs for the DSA devices 0000:6a:01.0 and 0000:6f:01.0, respectively. The value of “max_queues” is 1~8:
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:6a:01.0,max_queues=1 -- -i
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:6f:01.0,max_queues=8 -- -i
 
 3. Bind DSA devices to kernel idxd driver, and configure Work Queue (WQ)::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd <dsa_pci>
-	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q <wq_num> <dsa_idx>
+	./usertools/dpdk-devbind.py -b idxd <dsa_pci>
+	./drivers/dma/idxd/dpdk_idxd_cfg.py -q <wq_num> <dsa_idx>
 
 .. note::
 
 	It is better to reset the WQs before operating DSA devices bound to the idxd driver:
-	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset <dsa_idx>
+	./drivers/dma/idxd/dpdk_idxd_cfg.py --reset <dsa_idx>
 	You can check it by 'ls /dev/dsa'
-	dsa_idx: Index of DSA devices, where 0<=dsa_idx<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0
+	dsa_idx: Index of DSA devices, where 0<=dsa_idx<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0
 	wq_num: Number of workqueues per DSA endpoint, where 1<=wq_num<=8
 
 	For example, bind 2 DSA devices to idxd driver and configure WQ:
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
-	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
-	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
+	./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 0000:6f:01.0
+	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
+	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
 	Check the WQs with 'ls /dev/dsa'; you should find "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3"
 
 Test Case 1: PVP split ring multi-queues with 4K-pages and dsa dpdk driver
@@ -118,21 +114,20 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 1. Bind 2 DSA devices and 1 NIC port to vfio-pci like common steps 1-2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:01.0 0000:6f:01.0
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:6a:00.0 -a 0000:e7:01.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
-	--lcore-dma=[lcore11@0000:e7:01.0-q0,lcore11@0000:e7:01.0-q1,lcore12@0000:e7:01.0-q2,lcore12@0000:e7:01.0-q3,lcore13@0000:e7:01.0-q4,lcore13@0000:e7:01.0-q5,lcore14@0000:e7:01.0-q6,lcore14@0000:e7:01.0-q7]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:29:00.0 -a 0000:6a:01.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q0;txq2@0000:6a:01.0-q0;txq3@0000:6a:01.0-q0;txq4@0000:6a:01.0-q1;txq5@0000:6a:01.0-q1;rxq2@0000:6a:01.0-q2;rxq3@0000:6a:01.0-q2;rxq4@0000:6a:01.0-q3;rxq5@0000:6a:01.0-q3;rxq6@0000:6a:01.0-q3;rxq7@0000:6a:01.0-q3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
 	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
@@ -153,10 +148,9 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 7. Quit and relaunch vhost with 1G hugepage::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:e7:01.0,max_queues=4 -a 0000:ec:01.0,max_queues=4 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@0000:e7:01.0-q0,lcore11@0000:e7:01.0-q1,lcore12@0000:e7:01.0-q2,lcore12@0000:e7:01.0-q3,lcore13@0000:ec:01.0-q0,lcore13@0000:ec:01.0-q1,lcore14@0000:ec:01.0-q2,lcore14@0000:ec:01.0-q3]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:29:00.0 -a 0000:6a:01.0,max_queues=4 -a 0000:6f:01.0,max_queues=4 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q0;txq2@0000:6a:01.0-q1;txq3@0000:6a:01.0-q1;txq4@0000:6a:01.0-q2;txq5@0000:6a:01.0-q2;txq6@0000:6a:01.0-q3;txq7@0000:6a:01.0-q3;rxq0@0000:6f:01.0-q0;rxq1@0000:6f:01.0-q0;rxq2@0000:6f:01.0-q1;rxq3@0000:6f:01.0-q1;rxq4@0000:6f:01.0-q2;rxq5@0000:6f:01.0-q2;rxq6@0000:6f:01.0-q3;rxq7@0000:6f:01.0-q3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
 	testpmd>set fwd mac
 	testpmd>start
 
@@ -164,7 +158,7 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 9. Quit and relaunch virtio-user with mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8,server=1 \
 	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
@@ -178,21 +172,20 @@ This case tests packed ring with multi-queues can work normally in 4k-pages envi
 
 1. Bind 2 DSA devices and 1 NIC port to vfio-pci like common steps 1-2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:01.0 0000:6f:01.0
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:6a:00.0 -a 0000:f1:01.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
-	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore11@0000:f1:01.0-q1,lcore12@0000:f1:01.0-q2,lcore12@0000:f1:01.0-q3,lcore13@0000:f1:01.0-q4,lcore13@0000:f1:01.0-q5,lcore14@0000:f1:01.0-q6,lcore14@0000:f1:01.0-q7]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:29:00.0 -a 0000:6a:01.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q0;txq2@0000:6a:01.0-q0;txq3@0000:6a:01.0-q0;txq4@0000:6a:01.0-q1;txq5@0000:6a:01.0-q1;rxq2@0000:6a:01.0-q2;rxq3@0000:6a:01.0-q2;rxq4@0000:6a:01.0-q3;rxq5@0000:6a:01.0-q3;rxq6@0000:6a:01.0-q3;rxq7@0000:6a:01.0-q3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
 	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd mac
@@ -213,10 +206,9 @@ This case tests packed ring with multi-queues can work normally in 4k-pages envi
 
 7. Quit and relaunch vhost with 1G hugepage::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore11@0000:f1:01.0-q1,lcore12@0000:f1:01.0-q2,lcore12@0000:f1:01.0-q3,lcore13@0000:f6:01.0-q0,lcore13@0000:f6:01.0-q1,lcore14@0000:f6:01.0-q2,lcore14@0000:f6:01.0-q3]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:29:00.0 -a 0000:6a:01.0,max_queues=4 -a 0000:6f:01.0,max_queues=4 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q0;txq2@0000:6a:01.0-q1;txq3@0000:6a:01.0-q1;txq4@0000:6a:01.0-q2;txq5@0000:6a:01.0-q2;txq6@0000:6a:01.0-q3;txq7@0000:6a:01.0-q3;rxq0@0000:6f:01.0-q0;rxq1@0000:6f:01.0-q0;rxq2@0000:6f:01.0-q1;rxq3@0000:6f:01.0-q1;rxq4@0000:6f:01.0-q2;rxq5@0000:6f:01.0-q2;rxq6@0000:6f:01.0-q3;rxq7@0000:6f:01.0-q3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
 	testpmd>set fwd mac
 	testpmd>start
 
@@ -224,7 +216,7 @@ This case tests packed ring with multi-queues can work normally in 4k-pages envi
 
 9. Quit and relaunch virtio-user with mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8,server=1 \
 	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd mac
@@ -238,14 +230,14 @@ This case test the function of Vhost tx offload in the topology of vhost-user/vi
 
 1. Bind 1 DSA device to vfio-pci like common step 2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:01.0
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:f1:01.0,max_queues=2 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 --lcore-dma=[lcore3@0000:f1:01.0-q0,lcore4@0000:f1:01.0-q1]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:6a:01.0,max_queues=2 \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q0]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:6a:01.0-q1;rxq0@0000:6a:01.0-q1]' \
+	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0
 	testpmd>start
 
 3. Launch VM1 and VM2::
@@ -274,18 +266,18 @@ This case test the function of Vhost tx offload in the topology of vhost-user/vi
 
 3. On VM1, set the virtio device IP and add a static ARP entry::
 
-	<VM1># ifconfig ens5 1.1.1.2
-	<VM1># arp -s 1.1.1.8 52:54:00:00:00:02
+	ifconfig ens5 1.1.1.2
+	arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set the virtio device IP and add a static ARP entry::
 
-	<VM2># ifconfig ens5 1.1.1.8
-	<VM2># arp -s 1.1.1.2 52:54:00:00:00:01
+	ifconfig ens5 1.1.1.8
+	arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Check the iperf performance between two VMs by below commands::
 
-	<VM1># iperf -s -i 1
-	<VM2># iperf -c 1.1.1.2 -i 1 -t 60
+	iperf -s -i 1
+	iperf -c 1.1.1.2 -i 1 -t 60
 
 6. Check that the two VMs can send and receive big packets to each other through the vhost log. Port 0 should have tx packets above 1519 and Port 1 should have rx packets above 1519::
 
@@ -293,19 +285,19 @@ This case test the function of Vhost tx offload in the topology of vhost-user/vi
 
 Test Case 4: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic
 -------------------------------------------------------------------------------------------------------
-This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path 
+This case tests the function of Vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path
 by verifying the TSO/cksum in the TCP/IP stack when vhost uses asynchronous operations with the dsa dpdk driver in a 4k-pages environment.
 
 1. Bind 1 DSA device to vfio-pci like common step 2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0
- 
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:01.0
+
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:f1:01.0,max_queues=2 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 --lcore-dma=[lcore3@0000:f1:01.0-q0,lcore4@0000:f1:01.0-q1]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:6a:01.0,max_queues=2 \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q1]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q1]' \
+	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0
 	testpmd>start
 
 3. Launch VM1 and VM2::
@@ -334,42 +326,41 @@ by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous o
 
 3. On VM1, set the virtio device IP and add a static ARP entry::
 
-	<VM1># ifconfig ens5 1.1.1.2
-	<VM1># arp -s 1.1.1.8 52:54:00:00:00:02
+	ifconfig ens5 1.1.1.2
+	arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set the virtio device IP and add a static ARP entry::
 
-	<VM2># ifconfig ens5 1.1.1.8
-	<VM2># arp -s 1.1.1.2 52:54:00:00:00:01
+	ifconfig ens5 1.1.1.8
+	arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Check the iperf performance between two VMs by below commands::
 
-	<VM1># iperf -s -i 1
-	<VM2># iperf -c 1.1.1.2 -i 1 -t 60
+	iperf -s -i 1
+	iperf -c 1.1.1.2 -i 1 -t 60
 
 6. Check that the two VMs can send and receive big packets to each other through the vhost log::
 
 	testpmd>show port xstats all
-        Port 0 should have tx packets above 1518
-        Port 1 should have rx packets above 1518
+	Port 0 should have tx packets above 1518
+	Port 1 should have rx packets above 1518
 
 Test Case 5: VM2VM vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa dpdk driver
 -------------------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in 
+This case uses iperf and scp to verify that the payload of large packets (larger than 1MB) is valid after packet forwarding in
 vm2vm vhost-user/virtio-net multi-queues mergeable path when vhost uses the asynchronous operations with dsa dpdk driver.
 One virtio-net is split ring, the other is packed ring. The vhost runs in 1G hugepages and the virtio-user runs in a 4k-pages environment.
 
 1. Bind 2 DSA channels to vfio-pci like common step 2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:01.0 0000:6f:01.0
 
 2. Launch vhost::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@0000:f1:01.0-q0,lcore2@0000:f1:01.0-q1,lcore3@0000:f1:01.0-q2,lcore3@0000:f1:01.0-q3,lcore4@0000:f6:01.0-q0,lcore4@0000:f6:01.0-q1,lcore5@0000:f6:01.0-q2,lcore3@0000:f6:01.0-q3]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:6a:01.0,max_queues=4 -a 0000:6f:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q0;txq2@0000:6a:01.0-q0;txq3@0000:6a:01.0-q0;txq4@0000:6a:01.0-q1;txq5@0000:6a:01.0-q1;rxq2@0000:6a:01.0-q2;rxq3@0000:6a:01.0-q2;rxq4@0000:6a:01.0-q3;rxq5@0000:6a:01.0-q3;rxq6@0000:6a:01.0-q3;rxq7@0000:6a:01.0-q3]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:6f:01.0-q0;txq1@0000:6f:01.0-q0;txq2@0000:6f:01.0-q0;txq3@0000:6f:01.0-q0;txq4@0000:6f:01.0-q1;txq5@0000:6f:01.0-q1;rxq2@0000:6f:01.0-q2;rxq3@0000:6f:01.0-q2;rxq4@0000:6f:01.0-q3;rxq5@0000:6f:01.0-q3;rxq6@0000:6f:01.0-q3;rxq7@0000:6f:01.0-q3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
 	testpmd>start
 
 3. Launch VM qemu::
@@ -398,24 +389,24 @@ And one virtio-net is split ring, the other is packed ring. The vhost run in 1G
 
 4. On VM1, set the virtio device IP and add a static ARP entry::
 
-	<VM1># ethtool -L ens5 combined 8
-	<VM1># ifconfig ens5 1.1.1.2
-	<VM1># arp -s 1.1.1.8 52:54:00:00:00:02
+	ethtool -L ens5 combined 8
+	ifconfig ens5 1.1.1.2
+	arp -s 1.1.1.8 52:54:00:00:00:02
 
 5. On VM2, set the virtio device IP and add a static ARP entry::
 
-	<VM2># ethtool -L ens5 combined 8
-	<VM2># ifconfig ens5 1.1.1.8
-	<VM2># arp -s 1.1.1.2 52:54:00:00:00:01
+	ethtool -L ens5 combined 8
+	ifconfig ens5 1.1.1.8
+	arp -s 1.1.1.2 52:54:00:00:00:01
 
 6. Scp a 1MB file from VM1 to VM2::
 
-	<VM1># scp <file> root@1.1.1.8:/
+	scp <file> root@1.1.1.8:/
 
 7. Check the iperf performance between two VMs by below commands::
 
-	<VM1># iperf -s -i 1
-	<VM2># iperf -c 1.1.1.2 -i 1 -t 60
+	iperf -s -i 1
+	iperf -c 1.1.1.2 -i 1 -t 60
 
 8. Relaunch VM1 and rerun steps 4-7.
 
@@ -428,14 +419,13 @@ dsa dpdk driver. The vhost run in 1G hugepages and the virtio-user run in 4k-pag
 1. Bind 2 DSA channels to vfio-pci, launch vhost::
 
 	ls /dev/dsa # check WQ configuration, reset if any exist
-	./usertools/dpdk-devbind.py -u f1:01.0 f1:01.0
-	./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f1:01.0
-
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@0000:f1:01.0-q0,lcore2@0000:f1:01.0-q1,lcore3@0000:f1:01.0-q2,lcore3@0000:f1:01.0-q3,lcore4@0000:f6:01.0-q0,lcore4@0000:f6:01.0-q1,lcore5@0000:f6:01.0-q2,lcore5@0000:f6:01.0-q3]
+	./usertools/dpdk-devbind.py -u 0000:6a:01.0 0000:6f:01.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:01.0 0000:6f:01.0
+
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:6a:01.0,max_queues=4 -a 0000:6f:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q0;txq2@0000:6a:01.0-q0;txq3@0000:6a:01.0-q0;txq4@0000:6a:01.0-q1;txq5@0000:6a:01.0-q1;rxq2@0000:6a:01.0-q2;rxq3@0000:6a:01.0-q2;rxq4@0000:6a:01.0-q3;rxq5@0000:6a:01.0-q3;rxq6@0000:6a:01.0-q3;rxq7@0000:6a:01.0-q3]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0@0000:6f:01.0-q0;txq1@0000:6f:01.0-q0;txq2@0000:6f:01.0-q0;txq3@0000:6f:01.0-q0;txq4@0000:6f:01.0-q1;txq5@0000:6f:01.0-q1;rxq2@0000:6f:01.0-q2;rxq3@0000:6f:01.0-q2;rxq4@0000:6f:01.0-q3;rxq5@0000:6f:01.0-q3;rxq6@0000:6f:01.0-q3;rxq7@0000:6f:01.0-q3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
 	testpmd>start
 
 2. Prepare tmpfs with 4K-pages::
@@ -492,20 +482,19 @@ dsa dpdk driver. The vhost run in 1G hugepages and the virtio-user run in 4k-pag
 
 8. Quit and relaunch vhost with different DSA channels::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=2 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 \
-	--lcore-dma=[lcore2@0000:f1:01.0-q0,lcore3@0000:f1:01.0-q1,lcore4@0000:f6:01.0-q0,lcore5@0000:f6:01.0-q1]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:6a:01.0,max_queues=2 -a 0000:6f:01.0,max_queues=2 \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q1;txq2@0000:6f:01.0-q0;txq3@0000:6f:01.0-q1;rxq0@0000:6a:01.0-q0;rxq1@0000:6a:01.0-q1;rxq2@0000:6f:01.0-q0;rxq3@0000:6f:01.0-q1]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q1;txq2@0000:6f:01.0-q0;txq3@0000:6f:01.0-q1;rxq0@0000:6a:01.0-q0;rxq1@0000:6a:01.0-q1;rxq2@0000:6f:01.0-q0;rxq3@0000:6f:01.0-q1]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
 	testpmd>start
 
 9. On VM1, set virtio device::
 
-	<VM1># ethtool -L ens5 combined 4
+	ethtool -L ens5 combined 4
 
 10. On VM2, set virtio device::
 
-	<VM2># ethtool -L ens5 combined 4
+	ethtool -L ens5 combined 4
 
 11. Rerun steps 6-7.
 
@@ -515,27 +504,25 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 1. Bind 1 NIC port to vfio-pci and 2 DSA devices to idxd like common steps 1 and 3::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
-
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0
 	ls /dev/dsa # check WQ configuration, reset if any exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
-	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
-	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
+	./usertools/dpdk-devbind.py -u 0000:6a:01.0 0000:6f:01.0
+	./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 0000:6f:01.0
+	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
+	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
 	ls /dev/dsa # check that the WQ configuration succeeded
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:6a:00.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:29:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3]
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
 	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd mac
@@ -556,10 +543,9 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 7. Quit and relaunch vhost with different DSA virtual channels and 1G-pages::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq0.1,lcore13@wq1.0,lcore14@wq1.1,lcore14@wq1.2]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:29:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.1;txq3@wq0.1;txq4@wq0.2;txq5@wq0.2;txq6@wq0.3;txq7@wq0.3;rxq0@wq0.0;rxq1@wq0.0;rxq2@wq0.1;rxq3@wq0.1;rxq4@wq0.2;rxq5@wq0.2;rxq6@wq0.3;rxq7@wq0.3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
 	testpmd>set fwd mac
 	testpmd>start
 
@@ -571,27 +557,25 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 1. Bind 1 NIC port to vfio-pci and 2 DSA devices to idxd like common steps 1 and 3::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
-
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0
 	ls /dev/dsa # check WQ configuration, reset if any exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -u 0000:6a:01.0 0000:6f:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 0000:6f:01.0
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
 	ls /dev/dsa # check that the WQ configuration succeeded
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:6a:00.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:29:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
 	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd mac
@@ -612,10 +596,9 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 7. Quit and relaunch vhost::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18  -a 0000:6a:00.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore12@wq1.0,lcore2@wq1.1]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18  -a 0000:29:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;txq6@wq0.1;txq7@wq0.1;rxq0@wq0.0;rxq1@wq0.0;rxq2@wq0.0;rxq3@wq0.0;rxq4@wq0.1;rxq5@wq0.1;rxq6@wq0.1;rxq7@wq0.1]' \
+	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=8 --rxq=8
 	testpmd>set fwd mac
 	testpmd>start
 
@@ -630,17 +613,17 @@ in 4k-pages environment.
 1. Bind 1 DSA device to idxd like common step 3::
 
 	ls /dev/dsa # check WQ configuration, reset if any exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -u 0000:6a:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
 	ls /dev/dsa # check that the WQ configuration succeeded
 
 2. Launch the Vhost sample by below commands::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0;rxq0]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --rxq=1 --txq=1 --no-numa --socket-num=0 --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore3@wq0.2,lcore3@wq0.3]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.1]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@wq0.2;rxq0@wq0.3]' \
+	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --rxq=1 --txq=1 --no-numa --socket-num=0
 	testpmd>start
 
 3. Launch VM1 and VM2 on socket 1::
@@ -669,24 +652,24 @@ in 4k-pages environment.
 
 4. On VM1, set the virtio device IP and add a static ARP entry::
 
-	<VM1># ifconfig ens5 1.1.1.2
-	<VM1># arp -s 1.1.1.8 52:54:00:00:00:02
+	ifconfig ens5 1.1.1.2
+	arp -s 1.1.1.8 52:54:00:00:00:02
 
 5. On VM2, set the virtio device IP and add a static ARP entry::
 
-	<VM2># ifconfig ens5 1.1.1.8
-	<VM2># arp -s 1.1.1.2 52:54:00:00:00:01
+	ifconfig ens5 1.1.1.8
+	arp -s 1.1.1.2 52:54:00:00:00:01
 
 6. Check the iperf performance between two VMs by below commands::
 
-	<VM1># iperf -s -i 1
-	<VM2># iperf -c 1.1.1.2 -i 1 -t 60
+	iperf -s -i 1
+	iperf -c 1.1.1.2 -i 1 -t 60
 
 7. Check that the two VMs can send and receive big packets to each other through the vhost log::
 
 	testpmd>show port xstats all
-        Port 0 should have tx packets above 1518
-        Port 1 should have rx packets above 151518
+	Port 0 should have tx packets above 1518
+	Port 1 should have rx packets above 1518
 
 Test Case 10: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp traffic
 ----------------------------------------------------------------------------------------------------------
@@ -697,8 +680,8 @@ in 4k-pages environment.
 1. Bind 2 DSA devices to idxd like common step 3::
 
 	ls /dev/dsa # check WQ configuration, reset if any exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -u 0000:6a:01.0 0000:6f:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 0000:6f:01.0
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
 	ls /dev/dsa # check that the WQ configuration succeeded
@@ -706,9 +689,9 @@ in 4k-pages environment.
 2. Launch the Vhost sample by below commands::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0;rxq0]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 --lcore-dma=[lcore3@wq0.0,lcore4@wq1.0]
+	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.0]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@wq0.1;rxq0@wq0.1]' \
+	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0
 	testpmd>start
 
 3. Launch VM1 and VM2 with qemu::
@@ -737,24 +720,24 @@ in 4k-pages environment.
 
 4. On VM1, set the virtio device IP and add a static ARP entry::
 
-	<VM1># ifconfig ens5 1.1.1.2
-	<VM1># arp -s 1.1.1.8 52:54:00:00:00:02
+	ifconfig ens5 1.1.1.2
+	arp -s 1.1.1.8 52:54:00:00:00:02
 
 5. On VM2, set the virtio device IP and add a static ARP entry::
 
-	<VM2># ifconfig ens5 1.1.1.8
-	<VM2># arp -s 1.1.1.2 52:54:00:00:00:01
+	ifconfig ens5 1.1.1.8
+	arp -s 1.1.1.2 52:54:00:00:00:01
 
 6. Check the iperf performance between two VMs by below commands::
 
-	<VM1># iperf -s -i 1
-	<VM2># iperf -c 1.1.1.2 -i 1 -t 60
+	iperf -s -i 1
+	iperf -c 1.1.1.2 -i 1 -t 60
 
 7. Check that the two VMs can send and receive big packets to each other through the vhost log::
 
 	testpmd>show port xstats all
-        Port 0 should have tx packets above 1518
-        Port 1 should have rx packets above 1518
+	Port 0 should have tx packets above 1518
+	Port 1 should have rx packets above 1518
 
 Test Case 11: VM2VM vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa kernel driver
 ----------------------------------------------------------------------------------------------------------
@@ -765,8 +748,8 @@ dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-p
 1. Bind 8 DSA devices to idxd like common step 3::
 
 	ls /dev/dsa # check WQ configuration, reset if any exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -u 0000:6a:01.0 0000:6f:01.0 0000:74:01.0 0000:79:01.0 0000:e7:01.0 0000:ec:01.0 0000:f1:01.0 0000:f6:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 0000:6f:01.0 0000:74:01.0 0000:79:01.0 0000:e7:01.0 0000:ec:01.0 0000:f1:01.0 0000:f6:01.0
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
@@ -780,10 +763,9 @@ dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-p
 2. Launch vhost::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@wq0.0,lcore2@wq1.1,lcore2@wq2.2,lcore2@wq3.3,lcore3@wq0.0,lcore3@wq2.2,lcore3@wq4.4,lcore3@wq5.5,lcore3@wq6.6,lcore3@wq7.7,lcore4@wq1.1,lcore4@wq3.3,lcore4@wq0.1,lcore4@wq1.2,lcore4@wq2.3,lcore4@wq3.4,lcore4@wq4.5,lcore4@wq5.6,lcore4@wq6.7,lcore5@wq7.0]
+	--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@wq0.2;txq1@wq0.2;txq2@wq0.2;txq3@wq0.2;txq4@wq0.3;txq5@wq0.3;rxq2@wq1.2;rxq3@wq1.2;rxq4@wq1.3;rxq5@wq1.3;rxq6@wq1.3;rxq7@wq1.3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
 	testpmd>start
 
 3. Launch VM qemu::
@@ -812,24 +794,24 @@ dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-p
 
 4. On VM1, set the virtio device IP and add a static ARP entry::
 
-	<VM1># ethtool -L ens5 combined 8
-	<VM1># ifconfig ens5 1.1.1.2
-	<VM1># arp -s 1.1.1.8 52:54:00:00:00:02
+	ethtool -L ens5 combined 8
+	ifconfig ens5 1.1.1.2
+	arp -s 1.1.1.8 52:54:00:00:00:02
 
 5. On VM2, set the virtio device IP and add a static ARP entry::
 
-	<VM2># ethtool -L ens5 combined 8
-	<VM2># ifconfig ens5 1.1.1.8
-	<VM2># arp -s 1.1.1.2 52:54:00:00:00:01
+	ethtool -L ens5 combined 8
+	ifconfig ens5 1.1.1.8
+	arp -s 1.1.1.2 52:54:00:00:00:01
 
 6. Scp a 1MB file from VM1 to VM2::
 
-	<VM1># scp <file> root@1.1.1.8:/
+	scp <file> root@1.1.1.8:/
 
 7. Check the iperf performance between two VMs by below commands::
 
-	<VM1># iperf -s -i 1
-	<VM2># iperf -c 1.1.1.2 -i 1 -t 60
+	iperf -s -i 1
+	iperf -c 1.1.1.2 -i 1 -t 60
 
 8. Relaunch VM1 and rerun steps 4-7.
 
@@ -842,18 +824,17 @@ dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-p
 1. Bind 2 DSA channels to idxd, launch vhost::
 
 	ls /dev/dsa # check WQ configuration, reset if any exist
-	./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
-	./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+	./usertools/dpdk-devbind.py -u 0000:6a:01.0 0000:6f:01.0
+	./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 0000:6f:01.0
 	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
 	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
 
 2. Launch vhost::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:6a:01.0,max_queues=4 -a 0000:6f:01.0,max_queues=4 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore3@wq0.2,lcore3@wq0.3,lcore4@wq1.0,lcore4@wq1.1,lcore5@wq1.2,lcore5@wq1.3]
+	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
 	testpmd>start
 
 3. Prepare tmpfs with 4K-pages::
@@ -911,18 +892,17 @@ dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-p
 9. Quit and relaunch vhost with different DSA channels::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 \
-	--lcore-dma=[lcore2@wq0.0,lcore3@wq0.1,lcore4@wq1.0,lcore5@wq1.1]
+	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;rxq0@wq0.0;rxq1@wq0.1;rxq2@wq0.2;rxq3@wq0.3]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;rxq0@wq0.0;rxq1@wq0.1;rxq2@wq0.2;rxq3@wq0.3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
 	testpmd>start
 
 10. On VM1, set virtio device::
 
-	<VM1># ethtool -L ens5 combined 4
+	ethtool -L ens5 combined 4
 
 11. On VM2, set virtio device::
 
-	<VM2># ethtool -L ens5 combined 4
+	ethtool -L ens5 combined 4
 
-12. Rerun step 6-7.
+12. Rerun steps 6-7.
\ No newline at end of file
-- 
2.25.1



* [dts][PATCH V3 2/2] tests/basic_4k_pages_dsa: fix start QEMU with wrong command issue
  2023-03-27  5:56 [dts][PATCH V3 0/2] fix basic_4k_pages_dsa testplan and Wei Ling
  2023-03-27  5:56 ` [dts][PATCH V3 1/2] test_plans/basic_4k_pages_dsa: modify the dmas parameter Wei Ling
@ 2023-03-27  5:56 ` Wei Ling
  2023-03-28  1:07   ` lijuan.tu
  1 sibling, 1 reply; 4+ messages in thread
From: Wei Ling @ 2023-03-27  5:56 UTC (permalink / raw)
  To: dts; +Cc: Wei Ling


1. When starting QEMU with multi-queues, the command must include the
`mq=on,vectors=x` parameter.
2. When a hostname is configured in conf/crbs.cfg, QEMU fails to start,
eg: `hostfwd=tcp:hostname:6000-:22`, so use get_host_ip() to get the
actual IP address, eg: `hostfwd=tcp:192.168.1.1:6000-:22`.
3. Switch to the new API functions for creating WQs and binding DSA
devices to the DPDK driver in the test suite.
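
For reference, a minimal standalone sketch of the vectors calculation
(the helper name here is illustrative; the suite builds the same string
inline in start_vm0()/start_vm1()):

    def mq_param(queues: int) -> str:
        # QEMU's virtio-net-pci needs extra MSI-X vectors for multi-queue;
        # the suite uses vectors = 2 + 2 * queues.
        return ",mq=on,vectors=%s" % (2 + 2 * queues) if queues > 1 else ""

    assert mq_param(1) == ""  # single queue: no extra device options
    assert mq_param(8) == ",mq=on,vectors=18"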

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 tests/TestSuite_basic_4k_pages_dsa.py | 365 ++++++++++++--------------
 1 file changed, 174 insertions(+), 191 deletions(-)

diff --git a/tests/TestSuite_basic_4k_pages_dsa.py b/tests/TestSuite_basic_4k_pages_dsa.py
index bc5840d1..cf0e4c44 100644
--- a/tests/TestSuite_basic_4k_pages_dsa.py
+++ b/tests/TestSuite_basic_4k_pages_dsa.py
@@ -13,7 +13,7 @@ from framework.packet import Packet
 from framework.pktgen import PacketGeneratorHelper
 from framework.pmd_output import PmdOutput
 from framework.qemu_kvm import QEMUKvm
-from framework.settings import CONFIG_ROOT_PATH, HEADER_SIZE
+from framework.settings import CONFIG_ROOT_PATH, HEADER_SIZE, get_host_ip
 from framework.test_case import TestCase
 
 from .virtio_common import dsa_common as DC
@@ -63,6 +63,8 @@ class TestBasic4kPagesDsa(TestCase):
         self.virtio_mac2 = "52:54:00:00:00:02"
         self.base_dir = self.dut.base_dir.replace("~", "/root")
         self.random_string = string.ascii_letters + string.digits
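+        # conf/crbs.cfg may store the DUT address as a hostname; resolve it
+        # to a numeric IP so QEMU's hostfwd=tcp:<ip>:<port>-:22 accepts it.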
+        self.addr = str(self.dut.get_ip_address())
+        self.host_addr = get_host_ip(self.addr).split(":")[0]
         self.headers_size = HEADER_SIZE["eth"] + HEADER_SIZE["ip"] + HEADER_SIZE["tcp"]
         self.DC = DC(self)
 
@@ -118,13 +120,11 @@ class TestBasic4kPagesDsa(TestCase):
         self.result_table_create(self.table_header)
         self.vm_dut = []
         self.vm = []
-        self.use_dsa_list = []
-        self.DC.reset_all_work_queue()
-        self.DC.bind_all_dsa_to_kernel()
         self.packed = False
 
     def start_vm0(self, packed=False, queues=1, server=False):
         packed_param = ",packed=on" if packed else ""
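+        # Multi-queue needs ",mq=on,vectors=N" with N = 2 + 2 * queues;
+        # a single queue keeps QEMU's default device options.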
+        mq_param = ",mq=on,vectors=%s" % (2 + 2 * queues) if queues > 1 else ""
         server = ",server" if server else ""
         self.qemu_cmd0 = (
             f"taskset -c {self.vm0_lcore} {self.vm0_qemu_path} -name vm0 -enable-kvm "
@@ -133,7 +133,7 @@ class TestBasic4kPagesDsa(TestCase):
             f"-chardev socket,id=char0,path=/root/dpdk/vhost-net0{server} "
             f"-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues={queues} "
             f"-device virtio-net-pci,netdev=netdev0,mac=%s,"
-            f"disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on{packed_param} "
+            f"disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on{packed_param}{mq_param} "
             f"-cpu host -smp {self.vm0_lcore_smp} -m {self.vm0_mem_size} -object memory-backend-file,id=mem,size={self.vm0_mem_size}M,mem-path=/mnt/tmpfs_nohuge0,share=on "
             f"-numa node,memdev=mem -mem-prealloc -drive file={self.vm0_image_path} "
             f"-chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial "
@@ -142,7 +142,7 @@ class TestBasic4kPagesDsa(TestCase):
 
         self.vm0_session = self.dut.new_session(suite="vm0_session")
         cmd0 = self.qemu_cmd0 % (
-            self.dut.get_ip_address(),
+            self.host_addr,
             self.virtio_mac1,
         )
         self.vm0_session.send_expect(cmd0, "# ")
@@ -153,6 +153,7 @@ class TestBasic4kPagesDsa(TestCase):
 
     def start_vm1(self, packed=False, queues=1, server=False):
         packed_param = ",packed=on" if packed else ""
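+        # Same multi-queue vectors rule as in start_vm0() above.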
+        mq_param = ",mq=on,vectors=%s" % (2 + 2 * queues) if queues > 1 else ""
         server = ",server" if server else ""
         self.qemu_cmd1 = (
             f"taskset -c {self.vm1_lcore} {self.vm1_qemu_path} -name vm1 -enable-kvm "
@@ -161,7 +162,7 @@ class TestBasic4kPagesDsa(TestCase):
             f"-chardev socket,id=char0,path=/root/dpdk/vhost-net1{server} "
             f"-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues={queues} "
             f"-device virtio-net-pci,netdev=netdev0,mac=%s,"
-            f"disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on{packed_param} "
+            f"disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on{packed_param}{mq_param} "
             f"-cpu host -smp {self.vm1_lcore_smp} -m {self.vm1_mem_size} -object memory-backend-file,id=mem,size={self.vm1_mem_size}M,mem-path=/mnt/tmpfs_nohuge1,share=on "
             f"-numa node,memdev=mem -mem-prealloc -drive file={self.vm1_image_path} "
             f"-chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial "
@@ -169,7 +170,7 @@ class TestBasic4kPagesDsa(TestCase):
         )
         self.vm1_session = self.dut.new_session(suite="vm1_session")
         cmd1 = self.qemu_cmd1 % (
-            self.dut.get_ip_address(),
+            self.host_addr,
             self.virtio_mac2,
         )
         self.vm1_session.send_expect(cmd1, "# ")
@@ -181,7 +182,7 @@ class TestBasic4kPagesDsa(TestCase):
     def connect_vm0(self):
         self.vm0 = QEMUKvm(self.dut, "vm0", self.suite_name)
         self.vm0.net_type = "hostfwd"
-        self.vm0.hostfwd_addr = "%s:6000" % self.dut.get_ip_address()
+        self.vm0.hostfwd_addr = "%s:6000" % self.host_addr
         self.vm0.def_driver = "vfio-pci"
         self.vm0.driver_mode = "noiommu"
         self.wait_vm_net_ready(vm_index=0)
@@ -194,7 +195,7 @@ class TestBasic4kPagesDsa(TestCase):
     def connect_vm1(self):
         self.vm1 = QEMUKvm(self.dut, "vm1", "vm_hotplug")
         self.vm1.net_type = "hostfwd"
-        self.vm1.hostfwd_addr = "%s:6001" % self.dut.get_ip_address()
+        self.vm1.hostfwd_addr = "%s:6001" % self.host_addr
         self.vm1.def_driver = "vfio-pci"
         self.vm1.driver_mode = "noiommu"
         self.wait_vm_net_ready(vm_index=1)
@@ -509,8 +510,8 @@ class TestBasic4kPagesDsa(TestCase):
         """
         Test Case 1: PVP split ring multi-queues with 4K-pages and dsa dpdk driver
         """
-        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
-            dsa_number=2, driver_name="vfio-pci", socket=self.ports_socket
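+        # Bind 2 DSA devices on the local socket to vfio-pci; the helper
+        # returns their PCI addresses for the dmas list and EAL ports below.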
+        dsas = self.DC.bind_dsa_to_dpdk_driver(
+            dsa_num=2, driver_name="vfio-pci", socket=self.ports_socket
         )
         dmas = (
             "txq0@%s-q0;"
@@ -526,18 +527,18 @@ class TestBasic4kPagesDsa(TestCase):
             "rxq6@%s-q3;"
             "rxq7@%s-q3"
             % (
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
             )
         )
         vhost_eal_param = (
@@ -549,7 +550,7 @@ class TestBasic4kPagesDsa(TestCase):
             % self.ports_socket
         )
         ports = [self.dut.ports_info[0]["pci"]]
-        for i in self.use_dsa_list:
+        for i in dsas:
             ports.append(i)
         self.start_vhost_user_testpmd(
             cores=self.vhost_core_list,
@@ -596,22 +597,22 @@ class TestBasic4kPagesDsa(TestCase):
             "rxq6@%s-q3;"
             "rxq7@%s-q3"
             % (
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
             )
         )
 
@@ -620,11 +621,11 @@ class TestBasic4kPagesDsa(TestCase):
         )
         vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8"
         ports = [self.dut.ports_info[0]["pci"]]
-        for i in self.use_dsa_list:
+        for i in dsas:
             ports.append(i)
         port_options = {
-            self.use_dsa_list[0]: "max_queues=4",
-            self.use_dsa_list[1]: "max_queues=4",
+            dsas[0]: "max_queues=4",
+            dsas[1]: "max_queues=4",
         }
         self.start_vhost_user_testpmd(
             cores=self.vhost_core_list,
@@ -668,8 +669,8 @@ class TestBasic4kPagesDsa(TestCase):
         """
         Test Case 2: PVP packed ring multi-queues with 4K-pages and dsa dpdk driver
         """
-        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
-            dsa_number=2, driver_name="vfio-pci", socket=self.ports_socket
+        dsas = self.DC.bind_dsa_to_dpdk_driver(
+            dsa_num=2, driver_name="vfio-pci", socket=self.ports_socket
         )
         dmas = (
             "txq0@%s-q0;"
@@ -685,18 +686,18 @@ class TestBasic4kPagesDsa(TestCase):
             "rxq6@%s-q3;"
             "rxq7@%s-q3"
             % (
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
             )
         )
         vhost_eal_param = (
@@ -708,7 +709,7 @@ class TestBasic4kPagesDsa(TestCase):
             % self.ports_socket
         )
         ports = [self.dut.ports_info[0]["pci"]]
-        for i in self.use_dsa_list:
+        for i in dsas:
             ports.append(i)
         self.start_vhost_user_testpmd(
             cores=self.vhost_core_list,
@@ -755,22 +756,22 @@ class TestBasic4kPagesDsa(TestCase):
             "rxq6@%s-q3;"
             "rxq7@%s-q3"
             % (
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
             )
         )
         vhost_eal_param = (
@@ -778,11 +779,11 @@ class TestBasic4kPagesDsa(TestCase):
         )
         vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8"
         ports = [self.dut.ports_info[0]["pci"]]
-        for i in self.use_dsa_list:
+        for i in dsas:
             ports.append(i)
         port_options = {
-            self.use_dsa_list[0]: "max_queues=4",
-            self.use_dsa_list[1]: "max_queues=4",
+            dsas[0]: "max_queues=4",
+            dsas[1]: "max_queues=4",
         }
         self.start_vhost_user_testpmd(
             cores=self.vhost_core_list,
@@ -826,17 +827,11 @@ class TestBasic4kPagesDsa(TestCase):
         """
         Test Case 3: VM2VM split ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic
         """
-        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
-            dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket
-        )
-        dmas1 = "txq0@%s-q0;" "rxq0@%s-q0" % (
-            self.use_dsa_list[0],
-            self.use_dsa_list[0],
-        )
-        dmas2 = "txq0@%s-q1;" "rxq0@%s-q1" % (
-            self.use_dsa_list[0],
-            self.use_dsa_list[0],
+        dsas = self.DC.bind_dsa_to_dpdk_driver(
+            dsa_num=1, driver_name="vfio-pci", socket=self.ports_socket
         )
+        dmas1 = "txq0@%s-q0;rxq0@%s-q0" % (dsas[0], dsas[0])
+        dmas2 = "txq0@%s-q1;rxq0@%s-q1" % (dsas[0], dsas[0])
         vhost_eal_param = (
             "--no-huge -m 1024 "
             + "--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[%s]'" % dmas1
@@ -846,12 +841,12 @@ class TestBasic4kPagesDsa(TestCase):
             " --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=%d"
             % self.ports_socket
         )
-        port_options = {self.use_dsa_list[0]: "max_queues=2"}
+        port_options = {dsas[0]: "max_queues=2"}
         self.start_vhost_user_testpmd(
             cores=self.vhost_core_list,
             eal_param=vhost_eal_param,
             param=vhost_param,
-            ports=self.use_dsa_list,
+            ports=dsas,
             port_options=port_options,
         )
         self.vhost_user_pmd.execute_cmd("start")
@@ -874,17 +869,11 @@ class TestBasic4kPagesDsa(TestCase):
         """
         Test Case 4: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic
         """
-        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
-            dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket
-        )
-        dmas1 = "txq0@%s-q0;" "rxq0@%s-q1" % (
-            self.use_dsa_list[0],
-            self.use_dsa_list[0],
-        )
-        dmas2 = "txq0@%s-q0;" "rxq0@%s-q1" % (
-            self.use_dsa_list[0],
-            self.use_dsa_list[0],
+        dsas = self.DC.bind_dsa_to_dpdk_driver(
+            dsa_num=1, driver_name="vfio-pci", socket=self.ports_socket
         )
+        dmas1 = "txq0@%s-q0;rxq0@%s-q1" % (dsas[0], dsas[0])
+        dmas2 = "txq0@%s-q0;rxq0@%s-q1" % (dsas[0], dsas[0])
         vhost_eal_param = (
             "--no-huge -m 1024 "
             + "--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[%s]'" % dmas1
@@ -894,12 +883,12 @@ class TestBasic4kPagesDsa(TestCase):
             " --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=%d"
             % self.ports_socket
         )
-        port_options = {self.use_dsa_list[0]: "max_queues=2"}
+        port_options = {dsas[0]: "max_queues=2"}
         self.start_vhost_user_testpmd(
             cores=self.vhost_core_list,
             eal_param=vhost_eal_param,
             param=vhost_param,
-            ports=self.use_dsa_list,
+            ports=dsas,
             port_options=port_options,
         )
         self.vhost_user_pmd.execute_cmd("start")
@@ -922,8 +911,8 @@ class TestBasic4kPagesDsa(TestCase):
         """
         Test Case 5: VM2VM vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa dpdk driver
         """
-        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
-            dsa_number=2, driver_name="vfio-pci", socket=self.ports_socket
+        dsas = self.DC.bind_dsa_to_dpdk_driver(
+            dsa_num=2, driver_name="vfio-pci", socket=self.ports_socket
         )
         dmas1 = (
             "txq0@%s-q0;"
@@ -939,18 +928,18 @@ class TestBasic4kPagesDsa(TestCase):
             "rxq6@%s-q3;"
             "rxq7@%s-q3"
             % (
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
             )
         )
         dmas2 = (
@@ -967,18 +956,18 @@ class TestBasic4kPagesDsa(TestCase):
             "rxq6@%s-q3;"
             "rxq7@%s-q3"
             % (
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
             )
         )
         vhost_eal_param = (
@@ -987,14 +976,14 @@ class TestBasic4kPagesDsa(TestCase):
         )
         vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8"
         port_options = {
-            self.use_dsa_list[0]: "max_queues=4",
-            self.use_dsa_list[1]: "max_queues=4",
+            dsas[0]: "max_queues=4",
+            dsas[1]: "max_queues=4",
         }
         self.start_vhost_user_testpmd(
             cores=self.vhost_core_list,
             eal_param=vhost_eal_param,
             param=vhost_param,
-            ports=self.use_dsa_list,
+            ports=dsas,
             port_options=port_options,
         )
         self.vhost_user_pmd.execute_cmd("start")
@@ -1018,8 +1007,8 @@ class TestBasic4kPagesDsa(TestCase):
         """
         Test Case 6: VM2VM vhost/virtio-net split ring multi queues with 1G/4k-pages and dsa dpdk driver
         """
-        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
-            dsa_number=2, driver_name="vfio-pci", socket=self.ports_socket
+        dsas = self.DC.bind_dsa_to_dpdk_driver(
+            dsa_num=2, driver_name="vfio-pci", socket=self.ports_socket
         )
         dmas1 = (
             "txq0@%s-q0;"
@@ -1035,18 +1024,18 @@ class TestBasic4kPagesDsa(TestCase):
             "rxq6@%s-q3;"
             "rxq7@%s-q3"
             % (
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
+                dsas[0],
             )
         )
         dmas2 = (
@@ -1063,18 +1052,18 @@ class TestBasic4kPagesDsa(TestCase):
             "rxq6@%s-q3;"
             "rxq7@%s-q3"
             % (
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
+                dsas[1],
             )
         )
         vhost_eal_param = (
@@ -1085,14 +1074,14 @@ class TestBasic4kPagesDsa(TestCase):
         )
         vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8"
         port_options = {
-            self.use_dsa_list[0]: "max_queues=4",
-            self.use_dsa_list[1]: "max_queues=4",
+            dsas[0]: "max_queues=4",
+            dsas[1]: "max_queues=4",
         }
         self.start_vhost_user_testpmd(
             cores=self.vhost_core_list,
             eal_param=vhost_eal_param,
             param=vhost_param,
-            ports=self.use_dsa_list,
+            ports=dsas,
             port_options=port_options,
         )
         self.vhost_user_pmd.execute_cmd("start")
@@ -1117,14 +1106,14 @@ class TestBasic4kPagesDsa(TestCase):
             "rxq2@%s-q0;"
             "rxq3@%s-q1"
             % (
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
+                dsas[0],
+                dsas[0],
+                dsas[1],
+                dsas[1],
+                dsas[0],
+                dsas[0],
+                dsas[1],
+                dsas[1],
             )
         )
         dmas2 = (
@@ -1137,14 +1126,14 @@ class TestBasic4kPagesDsa(TestCase):
             "rxq2@%s-q0;"
             "rxq3@%s-q1"
             % (
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
-                self.use_dsa_list[0],
-                self.use_dsa_list[0],
-                self.use_dsa_list[1],
-                self.use_dsa_list[1],
+                dsas[0],
+                dsas[0],
+                dsas[1],
+                dsas[1],
+                dsas[0],
+                dsas[0],
+                dsas[1],
+                dsas[1],
             )
         )
         vhost_eal_param = (
@@ -1155,14 +1144,14 @@ class TestBasic4kPagesDsa(TestCase):
         )
         vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4"
         port_options = {
-            self.use_dsa_list[0]: "max_queues=2",
-            self.use_dsa_list[1]: "max_queues=2",
+            dsas[0]: "max_queues=2",
+            dsas[1]: "max_queues=2",
         }
         self.start_vhost_user_testpmd(
             cores=self.vhost_core_list,
             eal_param=vhost_eal_param,
             param=vhost_param,
-            ports=self.use_dsa_list,
+            ports=dsas,
             port_options=port_options,
         )
         self.vhost_user_pmd.execute_cmd("start")
@@ -1180,8 +1169,7 @@ class TestBasic4kPagesDsa(TestCase):
         """
         Test Case 7: PVP split ring multi-queues with 4K-pages and dsa kernel driver
         """
-        self.DC.create_work_queue(work_queue_number=4, dsa_index=0)
-        self.DC.create_work_queue(work_queue_number=4, dsa_index=1)
+        self.DC.create_wq(wq_num=4, dsa_idxs=[0, 1])
         dmas = (
             "txq0@wq0.0;"
             "txq1@wq0.0;"
@@ -1297,8 +1285,7 @@ class TestBasic4kPagesDsa(TestCase):
         """
         Test Case 8: PVP packed ring multi-queues with 4K-pages and dsa kernel driver
         """
-        self.DC.create_work_queue(work_queue_number=4, dsa_index=0)
-        self.DC.create_work_queue(work_queue_number=4, dsa_index=1)
+        self.DC.create_wq(wq_num=4, dsa_idxs=[0, 1])
         dmas = (
             "txq0@wq0.0;"
             "txq1@wq0.0;"
@@ -1414,7 +1401,7 @@ class TestBasic4kPagesDsa(TestCase):
         """
         Test Case 9: VM2VM split ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp traffic
         """
-        self.DC.create_work_queue(work_queue_number=4, dsa_index=0)
+        self.DC.create_wq(wq_num=4, dsa_idxs=[0])
         dmas1 = "txq0@wq0.0;rxq0@wq0.1"
         dmas2 = "txq0@wq0.2;rxq0@wq0.3"
         vhost_eal_param = (
@@ -1452,7 +1439,7 @@ class TestBasic4kPagesDsa(TestCase):
         """
         Test Case 10: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp traffic
         """
-        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        self.DC.create_wq(wq_num=2, dsa_idxs=[0])
         dmas1 = "txq0@wq0.0;rxq0@wq0.0"
         dmas2 = "txq0@wq0.1;rxq0@wq0.1"
         vhost_eal_param = (
@@ -1490,8 +1477,7 @@ class TestBasic4kPagesDsa(TestCase):
         """
         Test Case 11: VM2VM vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa kernel driver
         """
-        self.DC.create_work_queue(work_queue_number=4, dsa_index=0)
-        self.DC.create_work_queue(work_queue_number=4, dsa_index=1)
+        self.DC.create_wq(wq_num=4, dsa_idxs=[0, 1])
         dmas1 = (
             "txq0@wq0.0;"
             "txq1@wq0.0;"
@@ -1552,8 +1538,7 @@ class TestBasic4kPagesDsa(TestCase):
         """
         Test Case 12: VM2VM vhost/virtio-net split ring multi queues with 1G/4k-pages and dsa kernel driver
         """
-        self.DC.create_work_queue(work_queue_number=4, dsa_index=0)
-        self.DC.create_work_queue(work_queue_number=4, dsa_index=1)
+        self.DC.create_wq(wq_num=4, dsa_idxs=[0])
         dmas1 = (
             "txq0@wq0.0;"
             "txq1@wq0.0;"
@@ -1660,8 +1645,6 @@ class TestBasic4kPagesDsa(TestCase):
         self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "# ")
         self.dut.send_expect("killall -s INT qemu-system-x86_64", "#")
         self.dut.send_expect("rm -rf /tmp/vhost-net*", "# ")
-        self.DC.reset_all_work_queue()
-        self.DC.bind_all_dsa_to_kernel()
 
     def tear_down_all(self):
         """
-- 
2.25.1
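
[Editorial note] A side note on the dmas strings in the hunks above: each entry
maps a virtqueue to a DSA device and DMA channel as `<txq|rxq><N>@<device>-q<M>`,
which is why the patch passes dsas[0]/dsas[1] up to sixteen times per string.
The repetition could also be generated. The following is a minimal sketch under
the layout this patch uses (two consecutive virtqueues per DMA channel, txqs on
the first device, rxqs on the second); build_dmas() is a hypothetical helper,
not an existing DTS or DPDK API:

    def build_dmas(tx_dsa, rx_dsa, queues=8, channels=4):
        # Reproduces e.g. "txq0@0000:6a:01.0-q0;...;rxq7@0000:6f:01.0-q3".
        # Queue q is served by DMA channel q * channels // queues, so with
        # queues=8 and channels=4 two consecutive queues share one channel.
        entries = ["txq%d@%s-q%d" % (q, tx_dsa, q * channels // queues)
                   for q in range(queues)]
        entries += ["rxq%d@%s-q%d" % (q, rx_dsa, q * channels // queues)
                    for q in range(queues)]
        return ";".join(entries)

    # usage sketch, matching the 8-queue test cases above:
    # dmas = build_dmas(dsas[0], dsas[1])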



* [dts][PATCH V3 2/2] tests/basic_4k_pages_dsa: fix start QEMU with wrong command issue
  2023-03-27  5:56 ` [dts][PATCH V3 2/2] tests/basic_4k_pages_dsa: fix start QEMU with wrong command issue Wei Ling
@ 2023-03-28  1:07   ` lijuan.tu
  0 siblings, 0 replies; 4+ messages in thread
From: lijuan.tu @ 2023-03-28  1:07 UTC (permalink / raw)
  To: dts, Wei Ling; +Cc: Wei Ling

On Mon, 27 Mar 2023 13:56:58 +0800, Wei Ling <weix.ling@intel.com> wrote:
> 1. When starting QEMU with multiple queues, the command must include
> the `mq=on,vectors=x` parameter.
> 2. Configuring a hostname in conf/crbs.cfg makes QEMU fail to start,
> e.g. `hostfwd=tcp:hostname:6000-:22`, so use get_host_ip() to resolve
> the actual IP address, e.g. `hostfwd=tcp:192.168.1.1:6000-:22`.
> 3. Use the new API functions to create WQs and bind DSA devices to the
> DPDK driver in the testsuite.
> 
> Signed-off-by: Wei Ling <weix.ling@intel.com>


Series applied, thanks
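
[Editorial note] For reference, the shape of the two QEMU fixes described in the
changelog above can be sketched as follows. ssh_hostfwd_arg() and mq_vectors()
are hypothetical names, and socket.gethostbyname() is only a stand-in for the
DTS get_host_ip() helper, which may resolve the address differently:

    import socket

    def ssh_hostfwd_arg(hostname, fwd_port):
        # hostfwd needs a literal IP address; a bare hostname taken from
        # conf/crbs.cfg makes QEMU fail to start.
        host_ip = socket.gethostbyname(hostname)
        return "-netdev user,id=net0,hostfwd=tcp:%s:%d-:22" % (host_ip, fwd_port)

    def mq_vectors(queues):
        # A multi-queue virtio-net device needs mq=on plus enough MSI-X
        # vectors; 2 * queues + 2 is the usual minimum (one vector per TX
        # and RX queue, plus config and control).
        return "mq=on,vectors=%d" % (2 * queues + 2)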


end of thread, other threads:[~2023-03-28  1:07 UTC | newest]

Thread overview: 4 messages
2023-03-27  5:56 [dts][PATCH V3 0/2] fix basic_4k_pages_dsa testplan and Wei Ling
2023-03-27  5:56 ` [dts][PATCH V3 1/2] test_plans/basic_4k_pages_dsa: modify the dmas parameter Wei Ling
2023-03-27  5:56 ` [dts][PATCH V3 2/2] tests/basic_4k_pages_dsa: fix start QEMU with wrong command issue Wei Ling
2023-03-28  1:07   ` lijuan.tu
