From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V2 1/2] test_plans/vm2vm_virtio_user_cbdma_test_plan: modify the dmas parameter for DPDK changes
Date: Tue, 22 Nov 2022 16:42:25 +0800
Message-ID: <20221122084225.2897601-1-weix.ling@intel.com>

Since DPDK 22.11, the format of the dmas parameter has changed, so update
the dmas parameter in the test plan.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 .../vm2vm_virtio_user_cbdma_test_plan.rst     | 943 +++++++++---------
 1 file changed, 446 insertions(+), 497 deletions(-)

diff --git a/test_plans/vm2vm_virtio_user_cbdma_test_plan.rst b/test_plans/vm2vm_virtio_user_cbdma_test_plan.rst
index caf1a96b..45a1761b 100644
--- a/test_plans/vm2vm_virtio_user_cbdma_test_plan.rst
+++ b/test_plans/vm2vm_virtio_user_cbdma_test_plan.rst
@@ -2,17 +2,19 @@
    Copyright(c) 2022 Intel Corporation
 
 
-==================================================
+=================================================
 VM2VM vhost-user/virtio-user with CBDMA test plan
-==================================================
+=================================================
 
 Description
 ===========
 
-Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and it is implemented in an asynchronous way.
-In addition, vhost supports M:N mapping between vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA
-channels and one DMA channel can be shared by multiple vrings at the same time. From DPDK22.07, Vhost enqueue and dequeue operation with
-CBDMA channels is supported in both split and packed ring.
+CBDMA is a kind of DMA engine. The vhost asynchronous data path leverages DMA devices to offload memory copies from
+the CPU and is implemented in an asynchronous way. As a result, large packet copies can be accelerated by the DMA
+engine, and vhost can free CPU cycles for higher-level functions. In addition, vhost supports M:N mapping between
+vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA channels and one DMA channel
+can be shared by multiple vrings at the same time. From DPDK 22.07, vhost enqueue and dequeue operations with CBDMA
+channels are supported in both split and packed ring.
 
 This document provides the test plan for testing the following features when Vhost-user using asynchronous data path with
 CBDMA channels in VM2VM virtio-user topology.
@@ -23,15 +25,21 @@ For example, the split ring mergeable inorder path use non-indirect descriptor,
 still need one ring put header.
 The split ring mergeable path use indirect descriptor, the 2000,2000,2000,2000 chain packets will only occupy one ring.
 
-Note:
-1.When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may
-exceed IOMMU's max capability, better to use 1G guest hugepage.
-2.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd.
+.. note::
 
-For more about dpdk-testpmd sample, please refer to the DPDK docments:
-https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
-For virtio-user vdev parameter, you can refer to the DPDK docments:
-https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
+   1. When DMA devices are bound to the vfio driver, VA mode is the default and recommended. For PA mode, page-by-page
+   mapping may exceed the IOMMU's max capability, so it is better to use 1G guest hugepages.
+   2. A local DPDK patch for the vhost PMD is needed when testing the vhost asynchronous data path with testpmd. This
+   patch enables the asynchronous data path in the vhost PMD. The asynchronous data path is enabled per tx/rx queue,
+   and users need to specify the DMA device used by each tx/rx queue. Each tx/rx queue only supports one DMA device
+   (this is limited by the implementation of the vhost PMD), but one DMA device can be shared among multiple tx/rx
+   queues of different vhost PMD ports.
+
+   Two PMD parameters are added:
+   - dmas: specify the DMA device used by a tx/rx queue (default: no queue enables the asynchronous data path).
+   - dma-ring-size: DMA ring size (default: 4096).
+
+   Here is an example:
+   --vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00.01.0;rxq0@0000:00.01.1],dma-ring-size=2048'
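+
+   As a reference only (a sketch, not the exact DUT setup): the CBDMA devices listed in dmas need to be bound to
+   vfio-pci before launching vhost, e.g. ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0 0000:00:04.1
+   (the 0000:00:04.x addresses are placeholders, use the DMA devices present on the test machine).
+
+   The "attach pdump" steps below use the dpdk-pdump secondary process with the same file-prefix as the virtio-user
+   primary process; a typical invocation looks like:
+   ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- \
+   --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'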
 
 Prerequisites
 =============
@@ -78,25 +86,23 @@ Common steps
 
 Test Case 1: VM2VM split ring non-mergeable path multi-queues payload check with cbdma enable
 ---------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring non-mergeable path
-and multi-queues when vhost uses the asynchronous operations with CBDMA channels. Both iova as VA and PA mode test.
+This case uses testpmd to check that the payload is valid after packet forwarding in the vhost-user/virtio-user
+split ring non-mergeable path with multi-queues when vhost uses asynchronous operations with CBDMA channels.
 
-1. Launch vhost by below command::
+1. Bind 4 CBDMA devices to vfio-pci, then launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;rxq0;rxq1]' \
-    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;rxq0@0000:00:04.0]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq1@0000:00:04.0;rxq0@0000:00:04.0;rxq1@0000:00:04.0]' \
+    --iova=va -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
-    --no-pci --file-prefix=virtio1 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \
-    -- -i --enable-hw-vlan-strip --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set fwd rxonly
-    testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 6-8 --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \
+    -- -i --enable-hw-vlan-strip --nb-cores=2 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd> set fwd rxonly
+    testpmd> start
 
 3. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -104,36 +110,34 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 
 4. Launch virtio-user0 and send packets::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \
-    -- -i --enable-hw-vlan-strip --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set burst 1
-    testpmd>set txpkts 64,128,256,512
-    testpmd>start tx_first 27
-    testpmd>stop
-    testpmd>set burst 32
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set txpkts 64
-    testpmd>start tx_first 1
-    testpmd>stop
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 3-5 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \
+    -- -i --enable-hw-vlan-strip --nb-cores=2 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd> set burst 1
+    testpmd> set txpkts 64,128,256,512
+    testpmd> start tx_first 27
+    testpmd> stop
+    testpmd> set burst 32
+    testpmd> start tx_first 7
+    testpmd> stop
+    testpmd> set txpkts 64
+    testpmd> start tx_first 1
+    testpmd> stop
 
 5. Start vhost testpmd, check virtio-user1 RX-packets is 566 and RX-bytes is 486016, 502 packets with 960 length and 64 packets with 64 length in pdump-virtio-rx.pcap.
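+   (These counters follow from step 4: each 64,128,256,512 chain is received as one 960-byte packet, and 27 + 32 * 7 = 251
+   such packets per queue over two queues gives 502; the last burst adds 32 * 2 = 64 64-byte packets, so RX-packets is
+   502 + 64 = 566 and RX-bytes is 502 * 960 + 64 * 64 = 486016.)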
 
 6. Clear virtio-user1 port stats::
 
-    testpmd>stop
-    testpmd>clear port stats all
-    testpmd>start
+    testpmd> stop
+    testpmd> clear port stats all
+    testpmd> start
 
-7. Quit and relaunch vhost with iova=pa by below command::
+7. Quit and relaunch vhost by below command::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 \
-	--vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;rxq0;rxq1]' \
-	--iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-	--lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;rxq0@0000:00:04.1;rxq1@0000:00:04.1]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.2;rxq0@0000:00:04.3;rxq1@0000:00:04.3]' \
+    --iova=va -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 8. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -141,40 +145,38 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 
 9. Virtio-user0 send packets::
 
-    testpmd>set burst 1
-    testpmd>set txpkts 64,128,256,512
-    testpmd>start tx_first 27
-    testpmd>stop
-    testpmd>set burst 32
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set txpkts 64
-    testpmd>start tx_first 1
-    testpmd>stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64,128,256,512
+    testpmd> start tx_first 27
+    testpmd> stop
+    testpmd> set burst 32
+    testpmd> start tx_first 7
+    testpmd> stop
+    testpmd> set txpkts 64
+    testpmd> start tx_first 1
+    testpmd> stop
 
 10. Rerun step 5.
 
 Test Case 2: VM2VM split ring inorder non-mergeable path multi-queues payload check with cbdma enable
 -----------------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring inorder non-mergeable path
-and multi-queues when vhost uses the asynchronous operations with CBDMA channels. Both iova as VA and PA mode test.
+This case uses testpmd to check that the payload is valid after packet forwarding in the vhost-user/virtio-user
+split ring inorder non-mergeable path with multi-queues when vhost uses asynchronous operations with CBDMA channels.
 
-1. Launch vhost by below command::
+1. Bind 4 CBDMA devices to vfio-pci, launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;rxq0;rxq1]' \
-    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;rxq0@0000:00:04.0]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq1@0000:00:04.0;rxq0@0000:00:04.0;rxq1@0000:00:04.0]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set fwd rxonly
-    testpmd>start
+    testpmd> set fwd rxonly
+    testpmd> start
 
 3. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -183,35 +185,33 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 4. Launch virtio-user0 and send packets::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set burst 1
-    testpmd>set txpkts 64
-	testpmd>start tx_first 27
-	testpmd>stop
-    testpmd>set burst 32
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set txpkts 64,256,2000,64,256,2000
-	testpmd>start tx_first 1
-    testpmd>stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64
+    testpmd> start tx_first 27
+    testpmd> stop
+    testpmd> set burst 32
+    testpmd> start tx_first 7
+    testpmd> stop
+    testpmd> set txpkts 64,256,2000,64,256,2000
+    testpmd> start tx_first 1
+    testpmd> stop
 
 5. Start vhost testpmd, check 502 packets and 32128 bytes received by virtio-user1 and 502 packets with 64 length in pdump-virtio-rx.pcap.
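+   (Only the 64-byte packets are expected on this path: 27 + 32 * 7 = 251 per queue over two queues gives 502 packets
+   and 502 * 64 = 32128 bytes; the 64,256,2000,... chained packets are not counted.)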
 
 6. Clear virtio-user1 port stats::
 
-    testpmd>stop
-    testpmd>clear port stats all
-    testpmd>start
+    testpmd> stop
+    testpmd> clear port stats all
+    testpmd> start
 
-7. Quit and relaunch vhost with iova=pa by below command::
+7. Quit and relaunch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;rxq0;rxq1]' \
-    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;rxq0@0000:00:04.1;rxq1@0000:00:04.1]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.2;rxq0@0000:00:04.3;rxq1@0000:00:04.3]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 8. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -219,39 +219,37 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 
 9. Virtio-user0 send packets::
 
-    testpmd>set burst 1
-	testpmd>start tx_first 27
-	testpmd>stop
-    testpmd>set burst 32
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set txpkts 64,256,2000,64,256,2000
-	testpmd>start tx_first 1
-    testpmd>stop
+    testpmd> set burst 1
+    testpmd> start tx_first 27
+    testpmd> stop
+    testpmd> set burst 32
+    testpmd> start tx_first 7
+    testpmd> stop
+    testpmd> set txpkts 64,256,2000,64,256,2000
+    testpmd> start tx_first 1
+    testpmd> stop
 
 10. Rerun step 5.
 
 Test Case 3: VM2VM split ring vectorized path multi-queues payload check with cbdma enable
 ------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring vectorized path
-and multi-queues when vhost uses the asynchronous operations with CBDMA channels. Both iova as VA and PA mode test.
+This case uses testpmd to check that the payload is valid after packet forwarding in the vhost-user/virtio-user
+split ring vectorized path with multi-queues when vhost uses asynchronous operations with CBDMA channels.
 
-1. Launch vhost by below command::
+1. Bind 8 CBDMA devices to vfio-pci, launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;rxq0;rxq1]' \
-    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;rxq0@0000:00:04.0]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq1@0000:00:04.0;rxq0@0000:00:04.0;rxq1@0000:00:04.0]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set fwd rxonly
-    testpmd>start
+    testpmd> set fwd rxonly
+    testpmd> start
 
 3. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -260,34 +258,32 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 4. Launch virtio-user0 and send packets::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=0,,vectorized=1,queue_size=4096 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
+    testpmd> set burst 32
+    testpmd> set txpkts 64
+    testpmd> start tx_first 7
+    testpmd> stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64,256,2000,64,256,2000
+    testpmd> start tx_first 27
+    testpmd> stop
 
 5. Start vhost testpmd, check 448 packets and 28672 bytes received by virtio-user1 and 448 packets with 64 length in pdump-virtio-rx.pcap.
 
 6. Clear virtio-user1 port stats::
 
-    testpmd>stop
-    testpmd>clear port stats all
-    testpmd>start
+    testpmd> stop
+    testpmd> clear port stats all
+    testpmd> start
 
-7. Quit and relaunch vhost with iova=pa by below command::
+7. Quit and relaunch vhost by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;rxq0;rxq1]' \
-    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.3]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.4;txq1@0000:00:04.5;rxq0@0000:00:04.6;rxq1@0000:00:04.7]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 8. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -295,40 +291,37 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 
 9. Virtio-user0 send packets::
 
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
+    testpmd> set burst 32
+    testpmd> set txpkts 64
+    testpmd> start tx_first 7
+    testpmd> stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64,256,2000,64,256,2000
+    testpmd> start tx_first 27
+    testpmd> stop
 
 10. Rerun step 5.
 
 Test Case 4: VM2VM split ring inorder mergeable path test non-indirect descriptor with cbdma enable
 ---------------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid and non-indirect descriptor after packets forwarding in vhost-user/virtio-user
-split ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with CBDMA channels. Both
-iova as VA and PA mode test.
+This case uses testpmd to check that the payload is valid and that non-indirect descriptors are used after packet
+forwarding in the vhost-user/virtio-user split ring inorder mergeable path with multi-queues when vhost uses
+asynchronous operations with CBDMA channels.
 
-1. Launch testpmd by below command::
+1. Bind 8 CBDMA devices to vfio-pci, launch testpmd by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;rxq0;rxq1]' \
-    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;rxq0@0000:00:04.0]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0;rxq1@0000:00:04.0]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
-    --no-pci --file-prefix=virtio1 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=1,queue_size=256 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=1,queue_size=256 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
-    testpmd>set fwd rxonly
-    testpmd>start
+    testpmd> set fwd rxonly
+    testpmd> start
 
 3. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -336,56 +329,52 @@ iova as VA and PA mode test.
 
 4. Launch virtio-user0 and send packets(include 251 small packets and 32 8K packets)::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=1,queue_size=256 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=1,queue_size=256 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
-    testpmd>set burst 1
-    testpmd>set txpkts 64
-    testpmd>start tx_first 27
-    testpmd>stop
-    testpmd>set burst 32
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set txpkts 2000,2000,2000,2000
-    testpmd>start tx_first 1
-    testpmd>stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64
+    testpmd> start tx_first 27
+    testpmd> stop
+    testpmd> set burst 32
+    testpmd> start tx_first 7
+    testpmd> stop
+    testpmd> set txpkts 2000,2000,2000,2000
+    testpmd> start tx_first 1
+    testpmd> stop
 
 5. Start vhost, then quit pdump and three testpmd, about split virtqueue inorder mergeable path, it use the non-direct descriptors, the 8k length pkt will occupies 5 ring:2000,2000,2000,2000 will need 4 consequent ring,
 still need one ring put header. So check 504 packets and 48128 bytes received by virtio-user1 and 502 packets with 64 length and 2 packets with 8K length in pdump-virtio-rx.pcap.
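+   (With a 256-entry queue, the 251 64-byte packets leave 5 free descriptors, which is exactly what one 8K chained packet
+   needs on this non-indirect path, so each queue receives one 8K packet: 502 + 2 = 504 packets and
+   502 * 64 + 2 * 8000 = 48128 bytes.)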
 
-6. Relaunch vhost with iova=pa by below command::
+6. Relaunch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;rxq0;rxq1]' \
-    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.3]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.4;txq1@0000:00:04.5;rxq0@0000:00:04.6;rxq1@0000:00:04.7]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx
 
 7. Rerun step 2-5.
 
 Test Case 5: VM2VM split ring mergeable path test indirect descriptor with cbdma enable
 ---------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user
-split ring mergeable path and multi-queues when vhost uses the asynchronous operations with CBDMA channels. Both iova as VA and PA mode test.
+This case uses testpmd to check that the payload is valid and that indirect descriptors are used after packet
+forwarding in the vhost-user/virtio-user split ring mergeable path with multi-queues when vhost uses
+asynchronous operations with CBDMA channels.
 
-1. Launch testpmd by below command::
+1. Bind 8 CBDMA devices to vfio-pci, launch testpmd by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;rxq0;rxq1]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0]' \
-    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0;rxq1@0000:00:04.0]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.1;txq1@0000:00:04.1;rxq0@0000:00:04.1]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
-    testpmd>set fwd rxonly
-    testpmd>start
+    testpmd> set fwd rxonly
+    testpmd> start
 
 3. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -393,56 +382,51 @@ split ring mergeable path and multi-queues when vhost uses the asynchronous oper
 
 4. Launch virtio-user0 and send packets(include 251 small packets and 32 8K packets)::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
-    testpmd>set burst 1
-    testpmd>set txpkts 64
-    testpmd>start tx_first 27
-    testpmd>stop
-    testpmd>set burst 32
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set txpkts 2000,2000,2000,2000
-    testpmd>start tx_first 1
-    testpmd>stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64
+    testpmd> start tx_first 27
+    testpmd> stop
+    testpmd> set burst 32
+    testpmd> start tx_first 7
+    testpmd> stop
+    testpmd> set txpkts 2000,2000,2000,2000
+    testpmd> start tx_first 1
+    testpmd> stop
 
 5. Start vhost, then quit pdump and three testpmd, about split virtqueue mergeable path, it use the indirect descriptors, the 8k length pkt will just occupies one ring.
 So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap.
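+   (With indirect descriptors an 8K chained packet takes a single ring entry, so the 5 entries left after the 251 64-byte
+   packets hold 5 8K packets per queue: 502 + 10 = 512 packets and 502 * 64 + 10 * 8000 = 112128 bytes.)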
 
-6. Quit and relaunch vhost with iova=pa by below command::
+6. Quit and relaunch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.3]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.4;txq1@0000:00:04.5;rxq0@0000:00:04.6;rxq1@0000:00:04.7]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx
 
 7. Rerun step 2-5.
 
 Test Case 6: VM2VM packed ring non-mergeable path multi-queues payload check with cbdma enable
 ----------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring non-mergeable path
-and multi-queues when vhost uses the asynchronous operations with CBDMA channels. Both iova as VA and PA mode test.
+This case uses testpmd to check that the payload is valid after packet forwarding in the vhost-user/virtio-user
+packed ring non-mergeable path with multi-queues when vhost uses asynchronous operations with CBDMA channels.
 
-1. Launch vhost by below command::
+1. Bind 2 CBDMA devices to vfio-pci, launch vhost by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-    --iova=va -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1]
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;rxq0@0000:00:04.0;rxq1@0000:00:04.1]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;rxq0@0000:00:04.0;rxq1@0000:00:04.1]' \
+    --iova=va -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
-    --no-pci --file-prefix=virtio1 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set fwd rxonly
-    testpmd>start
+    testpmd> set fwd rxonly
+    testpmd> start
 
 3. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -450,35 +434,32 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 
 4. Launch virtio-user0 and send packets::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
+    testpmd> set burst 32
+    testpmd> set txpkts 64
+    testpmd> start tx_first 7
+    testpmd> stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64,256,2000,64,256,2000
+    testpmd> start tx_first 27
+    testpmd> stop
 
 5. Start vhost testpmd, check virtio-user1 RX-packets is 448 and RX-bytes is 28672, 448 packets with 64 length in pdump-virtio-rx.pcap.
 
 6. Clear virtio-user1 port stats::
 
-    testpmd>stop
-    testpmd>clear port stats all
-    testpmd>start
+    testpmd> stop
+    testpmd> clear port stats all
+    testpmd> start
 
-7. Quit and relaunch vhost with iova=pa by below command::
+7. Quit and relaunch vhost by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;rxq0;rxq1]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0]' \
-    --iova=pa -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1]
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0;rxq1@0000:00:04.1]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;rxq0@0000:00:04.0]' \
+    --iova=va -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 8. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -486,38 +467,36 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 
 9. Virtio-user0 send packets::
 
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
+    testpmd> set burst 32
+    testpmd> set txpkts 64
+    testpmd> start tx_first 7
+    testpmd> stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64,256,2000,64,256,2000
+    testpmd> start tx_first 27
+    testpmd> stop
 
 10. Rerun step 5.
 
 Test Case 7: VM2VM packed ring mergeable path multi-queues payload check with cbdma enable
 ------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring mergeable path
-and multi-queues when vhost uses the asynchronous operations with CBDMA channels. Both iova as VA and PA mode test.
+This case uses testpmd to check that the payload is valid after packet forwarding in the vhost-user/virtio-user
+packed ring mergeable path with multi-queues when vhost uses asynchronous operations with CBDMA channels.
 
-1. Launch vhost by below command::
+1. Bind 8 CBDMA devices to vfio-pci, launch vhost by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;rxq0;rxq1]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0]' \
-    --iova=va -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0]
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0;rxq1@0000:00:04.0]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;rxq0@0000:00:04.0]' \
+    --iova=va -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
-    --no-pci --file-prefix=virtio1 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set fwd rxonly
-    testpmd>start
+    testpmd> set fwd rxonly
+    testpmd> start
 
 3. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -525,34 +504,32 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 
 4. Launch virtio-user0 and send packets::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64,256,2000,64,256,2000
+    testpmd> start tx_first 27
+    testpmd> stop
+    testpmd> set burst 32
+    testpmd> set txpkts 64
+    testpmd> start tx_first 7
+    testpmd> stop
 
 5. Start vhost testpmd, then quit pdump, check 502 packets and 279232 bytes received by virtio-user1 and 54 packets with 4640 length and 448 packets with 64 length in pdump-virtio-rx.pcap.
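+   (On this mergeable path the 64,256,2000,64,256,2000 chains are received as 4640-byte packets: 27 * 2 = 54 of them plus
+   32 * 7 * 2 = 448 64-byte packets gives 502 packets and 54 * 4640 + 448 * 64 = 279232 bytes.)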
 
 6. Clear virtio-user1 port stats::
 
-    testpmd>stop
-    testpmd>clear port stats all
-    testpmd>start
+    testpmd> stop
+    testpmd> clear port stats all
+    testpmd> start
 
-7. Quit and relaunch vhost with iova=pa by below command::
+7. Quit and relaunch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-    --iova=pa -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0]
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.3]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.4;txq1@0000:00:04.5;rxq0@0000:00:04.6;rxq1@0000:00:04.7]' \
+    --iova=va -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 8. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -560,39 +537,36 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 
 9. Virtio-user0 send packets::
 
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64,256,2000,64,256,2000
+    testpmd> start tx_first 27
+    testpmd> stop
+    testpmd> set burst 32
+    testpmd> set txpkts 64
+    testpmd> start tx_first 7
+    testpmd> stop
 
 10. Rerun step 5.
 
 Test Case 8: VM2VM packed ring inorder mergeable path multi-queues payload check with cbdma enable
 --------------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder mergeable path
-and multi-queues when vhost uses the asynchronous operations with CBDMA channels. Both iova as VA and PA mode test.
+This case uses testpmd to check that the payload is valid after packet forwarding in the vhost-user/virtio-user
+packed ring inorder mergeable path with multi-queues when vhost uses asynchronous operations with CBDMA channels.
 
-1. Launch vhost by below command::
+1. Bind 4 CBDMA devices to vfio-pci, launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;rxq0;rxq1]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0]' \
-    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0;rxq1@0000:00:04.0]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;rxq0@0000:00:04.0]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
-    --no-pci --file-prefix=virtio1 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set fwd rxonly
-    testpmd>start
+    testpmd> set fwd rxonly
+    testpmd> start
 
 3. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -600,35 +574,32 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 
 4. Launch virtio-user0 and send 8k length packets::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64,256,2000,64,256,2000
+    testpmd> start tx_first 27
+    testpmd> stop
+    testpmd> set burst 32
+    testpmd> set txpkts 64
+    testpmd> start tx_first 7
+    testpmd> stop
 
 5. Start vhost testpmd, then quit pdump, check 502 packets and 279232 bytes received by virtio-user1 and 54 packets with 4640 length and 448 packets with 64 length in pdump-virtio-rx.pcap.
 
 6. Clear virtio-user1 port stats::
 
-    testpmd>stop
-    testpmd>clear port stats all
-    testpmd>start
+    testpmd> stop
+    testpmd> clear port stats all
+    testpmd> start
 
-7. Quit and relaunch vhost with iova=pa by below command::
+7. Quit and relaunch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;rxq0;rxq1]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0]' \
-    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.3]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.3]' \
+    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 8. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -636,39 +607,37 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 
 9. Virtio-user0 send packets::
 
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64,256,2000,64,256,2000
+    testpmd> start tx_first 27
+    testpmd> stop
+    testpmd> set burst 32
+    testpmd> set txpkts 64
+    testpmd> start tx_first 7
+    testpmd> stop
 
 10. Rerun step 5.
 
 Test Case 9: VM2VM packed ring inorder non-mergeable path multi-queues payload check with cbdma enable
 ------------------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder non-mergeable path
-and multi-queues when vhost uses the asynchronous operations with CBDMA channels. Both iova as VA and PA mode test.
+This case uses testpmd to check that the payload is valid after packet forwarding in the vhost-user/virtio-user
+packed ring inorder non-mergeable path with multi-queues when vhost uses asynchronous operations with CBDMA channels.
 
-1. Launch vhost by below command::
+1. Bind 4 CBDMA devices to vfio-pci, launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;rxq0]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq1;rxq1]' \
-    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.1]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq1@0000:00:04.0;rxq1@0000:00:04.1]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set fwd rxonly
-    testpmd>start
+    testpmd> set fwd rxonly
+    testpmd> start
 
 3. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -676,36 +645,32 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 
 4. Launch virtio-user0 and send 8k length packets::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
+    testpmd> set burst 32
+    testpmd> set txpkts 64
+    testpmd> start tx_first 7
+    testpmd> stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64,256,2000,64,256,2000
+    testpmd> start tx_first 27
+    testpmd> stop
 
 5. Start vhost testpmd, then quit pdump, check 448 packets and 28672 bytes received by virtio-user1 and 448 packets with 64 length in pdump-virtio-rx.pcap.
 
 6. Clear virtio-user1 port stats::
 
-    testpmd>stop
-    testpmd>clear port stats all
-    testpmd>start
+    testpmd> stop
+    testpmd> clear port stats all
+    testpmd> start
 
-7. Quit and relaunch vhost with iova=pa by below command::
+7. Quit and relaunch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;rxq0]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq1;rxq1]' \
-    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.3]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.3]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 8. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -713,37 +678,36 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 
 9. Virtio-user0 send packets::
 
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
+    testpmd> set burst 32
+    testpmd> set txpkts 64
+    testpmd> start tx_first 7
+    testpmd> stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64,256,2000,64,256,2000
+    testpmd> start tx_first 27
+    testpmd> stop
 
 10. Rerun step 5.
 
 Test Case 10: VM2VM packed ring vectorized-rx path multi-queues payload check with cbdma enable
 -----------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring vectorized-rx path
-and multi-queues when vhost uses the asynchronous operations with CBDMA channels. Both iova as VA and PA mode test.
+This case uses testpmd to check that the payload is valid after packet forwarding in the vhost-user/virtio-user
+packed ring vectorized-rx path with multi-queues when vhost uses asynchronous operations with CBDMA channels.
 
-1. Launch vhost by below command::
+1. Bind 4 CBDMA devices to vfio-pci, launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;rxq0]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq1;rxq1]' \
-    --iova=va -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore12@0000:00:04.4,lcore12@0000:00:04.5,lcore12@0000:00:04.6,lcore12@0000:00:04.7]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq1@0000:00:04.0;rxq1@0000:00:04.0]' \
+    --iova=va -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
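
Binding the CBDMA devices to vfio-pci, as step 1 asks, is normally done with the dpdk-devbind.py usertool;
a sketch for the four 0000:00:04.x channels used by this case (adjust the BDFs to the system under test) is::

    ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:04.0 0000:00:04.1 0000:00:04.2 0000:00:04.3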
 
 2. Launch virtio-user1 by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set fwd rxonly
-    testpmd>start
+    testpmd> set fwd rxonly
+    testpmd> start
 
 3. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -752,34 +716,31 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 4. Launch virtio-user0 and send 8k length packets::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
+    testpmd> set burst 32
+    testpmd> set txpkts 64
+    testpmd> start tx_first 7
+    testpmd> stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64,256,2000,64,256,2000
+    testpmd> start tx_first 27
+    testpmd> stop
 
 5. Start vhost testpmd, then quit pdump, check 448 packets and 28672 bytes received by virtio-user1 and 448 packets with 64 length in pdump-virtio-rx.pcap.
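
(As a quick check of these figures: step 4 first sends 7 bursts of 32 packets of 64 bytes on each of the
2 queues, i.e. 2 x 7 x 32 = 448 packets and 448 x 64 = 28672 bytes; the later 64,256,2000,64,256,2000
chained packets are 4640 bytes each and, with mrg_rxbuf=0 on this non-mergeable path, are presumably not
delivered, so only the 64-byte packets are counted.)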
 
 6. Clear virtio-user1 port stats::
 
-    testpmd>stop
-    testpmd>clear port stats all
-    testpmd>start
+    testpmd> stop
+    testpmd> clear port stats all
+    testpmd> start
 
-7. Quit and relaunch vhost with iova=pa by below command::
+7. Quit and relaunch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;rxq0]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq1;rxq1]' \
-    --iova=pa -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore12@0000:00:04.4,lcore12@0000:00:04.5,lcore12@0000:00:04.6,lcore12@0000:00:04.7]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq0@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.3]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq0@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.3]' \
+    --iova=va -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 8. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -787,38 +748,36 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 
 9. Virtio-user0 send packets::
 
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
+    testpmd> set burst 32
+    testpmd> set txpkts 64
+    testpmd> start tx_first 7
+    testpmd> stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64,256,2000,64,256,2000
+    testpmd> start tx_first 27
+    testpmd> stop
 
 10. Rerun step 5.
 
 Test Case 11: VM2VM packed ring vectorized path multi-queues payload check test with ring size is not power of 2 with cbdma enable
 ----------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring vectorized path with ring size is not power of 2
-and multi-queues when vhost uses the asynchronous operations with CBDMA channels. Both iova as VA and PA mode test.
+This case uses testpmd to test that the payload is valid after packet forwarding in vhost-user/virtio-user packed ring vectorized path
+with a ring size that is not a power of 2 and multi-queues when vhost uses the asynchronous operations with CBDMA channels.
 
-1. Launch vhost by below command::
+1. Bind 4 CBDMA devices to vfio-pci, launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;rxq0@0000:00:04.0;rxq1@0000:00:04.0]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;rxq0@0000:00:04.0;rxq1@0000:00:04.0]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --force-max-simd-bitwidth=512  --no-pci --file-prefix=virtio1 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097
-    testpmd>set fwd rxonly
-    testpmd>start
+    testpmd> set fwd rxonly
+    testpmd> start
 
 3. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -827,34 +786,31 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 4. Launch virtio-user0 and send 8k length packets::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097
-
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
+    testpmd> set burst 32
+    testpmd> set txpkts 64
+    testpmd> start tx_first 7
+    testpmd> stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64,256,2000,64,256,2000
+    testpmd> start tx_first 27
+    testpmd> stop
 
 5. Start vhost testpmd, then quit pdump, check 448 packets and 28672 bytes received by virtio-user1 and 448 packets with 64 length in pdump-virtio-rx.pcap.
 
 6. Clear virtio-user1 port stats::
 
-    testpmd>stop
-    testpmd>clear port stats all
-    testpmd>start
+    testpmd> stop
+    testpmd> clear port stats all
+    testpmd> start
 
-7. Quit and relaunch vhost with iova=pa by below command::
+7. Quit and relaunch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[rxq0]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq1]' \
-    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore12@0000:00:04.3,lcore12@0000:00:04.4,lcore12@0000:00:04.5,lcore12@0000:00:04.6,lcore12@0000:00:04.7]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;rxq0@0000:00:04.0;rxq1@0000:00:04.1]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3;rxq0@0000:00:04.2;rxq1@0000:00:04.3]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 8. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -862,38 +818,36 @@ and multi-queues when vhost uses the asynchronous operations with CBDMA channels
 
 9. Virtio-user0 send packets::
 
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
+    testpmd> set burst 32
+    testpmd> set txpkts 64
+    testpmd> start tx_first 7
+    testpmd> stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64,256,2000,64,256,2000
+    testpmd> start tx_first 27
+    testpmd> stop
 
 10. Rerun step 5.
 
 Test Case 12: VM2VM packed ring vectorized-tx path multi-queues test indirect descriptor and payload check with cbdma enable
 ----------------------------------------------------------------------------------------------------------------------------
 This case uses testpmd to check the payload validity and the indirect descriptor usage after packet forwarding in vhost-user/virtio-user
-packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous operations with CBDMA channels. Both iova as VA and PA mode test.
+packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous operations with CBDMA channels.
 
-1. Launch vhost by below command::
+1. Bind 4 CBDMA devices to vfio-pci, launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[rxq0]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq1]' \
-    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[rxq0@0000:00:04.0]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq1@0000:00:04.0]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
-    testpmd>set fwd rxonly
-    testpmd>start
+    testpmd> set fwd rxonly
+    testpmd> start
 
 3. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -902,55 +856,50 @@ packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous
 4. Launch virtio-user0 and send 8k length packets::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
-
-    testpmd>set burst 1
-    testpmd>set txpkts 64
-    testpmd>start tx_first 27
-    testpmd>stop
-    testpmd>set burst 32
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set txpkts 2000,2000,2000,2000
-    testpmd>start tx_first 1
-    testpmd>stop
+    testpmd> set burst 1
+    testpmd> set txpkts 64
+    testpmd> start tx_first 27
+    testpmd> stop
+    testpmd> set burst 32
+    testpmd> start tx_first 7
+    testpmd> stop
+    testpmd> set txpkts 2000,2000,2000,2000
+    testpmd> start tx_first 1
+    testpmd> stop
 
 5. Start vhost, then quit pdump and the three testpmd sessions. The packed virtqueue vectorized-tx path uses indirect descriptors, so each 8k length pkt will just occupy one ring.
 So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap.
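
(Byte-count check using the figures above: 502 x 64 + 10 x 8000 = 32128 + 80000 = 112128 bytes across the
512 expected packets, so the stated totals are self-consistent.)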
 
-6. Quit and relaunch vhost with iova=pa by below command::
+6. Quit and relaunch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=2,client=1,dmas=[rxq0;rxq1]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \
-    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq0@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.3]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq0@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.3]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx
 
 7. Rerun step 2-5.
 
 Test Case 13: VM2VM packed ring vectorized-tx path test batch processing with cbdma enable
 ------------------------------------------------------------------------------------------
-This case uses testpmd to test that one packet can forwarding in vhost-user/virtio-user packed ring vectorized-tx path
-when vhost uses the asynchronous operations with CBDMA channels.
+This case uses testpmd to test that one packet can be forwarded in vhost-user/virtio-user
+packed ring vectorized-tx path when vhost uses the asynchronous operations with CBDMA channels.
 
-1. Launch vhost by below command::
+1. Bind 1 CBDMA device to vfio-pci, launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-    --vdev 'eth_vhost0,iface=/root/dpdk/vhost-net0,queues=1,client=1,dmas=[txq0;rxq0]' \
-	--vdev 'eth_vhost1,iface=/root/dpdk/vhost-net1,queues=1,client=1,dmas=[txq0;rxq0]' \
-    --iova=va -- -i --nb-cores=1 --txd=256 --rxd=256 --no-flush-rx \
-    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=1,client=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=1,client=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
+    --iova=va -- -i --nb-cores=1 --txd=256 --rxd=256 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net1,queues=1,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
-    testpmd>set fwd rxonly
-    testpmd>start
+    testpmd> set fwd rxonly
+    testpmd> start
 
 3. Attach pdump secondary process to primary process by same file-prefix::
 
@@ -959,10 +908,10 @@ when vhost uses the asynchronous operations with CBDMA channels.
 4. Launch virtio-user0 and send 1 packet::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/root/dpdk/vhost-net0,queues=1,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=1,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
-    testpmd>set burst 1
-    testpmd>start tx_first 1
-    testpmd>stop
+    testpmd> set burst 1
+    testpmd> start tx_first 1
+    testpmd> stop
 
 5. Start vhost, then quit pdump and the three testpmd sessions, check 1 packet and 64 bytes received by virtio-user1 and 1 packet with 64 length in pdump-virtio-rx.pcap.
-- 
2.25.1

