From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1 2/6] test_plans/vswitch_sample_cbdma_test_plan: modify testplan with new format
Date: Fri, 22 Apr 2022 13:48:38 +0800	[thread overview]
Message-ID: <20220422054838.1559225-1-weix.ling@intel.com> (raw)


Modify testplan with new format.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 test_plans/vswitch_sample_cbdma_test_plan.rst | 294 ++++++++++++------
 1 file changed, 193 insertions(+), 101 deletions(-)

diff --git a/test_plans/vswitch_sample_cbdma_test_plan.rst b/test_plans/vswitch_sample_cbdma_test_plan.rst
index af2e62d1..e6fabe32 100644
--- a/test_plans/vswitch_sample_cbdma_test_plan.rst
+++ b/test_plans/vswitch_sample_cbdma_test_plan.rst
@@ -37,68 +37,142 @@ Vswitch sample test with vhost async data path test plan
 Description
 ===========
 
-Vswitch sample can leverage IOAT to accelerate vhost async data-path from dpdk 20.11. This plan test
-vhost DMA operation callbacks for CBDMA PMD and vhost async data-path in vhost sample.
+Vswitch sample can leverage IOAT to accelerate the vhost async data-path from DPDK 20.11.
+This plan tests the vhost DMA operation callbacks for the CBDMA PMD and the vhost async data-path in the vhost sample.
 From 20.11 to 21.02, only split ring supports CBDMA copy in the vhost enqueue direction;
 from 21.05, packed ring can also support CBDMA copy in the vhost enqueue direction.
 
+For more about the dpdk-testpmd application, please refer to the DPDK documents:
+https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
+
+For virtio-user vdev parameters, please refer to the DPDK documents:
+https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
+
+For more about the dpdk-vhost sample, please refer to the DPDK documents:
+https://doc.dpdk.org/guides/sample_app_ug/vhost.html
+
 Prerequisites
 =============
 
+Topology
+--------
+	Test flow: TG-->NIC-->VSwitch-->Virtio-->VSwitch-->NIC-->TG
+
+Hardware
+--------
+	Supported NICs: ALL
+
+Software
+--------
+	Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
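+
+	A minimal sketch of unpacking and launching TRex in stateful mode; the
+	traffic profile path is a placeholder and the -m (rate multiplier) and
+	-d (duration) values are assumptions::
+	tar -xzvf v2.26.tar.gz && cd v2.26
+	./t-rex-64 -f <traffic profile yaml> -m 10 -d 60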
+
+General set up
+--------------
+1. Compile DPDK::
+
+	# CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+	# ninja -C <dpdk build dir> -j 110
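+	# Assumed prerequisite (not spelled out in this plan): allocate hugepages
+	# and mount them at /mnt/huge, which the QEMU commands below expect.
+	echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+	mkdir -p /mnt/huge
+	mount -t hugetlbfs nodev /mnt/huge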
+
+2. Get the PCI device ID and DMA device IDs of the DUT. For example, 0000:18:00.0 is the PCI device ID, and 0000:00:04.0, 0000:00:04.1 are DMA device IDs::
+
+	<dpdk dir># ./usertools/dpdk-devbind.py -s
+
+	Network devices using kernel driver
+	===================================
+	0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+
+	DMA devices using kernel driver
+	===============================
+	0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+	0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+
+Test case
+=========
+
+Common steps
+------------
+1. Bind 1 NIC port and CBDMA channels to vfio-pci::
+
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>
+
+	For example, bind 1 NIC port and 2 CBDMA channels::
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:18:00.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0,0000:00:04.1
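+
+	To confirm the binding took effect, re-check the status (a usage note;
+	the bound devices now show drv=vfio-pci)::
+	./usertools/dpdk-devbind.py -s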
+
+2. Inject imix packets to NIC by traffic generator::
+
+	The packet sizes include [64, 128, 256, 512, 1024, 1518], and the format of the packets is as follows.
+	+-------------+-------------+-------------+-------------+
+	| MAC         | MAC         | IPV4        | IPV4        |
+	| Src address | Dst address | Src address | Dst address |
+	+-------------+-------------+-------------+-------------+
+	| Any MAC     | Virtio mac  | Any IP      | Any IP      |
+	+-------------+-------------+-------------+-------------+
+	All the packets in this test plan use the virtio mac: 00:11:22:33:44:10.
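+
+	A minimal sketch of building one such flow, assuming Scapy on the TG host
+	(any generator that produces this layout works; the payload sizing and
+	interface name are assumptions)::
+	python3 - <<'EOF'
+	from scapy.all import Ether, IP, Raw, RandIP, sendp
+	for size in [64, 128, 256, 512, 1024, 1518]:
+	    # dst MAC is the virtio mac so the vswitch forwards to virtio-user
+	    pkt = Ether(dst="00:11:22:33:44:10")/IP(src=RandIP(), dst=RandIP())
+	    # pad so the wire frame (with 4-byte FCS) reaches the target size
+	    pkt = pkt/Raw(b"\x00" * (size - len(pkt) - 4))
+	    sendp(pkt, iface="ens785f0", count=100)
+	EOF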
 
 Test Case1: PVP performance check with CBDMA channel using vhost async driver
-=============================================================================
+-----------------------------------------------------------------------------
+This case tests the PVP performance with 1 CBDMA channel when using the vhost async driver; vhost, testpmd and a traffic generator (for example, Trex) are used to send imix packets.
+The packed ring vectorized path, the packed ring size not power of 2 path and the split ring vectorized path are tested.
 
-1. Bind physical port to vfio-pci and CBDMA channel to vfio-pci.
+1. Bind 1 NIC port and 1 CBDMA channel to vfio-pci, as common step 1.
 
 2. On host, launch dpdk-vhost by below command::
 
-	./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 31-32 -n 4 -- \
-	-p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --client --total-num-mbufs 600000
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 31-32 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 \
+	--stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --client --total-num-mbufs 600000
 
 3. Launch virtio-user with packed ring::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,server=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 --force-max-simd-bitwidth=512 \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,server=1 \
+	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
 
 4. Start pkts from virtio-user side to let vswitch know the mac addr::
 
-	testpmd>set fwd mac
-	testpmd>start tx_first
+	testpmd> set fwd mac
+	testpmd> start tx_first
 
 5. Inject pkts (packet lengths 64...1518) separately with dest_mac=virtio_mac_address (specified in the above cmd as 00:11:22:33:44:10) to the NIC using the packet generator, and record whether the PVP (PG>NIC>vswitch>virtio-user>vswitch>NIC>PG) performance numbers meet expectations.
 
 6. Quit and re-launch virtio-user with packed ring size not power of 2::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 --force-max-simd-bitwidth=512 \
-	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,server=1,queue_size=1025 -- -i --rxq=1 --txq=1 --txd=1025 --rxd=1025 --nb-cores=1
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 --force-max-simd-bitwidth=512 \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,server=1,queue_size=1025 \
+	-- -i --rxq=1 --txq=1 --txd=1025 --rxd=1025 --nb-cores=1
 
 7. Re-test steps 4-5, and record the performance at different packet lengths.
 
 8. Quit and re-launch virtio-user with split ring::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \
-	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,server=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,server=1 \
+	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
 
 9. Re-test steps 4-5, and record the performance at different packet lengths.
 
 Test Case2: PVP test with two VM and two CBDMA channels using vhost async driver
-=================================================================================
+--------------------------------------------------------------------------------
+This case tests the performance of 2 virtio-user ports with 2 CBDMA channels when using the vhost async driver; vhost, testpmd and a traffic generator (for example, Trex) are used to send imix packets.
+Relaunching vhost-user and sending packets again to check the performance is also tested.
 
-1. Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.
+1. Bind 1 NIC port and 2 CBDMA channels to vfio-pci, as common step 1.
 
 2. On host, launch dpdk-vhost by below command::
 
-	./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- \
-	-p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:01.0,txd1@0000:00:01.1] --client--total-num-mbufs 600000
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 \
+	--stats 1 --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:01.0,txd1@0000:00:01.1] --client --total-num-mbufs 600000
 
 3. Launch two virtio-user ports::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \
-	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net0,queues=1,server=1,mrg_rxbuf=1,in_order=0,packed_vq=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
-	
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --no-pci --file-prefix=testpmd1 \
-	--vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=/tmp/vhost-net1,queues=1,server=1,mrg_rxbuf=1,in_order=1,vectorized=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net0,queues=1,server=1,mrg_rxbuf=1,in_order=0,packed_vq=1 \
+	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --no-pci --file-prefix=testpmd1 \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=/tmp/vhost-net1,queues=1,server=1,mrg_rxbuf=1,in_order=1,vectorized=1 \
+	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
 
 4. Start pkts from the two virtio-user sides individually to let vswitch know the mac addr::
 
@@ -107,36 +181,41 @@ Test Case2: PVP test with two VM and two CBDMA channels using vhost async driver
 	testpmd1>set fwd mac
 	testpmd1>start tx_first
 
-5. Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_address (00:11:22:33:44:10 and 00:11:22:33:44:11) to NIC using packet generator,record performance number can get expected from Packet generator rx side.
+5. Send imix packets [64,1518] from the packet generator as in common step 2, and check that the throughput meets the expected data.
 
 6. Stop the dpdk-vhost side and relaunch it with the same cmd as step 2.
 
 7. Start pkts from the two virtio-user sides individually to let vswitch know the mac addr::
 
-    testpmd0>stop
-    testpmd0>start tx_first
-    testpmd1>stop
-    testpmd1>start tx_first
+	testpmd0>stop
+	testpmd0>start tx_first
+	testpmd1>stop
+	testpmd1>start tx_first
 
-8. Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_address (00:11:22:33:44:10 and 00:11:22:33:44:11) to NIC using packet generator, ensure get same throughput as step5.
+8. Send imix packets [64,1518] from the packet generator as in common step 2, and check that the throughput meets the expected data.
 
 Test Case3: VM2VM forwarding test with two CBDMA channels
-=========================================================
+---------------------------------------------------------
+This case uses vhost and testpmd to test virtio-user0 to virtio-user1 forwarding of 64Byte/2000Byte/8000Byte packets with 2 CBDMA channels.
+Virtio-user0 starts with the packed ring mergeable path and virtio-user1 starts with the split ring vectorized path.
+Relaunching vhost-user and sending packets again is also tested.
 
-1.Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.
+1. Bind 1 NIC port and 2 CBDMA channels to vfio-pci, as common step 1.
 
 2. On host, launch dpdk-vhost by below command::
 
-	./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 \
 	--socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1]  --client --total-num-mbufs 600000
 
 3. Launch virtio-user::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \
-	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net0,queues=1,server=1,mrg_rxbuf=1,in_order=0,packed_vq=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net0,queues=1,server=1,mrg_rxbuf=1,in_order=0,packed_vq=1 \
+	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --no-pci --file-prefix=testpmd1 \
-	--vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=/tmp/vhost-net1,queues=1,server=1,mrg_rxbuf=1,in_order=1,vectorized=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=/tmp/vhost-net1,queues=1,server=1,mrg_rxbuf=1,in_order=1,vectorized=1 \
+	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
 
 4. Loop pkts between the two virtio-user sides, and record whether the performance numbers with 64b/2000b/8000b/IMIX pkts meet expectations::
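+
+	A plausible shape for this step using standard testpmd commands (the
+	exact block is not shown in this hunk; values are assumptions)::
+	testpmd1>set fwd mac
+	testpmd1>set txpkts 64
+	testpmd1>start tx_first
+	testpmd0>set fwd mac
+	testpmd0>start
+	testpmd0>show port stats all
+	(for the 2000/8000 byte tests, use set txpkts 2000 and set txpkts 2000,2000,2000,2000)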
 
@@ -168,40 +247,45 @@ Test Case3: VM2VM forwarding test with two CBDMA channels
 6. Rerun step 4.
 
 Test Case4: VM2VM test with cbdma channels register/unregister stable check
-============================================================================
+---------------------------------------------------------------------------
+This case uses vhost and QEMU to test VM0 to VM1 forwarding of 64Byte/2000Byte/8000Byte packets by testpmd with 2 CBDMA channels.
+2 VMs start with the split ring mergeable path, and stability is checked by re-binding the PCI device in the VMs 50 times and then forwarding
+64Byte/2000Byte/8000Byte packets by testpmd.
 
-1. Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.
+1. Bind 1 NIC port and 2 CBDMA channels to vfio-pci, as common step 1.
 
 2. On host, launch dpdk-vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 \
     --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --client --total-num-mbufs 600000
 
 3. Start VM0 with qemu-5.2.0::
 
  	qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 4 -m 4096 \
-        -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-        -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-        -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-        -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-        -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-        -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-        -chardev socket,id=char0,path=/tmp/vhost-net0,server \
-        -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-        -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=/tmp/vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
 
 4. Start VM1 with qemu-5.2.0::
 
 	qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 4 -m 4096 \
-        -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-        -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-        -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-        -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-        -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-        -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-        -chardev socket,id=char0,path=/tmp/vhost-net1,server \
-        -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-        -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=/tmp/vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
 
 5. Bind the virtio port to vfio-pci in both VMs::
 
@@ -212,7 +296,7 @@ Test Case4: VM2VM test with cbdma channels register/unregister stable check
 
 6. Start testpmd in VMs separately::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -- -i --rxq=1 --txq=1 --nb-cores=1 --txd=1024 --rxd=1024
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -- -i --rxq=1 --txq=1 --nb-cores=1 --txd=1024 --rxd=1024
 
 7. Loop pkts between the two virtio-user sides, and record whether the performance numbers with 64b/2000b/8000b/IMIX pkts meet expectations::
 
@@ -248,40 +332,44 @@ Test Case4: VM2VM test with cbdma channels register/unregister stable check
 9. Restart vhost, then rerun step 7, and check that vhost works stably and gets the expected throughput.
 
 Test Case5: VM2VM split ring test with iperf and reconnect stable check
-=======================================================================
+-----------------------------------------------------------------------
+This case uses vhost and QEMU to test VM0 to VM1 packet forwarding with the iperf and scp tools and 2 CBDMA channels.
+2 VMs start with the split ring non-mergeable path, and relaunching vhost-user is tested for stability.
 
-1. Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.
+1. Bind 1 NIC port and 2 CBDMA channels to vfio-pci, as common step 1.
 
 2. On host, launch dpdk-vhost by below command::
 
-	./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 \
 	--socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --client --total-num-mbufs 600000
 
 3. Start VM0 with qemu-5.2.0::
 
  	qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 4 -m 4096 \
-        -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-        -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-        -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-        -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-        -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-        -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-        -chardev socket,id=char0,path=/tmp/vhost-net0,server \
-        -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-        -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=/tmp/vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
 
 4. Start VM1 with qemu-5.2.0::
 
 	qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 4 -m 4096 \
-        -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-        -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-        -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-        -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-        -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-        -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-        -chardev socket,id=char0,path=/tmp/vhost-net1,server \
-        -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-        -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=/tmp/vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
 
 5. On VM1, set the virtio device IP and run the arp protocol::
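+
+	A plausible shape for this step (the interface name [ens3] is an
+	assumption; the IP and MAC come from later steps in this case)::
+	ifconfig [ens3] 1.1.1.2
+	arp -s 1.1.1.8 52:54:00:00:00:02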
 
@@ -302,45 +390,49 @@ Test Case5: VM2VM split ring test with iperf and reconnect stable check
 
 9. Scp a 1MB file from VM1 to VM2, and check that packets can be forwarded successfully by scp::
 
-     Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
 
 10. Relaunch dpdk-vhost, then rerun steps 7-9 five times.
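+
+A plausible iperf invocation for the forwarding check in steps 7-8 (the
+interval and duration values are assumptions; the server runs on VM2, the
+client on VM1)::
+
+	On VM2: iperf -s -i 1
+	On VM1: iperf -c 1.1.1.8 -i 1 -t 60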
 
 Test Case6: VM2VM packed ring test with iperf and reconnect stable test
-=======================================================================
+-----------------------------------------------------------------------
+This case uses vhost and QEMU to test VM0 to VM1 packet forwarding with the iperf and scp tools and 2 CBDMA channels.
+2 VMs start with the packed ring non-mergeable path, and relaunching vhost-user is tested for stability.
 
-1. Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.
+1. Bind 1 NIC port and 2 CBDMA channels to vfio-pci, as common step 1.
 
 2. On host, launch dpdk-vhost by below command::
 
-	./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 \
 	--socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --total-num-mbufs 600000
 
 3. Start VM0 with qemu-5.2.0::
 
  	qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 4 -m 4096 \
-        -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-        -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-        -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-        -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-        -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-        -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-        -chardev socket,id=char0,path=/tmp/vhost-net0 \
-        -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-        -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=/tmp/vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
 
 4. Start VM1 with qemu-5.2.0::
 
 	qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 4 -m 4096 \
-        -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-        -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-        -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-        -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-        -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-        -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-        -chardev socket,id=char0,path=/tmp/vhost-net1 \
-        -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-        -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=/tmp/vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
 
 5. On VM1, set the virtio device IP and run the arp protocol::
 
@@ -361,6 +453,6 @@ Test Case6: VM2VM packed ring test with iperf and reconnect stable test
 
 9. Scp a 1MB file from VM1 to VM2, and check that packets can be forwarded successfully by scp::
 
-     Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
 
 10. Rerun steps 7-9 five times.
-- 
2.25.1

