From: Yinan <yinan.wang@intel.com>
To: dts@dpdk.org
Cc: Wang Yinan <yinan.wang@intel.com>
Subject: [dts] [PATCH v1] test_plans: update qemu cmd to compatible with qemu 4.2.0 for vm2vm_virtio_net_perf_test_plan
Date: Mon, 9 Mar 2020 17:43:17 +0000
Message-ID: <20200309174317.63445-1-yinan.wang@intel.com>
From: Wang Yinan <yinan.wang@intel.com>
Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
.../vm2vm_virtio_net_perf_test_plan.rst | 491 +++++++++++-------
1 file changed, 290 insertions(+), 201 deletions(-)
diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
index 2db0339..f86075b 100644
--- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
@@ -37,7 +37,11 @@ vm2vm vhost-user/virtio-net test plan
Description
===========
-This test plan test vhost tx offload (TSO and UFO) function by verifing the TSO/cksum in the TCP/IP stack enabled environment and UFO/cksum in the UDP/IP stack enabled environment with vm2vm split ring and packed ring vhost-user/virtio-net non-mergeable path. Also add case to check the payload of large packet is valid with vm2vm split ring and packed ring vhost-user/virtio-net mergeable and non-mergeable dequeue zero copy test. For packed virtqueue test, need using qemu version > 4.2.0.
+This test plan tests the vhost tx offload (TSO and UFO) function by verifying TSO/cksum in a TCP/IP
+stack enabled environment and UFO/cksum in a UDP/IP stack enabled environment with the vm2vm split ring
+and packed ring vhost-user/virtio-net mergeable path. It also adds cases to check that the payload of
+large packets is valid in the vm2vm split ring and packed ring vhost-user/virtio-net mergeable and
+non-mergeable dequeue zero copy tests. The packed virtqueue virtio-net tests need QEMU > 4.2.0 and a VM kernel newer than v5.1.
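The version prerequisites above can be checked up front by a harness. A minimal sketch, assuming the version strings have already been captured (the sample values below stand in for `qemu-system-x86_64 --version` on the host and `uname -r` inside the VM; the plan asks for newer-than, this sketch does a `>=` comparison):

```shell
# Illustrative captured values, not real command output
qemu_ver="4.2.0"        # e.g. from: qemu-system-x86_64 --version
kern_ver="5.4.0-26"     # e.g. from: uname -r (inside the VM)

# version_ge A B: true when A >= B (sort -V puts the smaller version first)
version_ge() { [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; }

version_ge "$qemu_ver" "4.2.0" && echo "qemu ok"
version_ge "$kern_ver" "5.1"   && echo "kernel ok"
```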
Test flow
=========
@@ -50,41 +54,48 @@ Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
1. Launch the Vhost sample by below commands::
rm -rf vhost-net*
- ./testpmd -l 2-4 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024
+ ./testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
+ --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024
testpmd>start
2. Launch VM1 and VM2::
- qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
- -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
- -vnc :12 -daemonize
-
- qemu-system-x86_64 -name us-vhost-vm2 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
- -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
- -vnc :11 -daemonize
+ qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net0 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+
+ qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+ -chardev socket,id=char0,path=./vhost-net1 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
3. On VM1, set virtio device IP and run arp protocol::
- ifconfig ens3 1.1.1.2
+ ifconfig ens5 1.1.1.2
arp -s 1.1.1.8 52:54:00:00:00:02
4. On VM2, set virtio device IP and run arp protocol::
- ifconfig ens3 1.1.1.8
+ ifconfig ens5 1.1.1.8
arp -s 1.1.1.2 52:54:00:00:00:01
5. Check the iperf performance between two VMs by below commands::
Under VM1, run: `iperf -s -i 1`
- Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 30`
+ Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
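The iperf result in step 5 can be judged by a script instead of by eye; a minimal sketch that scrapes the client's average bandwidth from the summary line (the line and the 7.99 Gbits/sec figure are illustrative, not expected results):

```shell
# Illustrative final summary line from `iperf -c 1.1.1.2 -i 1 -t 60`
summary="[  3]  0.0-60.0 sec  55.8 GBytes  7.99 Gbits/sec"

# Last two fields are the average bandwidth value and its unit
bw=$(echo "$summary" | awk '{print $(NF-1), $NF}')
echo "$bw"
```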
6. Check that both VMs can receive and send big packets to each other::
@@ -92,7 +103,6 @@ Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
Port 0 should have tx packets above 1522
Port 1 should have rx packets above 1522
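The "packets above 1522" check in step 6 amounts to confirming that a large-frame counter in the testpmd statistics is non-zero; a minimal sketch, assuming an xstats-style counter line has been captured (the counter name and value here are illustrative):

```shell
# Illustrative counter line captured from `testpmd> show port xstats all`
line="tx_size_1523_to_max_packets: 36512"

# Strip everything up to ": " to get the bare count
count=${line##*: }
[ "$count" -gt 0 ] && echo "large packets seen"
```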
-7. Check iperf throughput can get expected data.
Test Case 2: VM2VM split ring vhost-user/virtio-net dequeue zero-copy test with tcp traffic
===========================================================================================
@@ -100,26 +110,33 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net dequeue zero-copy test with
1. Launch the Vhost sample by below commands::
rm -rf vhost-net*
- ./testpmd -l 2-4 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024
+ ./testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' \
+ --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024
testpmd>start
2. Launch VM1 and VM2::
- qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
- -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
- -vnc :12 -daemonize
-
- qemu-system-x86_64 -name us-vhost-vm2 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
- -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
- -vnc :11 -daemonize
+ qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net0 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+
+ qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+ -chardev socket,id=char0,path=./vhost-net1 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
3. On VM1, set virtio device IP and run arp protocol::
@@ -134,7 +151,7 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net dequeue zero-copy test with
5. Check the iperf performance between two VMs by below commands::
Under VM1, run: `iperf -s -i 1`
- Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 30`
+ Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
6. Check that both VMs can receive and send big packets to each other::
@@ -142,32 +159,39 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net dequeue zero-copy test with
Port 0 should have tx packets above 1522
Port 1 should have rx packets above 1522
-7. Check iperf throughput can get expected data.
-
Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic
=========================================================================
1. Launch the Vhost sample by below commands::
rm -rf vhost-net*
- testpmd>./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
+ --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>start
2. Launch VM1 and VM2::
- qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
- -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize
-
- qemu-system-x86_64 -name us-vhost-vm2 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
- -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize
+ qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net0 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+ qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+ -chardev socket,id=char0,path=./vhost-net1 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
3. On VM1, set virtio device IP and run arp protocol::
@@ -196,24 +220,33 @@ Test Case 4: Check split ring virtio-net device capability
1. Launch the Vhost sample by below commands::
rm -rf vhost-net*
- testpmd>./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
+ --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024
testpmd>start
2. Launch VM1 and VM2, and set TSO and UFO on in the qemu command::
- qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
- -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize
-
- qemu-system-x86_64 -name us-vhost-vm2 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
- -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize
+ qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net0 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+ qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+ -chardev socket,id=char0,path=./vhost-net1 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
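The offload status check in the next step can also be scripted; a minimal sketch assuming `ethtool -k`-style output has been captured inside the guest (the two feature lines below are illustrative samples, not real command output):

```shell
# Illustrative excerpt from `ethtool -k <iface>` run inside the VM
features="tcp-segmentation-offload: on
udp-fragmentation-offload: on"

# Both features must report "on" for the capability check to pass
on_count=$(echo "$features" | grep -c ': on$')
[ "$on_count" -eq 2 ] && echo "offloads enabled"
```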
3. Check UFO and TSO offload status on for the Virtio-net driver on VM1 and VM2::
@@ -235,26 +268,33 @@ Test Case 5: VM2VM virtio-net split ring mergeable zero copy test with large pac
1. Launch the Vhost sample by below commands::
rm -rf vhost-net*
- ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' \
+ --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>start
2. Launch VM1 and VM2::
- qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
- -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on \
- -vnc :12 -daemonize
-
- qemu-system-x86_64 -name us-vhost-vm2 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
- -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on \
- -vnc :11 -daemonize
+ qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net0 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+ qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+ -chardev socket,id=char0,path=./vhost-net1 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
3. On VM1, set virtio device IP and run arp protocol::
@@ -276,26 +316,33 @@ Test Case 6: VM2VM virtio-net split ring non-mergeable zero copy test with large
1. Launch the Vhost sample by below commands::
rm -rf vhost-net*
- ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' \
+ --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>start
2. Launch VM1 and VM2::
- qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
- -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off \
- -vnc :12 -daemonize
-
- qemu-system-x86_64 -name us-vhost-vm2 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
- -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off \
- -vnc :11 -daemonize
+ qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net0 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+ qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+ -chardev socket,id=char0,path=./vhost-net1 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
3. On VM1, set virtio device IP and run arp protocol::
@@ -314,44 +361,51 @@ Test Case 6: VM2VM virtio-net split ring non-mergeable zero copy test with large
Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
==========================================================================
-1. Launch the Vhost sample by below commands::
+1. Launch the Vhost sample by below commands::
rm -rf vhost-net*
- ./testpmd -l 2-4 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024
+ ./testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
+ --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024
testpmd>start
2. Launch VM1 and VM2::
- qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
- -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
- -vnc :12 -daemonize
-
- qemu-system-x86_64 -name us-vhost-vm2 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
- -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
- -vnc :11 -daemonize
+ qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net0 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+
+ qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+ -chardev socket,id=char0,path=./vhost-net1 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
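At the device level, the only difference from the split ring cases is the `packed=on` property on `virtio-net-pci`; a launcher script can sanity-check for it before running the packed ring cases. A minimal sketch (the device string mirrors the `-device` argument above, abbreviated):

```shell
# Abbreviated -device argument for the packed ring guest NIC
dev="virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,packed=on"

# Split the comma-separated properties and look for an exact packed=on entry
echo "$dev" | tr ',' '\n' | grep -qx 'packed=on' && echo "packed ring requested"
```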
3. On VM1, set virtio device IP and run arp protocol::
- ifconfig ens3 1.1.1.2
+ ifconfig ens5 1.1.1.2
arp -s 1.1.1.8 52:54:00:00:00:02
4. On VM2, set virtio device IP and run arp protocol::
- ifconfig ens3 1.1.1.8
+ ifconfig ens5 1.1.1.8
arp -s 1.1.1.2 52:54:00:00:00:01
5. Check the iperf performance between two VMs by below commands::
Under VM1, run: `iperf -s -i 1`
- Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 30`
+ Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
6. Check that both VMs can receive and send big packets to each other::
@@ -359,34 +413,39 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
Port 0 should have tx packets above 1522
Port 1 should have rx packets above 1522
-7. Check iperf throughput can get expected data.
-
Test Case 8: VM2VM packed ring vhost-user/virtio-net dequeue zero-copy test with tcp traffic
============================================================================================
1. Launch the Vhost sample by below commands::
rm -rf vhost-net*
- ./testpmd -l 2-4 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024
+ ./testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' \
+ --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024
testpmd>start
2. Launch VM1 and VM2::
- qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
- -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
- -vnc :12 -daemonize
-
- qemu-system-x86_64 -name us-vhost-vm2 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
- -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
- -vnc :11 -daemonize
+ qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net0 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+
+ qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+ -chardev socket,id=char0,path=./vhost-net1 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
3. On VM1, set virtio device IP and run arp protocol::
@@ -401,7 +460,7 @@ Test Case 8: VM2VM packed ring vhost-user/virtio-net dequeue zero-copy test with
5. Check the iperf performance between two VMs by below commands::
Under VM1, run: `iperf -s -i 1`
- Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 30`
+ Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
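When automating this check, the summary line of the iperf client log can be parsed for the measured bandwidth. A sketch assuming classic iperf2 text output (the sample line and log filename are illustrative):

```shell
# Sketch: pull the bandwidth figure from the last line of an iperf2 client log.
# The sample line below is illustrative; in a real run use: tail -1 iperf_client.log
iperf_line='[  3]  0.0-60.0 sec  55.1 GBytes  7.89 Gbits/sec'
# The bandwidth value and its unit are the last two whitespace-separated fields.
bw=$(echo "$iperf_line" | awk '{print $(NF-1), $NF}')
echo "measured bandwidth: $bw"
```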
6. Check both 2VMs can receive and send big packets to each other::
@@ -409,32 +468,39 @@ Test Case 8: VM2VM packed ring vhost-user/virtio-net dequeue zero-copy test with
Port 0 should have tx packets above 1522
Port 1 should have rx packets above 1522
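The per-size counters can be read in the vhost testpmd session with ``show port xstats all``. A sketch of checking a saved dump (the counter name ``tx_size_1523_to_max_packets`` follows common testpmd xstats naming and is an assumption; actual names depend on the PMD):

```shell
# Sketch: verify the large-packet counter in a saved "show port xstats all" dump.
# The inlined dump is illustrative; counter names vary by PMD.
xstats_dump='tx_size_1523_to_max_packets: 88
rx_size_1523_to_max_packets: 90'
# Extract the tx counter value (the field after "name: ").
tx_big=$(echo "$xstats_dump" | awk -F': ' '/tx_size_1523_to_max_packets/ {print $2}')
if [ "$tx_big" -gt 0 ]; then
    echo "large tx packets seen: $tx_big"
else
    echo "no large tx packets"
fi
```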
-7. Check iperf throughput can get expected data.
-
Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
==========================================================================
1. Launch the Vhost sample by below commands::
rm -rf vhost-net*
- testpmd>./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
+ --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024
testpmd>start
2. Launch VM1 and VM2::
- qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
- -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 -daemonize
-
- qemu-system-x86_64 -name us-vhost-vm2 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
- -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :11 -daemonize
+ qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net0 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
+
+ qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+ -chardev socket,id=char0,path=./vhost-net1 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
3. On VM1, set virtio device IP and run arp protocol::
@@ -463,24 +529,33 @@ Test Case 10: Check packed ring virtio-net device capability
1. Launch the Vhost sample by below commands::
rm -rf vhost-net*
- testpmd>./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
+ --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024
testpmd>start
2. Launch VM1 and VM2, and set TSO and UFO on in the qemu command::
- qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
- -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 -daemonize
-
- qemu-system-x86_64 -name us-vhost-vm2 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
- -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :11 -daemonize
+ qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net0 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
+
+ qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+ -chardev socket,id=char0,path=./vhost-net1 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
3. Check UFO and TSO offload status is on for the Virtio-net driver on VM1 and VM2::
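Inside each VM the offload status can be read with ``ethtool -k <iface>``. A sketch of parsing that output (the feature labels match common kernel naming, and the inlined sample plus the example interface name ens3 stand in for a real VM interface):

```shell
# Sketch: parse "ethtool -k <iface>" output for TSO/UFO state.
# Sample output inlined; on a real VM run e.g.: ethtool -k ens3
ethtool_out='tcp-segmentation-offload: on
udp-fragmentation-offload: on'
# Print each feature's reported state.
for feat in tcp-segmentation-offload udp-fragmentation-offload; do
    state=$(echo "$ethtool_out" | awk -F': ' -v f="$feat" '$1 == f {print $2}')
    echo "$feat=$state"
done
```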
@@ -502,26 +577,33 @@ Test Case 11: VM2VM packed ring virtio-net mergeable dequeue zero copy test with
1. Launch the Vhost sample by below commands::
rm -rf vhost-net*
- ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' \
+ --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>start
2. Launch VM1 and VM2::
- qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
- -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,packed=on \
- -vnc :12 -daemonize
-
- qemu-system-x86_64 -name us-vhost-vm2 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
- -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on,packed=on \
- -vnc :11 -daemonize
+ qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net0 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+
+ qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+ -chardev socket,id=char0,path=./vhost-net1 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
3. On VM1, set virtio device IP and run arp protocol::
@@ -543,26 +625,33 @@ Test Case 12: VM2VM packed ring virtio-net non-mergeable dequeue zero copy test
1. Launch the Vhost sample by below commands::
rm -rf vhost-net*
- ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+ ./testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' \
+ --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>start
2. Launch VM1 and VM2::
- qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
- -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,packed=on \
- -vnc :12 -daemonize
-
- qemu-system-x86_64 -name us-vhost-vm2 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \
- -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \
- -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,packed=on \
- -vnc :11 -daemonize
+ qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net0 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+
+ qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+ -chardev socket,id=char0,path=./vhost-net1 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
3. On VM1, set virtio device IP and run arp protocol::
--
2.17.1