From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 269A6A04F3; Fri, 20 Dec 2019 03:46:13 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 026671BDFD; Fri, 20 Dec 2019 03:46:13 +0100 (CET) Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by dpdk.org (Postfix) with ESMTP id B76141B9BF for ; Fri, 20 Dec 2019 03:46:10 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga101.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 19 Dec 2019 18:46:10 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.69,334,1571727600"; d="scan'208";a="222406504" Received: from npg-dpdk-project-yinanwang-1.sh.intel.com ([10.67.117.55]) by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2019 18:46:08 -0800 From: Yinan To: dts@dpdk.org Cc: Wang Yinan Date: Thu, 19 Dec 2019 19:41:30 +0000 Message-Id: <20191219194130.77500-1-yinan.wang@intel.com> X-Mailer: git-send-email 2.17.1 Subject: [dts] [PATCH v1] test_plans: add packed virtqueue test in vm2vm topology X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Sender: "dts" From: Wang Yinan Add packed virtqueue test in vm2vm_virtio_net_perf_test_plan.rst Signed-off-by: Wang Yinan --- .../vm2vm_virtio_net_perf_test_plan.rst | 285 +++++++++++++++--- 1 file changed, 242 insertions(+), 43 deletions(-) diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst index 96f0cc4..0fe8400 100644 --- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst +++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst @@ -8,7 +8,7 @@ - Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. - - Redistributions in binary form must reproduce the above copyright + - Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. @@ -30,7 +30,6 @@ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - ===================================== vm2vm vhost-user/virtio-net test plan ===================================== @@ -38,15 +37,15 @@ vm2vm vhost-user/virtio-net test plan Description =========== -The feature enabled the DPDK Vhost TX offload (TSO and UFO). The feature added the negotiation between DPDK user space vhost and virtio-net, so we will verify the TSO/cksum in the TCP/IP stack enabled environment and UFO/cksum in the UDP/IP stack enabled environment with vm2vm vhost-user/virtio-net normal path. Also add case to check the payload of large packet is valid by scp with vm2vm vhost-user/virtio-net mergeable path test. +This test plan tests the vhost tx offload (TSO and UFO) function by verifying the TSO/cksum in the TCP/IP stack enabled environment and UFO/cksum in the UDP/IP stack enabled environment with vm2vm split ring and packed ring vhost-user/virtio-net non-mergeable paths. Also add cases to check that the payload of large packets is valid with vm2vm split ring and packed ring vhost-user/virtio-net mergeable and non-mergeable dequeue zero copy tests. 
For packed virtqueue tests, qemu version > 4.2.0 is required. Test flow ========= Virtio-net <-> Vhost <-> Testpmd <-> Vhost <-> Virtio-net -Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic -============================================================== +Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic +========================================================================= 1. Launch the Vhost sample by below commands:: @@ -56,7 +55,6 @@ Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic 2. Launch VM1 and VM2:: - taskset -c 32-33 \ qemu-system-x86_64 -name us-vhost-vm1 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \ @@ -65,7 +63,6 @@ Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \ -vnc :12 -daemonize - taskset -c 34-35 \ qemu-system-x86_64 -name us-vhost-vm2 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \ @@ -74,12 +71,12 @@ Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \ -vnc :11 -daemonize -3. On VM1, set virtio device IP and run arp protocol:: +3. On VM1, set virtio device IP and run arp protocol:: ifconfig ens3 1.1.1.2 arp -s 1.1.1.8 52:54:00:00:00:02 -4. On VM2, set virtio device IP and run arp protocol:: +4. On VM2, set virtio device IP and run arp protocol:: ifconfig ens3 1.1.1.8 arp -s 1.1.1.2 52:54:00:00:00:01 @@ -95,41 +92,206 @@ Test Case 1: VM2VM vhost-user/virtio-net test with tcp traffic Port 0 should have tx packets above 1522 Port 1 should have rx packets above 1522 -Test Case 2: VM2VM vhost-user/virtio-net zero-copy test with tcp traffic -======================================================================== +Test Case 2: VM2VM split ring vhost-user/virtio-net test with udp traffic +========================================================================= 1. Launch the Vhost sample by below commands:: rm -rf vhost-net* - testpmd>./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txfreet=992 + testpmd>./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 2. 
Launch VM1 and VM2:: - taskset -c 32-33 \ qemu-system-x86_64 -name us-vhost-vm1 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \ -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ - -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \ - -vnc :10 -daemonize + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize - taskset -c 34-35 \ qemu-system-x86_64 -name us-vhost-vm2 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \ -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \ - -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \ + -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize + +3. On VM1, set virtio device IP and run arp protocol:: + + ifconfig ens3 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 + +4. On VM2, set virtio device IP and run arp protocol:: + + ifconfig ens3 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 + +5. Check the iperf performance between two VMs by below commands:: + + Under VM1, run: `iperf -s -u -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 30 -P 4 -u -b 1G -l 9000` + +6. Check both 2VMs can receive and send big packets to each other:: + + testpmd>show port xstats all + Port 0 should have tx packets above 1522 + Port 1 should have rx packets above 1522 + +Test Case 3: Check split ring virtio-net device capability +========================================================== + +1. Launch the Vhost sample by below commands:: + + rm -rf vhost-net* + testpmd>./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + testpmd>start + +2. 
Launch VM1 and VM2, set TSO and UFO on in qemu command:: + + qemu-system-x86_64 -name us-vhost-vm1 \ + -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ + -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \ + -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize + + qemu-system-x86_64 -name us-vhost-vm2 \ + -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ + -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \ + -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \ + -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize + +3. Check UFO and TSO offload status on for the Virtio-net driver on VM1 and VM2:: + + Under VM1, run: `ethtool -k ens3` + udp-fragmentation-offload: on + tx-tcp-segmentation: on + tx-tcp-ecn-segmentation: on + tx-tcp6-segmentation: on + + Under VM2, run: `ethtool -k ens3` + udp-fragmentation-offload: on + tx-tcp-segmentation: on + tx-tcp-ecn-segmentation: on + tx-tcp6-segmentation: on + +Test Case 4: VM2VM virtio-net split ring mergeable zero copy test with large packet payload valid check +======================================================================================================== + +1. Launch the Vhost sample by below commands:: + + rm -rf vhost-net* + ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + testpmd>start + +2. 
Launch VM1 and VM2:: + + qemu-system-x86_64 -name us-vhost-vm1 \ + -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ + -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \ + -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on \ + -vnc :12 -daemonize + + qemu-system-x86_64 -name us-vhost-vm2 \ + -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ + -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \ + -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \ + -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on \ -vnc :11 -daemonize -3. On VM1, set virtio device IP and run arp protocol:: +3. On VM1, set virtio device IP and run arp protocol:: ifconfig ens3 1.1.1.2 arp -s 1.1.1.8 52:54:00:00:00:02 -4. On VM2, set virtio device IP and run arp protocol:: +4. On VM2, set virtio device IP and run arp protocol:: + + ifconfig ens3 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 + +5. Scp 64KB file from VM1 to VM2:: + + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + +Test Case 5: VM2VM virtio-net split ring non-mergeable zero copy test with large packet payload valid check +============================================================================================================ + +1. Launch the Vhost sample by below commands:: + + rm -rf vhost-net* + ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + testpmd>start + +2. Launch VM1 and VM2:: + + qemu-system-x86_64 -name us-vhost-vm1 \ + -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ + -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \ + -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off \ + -vnc :12 -daemonize + + qemu-system-x86_64 -name us-vhost-vm2 \ + -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ + -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \ + -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \ + -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off \ + -vnc :11 -daemonize + +3. 
On VM1, set virtio device IP and run arp protocol:: + + ifconfig ens3 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 + +4. On VM2, set virtio device IP and run arp protocol:: + + ifconfig ens3 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 + +5. Scp 64KB file from VM1 to VM2:: + + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + +Test Case 6: VM2VM packed ring vhost-user/virtio-net test with tcp traffic +========================================================================== + +1. Launch the Vhost sample by below commands:: + + rm -rf vhost-net* + ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + testpmd>start + +2. Launch VM1 and VM2:: + + qemu-system-x86_64 -name us-vhost-vm1 \ + -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ + -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \ + -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \ + -vnc :12 -daemonize + + qemu-system-x86_64 -name us-vhost-vm2 \ + -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ + -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \ + -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \ + -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \ + -vnc :11 -daemonize + +3. On VM1, set virtio device IP and run arp protocol:: + + ifconfig ens3 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 + +4. On VM2, set virtio device IP and run arp protocol:: ifconfig ens3 1.1.1.8 arp -s 1.1.1.2 52:54:00:00:00:01 @@ -139,14 +301,15 @@ Test Case 2: VM2VM vhost-user/virtio-net zero-copy test with tcp traffic Under VM1, run: `iperf -s -i 1` Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 30` -6. Check both 2VM can receive and send big packets to each other:: +6. Check both 2VMs can receive and send big packets to each other:: testpmd>show port xstats all Port 0 should have tx packets above 1522 Port 1 should have rx packets above 1522 -Test Case 3: VM2VM vhost-user/virtio-net test with udp traffic -============================================================== +Test Case 7: VM2VM packed ring vhost-user/virtio-net test with udp traffic +========================================================================== + 1. Launch the Vhost sample by below commands:: rm -rf vhost-net* @@ -155,28 +318,26 @@ Test Case 3: VM2VM vhost-user/virtio-net test with udp traffic 2. 
Launch VM1 and VM2:: - taskset -c 32-33 \ qemu-system-x86_64 -name us-vhost-vm1 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \ -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ - -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 -daemonize - taskset -c 34-35 \ qemu-system-x86_64 -name us-vhost-vm2 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \ -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \ - -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize + -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :11 -daemonize -3. On VM1, set virtio device IP and run arp protocol:: +3. On VM1, set virtio device IP and run arp protocol:: ifconfig ens3 1.1.1.2 arp -s 1.1.1.8 52:54:00:00:00:02 -4. On VM2, set virtio device IP and run arp protocol:: +4. On VM2, set virtio device IP and run arp protocol:: ifconfig ens3 1.1.1.8 arp -s 1.1.1.2 52:54:00:00:00:01 @@ -192,9 +353,8 @@ Test Case 3: VM2VM vhost-user/virtio-net test with udp traffic Port 0 should have tx packets above 1522 Port 1 should have rx packets above 1522 - -Test Case 4: Check virtio-net device capability -=============================================== +Test Case 8: Check packed ring virtio-net device capability +=========================================================== 1. Launch the Vhost sample by below commands:: @@ -204,21 +364,19 @@ Test Case 4: Check virtio-net device capability 2. 
Launch VM1 and VM2,set TSO and UFO on in qemu command:: - taskset -c 32-33 \ qemu-system-x86_64 -name us-vhost-vm1 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \ -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ - -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 -daemonize + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 -daemonize - taskset -c 34-35 \ qemu-system-x86_64 -name us-vhost-vm2 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \ -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \ - -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :11 -daemonize + -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :11 -daemonize 3. Check UFO and TSO offload status on for the Virtio-net driver on VM1 and VM2:: @@ -234,13 +392,13 @@ Test Case 4: Check virtio-net device capability tx-tcp-ecn-segmentation: on tx-tcp6-segmentation: on -Test Case 5: VM2VM vhost-user/virtio-net test with large packet payload valid check -=================================================================================== +Test Case 9: VM2VM packed ring virtio-net mergeable dequeue zero copy test with large packet payload valid check +================================================================================================================ 1. Launch the Vhost sample by below commands:: rm -rf vhost-net* - ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 2. 
Launch VM1 and VM2:: @@ -250,7 +408,7 @@ Test Case 5: VM2VM vhost-user/virtio-net test with large packet payload valid ch -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \ -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ - -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,packed=on \ -vnc :12 -daemonize qemu-system-x86_64 -name us-vhost-vm2 \ @@ -258,19 +416,60 @@ Test Case 5: VM2VM vhost-user/virtio-net test with large packet payload valid ch -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \ -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \ - -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on \ + -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on,packed=on \ + -vnc :11 -daemonize + +3. On VM1, set virtio device IP and run arp protocol:: + + ifconfig ens3 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 + +4. On VM2, set virtio device IP and run arp protocol:: + + ifconfig ens3 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 + +5. Scp 64KB file from VM1 to VM2:: + + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + +Test Case 10: VM2VM packed ring virtio-net non-mergeable dequeue zero copy test with large packet payload valid check +===================================================================================================================== + +1. Launch the Vhost sample by below commands:: + + rm -rf vhost-net* + ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dequeue-zero-copy=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1,dequeue-zero-copy=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + testpmd>start + +2. Launch VM1 and VM2:: + + qemu-system-x86_64 -name us-vhost-vm1 \ + -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ + -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-1.img \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \ + -chardev socket,id=char0,path=./vhost-net0 -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,packed=on \ + -vnc :12 -daemonize + + qemu-system-x86_64 -name us-vhost-vm2 \ + -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ + -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16-2.img \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \ + -chardev socket,id=char1,path=./vhost-net1 -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \ + -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=off,packed=on \ -vnc :11 -daemonize -3. 
On VM1, set virtio device IP and run arp protocol:: ifconfig ens3 1.1.1.2 arp -s 1.1.1.8 52:54:00:00:00:02 -4. On VM2, set virtio device IP and run arp protocol:: +4. On VM2, set virtio device IP and run arp protocol:: ifconfig ens3 1.1.1.8 arp -s 1.1.1.2 52:54:00:00:00:01 5. Scp 64KB file from VM1 to VM2:: - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name \ No newline at end of file + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name -- 2.17.1
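Supplementary sketch for the "large packet payload valid check" steps above (not part of the patch itself): one way to confirm that the 64KB file arrives intact over the vhost-user/virtio-net path is to compare checksums on both VMs. The file name /tmp/payload_64k and the use of dd, md5sum and ssh below are illustrative assumptions, not steps required by the test plan::

    # On VM1: create a 64KB file with random content (hypothetical file name)
    dd if=/dev/urandom of=/tmp/payload_64k bs=1K count=64
    md5sum /tmp/payload_64k
    # Copy it to VM2 across the vhost-user/virtio-net path under test (VM2 IP as configured above)
    scp /tmp/payload_64k root@1.1.1.8:/tmp/
    # Recompute the checksum on VM2 (here via ssh from VM1); it must match the value printed on VM1
    ssh root@1.1.1.8 md5sum /tmp/payload_64k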