From: Yinan Wang
To: dts@dpdk.org
Cc: Yinan Wang
Date: Tue, 23 Feb 2021 10:26:12 -0500
Message-Id: <20210223152612.287817-1-yinan.wang@intel.com>
X-Mailer: git-send-email 2.25.1
Subject: [dts] [PATCH v1] test_plans/vm2vm_virtio_net_perf_test_plan.rst

Add packed ring cbdma test cases.

Signed-off-by: Yinan Wang
---
 .../vm2vm_virtio_net_perf_test_plan.rst | 228 +++++++++++++-----
 1 file changed, 173 insertions(+), 55 deletions(-)

diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
index f0107746..4085351a 100644
--- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
@@ -1,4 +1,4 @@
-.. Copyright (c) <2020>, Intel Corporation
+.. Copyright (c) <2021>, Intel Corporation
    All rights reserved.

    Redistribution and use in source and binary forms, with or without
@@ -42,8 +42,7 @@
This test plan test several features in VM2VM topo:
in the UDP/IP stack with vm2vm split ring and packed ring vhost-user/virtio-net mergeable path.
2. Check the payload of large packet (larger than 1MB) is valid after forwarding packets with vm2vm split ring
and packed ring vhost-user/virtio-net mergeable and non-mergeable path.
-3. Multi-queues number dynamic change in vm2vm vhost-user/virtio-net with split ring and packed ring.
-4. Check function and performance of split ring enqueue operation with multi-CBDMA channels.
+3. Multi-queues number dynamic change in vm2vm vhost-user/virtio-net with split ring and packed ring when vhost enqueue operation uses multi-CBDMA channels.

Note: For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > v5.1.
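For the large-packet payload validity checks in this plan, one minimal way to generate the 1MB test file and confirm it arrives intact is sketched below (the /tmp/payload path is only a placeholder; 1.1.1.8 is the VM2 address the plan itself uses)::

    # On VM1: create a 1MB file with random content and record its checksum
    dd if=/dev/urandom of=/tmp/payload bs=1M count=1
    md5sum /tmp/payload

    # Copy the file to VM2 over the virtio-net link, as the scp steps below do
    scp /tmp/payload root@1.1.1.8:/tmp/payload

    # On VM2: recompute the checksum; it must match the value recorded on VM1
    md5sum /tmp/payload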
Test flow
@@ -65,7 +64,7 @@ Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -76,7 +75,7 @@ Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
    taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -120,7 +119,7 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t
    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -131,7 +130,7 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t
    taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -161,7 +160,7 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t
    Port 0 should have tx packets above 1522
    Port 1 should have rx packets above 1522
-7. Check throughput and compare with case1, case2 performance should larger than case1.
+7. Check throughput and compare with case1, CBDMA enable performance should be larger than w/o CBDMA performance when crossing sockets.
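The CBDMA-enabled cases in this plan hand DMA channels to vhost through the dmas list (for example txq0@00:04.0). Those CBDMA channels are assumed to be bound to a DPDK-compatible driver on the host before the vhost testpmd is launched; a minimal sketch with dpdk-devbind.py is shown below (vfio-pci and the 00:04.x addresses are examples for this setup, adjust to the platform under test)::

    # Load vfio-pci and check which driver each CBDMA channel is currently using
    modprobe vfio-pci
    ./usertools/dpdk-devbind.py --status

    # Bind the CBDMA channels referenced in the dmas=[...] lists to vfio-pci
    ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 00:04.1 00:04.2 00:04.3 \
                                            00:04.4 00:04.5 00:04.6 00:04.7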
Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic
=========================================================================
@@ -177,7 +176,7 @@ Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic
    qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -188,7 +187,7 @@ Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic
    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -232,7 +231,7 @@ Test Case 4: Check split ring virtio-net device capability
    qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -243,7 +242,7 @@ Test Case 4: Check split ring virtio-net device capability
    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -280,7 +279,7 @@ Test Case 5: VM2VM virtio-net split ring mergeable 8 queues CBDMA enable test wi
    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -291,7 +290,7 @@ Test Case 5: VM2VM virtio-net split ring mergeable 8 queues CBDMA enable test wi
    taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -342,14 +341,22 @@ Test Case 5: VM2VM virtio-net split ring mergeable 8 queues CBDMA enable test wi
    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
    testpmd>start
-11. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::
+11. On VM1, set virtio device::
+
+    ethtool -L ens5 combined 1
+
+12. On VM2, set virtio device::
+
+    ethtool -L ens5 combined 1
+
+13. Scp 1MB file from VM1 to VM2, check packets can be forwarded successfully by scp::
    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
-12. Check the iperf performance, ensure queue0 can work from vhost side::
+14. Check the iperf performance, ensure queue0 can work from vhost side::
-    Under VM1, run: `taskset -c 0 iperf -s -i 1`
-    Under VM2, run: `taskset -c 0 iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`

Test Case 6: VM2VM virtio-net split ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check
========================================================================================================================
@@ -365,7 +372,7 @@ Test Case 6: VM2VM virtio-net split ring non-mergeable 8 queues CBDMA enable tes
    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -376,7 +383,7 @@ Test Case 6: VM2VM virtio-net split ring non-mergeable 8 queues CBDMA enable tes
    taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -427,14 +434,22 @@ Test Case 6: VM2VM virtio-net split ring non-mergeable 8 queues CBDMA enable tes
    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
    testpmd>start
-11. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::
+11. On VM1, set virtio device::
+
+    ethtool -L ens5 combined 1
+
+12. On VM2, set virtio device::
+
+    ethtool -L ens5 combined 1
+
+13. Scp 1MB file from VM1 to VM2, check packets can be forwarded successfully by scp::
    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
-12. Check the iperf performance, ensure queue0 can work from vhost side::
+14. Check the iperf performance, ensure queue0 can work from vhost side::
-    Under VM1, run: `taskset -c 0 iperf -s -i 1`
-    Under VM2, run: `taskset -c 0 iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`

Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
==========================================================================
@@ -450,7 +465,7 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
    qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -461,7 +476,62 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
+   -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+   -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+   -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+   -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+   -chardev socket,id=char0,path=./vhost-net1 \
+   -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+   -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+
+3. On VM1, set virtio device IP and run arp protocol::
+
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
+
+4. On VM2, set virtio device IP and run arp protocol::
+
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
+
+5. Check the iperf performance between two VMs by below commands::
+
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+
+6. Check 2VMs can receive and send big packets to each other::
+
+    testpmd>show port xstats all
+    Port 0 should have tx packets above 1522
+    Port 1 should have rx packets above 1522
+
+Test Case 8: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic
+=======================================================================================
+
+1. Launch the Vhost sample by below commands::
+
+    rm -rf vhost-net*
+    ./dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@00:04.0],dmathr=512' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@00:04.1],dmathr=512' -- -i --nb-cores=2 --txd=1024 --rxd=1024
+    testpmd>start
+
+2. Launch VM1 and VM2 on socket 1::
+
+    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+
+    taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -491,7 +561,9 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
    Port 0 should have tx packets above 1522
    Port 1 should have rx packets above 1522
-Test Case 8: VM2VM packed ring vhost-user/virtio-net test with udp traffic
+7. Check throughput and compare with case6, CBDMA enable performance should be larger than w/o CBDMA performance when crossing sockets.
+
+Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
==========================================================================
1. Launch the Vhost sample by below commands::
@@ -516,7 +588,7 @@ Test Case 8: VM2VM packed ring vhost-user/virtio-net test with udp traffic
    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -546,7 +618,7 @@ Test Case 8: VM2VM packed ring vhost-user/virtio-net test with udp traffic
    Port 0 should have tx packets above 1522
    Port 1 should have rx packets above 1522
-Test Case 9: Check packed ring virtio-net device capability
+Test Case 10: Check packed ring virtio-net device capability
 ============================================================
1. Launch the Vhost sample by below commands::
@@ -560,7 +632,7 @@ Test Case 9: Check packed ring virtio-net device capability
    qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -571,7 +643,7 @@ Test Case 9: Check packed ring virtio-net device capability
    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -594,21 +666,21 @@ Test Case 9: Check packed ring virtio-net device capability
    tx-tcp-ecn-segmentation: on
    tx-tcp6-segmentation: on
-Test Case 10: VM2VM virtio-net packed ring mergeable dynamic queues test with large packet payload valid check
-==============================================================================================================
+Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test with large packet payload valid check
+=====================================================================================================================
 1. Launch the Vhost sample by below commands::
     rm -rf vhost-net*
-    ./dpdk-testpmd -l 1-5 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>start
 2. Launch VM1 and VM2 using qemu3.0::
    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -619,7 +691,7 @@ Test Case 10: VM2VM virtio-net packed ring mergeable dynamic queues test with la
    taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -649,36 +721,59 @@ Test Case 10: VM2VM virtio-net packed ring mergeable dynamic queues test with la
    Under VM1, run: `iperf -s -i 1`
    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-7. Quit vhost ports and relaunch vhost ports with 1 queues::
+7. Quit vhost ports and relaunch vhost ports w/o CBDMA channels::
-    ./dpdk-testpmd -l 1-5 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>start
 8. Scp 1MB file form VM1 to VM2::
    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
-9. Check the iperf performance, ensure queue0 can work from vhost side::
+9. Check the iperf performance and compare with CBDMA enable performance, ensure CBDMA enable performance is higher::
+
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+
+10. Quit vhost ports and relaunch vhost ports with 1 queues::
+
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
+    testpmd>start
+
+11. On VM1, set virtio device::
+
+    ethtool -L ens5 combined 1
+
+12. On VM2, set virtio device::
+
+    ethtool -L ens5 combined 1
+
+13. Scp 1MB file from VM1 to VM2, check packets can be forwarded successfully by scp::
-    Under VM1, run: `taskset -c 0 iperf -s -i 1`
-    Under VM2, run: `taskset -c 0 iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+
+14. Check the iperf performance, ensure queue0 can work from vhost side::
    Under VM1, run: `iperf -s -i 1`
    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-Test Case 11: VM2VM virtio-net packed ring non-mergeable dynamic queues test with large packet payload valid check
-===================================================================================================================
+Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check
+=========================================================================================================================
1. Launch the Vhost sample by below commands::
    rm -rf vhost-net*
-    ./dpdk-testpmd -l 1-5 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
    testpmd>start
2. Launch VM1 and VM2 using qemu3.0::
    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -689,7 +784,7 @@ Test Case 11: VM2VM virtio-net packed ring non-mergeable dynamic queues test wit
    taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-2.img \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -719,17 +814,40 @@ Test Case 11: VM2VM virtio-net packed ring non-mergeable dynamic queues test wit
    Under VM1, run: `iperf -s -i 1`
    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-7. Quit vhost ports and relaunch vhost ports with 1 queues::
+7. Quit vhost ports and relaunch vhost ports w/o CBDMA channels::
-    ./dpdk-testpmd -l 1-5 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
    testpmd>start
8. Scp 1MB file form VM1 to VM2::
    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
-9. Check the iperf performance, ensure queue0 can work from vhost side::
+9. Check the iperf performance and compare with CBDMA enable performance, ensure CBDMA enable performance is higher::
+
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+
+10. Quit vhost ports and relaunch vhost ports with 1 queues::
+
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
+    testpmd>start
+
+11. On VM1, set virtio device::
+
+    ethtool -L ens5 combined 1
+
+12. On VM2, set virtio device::
+
+    ethtool -L ens5 combined 1
+
+13. Scp 1MB file from VM1 to VM2, check packets can be forwarded successfully by scp::
+
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+
+14. Check the iperf performance, ensure queue0 can work from vhost side::
-    Under VM1, run: `taskset -c 0 iperf -s -i 1`
-    Under VM2, run: `taskset -c 0 iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
\ No newline at end of file
-- 
2.25.1