From: Yinan
To: dts@dpdk.org
Cc: Wang Yinan
Date: Sun, 16 Jun 2019 23:11:32 +0000
Message-Id: <20190616231132.78536-1-yinan.wang@intel.com>
X-Mailer: git-send-email 2.17.1
Subject: [dts] [PATCH] test_plans: remove duplicate test plan of virtio

From: Wang Yinan

Signed-off-by: Wang Yinan
---
 test_plans/pvp_qemu_zero_copy_test_plan.rst | 223 --------------------
 1 file changed, 223 deletions(-)
 delete mode 100644 test_plans/pvp_qemu_zero_copy_test_plan.rst

diff --git a/test_plans/pvp_qemu_zero_copy_test_plan.rst b/test_plans/pvp_qemu_zero_copy_test_plan.rst
deleted file mode 100644
index 5d03b08..0000000
--- a/test_plans/pvp_qemu_zero_copy_test_plan.rst
+++ /dev/null
@@ -1,223 +0,0 @@
.. Copyright (c) <2019>, Intel Corporation
   All rights reserved.

   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions
   are met:

   - Redistributions of source code must retain the above copyright
     notice, this list of conditions and the following disclaimer.

   - Redistributions in binary form must reproduce the above copyright
     notice, this list of conditions and the following disclaimer in
     the documentation and/or other materials provided with the
     distribution.

   - Neither the name of Intel Corporation nor the names of its
     contributors may be used to endorse or promote products derived
     from this software without specific prior written permission.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
   OF THE POSSIBILITY OF SUCH DAMAGE.
====================================
vhost/virtio pvp zero-copy test plan
====================================

Description
===========

Vhost dequeue zero-copy is a performance optimization for vhost, so these
test cases focus on performance checks. Since a 10G NIC can become the
bottleneck at the 1518B packet size, a 40G traffic generator and a 40G NIC
are used for the pvp zero-copy tests. Note that vhost zero-copy mbufs should
be consumed as soon as possible, so do not start sending packets on the
vhost side before the VM and virtio-pmd have been launched.

Test flow
=========

TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG

Test Case 1: pvp zero-copy test with different packet sizes
===========================================================

1. Bind one 40G port to igb_uio, then launch testpmd with the command below::

    rm -rf vhost-net*
    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
    --vdev 'eth_vhost0,iface=vhost-net,queues=1,dequeue-zero-copy=1' -- \
    -i --nb-cores=1 --txd=1024 --rxd=1024 --txfreet=992
    testpmd>set fwd mac

2. Launch the VM with the mrg_rxbuf feature on. Note that a QEMU version
   newer than 2.10 is required to support adjusting the rx_queue_size
   parameter::

    qemu-system-x86_64 -name vm1 \
    -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
    -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
    -vnc :10

3. In the VM, bind the virtio net device to igb_uio and run testpmd::

    ./dpdk-devbind.py --bind=igb_uio xx:xx.x
    ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

4. Start testpmd on the host side after the VM and virtio-pmd have been
   launched::

    testpmd>start

5. Send packets from the packet generator with different packet sizes
   (64, 128, 256, 512, 1024, 1518) and check the throughput with the command
   below; a traffic-generator sketch follows this test case::

    testpmd>show port stats all
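A note on traffic generation: the plan does not pin down a particular 40G
traffic generator. As a minimal, hypothetical sketch of the step 5 traffic
(assuming a Scapy-capable tester; the ``IFACE`` name and packet count below
are placeholders, not part of this plan), the frame-size sweep could be
driven as follows, where each size is the wire size including the 4-byte
FCS appended by the NIC::

    # Hypothetical Scapy sketch of the 64B..1518B sweep used in these cases.
    from scapy.all import IP, Ether, Raw, sendp

    FRAME_SIZES = [64, 128, 256, 512, 1024, 1518]  # wire sizes, incl. 4B FCS
    DST_MAC = "52:54:00:00:00:01"                  # virtio-net MAC from the plan
    IFACE = "ens785f0"                             # placeholder TG port name

    for size in FRAME_SIZES:
        payload = size - 14 - 20 - 4               # Ether + IPv4 headers + FCS
        pkt = Ether(dst=DST_MAC) / IP() / Raw(load="x" * payload)
        sendp(pkt, iface=IFACE, count=1000)        # rate control is TG-specific

A software generator like this only validates the datapath; measuring 40G
line-rate throughput still needs a hardware or DPDK-based traffic generator.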
Test Case 2: pvp zero-copy test with 2 queues
=============================================

1. Bind one 40G port to igb_uio, then launch testpmd with the command below::

    rm -rf vhost-net*
    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024 \
    --vdev 'eth_vhost0,iface=vhost-net,queues=2,dequeue-zero-copy=1' -- \
    -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 --txfreet=992
    testpmd>set fwd mac

2. Launch the VM with vectors=2*queue_num+2 and the mrg_rxbuf/mq features on.
   Note that a QEMU version newer than 2.10 is required to support adjusting
   the rx_queue_size parameter::

    qemu-system-x86_64 -name vm1 \
    -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
    -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \
    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=8,rx_queue_size=1024,tx_queue_size=1024 \
    -vnc :10

3. In the VM, bind the virtio net device to igb_uio and run testpmd::

    ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x
    ./testpmd -c 0x07 -n 4 -- -i \
    --rxq=2 --txq=2 --txd=1024 --rxd=1024 --nb-cores=2
    testpmd>set fwd mac
    testpmd>start

4. Start testpmd on the host side after the VM and virtio-pmd have been
   launched::

    testpmd>start

5. Send packets from the packet generator with different packet sizes
   (64, 128, 256, 512, 1024, 1518) and check the throughput with the command
   below::

    testpmd>show port stats all

6. Check each queue's RX/TX packet counts on the vhost side::

    testpmd>stop

Test Case 3: pvp zero-copy test with driver reload
==================================================

1. Bind one 40G port to igb_uio, then launch testpmd with the command below::

    rm -rf vhost-net*
    ./testpmd -l 1-5 -n 4 --socket-mem 1024,1024 \
    --vdev 'eth_vhost0,iface=vhost-net,queues=16,dequeue-zero-copy=1,client=1' -- \
    -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024 --txfreet=992
    testpmd>set fwd mac

2. Launch the VM with vectors=2*queue_num+2 and the mrg_rxbuf/mq features on.
   Note that a QEMU version newer than 2.10 is required to support adjusting
   the rx_queue_size parameter::

    qemu-system-x86_64 -name vm1 \
    -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
    -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net,server \
    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,rx_queue_size=1024,tx_queue_size=1024 \
    -vnc :10

3. In the VM, bind the virtio net device to igb_uio and run testpmd::

    ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x
    ./testpmd -l 0-4 -n 4 --socket-mem 1024,0 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
    testpmd>set fwd rxonly
    testpmd>start

4. Start testpmd on the host side after the VM has been launched::

    testpmd>start

5. Send packets from the packet generator with different packet sizes
   (64, 128, 256, 512, 1024, 1518) and check the throughput with the command
   below::

    testpmd>show port stats all

6. Relaunch testpmd on the virtio side in the VM to reload the driver::

    testpmd>quit
    ./testpmd -l 0-4 -n 4 --socket-mem 1024,0 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

7. Send packets from the packet generator with different packet sizes
   (64, 128, 256, 512, 1024, 1518) and check the throughput with the command
   below::

    testpmd>show port stats all

8. Check each queue's RX/TX packet counts on the vhost side (a parsing
   sketch follows this test case)::

    testpmd>stop
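The per-queue check in step 8 is normally done by reading the forward
statistics that ``testpmd> stop`` prints. As a rough, hypothetical helper
(the exact output format can vary between DPDK releases, so the pattern
below is an assumption), the per-queue counters could be scanned like this::

    # Hypothetical helper: list the queues that actually forwarded packets,
    # given the text printed by "testpmd> stop" (output format may vary).
    import re

    def active_queues(stop_output):
        # Pair each "RX Port= 0/Queue= N" header with the RX-packets counter
        # printed underneath it, keeping queues with a non-zero count.
        pairs = re.findall(r"Queue=\s*(\d+).*?RX-packets:\s*(\d+)",
                           stop_output, re.S)
        return sorted({int(q) for q, rx in pairs if int(rx) > 0})

For this test case all 16 queues are expected to show up in the result; a
queue stuck at zero usually points at a queue that did not recover after
the driver reload.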
Test Case 4: pvp zero-copy test with maximum txfreet
====================================================

1. Bind one 40G port to igb_uio, then launch testpmd with the command below::

    rm -rf vhost-net*
    ./testpmd -l 1-5 -n 4 --socket-mem 1024,1024 \
    --vdev 'eth_vhost0,iface=vhost-net,queues=16,dequeue-zero-copy=1,client=1' -- \
    -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024 --txfreet=1020 --txrst=4
    testpmd>set fwd mac

2. Launch the VM with vectors=2*queue_num+2 and the mrg_rxbuf/mq features on.
   Note that a QEMU version newer than 2.10 is required to support adjusting
   the rx_queue_size parameter::

    qemu-system-x86_64 -name vm1 \
    -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
    -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
    -chardev socket,id=char0,path=./vhost-net,server \
    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,rx_queue_size=1024,tx_queue_size=1024 \
    -vnc :10

3. In the VM, bind the virtio net device to igb_uio and run testpmd::

    ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x
    ./testpmd -l 0-4 -n 4 --socket-mem 1024,0 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
    testpmd>set fwd mac
    testpmd>start

4. Start testpmd on the host side after the VM has been launched::

    testpmd>start

5. Send packets from the packet generator with different packet sizes
   (64, 128, 256, 512, 1024, 1518) and check the throughput with the command
   below::

    testpmd>show port stats all

6. Check each queue's RX/TX packet counts on the vhost side::

    testpmd>stop
--
2.17.1