* [dts] [PATCH] add 3 test plans for dpdk16.04
@ 2016-03-16 1:38 Qian Xu
2016-03-18 6:59 ` Liu, Yong
0 siblings, 1 reply; 3+ messages in thread
From: Qian Xu @ 2016-03-16 1:38 UTC (permalink / raw)
To: dts; +Cc: Qian Xu
The 3 test plans have been reviewed with developers.
Signed-off-by: Qian Xu <qian.q.xu@intel.com>
diff --git a/test_plans/veb_switch_test_plan.rst b/test_plans/veb_switch_test_plan.rst
new file mode 100644
index 0000000..4629c21
--- /dev/null
+++ b/test_plans/veb_switch_test_plan.rst
@@ -0,0 +1,268 @@
+.. Copyright (c) <2016>, Intel Corporation
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ - Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+
+ - Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+
+ - Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ OF THE POSSIBILITY OF SUCH DAMAGE.
+
+=====================================
+VEB Switch and floating VEB Test Plan
+=====================================
+
+VEB Switching Introduction
+==========================
+
+IEEE EVB tutorial: http://www.ieee802.org/802_tutorials/2009-11/evb-tutorial-draft-20091116_v09.pdf
+
+Virtual Ethernet Bridge (VEB) - This is an IEEE EVB term. A VEB is a VLAN Bridge internal to Fortville that bridges the traffic of multiple VSIs over an internal virtual network.
+
+Virtual Ethernet Port Aggregator (VEPA) - This is an IEEE EVB term. A VEPA multiplexes the traffic of one or more VSIs onto a single Fortville Ethernet port. The biggest difference between a VEB and a VEPA is that a VEB can switch packets internally between VSIs, whereas a VEPA cannot.
+
+Virtual Station Interface (VSI) - This is an IEEE EVB term that defines the properties of a virtual machine's (or a physical machine's) connection to the network. Each downstream v-port on a Fortville VEB or VEPA defines a VSI. A standards-based definition of VSI properties enables network management tools to perform virtual machine migration and associated network re-configuration in a vendor-neutral manner.
+
+In short, a VEB is an in-NIC switch (MAC/VLAN based), and it can support VF->VF, PF->VF and VF->PF packet forwarding through the NIC internal switch. It is similar to Niantic's SR-IOV switch.
+
+Floating VEB Introduction
+=========================
+
+Floating VEB is based on VEB Switching. It addresses two problems:
+
+Dependency on the PF: when the physical port link is down, the VEB/VEPA does not work normally. Even if only data forwarding between the VFs is required, one PF port is wasted to create the related VEB.
+
+Traffic isolation: ensure that all the traffic from a VF can only be forwarded within the VFs connected to the floating VEB, and cannot be forwarded out of the NIC port.
+
+Prerequisites for VEB testing
+=============================
+
+1. Get the pci device id of DUT, for example::
+
+ ./dpdk_nic_bind.py --st
+
+ 0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
+
+2.1 Host PF in kernel driver. Create 2 VFs from 1 PF with kernel driver, and set the VF MAC address at PF0::
+
+ echo 2 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
+ ./dpdk_nic_bind.py --st
+
+ 0000:81:02.0 'XL710/X710 Virtual Function' unused=
+ 0000:81:02.1 'XL710/X710 Virtual Function' unused=
+
+ ip link set ens259f0 vf 0 mac 00:11:22:33:44:11
+ ip link set ens259f0 vf 1 mac 00:11:22:33:44:12
+
+2.2 Host PF in DPDK driver. Create 2 VFs from 1 PF with the DPDK driver::
+
+ ./dpdk_nic_bind.py -b igb_uio 81:00.0
+ echo 2 >/sys/bus/pci/devices/0000:81:00.0/max_vfs
+ ./dpdk_nic_bind.py --st
+
+3. Detach VFs from the host, bind them to pci-stub driver::
+
+ modprobe pci-stub
+
+ Use `lspci -nn|grep -i ethernet` to get the VF device id, for example "8086 154c", then:
+
+ echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
+ echo 0000:81:02.0 > /sys/bus/pci/devices/0000:81:02.0/driver/unbind
+ echo 0000:81:02.0 > /sys/bus/pci/drivers/pci-stub/bind
+
+ echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
+ echo 0000:81:02.1 > /sys/bus/pci/devices/0000:81:02.1/driver/unbind
+ echo 0000:81:02.1 > /sys/bus/pci/drivers/pci-stub/bind
+
+4. Launch the VM with the VF PCI passthrough::
+
+ taskset -c 18-19 qemu-system-x86_64 \
+ -mem-path /mnt/huge -mem-prealloc \
+ -enable-kvm -m 2048 -smp cores=2,sockets=1 -cpu host -name dpdk1-vm1 \
+ -device pci-assign,host=81:02.0 \
+ -drive file=/home/img/vm1.img \
+ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:11:01 \
+ -localtime -vnc :22 -daemonize
+
+
+Test Case1: VEB Switching Inter-VM VF-VF MAC switch
+===================================================
+
+Summary: Use the kernel PF, create 2 VFs and 2 VMs, and assign one VF to each VM (VF1 in VM1, VF2 in VM2). The VFs in the VMs run DPDK testpmd. Send traffic to VF1 with the packet's destination MAC set to VF2's MAC address, and check whether VF2 receives the packets. This checks the inter-VM VF-VF MAC switch.
+
+Details:
+
+1. Start VM1 with VF1, VM2 with VF2, see the prerequisite part.
+2. In VM1, run testpmd::
+
+ ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,00:11:22:33:44:12
+ testpmd>set fwd mac
+ testpmd>set promisc all off
+ testpmd>start
+
+ In VM2, run testpmd::
+
+ ./testpmd -c 0x3 -n 4 -- -i
+ testpmd>set fwd mac
+ testpmd>set promisc all off
+ testpmd>start
+
+
+3. Send 100 packets to VF1's MAC address, check if VF2 can get 100 packets. Check the packet content is not corrupted.
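+
+The 100 packets can be generated, for example, from a Scapy session on the traffic generator (the interface name and payload below are only examples)::
+
+ sendp([Ether(dst="00:11:22:33:44:11")/IP()/UDP()/Raw('x'*60)]*100, iface="p785p1")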
+
+Test Case2: VEB Switching Inter-VM VF-VF MAC/VLAN switch
+========================================================
+
+Summary: Use the kernel PF, create 2 VFs and 2 VMs, and assign VF1 with VLAN=1 to VM1 and VF2 with VLAN=2 to VM2. The VFs in the VMs run DPDK testpmd. Send traffic to VF1 with VLAN=1 and let it forward to VF2; this should not work since the VFs are not in the same VLAN. Then set VF2 to VLAN=1, send traffic to VF1 with VLAN=1 again, and VF2 should receive the packets. This checks the inter-VM VF-VF MAC/VLAN switch.
+
+Details:
+
+1. Start VM1 with VF1, VM2 with VF2, see the prerequisite part.
+
+2. Set the VLAN id of VF1 and VF2::
+
+ ip link set ens259f0 vf 0 vlan 1
+ ip link set ens259f0 vf 1 vlan 2
+
+3. In VM1, run testpmd::
+
+ ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,00:11:22:33:44:12
+ testpmd>set fwd mac
+ testpmd>set promisc all off
+ testpmd>start
+
+ In VM2, run testpmd::
+
+ ./testpmd -c 0x3 -n 4 -- -i
+ testpmd>set fwd mac
+ testpmd>set promisc all off
+ testpmd>start
+
+
+4. Send 100 packets with VF1's MAC address and VLAN=1, and check that VF2 does not receive the packets, since the VFs are not in the same VLAN.
+
+5. Change the VLAN id of VF2::
+
+ ip link set ens259f0 vf 1 vlan 1
+
+6. Send 100 packets with VF1's MAC address and VLAN=1, and check that VF2 receives the 100 packets now that the VFs are in the same VLAN. Check the packet content is not corrupted.
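+
+The tagged packets can be generated, for example, from a Scapy session (the interface name is an example)::
+
+ sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP()/UDP()/Raw('x'*60)]*100, iface="p785p1")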
+
+Test Case3: VEB Switching Inter-VM PF-VF MAC switch
+===================================================
+
+Summary: Use the DPDK PF, create 1 VF and assign VF1 to VM1. The PF in the host runs DPDK testpmd. Send traffic from the PF to VF1 and ensure PF->VF1 works (put VF1 in promiscuous mode); send traffic from VF1 to the PF and ensure VF1->PF works.
+
+Details:
+
+1. Start VM1 with VF1, see the prerequisite part.
+
+2. On the host, launch testpmd::
+
+ ./testpmd -c 0xc0000 -n 4 -- -i
+ testpmd>set fwd mac
+ testpmd>set promisc all on
+ testpmd>start
+
+ In VM1, run testpmd::
+
+ ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,pf_mac_addr (Note: this will let VF1 forward packets to the PF)
+ testpmd>set fwd mac
+ testpmd>set promisc all on
+ testpmd>start
+
+3. Send 100 packets with VF1's MAC address as destination, and check that the PF receives the 100 packets, so VF1->PF is working. Check the packet content is not corrupted.
+
+4. Remove "--eth-peer" from the VM1 testpmd command, then send 100 packets with the PF's MAC address as destination, and check that VF1 receives the 100 packets, so PF->VF1 is working. Check the packet content is not corrupted.
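+
+The packets for the two directions can be generated, for example, from a Scapy session (the destination MACs and interface name below are placeholders; use VF1's actual MAC for the VF1->PF check and the PF's MAC for the PF->VF1 check)::
+
+ sendp([Ether(dst="00:11:22:33:44:11")/IP()/UDP()/Raw('x'*60)]*100, iface="p785p1")   # dst = VF1's MAC
+ sendp([Ether(dst="68:05:ca:27:ce:10")/IP()/UDP()/Raw('x'*60)]*100, iface="p785p1")   # dst = PF's MAC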
+
+
+Test Case4: VEB Switching Inter-VM PF-VF/VF-VF MAC switch Performance
+=====================================================================
+
+Performance testing: repeat Test Case 1 (VF-VF) and Test Case 3 (PF-VF) and check the performance at different packet sizes (64B-1518B, plus 3000B jumbo frames) while sending traffic at 100% rate.
+
+Test Case5: Floating VEB Inter-VM VF-VF
+=======================================
+
+Summary: Use the DPDK PF, create 2 VFs and 2 VMs, assign one VF to each VM (VF1 in VM1, VF2 in VM2), and make the PF link down (the cable can be plugged out). The VFs in the VMs run DPDK testpmd. Send traffic to VF1 with the packet's destination MAC set to VF2's MAC address, and check whether VF2 receives the packets. This checks the inter-VM VF-VF MAC switch when the PF link is down as well as up.
+
+Details:
+
+1. Start VM1 with VF1, VM2 with VF2, see the prerequisite part.
+2. In the host, run testpmd with floating parameters and make the link down::
+
+ ./testpmd -c 0xc0000 -n 4 --floating -- -i
+ testpmd> port stop all
+ testpmd> show port info all
+
+3. In VM1, run testpmd::
+
+ ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,00:11:22:33:44:12
+ testpmd>set fwd mac
+ testpmd>set promisc all off
+ testpmd>start
+
+ In VM2, run testpmd::
+
+ ./testpmd -c 0x3 -n 4 -- -i
+ testpmd>set fwd mac
+ testpmd>set promisc all off
+ testpmd>start
+
+
+4. Send 100 packets to VF1's MAC address, and check that VF2 receives the 100 packets. Check the packet content is not corrupted. Also check the PF's port stats; there should be no RX/TX packets on the PF port.
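+
+The PF counters can be read, for example, in the host testpmd session::
+
+ testpmd> show port stats all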
+
+5. In the host, run testpmd with the floating parameters and keep the link up, then repeat step 3 and step 4; the PF should have no RX/TX packets even when the link is up::
+
+ ./testpmd -c 0xc0000 -n 4 --floating -- -i
+ testpmd> port start all
+ testpmd> show port info all
+
+
+Test Case6: Floating VEB Inter-VM VF traffic can't be out of NIC
+================================================================
+
+Summary: Use the DPDK PF, create 1 VF and assign VF1 to VM1. Send traffic from VF1 towards the outside world, then check that the outside world does not see any traffic.
+
+Details:
+
+1. Start VM1 with VF1, see the prerequisite part.
+2. In the host, run testpmd with the floating parameters::
+
+ ./testpmd -c 0xc0000 -n 4 --floating -- -i
+
+3. In VM1, run testpmd::
+
+ ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,pf_mac_addr
+ testpmd>set fwd txonly
+ testpmd>start
+
+
+4. At the PF side, check the port stats to see whether there are any RX/TX packets, and also check the traffic generator side (e.g. IXIA ports or another port connected to the DUT port) to ensure that no packets are received.
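+
+If the peer is another kernel NIC rather than IXIA, the absence of egress traffic can also be confirmed, for example, with a capture on that peer port (the interface name is an example)::
+
+ tcpdump -i ens260f1 -n -c 10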
+
+
+Test Case7: Floating VEB VF-VF Performance
+==========================================
+
+Test VF-VF performance at different packet sizes (64B-1518B, plus 3000B jumbo frames) while sending traffic at 100% rate.
\ No newline at end of file
diff --git a/test_plans/vhost_tso_test_plan.rst b/test_plans/vhost_tso_test_plan.rst
new file mode 100644
index 0000000..f2b46e7
--- /dev/null
+++ b/test_plans/vhost_tso_test_plan.rst
@@ -0,0 +1,130 @@
+.. Copyright (c) <2015>, Intel Corporation
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ - Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+
+ - Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+
+ - Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ OF THE POSSIBILITY OF SUCH DAMAGE.
+
+===================
+Vhost TSO Test Plan
+===================
+
+This feature enables DPDK vhost TX offload (checksum and TSO), letting the NIC do the TX offload, which can improve performance. The feature adds the offload negotiation between DPDK user-space vhost and virtio-net, so we verify DPDK vhost-user + virtio-net TSO/checksum behavior in an environment with the TCP/IP stack enabled. DPDK vhost + virtio-pmd is not covered by this plan, since virtio-pmd has no TCP/IP stack and virtio TSO is not enabled there.
+
+In this test plan, we use the vhost-switch sample application for testing.
+For the VM2VM case, we only test vm2vm=1 (software switch), not vm2vm=2 (hardware switch).
+
+Prerequisites:
+==============
+
+Install iperf on both host and guests.
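+
+For example (package names may differ per distribution)::
+
+ yum install -y iperf      # RPM-based distributions
+ apt-get install -y iperf  # Debian-based distributions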
+
+
+Test Case1: DPDK vhost user + virtio-net one VM fwd tso
+=======================================================
+
+HW preparation: Connect two ports directly. In our case, connect 81:00.0 (port1) and 81:00.1 (port2) back to back. Port1 is bound to igb_uio for the vhost sample to use, while port2 stays on the kernel driver.
+
+SW preparation: Change one line of the vhost sample and rebuild::
+
+ #In function virtio_tx_route(xxx)
+ m->vlan_tci = vlan_tag;
+ #changed to
+ m->vlan_tci = 1000;
+
+1. Launch the vhost sample with the commands below. --socket-mem is set for the vhost sample to use; ensure that the socket where the PCI port is located has memory. In our case the PCI BDF is 81:00.0, so we need to assign memory to socket 1. For the TSO/CSUM test, we need to set "--mergeable 1 --tso 1 --csum 1"::
+
+ taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 0 --tso 1 --csum 1
+
+2. Launch VM1::
+
+ taskset -c 21-22 \
+ qemu-system-x86_64 -name us-vhost-vm1 \
+ -cpu host -enable-kvm -m 1024 -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+ -smp cores=2,sockets=1 -drive file=/home/img/dpdk-vm1.img \
+ -chardev socket,id=char0,path=/home/qxu10/vhost-tso-test/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,gso=on,guest_csum=on,guest_tso4=on,guest_tso6=on,guest_ecn=on \
+ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:01 -nographic
+
+3. On the host, configure port2; then you can see an interface called ens260f1.1000::
+
+ ifconfig ens260f1
+ vconfig add ens260f1 1000
+ ifconfig ens260f1.1000 1.1.1.8
+
+4. On the VM1, set the virtio IP and run iperf::
+
+ ifconfig ethX 1.1.1.2
+ ping 1.1.1.8 # make sure virtio and port2 can ping each other successfully; the ARP table will then be set up automatically.
+
+5. On the host, run `iperf -s -i 1`; in the guest, run `iperf -c 1.1.1.8 -i 1 -t 60`, and check whether there are 64K (size: 65160) packets. If there are 64K packets, TSO is enabled; otherwise TSO is disabled.
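+
+One way to confirm the large TSO frames on the guest side is a length-filtered capture, for example (interface name as in step 4)::
+
+ tcpdump -i ethX -n -c 5 'tcp and greater 20000'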
+
+6. On the VM1, run `tcpdump -i ethX -n -e -vv` to check if the cksum is correct. You should not see incorrect cksum output.
+
+Test Case2: DPDK vhost user + virtio-net VM2VM=1 fwd tso
+========================================================
+
+1. Launch the vhost sample with the commands below. --socket-mem is set for the vhost sample to use; ensure that the socket where the PCI port is located has memory. In our case the PCI BDF is 81:00.0, so we need to assign memory to socket 1. For the TSO/CSUM test, we need to set "--mergeable 1 --tso 1 --csum 1 --vm2vm 1"::
+
+ taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 1 --tso 1 --csum 1
+
+2. Launch VM1 and VM2::
+
+ taskset -c 21-22 \
+ qemu-system-x86_64 -name us-vhost-vm1 \
+ -cpu host -enable-kvm -m 1024 -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+ -smp cores=2,sockets=1 -drive file=/home/img/dpdk-vm1.img \
+ -chardev socket,id=char0,path=/home/qxu10/vhost-tso-test/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,gso=on,guest_csum=on,guest_tso4=on,guest_tso6=on,guest_ecn=on \
+ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:01 -nographic
+
+ taskset -c 23-24 \
+ qemu-system-x86_64 -name us-vhost-vm1 \
+ -cpu host -enable-kvm -m 1024 -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+ -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
+ -chardev socket,id=char1,path=/home/qxu10/vhost-tso-test/dpdk/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
+ -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2 \
+ -netdev tap,id=ipvm1,ifname=tap4,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:02 -nographic
+
+3. On VM1, set the virtio IP and add a static ARP entry for VM2::
+
+ ifconfig ethX 1.1.1.2
+ arp -s 1.1.1.8 52:54:00:00:00:02
+ arp # to check the arp table is complete and correct.
+
+4. On VM2, set the virtio IP and add a static ARP entry for VM1::
+
+ ifconfig ethX 1.1.1.8
+ arp -s 1.1.1.2 52:54:00:00:00:01
+ arp # to check the arp table is complete and correct.
+
+5. Ensure virtio1 can ping virtio2. Then in VM1, run `iperf -s -i 1`; in VM2, run `iperf -c 1.1.1.2 -i 1 -t 60`, and check whether there are 64K (size: 65160) packets. If there are 64K packets, TSO is enabled; otherwise TSO is disabled.
+
+6. On the VM1, run `tcpdump -i ethX -n -e -vv`.
+
+
\ No newline at end of file
diff --git a/test_plans/virtio_1.0_test_plan.rst b/test_plans/virtio_1.0_test_plan.rst
index 8412eac..727991b 100644
--- a/test_plans/virtio_1.0_test_plan.rst
+++ b/test_plans/virtio_1.0_test_plan.rst
@@ -1,261 +1,261 @@
-.. Copyright (c) <2015>, Intel Corporation
- All rights reserved.
-
- Redistribution and use in source and binary forms, with or without
- modification, are permitted provided that the following conditions
- are met:
-
- - Redistributions of source code must retain the above copyright
- notice, this list of conditions and the following disclaimer.
-
- - Redistributions in binary form must reproduce the above copyright
- notice, this list of conditions and the following disclaimer in
- the documentation and/or other materials provided with the
- distribution.
-
- - Neither the name of Intel Corporation nor the names of its
- contributors may be used to endorse or promote products derived
- from this software without specific prior written permission.
-
- THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
- FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
- COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
- INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
- SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
- STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
- OF THE POSSIBILITY OF SUCH DAMAGE.
-
-=================================
-Virtio-1.0 Support Test Plan
-=================================
-
-Virtio 1.0 is a new version of virtio. And the virtio 1.0 spec link is at http://docs.oasis-open.org/virtio/virtio/v1.0/virtio-v1.0.pdf. The major difference is at PCI layout. For testing virtio 1.0 pmd, we need test the basic RX/TX, different path(txqflags), mergeable on/off, and also test with virtio0.95 to ensure they can co-exist. Besides, we need test virtio 1.0's performance to ensure it has similar performance as virtio0.95.
-
-
-Test Case1: test_func_vhost_user_virtio1.0-pmd with different txqflags
-======================================================================
-
-Note: For virtio1.0 usage, we need use qemu version >2.4, such as 2.4.1 or 2.5.0.
-
-1. Launch the Vhost sample by below commands, socket-mem is set for the vhost sample to use, need ensure that the PCI port located socket has the memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for socket1.::
-
- taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0
-
-2. Start VM with 1 virtio, note: we need add "disable-modern=false" to enable virtio 1.0.
-
- taskset -c 22-23 \
- /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
- -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \
- -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
-
-
-3. In the VM, change the config file--common_linuxapp, "CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT=y"; Run dpdk testpmd in VM::
-
- ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
-
- ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
-
- $ >set fwd mac
-
- $ >start tx_first
-
- We expect similar output as below, and see modern virtio pci detected.
-
- PMD: virtio_read_caps(): [98] skipping non VNDR cap id: 11
- PMD: virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len: 0
- PMD: virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len: 4194304
- PMD: virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len: 4096
- PMD: virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len: 4096
- PMD: virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len: 4096
- PMD: virtio_read_caps(): found modern virtio pci device.
- PMD: virtio_read_caps(): common cfg mapped at: 0x7f2c61a83000
- PMD: virtio_read_caps(): device cfg mapped at: 0x7f2c61a85000
- PMD: virtio_read_caps(): isr cfg mapped at: 0x7f2c61a84000
- PMD: virtio_read_caps(): notify base: 0x7f2c61a86000, notify off multiplier: 409 6
- PMD: vtpci_init(): modern virtio pci detected.
-
-
-4. Send traffic to virtio1(MAC1=52:54:00:00:00:01) with VLAN ID=1000. Check if virtio packet can be RX/TX and also check the TX packet size is same as the RX packet size.
-
-5. Also run the dpdk testpmd in VM with txqflags=0xf01 for the virtio pmd optimization usage::
-
- ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
-
- ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --txqflags=0x0f01 --disable-hw-vlan
-
- $ >set fwd mac
-
- $ >start tx_first
-
-6. Send traffic to virtio1(MAC1=52:54:00:00:00:01) and VLAN ID=1000. Check if virtio packet can be RX/TX and also check the TX packet size is same as the RX packet size. Check the packet content is correct.
-
-Test Case2: test_func_vhost_user_virtio1.0-pmd for packet sequence check
-========================================================================
-
-Note: For virtio1.0 usage, we need use qemu version >2.4, such as 2.4.1 or 2.5.0.
-
-1. Launch the Vhost sample by below commands, socket-mem is set for the vhost sample to use, need ensure that the PCI port located socket has the memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for socket1.::
-
- taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0
-
-2. Start VM with 1 virtio, note: we need add "disable-modern=false" to enable virtio 1.0.
-
- taskset -c 22-23 \
- /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
- -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \
- -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
-
-
-3. In the VM, change the config file--common_linuxapp, "CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT=y"; Run dpdk testpmd in VM::
-
- ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
-
- ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
-
- $ >set fwd mac
-
- $ >start tx_first
-
- We expect similar output as below, and see modern virtio pci detected.
-
- PMD: virtio_read_caps(): [98] skipping non VNDR cap id: 11
- PMD: virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len: 0
- PMD: virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len: 4194304
- PMD: virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len: 4096
- PMD: virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len: 4096
- PMD: virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len: 4096
- PMD: virtio_read_caps(): found modern virtio pci device.
- PMD: virtio_read_caps(): common cfg mapped at: 0x7f2c61a83000
- PMD: virtio_read_caps(): device cfg mapped at: 0x7f2c61a85000
- PMD: virtio_read_caps(): isr cfg mapped at: 0x7f2c61a84000
- PMD: virtio_read_caps(): notify base: 0x7f2c61a86000, notify off multiplier: 409 6
- PMD: vtpci_init(): modern virtio pci detected.
-
-
-4. Send 100 packets at rate 25% at small packet(e.g: 70B) to the virtio with VLAN=1000, and insert the sequence number at byte offset 44 bytes. Make the sequence number starting from 00 00 00 00 and the step 1, first ensure no packet loss at IXIA, then check if the received packets have the same order as sending side.If out of order, then it's an issue.
-
-
-Test Case3: test_func_vhost_user_virtio1.0-pmd with mergeable enabled
-=====================================================================
-
-1. Launch the Vhost sample by below commands, socket-mem is set for the vhost sample to use, need ensure that the PCI port located socket has the memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for socket1.::
-
- taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 0
-
-2. Start VM with 1 virtio, note: we need add "disable-modern=false" to enable virtio 1.0.
-
- taskset -c 22-23 \
- /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
- -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \
- -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
-
-
-3. Run dpdk testpmd in VM::
-
- ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
-
- ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan --max-pkt-len=9000
-
- $ >set fwd mac
-
- $ >start tx_first
-
-4. Send traffic to virtio1(MAC1=52:54:00:00:00:01) with VLAN ID=1000. Check if virtio packet can be RX/TX and also check the TX packet size is same as the RX packet size. Check packet size(64-1518) as well as the jumbo frame(3000,9000) can be RX/TX.
-
-
-Test Case4: test_func_vhost_user_one-vm-virtio1.0-one-vm-virtio0.95
-===================================================================
-
-1. Launch the Vhost sample by below commands, socket-mem is set for the vhost sample to use, need ensure that the PCI port located socket has the memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for socket1.::
-
- taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 1
-
-2. Start VM1 with 1 virtio, note: we need add "disable-modern=false" to enable virtio 1.0.
-
- taskset -c 22-23 \
- /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
- -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \
- -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
-
-3. Start VM2 with 1 virtio, note:
-
- taskset -c 24-25 \
- /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/img/vm2.img \
- -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,disable-modern=true \
- -netdev tap,id=ipvm2,ifname=tap4,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm2,id=net1,mac=00:00:00:00:10:02 -nographic
-
-3. Run dpdk testpmd in VM1 and VM2::
-
- VM1:
-
- ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
-
- ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan --eth-peer=0,52:54:00:00:00:02
-
- $ >set fwd mac
-
- $ >start tx_first
-
- VM2:
-
- ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
-
- ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
-
- $ >set fwd mac
-
- $ >start tx_first
-
-4. Send 100 packets at low rate to virtio1, and the expected flow is ixia-->NIC-->VHOST-->Virtio1-->Virtio2-->Vhost-->NIC->ixia port. Check the packet back at ixia port is content correct, no size change and payload change.
-
-Test Case5: test_perf_vhost_user_one-vm-virtio1.0-pmd
-=====================================================
-
-Note: For virtio1.0 usage, we need use qemu version >2.4, such as 2.4.1 or 2.5.0.
-
-1. Launch the Vhost sample by below commands, socket-mem is set for the vhost sample to use, need ensure that the PCI port located socket has the memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for socket1.::
-
- taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0
-
-2. Start VM with 1 virtio, note: we need add "disable-modern=false" to enable virtio 1.0.
-
- taskset -c 22-23 \
- /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
- -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
- -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
- -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
- -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \
- -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
-
-
-3. In the VM, run dpdk testpmd in VM::
-
- ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
-
- ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
-
- $ >set fwd mac
-
- $ >start tx_first
-
+.. Copyright (c) <2016>, Intel Corporation
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ - Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+
+ - Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+
+ - Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ OF THE POSSIBILITY OF SUCH DAMAGE.
+
+=================================
+Virtio-1.0 Support Test Plan
+=================================
+
+Virtio 1.0 is a new version of virtio, and the virtio 1.0 spec is at http://docs.oasis-open.org/virtio/virtio/v1.0/virtio-v1.0.pdf. The major difference is the PCI layout. To test the virtio 1.0 PMD, we need to test basic RX/TX, the different paths (txqflags), mergeable on/off, and also test together with virtio 0.95 to ensure they can co-exist. Besides, we need to test virtio 1.0's performance to ensure it is similar to virtio 0.95.
+
+
+Test Case1: test_func_vhost_user_virtio1.0-pmd with different txqflags
+======================================================================
+
+Note: For virtio 1.0 usage, we need a QEMU version higher than 2.4, such as 2.4.1 or 2.5.0.
+
+1. Launch the vhost sample with the commands below. --socket-mem is set for the vhost sample to use; ensure that the socket where the PCI port is located has memory. In our case the PCI BDF is 81:00.0, so we need to assign memory to socket 1::
+
+ taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0
+
+2. Start the VM with 1 virtio device; note that we need to add "disable-modern=false" to enable virtio 1.0::
+
+ taskset -c 22-23 \
+ /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
+ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+ -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
+ -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \
+ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
+
+
+3. In the VM, set "CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT=y" in the config file common_linuxapp, then run DPDK testpmd in the VM::
+
+ ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
+
+ ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
+
+ $ >set fwd mac
+
+ $ >start tx_first
+
+ We expect output similar to the below, and should see that a modern virtio PCI device is detected::
+
+ PMD: virtio_read_caps(): [98] skipping non VNDR cap id: 11
+ PMD: virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len: 0
+ PMD: virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len: 4194304
+ PMD: virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len: 4096
+ PMD: virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len: 4096
+ PMD: virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len: 4096
+ PMD: virtio_read_caps(): found modern virtio pci device.
+ PMD: virtio_read_caps(): common cfg mapped at: 0x7f2c61a83000
+ PMD: virtio_read_caps(): device cfg mapped at: 0x7f2c61a85000
+ PMD: virtio_read_caps(): isr cfg mapped at: 0x7f2c61a84000
+ PMD: virtio_read_caps(): notify base: 0x7f2c61a86000, notify off multiplier: 4096
+ PMD: vtpci_init(): modern virtio pci detected.
+
+
+4. Send traffic to virtio1(MAC1=52:54:00:00:00:01) with VLAN ID=1000. Check if virtio packet can be RX/TX and also check the TX packet size is same as the RX packet size.
+
+5. Also run the dpdk testpmd in VM with txqflags=0xf01 for the virtio pmd optimization usage::
+
+ ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
+
+ ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --txqflags=0x0f01 --disable-hw-vlan
+
+ $ >set fwd mac
+
+ $ >start tx_first
+
+6. Send traffic to virtio1(MAC1=52:54:00:00:00:01) and VLAN ID=1000. Check if virtio packet can be RX/TX and also check the TX packet size is same as the RX packet size. Check the packet content is correct.
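+
+The traffic in steps 4 and 6 can be generated, for example, from a Scapy session (the interface name is an example)::
+
+ sendp([Ether(dst="52:54:00:00:00:01")/Dot1Q(vlan=1000)/IP()/UDP()/Raw('x'*60)]*100, iface="p785p1")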
+
+Test Case2: test_func_vhost_user_virtio1.0-pmd for packet sequence check
+========================================================================
+
+Note: For virtio 1.0 usage, we need a QEMU version higher than 2.4, such as 2.4.1 or 2.5.0.
+
+1. Launch the vhost sample with the commands below. --socket-mem is set for the vhost sample to use; ensure that the socket where the PCI port is located has memory. In our case the PCI BDF is 81:00.0, so we need to assign memory to socket 1::
+
+ taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0
+
+2. Start the VM with 1 virtio device; note that we need to add "disable-modern=false" to enable virtio 1.0::
+
+ taskset -c 22-23 \
+ /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
+ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+ -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
+ -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \
+ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
+
+
+3. In the VM, set "CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT=y" in the config file common_linuxapp, then run DPDK testpmd in the VM::
+
+ ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
+
+ ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
+
+ $ >set fwd mac
+
+ $ >start tx_first
+
+ We expect output similar to the below, and should see that a modern virtio PCI device is detected::
+
+ PMD: virtio_read_caps(): [98] skipping non VNDR cap id: 11
+ PMD: virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len: 0
+ PMD: virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len: 4194304
+ PMD: virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len: 4096
+ PMD: virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len: 4096
+ PMD: virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len: 4096
+ PMD: virtio_read_caps(): found modern virtio pci device.
+ PMD: virtio_read_caps(): common cfg mapped at: 0x7f2c61a83000
+ PMD: virtio_read_caps(): device cfg mapped at: 0x7f2c61a85000
+ PMD: virtio_read_caps(): isr cfg mapped at: 0x7f2c61a84000
+ PMD: virtio_read_caps(): notify base: 0x7f2c61a86000, notify off multiplier: 4096
+ PMD: vtpci_init(): modern virtio pci detected.
+
+
+4. Send 100 small packets (e.g. 70B) at 25% rate to the virtio device with VLAN=1000, and insert a sequence number at byte offset 44. Make the sequence number start from 00 00 00 00 with step 1. First ensure there is no packet loss at IXIA, then check whether the received packets arrive in the same order as they were sent. If they are out of order, it is an issue.
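+
+A rough Scapy equivalent of such an IXIA stream is sketched below; it assumes the 4-byte sequence number is simply carried at the start of the UDP payload, so adjust the padding if a different absolute offset is required (interface name is an example)::
+
+ from struct import pack
+ pkts = [Ether(dst="52:54:00:00:00:01")/Dot1Q(vlan=1000)/IP()/UDP()/Raw(pack(">I", i) + b'x'*20) for i in range(100)]
+ sendp(pkts, iface="p785p1")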
+
+
+Test Case3: test_func_vhost_user_virtio1.0-pmd with mergeable enabled
+=====================================================================
+
+1. Launch the vhost sample with the commands below. --socket-mem is set for the vhost sample to use; ensure that the socket where the PCI port is located has memory. In our case the PCI BDF is 81:00.0, so we need to assign memory to socket 1::
+
+ taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 0
+
+2. Start the VM with 1 virtio device; note that we need to add "disable-modern=false" to enable virtio 1.0::
+
+ taskset -c 22-23 \
+ /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
+ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+ -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
+ -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \
+ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
+
+
+3. Run dpdk testpmd in VM::
+
+ ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
+
+ ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan --max-pkt-len=9000
+
+ $ >set fwd mac
+
+ $ >start tx_first
+
+4. Send traffic to virtio1(MAC1=52:54:00:00:00:01) with VLAN ID=1000. Check if virtio packet can be RX/TX and also check the TX packet size is same as the RX packet size. Check packet size(64-1518) as well as the jumbo frame(3000,9000) can be RX/TX.
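+
+The different frame sizes, including jumbo frames, can be generated, for example, from a Scapy session (interface name is an example; the Ether+Dot1Q+IP+UDP header overhead is 46 bytes, FCS not counted)::
+
+ for size in [64, 128, 256, 512, 1024, 1518, 3000, 9000]:
+     sendp(Ether(dst="52:54:00:00:00:01")/Dot1Q(vlan=1000)/IP()/UDP()/Raw('x'*(size-46)), iface="p785p1", count=10)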
+
+
+Test Case4: test_func_vhost_user_one-vm-virtio1.0-one-vm-virtio0.95
+===================================================================
+
+1. Launch the vhost sample with the commands below. --socket-mem is set for the vhost sample to use; ensure that the socket where the PCI port is located has memory. In our case the PCI BDF is 81:00.0, so we need to assign memory to socket 1::
+
+ taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 1
+
+2. Start VM1 with 1 virtio device; note that we need to add "disable-modern=false" to enable virtio 1.0::
+
+ taskset -c 22-23 \
+ /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
+ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+ -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
+ -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \
+ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
+
+3. Start VM2 with 1 virtio device; note that "disable-modern=true" keeps VM2 on virtio 0.95::
+
+ taskset -c 24-25 \
+ /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
+ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+ -smp cores=2,sockets=1 -drive file=/home/img/vm2.img \
+ -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char0,vhostforce \
+ -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,disable-modern=true \
+ -netdev tap,id=ipvm2,ifname=tap4,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm2,id=net1,mac=00:00:00:00:10:02 -nographic
+
+4. Run DPDK testpmd in VM1 and VM2::
+
+ VM1:
+
+ ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
+
+ ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan --eth-peer=0,52:54:00:00:00:02
+
+ $ >set fwd mac
+
+ $ >start tx_first
+
+ VM2:
+
+ ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
+
+ ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
+
+ $ >set fwd mac
+
+ $ >start tx_first
+
+5. Send 100 packets at a low rate to virtio1; the expected flow is IXIA-->NIC-->vhost-->virtio1-->virtio2-->vhost-->NIC-->IXIA port. Check that the packets coming back to the IXIA port have correct content, with no size or payload change.
+
+Test Case5: test_perf_vhost_user_one-vm-virtio1.0-pmd
+=====================================================
+
+Note: For virtio 1.0 usage, we need a QEMU version higher than 2.4, such as 2.4.1 or 2.5.0.
+
+1. Launch the vhost sample with the commands below. --socket-mem is set for the vhost sample to use; ensure that the socket where the PCI port is located has memory. In our case the PCI BDF is 81:00.0, so we need to assign memory to socket 1::
+
+ taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0
+
+2. Start the VM with 1 virtio device; note that we need to add "disable-modern=false" to enable virtio 1.0::
+
+ taskset -c 22-23 \
+ /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
+ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+ -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
+ -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \
+ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
+
+
+3. In the VM, run DPDK testpmd::
+
+ ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
+
+ ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
+
+ $ >set fwd mac
+
+ $ >start tx_first
+
4. Send traffic at line rate to virtio1(MAC1=52:54:00:00:00:01) with VLAN ID=1000. Check the performance at different packet size(68,128,256,512,1024,1280,1518) and record it as the performance data. The result should be similar as virtio0.95.
\ No newline at end of file
--
2.1.0
* Re: [dts] [PATCH] add 3 test plans for dpdk16.04
2016-03-16 1:38 [dts] [PATCH] add 3 test plans for dpdk16.04 Qian Xu
@ 2016-03-18 6:59 ` Liu, Yong
2016-03-23 1:47 ` Xu, Qian Q
0 siblings, 1 reply; 3+ messages in thread
From: Liu, Yong @ 2016-03-18 6:59 UTC (permalink / raw)
To: Xu, Qian Q, dts; +Cc: Xu, Qian Q
Hi Qian,
Please separate this patch, since these three test plans have no relation to each other.
> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Qian Xu
> Sent: Wednesday, March 16, 2016 9:38 AM
> To: dts@dpdk.org
> Cc: Xu, Qian Q
> Subject: [dts] [PATCH] add 3 test plans for dpdk16.04
>
> The 3 test plans have been reviewed with developers.
>
> Signed-off-by: Qian Xu <qian.q.xu@intel.com>
>
> diff --git a/test_plans/veb_switch_test_plan.rst
> b/test_plans/veb_switch_test_plan.rst
> new file mode 100644
> index 0000000..4629c21
> --- /dev/null
> +++ b/test_plans/veb_switch_test_plan.rst
> @@ -0,0 +1,268 @@
> +.. Copyright (c) <2016>, Intel Corporation
> + All rights reserved.
> +
> + Redistribution and use in source and binary forms, with or without
> + modification, are permitted provided that the following conditions
> + are met:
> +
> + - Redistributions of source code must retain the above copyright
> + notice, this list of conditions and the following disclaimer.
> +
> + - Redistributions in binary form must reproduce the above copyright
> + notice, this list of conditions and the following disclaimer in
> + the documentation and/or other materials provided with the
> + distribution.
> +
> + - Neither the name of Intel Corporation nor the names of its
> + contributors may be used to endorse or promote products derived
> + from this software without specific prior written permission.
> +
> + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> + FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> + COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> + INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> + (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
> + SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> + STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> + ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
> + OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +=====================================
> +VEB Switch and floating VEB Test Plan
> +=====================================
> +
> +VEB Switching Introduction
> +==========================
> +
> +IEEE EVB tutorial: http://www.ieee802.org/802_tutorials/2009-11/evb-
> tutorial-draft-20091116_v09.pdf
> +
> +Virtual Ethernet Bridge (VEB) - This is an IEEE EVB term. A VEB is a
> VLAN Bridge internal to Fortville that bridges the traffic of multiple
> VSIs over an internal virtual network.
> +
> +Virtual Ethernet Port Aggregator (VEPA) - This is an IEEE EVB term. A
> VEPA multiplexes the traffic of one or more VSIs onto a single Fortville
> Ethernet port. The biggest difference between a VEB and a VEPA is that a
> VEB can switch packets internally between VSIs, whereas a VEPA cannot.
> +
> +Virtual Station Interface (VSI) - This is an IEEE EVB term that defines
> the properties of a virtual machine's (or a physical machine's)
> connection to the network. Each downstream v-port on a Fortville VEB or
> VEPA defines a VSI. A standards-based definition of VSI properties
> enables network management tools to perform virtual machine migration and
> associated network re-configuration in a vendor-neutral manner.
> +
> +My understanding of VEB is that it's an in-NIC switch(MAC/VLAN), and it
> can support VF->VF, PF->VF, VF->PF packet forwarding according to the NIC
> internal switch. It's similar as Niantic's SRIOV switch.
> +
> +Floating VEB Introduction
> +=========================
> +
> +Floating VEB is based on VEB Switching. It will address 2 problems:
> +
> +Dependency on PF: When the physical port is link down, the functionality
> of the VEB/VEPA will not work normally. Even only data forwarding between
> the VF is required, one PF port will be wasted to create the related VEB.
> +
> +Ensure all the traffic from VF can only forwarding within the VFs
> connect to the floating VEB, cannot forward out of the NIC port.
> +
> +Prerequisites for VEB testing
> +=============================
> +
> +1. Get the pci device id of DUT, for example::
> +
> + ./dpdk_nic_bind.py --st
> +
> + 0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0
> drv=i40e unused=
> +
> +2.1 Host PF in kernel driver. Create 2 VFs from 1 PF with kernel driver,
> and set the VF MAC address at PF0::
> +
> + echo 2 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
> + ./dpdk_nic_bind.py --st
> +
> + 0000:81:02.0 'XL710/X710 Virtual Function' unused=
> + 0000:81:02.1 'XL710/X710 Virtual Function' unused=
> +
> + ip link set ens259f0 vf 0 mac 00:11:22:33:44:11
> + ip link set ens259f0 vf 1 mac 00:11:22:33:44:12
> +
> +2.2 Host PF in DPDK driver. Create 2VFs from 1 PF with dpdk driver.
> +
> + ./dpdk_nic_bind.py -b igb_uio 81:00.0
> + echo 2 >/sys/bus/pci/devices/0000:81:00.0/max_vfs
> + ./dpdk_nic_bind.py --st
> +
> +3. Detach VFs from the host, bind them to pci-stub driver::
> +
> + modprobe pci-stub
> +
> + using `lspci -nn|grep -i ethernet` got VF device id, for example
> "8086 154c",
> +
> + echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
> + echo 0000:81:02.0 > /sys/bus/pci/devices/0000:08:02.0/driver/unbind
> + echo 0000:81:02.0 > /sys/bus/pci/drivers/pci-stub/bind
> +
> + echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
> + echo 0000:81:02.1 > /sys/bus/pci/devices/0000:08:02.1/driver/unbind
> + echo 0000:81:02.1 > /sys/bus/pci/drivers/pci-stub/bind
> +
> +4. Lauch the VM with the VF PCI passthrough.
> +
> + taskset -c 18-19 qemu-system-x86_64 \
> + -mem-path /mnt/huge -mem-prealloc \
> + -enable-kvm -m 2048 -smp cores=2,sockets=1 -cpu host -name dpdk1-
> vm1 \
> + -device pci-assign,host=81:02.0 \
> + -drive file=/home/img/vm1.img \
> + -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:11:01 \
> + -localtime -vnc :22 -daemonize
> +
> +
> +Test Case1: VEB Switching Inter-VM VF-VF MAC switch
> +===================================================
> +
> +Summary: Kernel PF, then create 2VFs and 2VMs, assign one VF to one VM,
> say VF1 in VM1, VF2 in VM2. VFs in VMs are running dpdk testpmd, send
> traffic to VF1, and set the packet's DEST MAC to VF2, check if VF2 can
> receive the packets. Check Inter-VM VF-VF MAC switch.
> +
> +Details::
> +
> +1. Start VM1 with VF1, VM2 with VF2, see the prerequisite part.
> +2. In VM1, run testpmd::
> +
> + ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,00:11:22:33:44:12
> + testpmd>set mac fwd
> + testpmd>set promisc off all
> + testpmd>start
> +
> + In VM2, run testpmd::
> +
> + ./testpmd -c 0x3 -n 4 -- -i
> + testpmd>set mac fwd
> + testpmd>set promisc off all
> + testpmd>start
> +
> +
> +3. Send 100 packets to VF1's MAC address, check if VF2 can get 100
> packets. Check the packet content is not corrupted.
> +
> +Test Case2: VEB Switching Inter-VM VF-VF MAC/VLAN switch
> +========================================================
> +
> +Summary: Kernel PF, then create 2VFs and 2VMs, assign VF1 with VLAN=1 in
> VM1, VF2 with VLAN=2 in VM2. VFs in VMs are running dpdk testpmd, send
> traffic to VF1 with VLAN=1, then let it forwards to VF2, it should not
> work since they are not in the same VLAN; set VF2 with VLAN=1, then send
> traffic to VF1 with VLAN=1, and VF2 can receive the packets. Check inter-
> VM VF-VF MAC/VLAN switch.
> +
> +Details:
> +
> +1. Start VM1 with VF1, VM2 with VF2, see the prerequisite part.
> +
> +2. Set the VLAN id of VF1 and VF2::
> +
> + ip link set ens259f0 vf 0 vlan 1
> + ip link set ens259f0 vf 1 vlan 2
> +
> +3. In VM1, run testpmd::
> +
> + ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,00:11:22:33:44:12
> + testpmd>set fwd mac
> + testpmd>set promisc all off
> + testpmd>start
> +
> + In VM2, run testpmd::
> +
> + ./testpmd -c 0x3 -n 4 -- -i
> + testpmd>set fwd mac
> + testpmd>set promisc all off
> + testpmd>start
> +
> +
> +4. Send 100 packets with VF1's MAC address as the destination and VLAN=1, and check that VF2 does not receive them, since the two VFs are in different VLANs (one way to generate the tagged packets is sketched after step 6).
> +
> +5. Change the VLAN id of VF2::
> +
> + ip link set ens259f0 vf 1 vlan 1
> +
> +6. Send 100 packets with VF1's MAC address as the destination and VLAN=1 again, and check that VF2 now receives all 100 packets, since the two VFs are in the same VLAN. Check that the packet content is not corrupted.
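> +
> + A possible way to generate the tagged packets for steps 4 and 6 with scapy; the tester interface name ens785f0 is an assumption::
> +
> + scapy
> + >>> sendp(Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP()/UDP()/Raw("X"*60), iface="ens785f0", count=100)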
> +
> +Test Case3: VEB Switching Inter-VM PF-VF MAC switch
> +===================================================
> +
> +Summary: Use the DPDK PF driver, create 1 VF, and assign VF1 to VM1. The PF in the host runs DPDK testpmd. Send traffic from the PF to VF1 and ensure PF->VF1 works (with VF1 in promiscuous mode); send traffic from VF1 to the PF and ensure VF1->PF works.
> +
> +Details:
> +
> +1. Start VM1 with VF1, see the prerequisite part.
> +
> +2. In the host, launch testpmd::
> +
> + ./testpmd -c 0xc0000 -n 4 -- -i
> + testpmd>set fwd mac
> + testpmd>set promisc all on
> + testpmd>start
> +
> + In VM1, run testpmd::
> +
> + ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,pf_mac_addr (Note: this lets VF1 forward packets to the PF)
> + testpmd>set fwd mac
> + testpmd>set promisc all on
> + testpmd>start
> +
> +3. Send 100 packets with VF1's MAC address as the destination, and check that the PF receives all 100 packets, which shows that VF1->PF is working. Check that the packet content is not corrupted.
> +
> +4. Remove "--eth-peer" from the VM1 testpmd command line, then send 100 packets with the PF's MAC address as the destination, and check that VF1 receives all 100 packets, which shows that PF->VF1 is working. Check that the packet content is not corrupted.
> +
> +
> +Test Case4: VEB Switching Inter-VM PF-VF/VF-VF MAC switch Performance
> +=====================================================================
> +
> +Performance testing: repeat Test Case1 (VF-VF) and Test Case3 (PF-VF) and check the performance at different packet sizes (64B--1518B and 3000B jumbo frames) while sending traffic at 100% rate.
> +
> +Test Case5: Floating VEB Inter-VM VF-VF
> +=======================================
> +
> +Summary: Use the DPDK PF driver, create 2 VFs and 2 VMs, and assign one VF to each VM: VF1 in VM1, VF2 in VM2. Make the PF link down (the cable can be plugged out). The VFs in the VMs run DPDK testpmd. Send traffic to VF1's MAC address; VM1's testpmd forwards it with the destination MAC set to VF2's MAC. Check whether VF2 receives the packets, i.e. check the inter-VM VF-VF MAC switch both when the PF link is down and when it is up.
> +
> +Details:
> +
> +1. Start VM1 with VF1, VM2 with VF2, see the prerequisite part.
> +2. In the host, run testpmd with the floating parameter and make the link down::
> +
> + ./testpmd -c 0xc0000 -n 4 --floating -- -i
> + testpmd> port stop all
> + testpmd> show port info all
> +
> +3. In VM1, run testpmd::
> +
> + ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,00:11:22:33:44:12
> + testpmd>set fwd mac
> + testpmd>set promisc all off
> + testpmd>start
> +
> + In VM2, run testpmd::
> +
> + ./testpmd -c 0x3 -n 4 -- -i
> + testpmd>set fwd mac
> + testpmd>set promisc all off
> + testpmd>start
> +
> +
> +4. Send 100 packets with VF1's MAC address as the destination, and check that VF2 receives all 100 packets. Check that the packet content is not corrupted. Also check the PF's port stats; there should be no RX/TX packets on the PF port (see the sketch below for one way to check the stats).
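> +
> + A minimal way to check the PF port stats from the host testpmd session (clear the counters before sending the 100 packets)::
> +
> + testpmd> clear port stats all
> + testpmd> show port stats all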
> +
> +5. In the host, run testpmd with the floating parameter and keep the link up, then repeat step 3 and step 4; the PF should still have no RX/TX packets even when the link is up::
> +
> + ./testpmd -c 0xc0000 -n 4 --floating -- -i
> + testpmd> port start all
> + testpmd> show port info all
> +
> +
> +Test Case6: Floating VEB Inter-VM VF traffic can't be out of NIC
> +================================================================
> +
> +Use the DPDK PF driver, create 1 VF, and assign VF1 to VM1. Send traffic from VF1 to the outside world, then check that the outside world does not see any traffic.
> +
> +Details:
> +
> +1. Start VM1 with VF1, see the prerequisite part.
> +2. In the host, run testpmd with the floating parameter::
> +
> + ./testpmd -c 0xc0000 -n 4 --floating -- -i
> +
> +3. In VM1, run testpmd::
> +
> + ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,pf_mac_addr
> + testpmd>set fwd txonly
> + testpmd>start
> +
> +
> +4. On the PF side, check the port stats to confirm that there are no RX/TX packets, and also check the traffic generator side (e.g. the IXIA ports or another port connected to the DUT port) to ensure that no packets arrive (a possible capture command is sketched below).
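> +
> + One possible way to confirm on the tester side, assuming the tester port connected to the DUT is ens785f0 (an assumption); the capture should show no packets from VF1::
> +
> + timeout 30 tcpdump -i ens785f0 -nn -e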
> +
> +
> +Test Case7: Floating VEB VF-VF Performance
> +==========================================
> +
> +Test the VF-VF performance at different packet sizes (64B--1518B and 3000B jumbo frames) while sending traffic at 100% rate.
> \ No newline at end of file
> diff --git a/test_plans/vhost_tso_test_plan.rst
> b/test_plans/vhost_tso_test_plan.rst
> new file mode 100644
> index 0000000..f2b46e7
> --- /dev/null
> +++ b/test_plans/vhost_tso_test_plan.rst
> @@ -0,0 +1,130 @@
> +.. Copyright (c) <2015>, Intel Corporation
> + All rights reserved.
> +
> + Redistribution and use in source and binary forms, with or without
> + modification, are permitted provided that the following conditions
> + are met:
> +
> + - Redistributions of source code must retain the above copyright
> + notice, this list of conditions and the following disclaimer.
> +
> + - Redistributions in binary form must reproduce the above copyright
> + notice, this list of conditions and the following disclaimer in
> + the documentation and/or other materials provided with the
> + distribution.
> +
> + - Neither the name of Intel Corporation nor the names of its
> + contributors may be used to endorse or promote products derived
> + from this software without specific prior written permission.
> +
> + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> + FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> + COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> + INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> + (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
> + SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> + STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> + ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
> + OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +===================
> +Vhost TSO Test Plan
> +===================
> +
> +This feature enables DPDK vhost TX offload (checksum and TSO), letting the NIC do the TX offload, which can improve performance. The feature adds the negotiation between DPDK user-space vhost and virtio-net, so we will verify DPDK vhost-user + virtio-net for TSO/checksum in a TCP/IP-stack-enabled environment. DPDK vhost + virtio-pmd is not covered by this plan, since virtio-pmd has no TCP/IP stack and virtio TSO is not enabled there.
> +
> +In this test plan, we use the vhost switch sample application. For the VM2VM case, we only test vm2vm=1 (software switch), not vm2vm=2 (hardware switch).
> +
> +Prerequisites:
> +==============
> +
> +Install iperf on both host and guests.
> +
> +
> +Test Case1: DPDK vhost user + virtio-net one VM fwd tso
> +=======================================================
> +
> +HW preparation: Connect 2 ports directly. In our case, connect 81:00.0 (port1) and 81:00.1 (port2) back to back. Port1 is bound to igb_uio for the vhost sample to use, while port2 stays on the kernel driver.
> +
> +SW preparation: Change one line of the vhost sample and rebuild::
> +
> + #In function virtio_tx_route(xxx)
> + m->vlan_tci = vlan_tag;
> + #changed to
> + m->vlan_tci = 1000;
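> +
> + Rebuild the vhost sample after the change, for example (a sketch, assuming RTE_SDK points to the DPDK source tree and the x86_64-native-linuxapp-gcc target is already built)::
> +
> + export RTE_SDK=/path/to/dpdk
> + export RTE_TARGET=x86_64-native-linuxapp-gcc
> + cd $RTE_SDK/examples/vhost
> + make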
> +
> +1. Launch the vhost sample with the command below. --socket-mem is set for the vhost sample; make sure the socket where the PCI port resides has the memory. In our case the PCI BDF is 81:00.0, so we assign memory to socket 1. For the TSO/CSUM test we need to set "--mergeable 1 --tso 1 --csum 1"::
> +
> + taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 0 --tso 1 --csum 1
> +
> +2. Launch VM1::
> +
> + taskset -c 21-22 \
> + qemu-system-x86_64 -name us-vhost-vm1 \
> + -cpu host -enable-kvm -m 1024 -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/dpdk-vm1.img \
> + -chardev socket,id=char0,path=/home/qxu10/vhost-tso-test/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,gso=on,guest_csum=on,guest_tso4=on,guest_tso6=on,guest_ecn=on \
> + -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:01 -nographic
> +
> +3. On the host, configure port2; you will then see an interface called ens260f1.1000::
> +
> + ifconfig ens260f1
> + vconfig add ens260f1 1000
> + ifconfig ens260f1.1000 1.1.1.8
> +
> +4. In VM1, set the virtio IP and check connectivity::
> +
> + ifconfig ethX 1.1.1.2
> + ping 1.1.1.8 # make sure the virtio interface and port2 can ping each other, so the ARP table is set up automatically.
> +
> +5. On the host, run `iperf -s -i 1`; in the guest, run `iperf -c 1.1.1.8 -i 1 -t 60` (see the commands sketched below). Check whether 64K packets (size: 65160) appear. If 64K packets are seen, TSO is enabled; otherwise TSO is disabled.
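> +
> + The iperf commands as literal blocks, assuming the addresses configured above (host 1.1.1.8, guest 1.1.1.2); the guest is the sender so that its TX path exercises TSO::
> +
> + # on the host
> + iperf -s -i 1
> + # in the guest
> + iperf -c 1.1.1.8 -i 1 -t 60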
> +
> +6. In VM1, run `tcpdump -i ethX -n -e -vv` to check that the checksums are correct; you should not see any incorrect checksum output.
> +
> +Test Case2: DPDK vhost user + virtio-net VM2VM=1 fwd tso
> +========================================================
> +
> +1. Launch the vhost sample with the command below. --socket-mem is set for the vhost sample; make sure the socket where the PCI port resides has the memory. In our case the PCI BDF is 81:00.0, so we assign memory to socket 1. For the TSO/CSUM VM2VM test we need to set "--mergeable 1 --tso 1 --csum 1 --vm2vm 1"::
> +
> + taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 1 --tso 1 --csum 1
> +
> +2. Launch VM1 and VM2::
> +
> + taskset -c 21-22 \
> + qemu-system-x86_64 -name us-vhost-vm1 \
> + -cpu host -enable-kvm -m 1024 -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/dpdk-vm1.img \
> + -chardev socket,id=char0,path=/home/qxu10/vhost-tso-test/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,gso=on,guest_csum=on,guest_tso4=on,guest_tso6=on,guest_ecn=on \
> + -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:01 -nographic
> +
> + taskset -c 23-24 \
> + qemu-system-x86_64 -name us-vhost-vm2 \
> + -cpu host -enable-kvm -m 1024 -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> + -chardev socket,id=char1,path=/home/qxu10/vhost-tso-test/dpdk/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> + -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2 \
> + -netdev tap,id=ipvm1,ifname=tap4,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:02 -nographic
> +
> +3. In VM1, set the virtio IP and the static ARP entry::
> +
> + ifconfig ethX 1.1.1.2
> + arp -s 1.1.1.8 52:54:00:00:00:02
> + arp # check that the ARP table is complete and correct.
> +
> +4. In VM2, set the virtio IP and the static ARP entry::
> +
> + ifconfig ethX 1.1.1.8
> + arp -s 1.1.1.2 52:54:00:00:00:01
> + arp # check that the ARP table is complete and correct.
> +
> +5. Ensure that virtio1 can ping virtio2. Then in VM1 run `iperf -s -i 1`, and in VM2 run `iperf -c 1.1.1.2 -i 1 -t 60` (see the commands sketched below). Check whether 64K packets (size: 65160) appear. If 64K packets are seen, TSO is enabled; otherwise TSO is disabled.
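> +
> + The iperf commands as literal blocks, using the addresses configured above (VM1 at 1.1.1.2 as server, VM2 as client)::
> +
> + # in VM1
> + iperf -s -i 1
> + # in VM2
> + iperf -c 1.1.1.2 -i 1 -t 60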
> +
> +6. In VM1, run `tcpdump -i ethX -n -e -vv` and check that the checksums are correct.
> +
> +
> \ No newline at end of file
> diff --git a/test_plans/virtio_1.0_test_plan.rst
> b/test_plans/virtio_1.0_test_plan.rst
> index 8412eac..727991b 100644
> --- a/test_plans/virtio_1.0_test_plan.rst
> +++ b/test_plans/virtio_1.0_test_plan.rst
> @@ -1,261 +1,261 @@
> -.. Copyright (c) <2015>, Intel Corporation
> - All rights reserved.
> -
> - Redistribution and use in source and binary forms, with or without
> - modification, are permitted provided that the following conditions
> - are met:
> -
> - - Redistributions of source code must retain the above copyright
> - notice, this list of conditions and the following disclaimer.
> -
> - - Redistributions in binary form must reproduce the above copyright
> - notice, this list of conditions and the following disclaimer in
> - the documentation and/or other materials provided with the
> - distribution.
> -
> - - Neither the name of Intel Corporation nor the names of its
> - contributors may be used to endorse or promote products derived
> - from this software without specific prior written permission.
> -
> - THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> - "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> - LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> - FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> - COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> - INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> - (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
> - SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> - HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> - STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> - ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
> - OF THE POSSIBILITY OF SUCH DAMAGE.
> -
> -=================================
> -Virtio-1.0 Support Test Plan
> -=================================
> -
> -Virtio 1.0 is a new version of virtio. And the virtio 1.0 spec link is
> at http://docs.oasis-open.org/virtio/virtio/v1.0/virtio-v1.0.pdf. The
> major difference is at PCI layout. For testing virtio 1.0 pmd, we need
> test the basic RX/TX, different path(txqflags), mergeable on/off, and
> also test with virtio0.95 to ensure they can co-exist. Besides, we need
> test virtio 1.0's performance to ensure it has similar performance as
> virtio0.95.
> -
> -
> -Test Case1: test_func_vhost_user_virtio1.0-pmd with different txqflags
> -======================================================================
> -
> -Note: For virtio1.0 usage, we need use qemu version >2.4, such as 2.4.1
> or 2.5.0.
> -
> -1. Launch the Vhost sample by below commands, socket-mem is set for the
> vhost sample to use, need ensure that the PCI port located socket has the
> memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for
> socket1.::
> -
> - taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n
> 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-
> copy 0 --vm2vm 0
> -
> -2. Start VM with 1 virtio, note: we need add "disable-modern=false" to
> enable virtio 1.0.
> -
> - taskset -c 22-23 \
> - /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> - -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> - -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> - -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> - -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-
> modern=false \
> - -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> -
> -
> -3. In the VM, change the config file--common_linuxapp,
> "CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT=y"; Run dpdk testpmd in VM::
> -
> - ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> -
> - ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
> -
> - $ >set fwd mac
> -
> - $ >start tx_first
> -
> - We expect similar output as below, and see modern virtio pci
> detected.
> -
> - PMD: virtio_read_caps(): [98] skipping non VNDR cap id: 11
> - PMD: virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len:
> 0
> - PMD: virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len:
> 4194304
> - PMD: virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len:
> 4096
> - PMD: virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len:
> 4096
> - PMD: virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len:
> 4096
> - PMD: virtio_read_caps(): found modern virtio pci device.
> - PMD: virtio_read_caps(): common cfg mapped at: 0x7f2c61a83000
> - PMD: virtio_read_caps(): device cfg mapped at: 0x7f2c61a85000
> - PMD: virtio_read_caps(): isr cfg mapped at: 0x7f2c61a84000
> - PMD: virtio_read_caps(): notify base: 0x7f2c61a86000, notify off
> multiplier: 409
> 6
> - PMD: vtpci_init(): modern virtio pci detected.
> -
> -
> -4. Send traffic to virtio1(MAC1=52:54:00:00:00:01) with VLAN ID=1000.
> Check if virtio packet can be RX/TX and also check the TX packet size is
> same as the RX packet size.
> -
> -5. Also run the dpdk testpmd in VM with txqflags=0xf01 for the virtio
> pmd optimization usage::
> -
> - ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> -
> - ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags=0x0f01 --disable-hw-vlan
> -
> - $ >set fwd mac
> -
> - $ >start tx_first
> -
> -6. Send traffic to virtio1(MAC1=52:54:00:00:00:01) and VLAN ID=1000.
> Check if virtio packet can be RX/TX and also check the TX packet size is
> same as the RX packet size. Check the packet content is correct.
> -
> -Test Case2: test_func_vhost_user_virtio1.0-pmd for packet sequence check
> -========================================================================
> -
> -Note: For virtio1.0 usage, we need use qemu version >2.4, such as 2.4.1
> or 2.5.0.
> -
> -1. Launch the Vhost sample by below commands, socket-mem is set for the
> vhost sample to use, need ensure that the PCI port located socket has the
> memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for
> socket1.::
> -
> - taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n
> 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-
> copy 0 --vm2vm 0
> -
> -2. Start VM with 1 virtio, note: we need add "disable-modern=false" to
> enable virtio 1.0.
> -
> - taskset -c 22-23 \
> - /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> - -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> - -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> - -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> - -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-
> modern=false \
> - -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> -
> -
> -3. In the VM, change the config file--common_linuxapp,
> "CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT=y"; Run dpdk testpmd in VM::
> -
> - ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> -
> - ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
> -
> - $ >set fwd mac
> -
> - $ >start tx_first
> -
> - We expect similar output as below, and see modern virtio pci
> detected.
> -
> - PMD: virtio_read_caps(): [98] skipping non VNDR cap id: 11
> - PMD: virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len:
> 0
> - PMD: virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len:
> 4194304
> - PMD: virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len:
> 4096
> - PMD: virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len:
> 4096
> - PMD: virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len:
> 4096
> - PMD: virtio_read_caps(): found modern virtio pci device.
> - PMD: virtio_read_caps(): common cfg mapped at: 0x7f2c61a83000
> - PMD: virtio_read_caps(): device cfg mapped at: 0x7f2c61a85000
> - PMD: virtio_read_caps(): isr cfg mapped at: 0x7f2c61a84000
> - PMD: virtio_read_caps(): notify base: 0x7f2c61a86000, notify off
> multiplier: 409
> 6
> - PMD: vtpci_init(): modern virtio pci detected.
> -
> -
> -4. Send 100 packets at rate 25% at small packet(e.g: 70B) to the virtio
> with VLAN=1000, and insert the sequence number at byte offset 44 bytes.
> Make the sequence number starting from 00 00 00 00 and the step 1, first
> ensure no packet loss at IXIA, then check if the received packets have
> the same order as sending side.If out of order, then it's an issue.
> -
> -
> -Test Case3: test_func_vhost_user_virtio1.0-pmd with mergeable enabled
> -=====================================================================
> -
> -1. Launch the Vhost sample by below commands, socket-mem is set for the
> vhost sample to use, need ensure that the PCI port located socket has the
> memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for
> socket1.::
> -
> - taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n
> 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-
> copy 0 --vm2vm 0
> -
> -2. Start VM with 1 virtio, note: we need add "disable-modern=false" to
> enable virtio 1.0.
> -
> - taskset -c 22-23 \
> - /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> - -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> - -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> - -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> - -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-
> modern=false \
> - -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> -
> -
> -3. Run dpdk testpmd in VM::
> -
> - ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> -
> - ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan --max-pkt-len=9000
> -
> - $ >set fwd mac
> -
> - $ >start tx_first
> -
> -4. Send traffic to virtio1(MAC1=52:54:00:00:00:01) with VLAN ID=1000.
> Check if virtio packet can be RX/TX and also check the TX packet size is
> same as the RX packet size. Check packet size(64-1518) as well as the
> jumbo frame(3000,9000) can be RX/TX.
> -
> -
> -Test Case4: test_func_vhost_user_one-vm-virtio1.0-one-vm-virtio0.95
> -===================================================================
> -
> -1. Launch the Vhost sample by below commands, socket-mem is set for the
> vhost sample to use, need ensure that the PCI port located socket has the
> memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for
> socket1.::
> -
> - taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n
> 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-
> copy 0 --vm2vm 1
> -
> -2. Start VM1 with 1 virtio, note: we need add "disable-modern=false" to
> enable virtio 1.0.
> -
> - taskset -c 22-23 \
> - /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> - -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> - -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> - -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> - -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-
> modern=false \
> - -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> -
> -3. Start VM2 with 1 virtio, note:
> -
> - taskset -c 24-25 \
> - /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> - -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> - -smp cores=2,sockets=1 -drive file=/home/img/vm2.img \
> - -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet2,chardev=char0,vhostforce \
> - -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,disable-
> modern=true \
> - -netdev tap,id=ipvm2,ifname=tap4,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm2,id=net1,mac=00:00:00:00:10:02 -nographic
> -
> -3. Run dpdk testpmd in VM1 and VM2::
> -
> - VM1:
> -
> - ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> -
> - ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan --eth-
> peer=0,52:54:00:00:00:02
> -
> - $ >set fwd mac
> -
> - $ >start tx_first
> -
> - VM2:
> -
> - ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> -
> - ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
> -
> - $ >set fwd mac
> -
> - $ >start tx_first
> -
> -4. Send 100 packets at low rate to virtio1, and the expected flow is
> ixia-->NIC-->VHOST-->Virtio1-->Virtio2-->Vhost-->NIC->ixia port. Check
> the packet back at ixia port is content correct, no size change and
> payload change.
> -
> -Test Case5: test_perf_vhost_user_one-vm-virtio1.0-pmd
> -=====================================================
> -
> -Note: For virtio1.0 usage, we need use qemu version >2.4, such as 2.4.1
> or 2.5.0.
> -
> -1. Launch the Vhost sample by below commands, socket-mem is set for the
> vhost sample to use, need ensure that the PCI port located socket has the
> memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for
> socket1.::
> -
> - taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n
> 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-
> copy 0 --vm2vm 0
> -
> -2. Start VM with 1 virtio, note: we need add "disable-modern=false" to
> enable virtio 1.0.
> -
> - taskset -c 22-23 \
> - /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> - -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> - -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> - -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> - -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-
> modern=false \
> - -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> -
> -
> -3. In the VM, run dpdk testpmd in VM::
> -
> - ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> -
> - ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
> -
> - $ >set fwd mac
> -
> - $ >start tx_first
> -
> +.. Copyright (c) <2016>, Intel Corporation
> + All rights reserved.
> +
> + Redistribution and use in source and binary forms, with or without
> + modification, are permitted provided that the following conditions
> + are met:
> +
> + - Redistributions of source code must retain the above copyright
> + notice, this list of conditions and the following disclaimer.
> +
> + - Redistributions in binary form must reproduce the above copyright
> + notice, this list of conditions and the following disclaimer in
> + the documentation and/or other materials provided with the
> + distribution.
> +
> + - Neither the name of Intel Corporation nor the names of its
> + contributors may be used to endorse or promote products derived
> + from this software without specific prior written permission.
> +
> + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> + FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> + COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> + INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> + (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
> + SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> + STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> + ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
> + OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +=================================
> +Virtio-1.0 Support Test Plan
> +=================================
> +
> +Virtio 1.0 is a new version of virtio; the specification is at http://docs.oasis-open.org/virtio/virtio/v1.0/virtio-v1.0.pdf. The major difference is in the PCI layout. To test the virtio 1.0 PMD, we need to cover basic RX/TX, the different paths (txqflags), mergeable on/off, and co-existence with virtio 0.95. Besides, we need to test virtio 1.0's performance to ensure it is similar to that of virtio 0.95.
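> +
> + For reference, the key difference in the VM command lines between the two virtio versions used below is the disable-modern property of the virtio-net-pci device::
> +
> + # virtio 1.0 (modern) device
> + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false
> + # virtio 0.95 (legacy) device
> + -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,disable-modern=true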
> +
> +
> +Test Case1: test_func_vhost_user_virtio1.0-pmd with different txqflags
> +======================================================================
> +
> +Note: For virtio 1.0 usage, we need to use a qemu version > 2.4, such as 2.4.1 or 2.5.0.
> +
> +1. Launch the vhost sample with the command below. --socket-mem is set for the vhost sample; make sure the socket where the PCI port resides has the memory. In our case the PCI BDF is 81:00.0, so we assign memory to socket 1::
> +
> + taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0
> +
> +2. Start the VM with 1 virtio device. Note: we need to add "disable-modern=false" to enable virtio 1.0::
> +
> + taskset -c 22-23 \
> + /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
> + -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> + -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \
> + -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> +
> +
> +3. In the VM, set "CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT=y" in the config file common_linuxapp, rebuild DPDK, then run dpdk testpmd in the VM::
> +
> + ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> +
> + ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
> +
> + $ >set fwd mac
> +
> + $ >start tx_first
> +
> + We expect output similar to the below, showing that a modern virtio pci device was detected.
> +
> + PMD: virtio_read_caps(): [98] skipping non VNDR cap id: 11
> + PMD: virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len:
> 0
> + PMD: virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len:
> 4194304
> + PMD: virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len:
> 4096
> + PMD: virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len:
> 4096
> + PMD: virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len:
> 4096
> + PMD: virtio_read_caps(): found modern virtio pci device.
> + PMD: virtio_read_caps(): common cfg mapped at: 0x7f2c61a83000
> + PMD: virtio_read_caps(): device cfg mapped at: 0x7f2c61a85000
> + PMD: virtio_read_caps(): isr cfg mapped at: 0x7f2c61a84000
> + PMD: virtio_read_caps(): notify base: 0x7f2c61a86000, notify off multiplier: 4096
> + PMD: vtpci_init(): modern virtio pci detected.
> +
> +
> +4. Send traffic to virtio1 (MAC1=52:54:00:00:00:01) with VLAN ID=1000. Check that the virtio port can RX/TX packets and that the TX packet size is the same as the RX packet size.
> +
> +5. Also run dpdk testpmd in the VM with txqflags=0x0f01 to use the optimized virtio PMD path::
> +
> + ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> +
> + ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags=0x0f01 --disable-hw-vlan
> +
> + $ >set fwd mac
> +
> + $ >start tx_first
> +
> +6. Send traffic to virtio1 (MAC1=52:54:00:00:00:01) with VLAN ID=1000. Check that the virtio port can RX/TX packets, that the TX packet size is the same as the RX packet size, and that the packet content is correct.
> +
> +Test Case2: test_func_vhost_user_virtio1.0-pmd for packet sequence check
> +========================================================================
> +
> +Note: For virtio 1.0 usage, we need to use a qemu version > 2.4, such as 2.4.1 or 2.5.0.
> +
> +1. Launch the vhost sample with the command below. --socket-mem is set for the vhost sample; make sure the socket where the PCI port resides has the memory. In our case the PCI BDF is 81:00.0, so we assign memory to socket 1::
> +
> + taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0
> +
> +2. Start the VM with 1 virtio device. Note: we need to add "disable-modern=false" to enable virtio 1.0::
> +
> + taskset -c 22-23 \
> + /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
> + -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> + -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \
> + -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> +
> +
> +3. In the VM, set "CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT=y" in the config file common_linuxapp, rebuild DPDK, then run dpdk testpmd in the VM::
> +
> + ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> +
> + ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
> +
> + $ >set fwd mac
> +
> + $ >start tx_first
> +
> + We expect output similar to the below, showing that a modern virtio pci device was detected.
> +
> + PMD: virtio_read_caps(): [98] skipping non VNDR cap id: 11
> + PMD: virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len:
> 0
> + PMD: virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len:
> 4194304
> + PMD: virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len:
> 4096
> + PMD: virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len:
> 4096
> + PMD: virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len:
> 4096
> + PMD: virtio_read_caps(): found modern virtio pci device.
> + PMD: virtio_read_caps(): common cfg mapped at: 0x7f2c61a83000
> + PMD: virtio_read_caps(): device cfg mapped at: 0x7f2c61a85000
> + PMD: virtio_read_caps(): isr cfg mapped at: 0x7f2c61a84000
> + PMD: virtio_read_caps(): notify base: 0x7f2c61a86000, notify off multiplier: 4096
> + PMD: vtpci_init(): modern virtio pci detected.
> +
> +
> +4. Send 100 small packets (e.g. 70B) at 25% rate to the virtio port with VLAN=1000, inserting a sequence number at byte offset 44. Start the sequence number at 00 00 00 00 with a step of 1. First ensure there is no packet loss at IXIA, then check whether the received packets arrive in the same order as they were sent; out-of-order packets indicate an issue (a scapy alternative to IXIA is sketched below).
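> +
> + If IXIA is not available, sequence-numbered packets can also be generated with scapy (a sketch; the tester interface name ens785f0 is an assumption, and here the 4-byte sequence number is simply placed at the start of the payload rather than at the exact IXIA offset)::
> +
> + scapy
> + >>> import struct
> + >>> pkts = [Ether(dst="52:54:00:00:00:01")/Dot1Q(vlan=1000)/IP()/UDP()/Raw(struct.pack(">I", i) + b"X"*20) for i in range(100)]
> + >>> sendp(pkts, iface="ens785f0")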
> +
> +
> +Test Case3: test_func_vhost_user_virtio1.0-pmd with mergeable enabled
> +=====================================================================
> +
> +1. Launch the vhost sample with the command below. --socket-mem is set for the vhost sample; make sure the socket where the PCI port resides has the memory. In our case the PCI BDF is 81:00.0, so we assign memory to socket 1::
> +
> + taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 0
> +
> +2. Start the VM with 1 virtio device. Note: we need to add "disable-modern=false" to enable virtio 1.0::
> +
> + taskset -c 22-23 \
> + /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
> + -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> + -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \
> + -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> +
> +
> +3. Run dpdk testpmd in VM::
> +
> + ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> +
> + ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan --max-pkt-len=9000
> +
> + $ >set fwd mac
> +
> + $ >start tx_first
> +
> +4. Send traffic to virtio1 (MAC1=52:54:00:00:00:01) with VLAN ID=1000. Check that the virtio port can RX/TX packets and that the TX packet size is the same as the RX packet size. Check that regular packet sizes (64-1518) as well as jumbo frames (3000, 9000) can be RX/TX.
> +
> +
> +Test Case4: test_func_vhost_user_one-vm-virtio1.0-one-vm-virtio0.95
> +===================================================================
> +
> +1. Launch the vhost sample with the command below. --socket-mem is set for the vhost sample; make sure the socket where the PCI port resides has the memory. In our case the PCI BDF is 81:00.0, so we assign memory to socket 1::
> +
> + taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 1
> +
> +2. Start VM1 with 1 virtio device. Note: we need to add "disable-modern=false" to enable virtio 1.0::
> +
> + taskset -c 22-23 \
> + /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
> + -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> + -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \
> + -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> +
> +3. Start VM2 with 1 virtio device. Note: "disable-modern=true" keeps this device as virtio 0.95::
> +
> + taskset -c 24-25 \
> + /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm2 \
> + -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/vm2.img \
> + -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char0,vhostforce \
> + -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,disable-modern=true \
> + -netdev tap,id=ipvm2,ifname=tap4,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm2,id=net1,mac=00:00:00:00:10:02 -nographic
> +
> +4. Run dpdk testpmd in VM1 and VM2::
> +
> + VM1:
> +
> + ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> +
> + ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan --eth-
> peer=0,52:54:00:00:00:02
> +
> + $ >set fwd mac
> +
> + $ >start tx_first
> +
> + VM2:
> +
> + ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> +
> + ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
> +
> + $ >set fwd mac
> +
> + $ >start tx_first
> +
> +5. Send 100 packets at a low rate to virtio1. The expected flow is IXIA-->NIC-->vhost-->virtio1-->virtio2-->vhost-->NIC-->IXIA port. Check that the packets coming back to the IXIA port have the correct content, with no size or payload changes.
> +
> +Test Case5: test_perf_vhost_user_one-vm-virtio1.0-pmd
> +=====================================================
> +
> +Note: For virtio 1.0 usage, we need to use a qemu version > 2.4, such as 2.4.1 or 2.5.0.
> +
> +1. Launch the vhost sample with the command below. --socket-mem is set for the vhost sample; make sure the socket where the PCI port resides has the memory. In our case the PCI BDF is 81:00.0, so we assign memory to socket 1::
> +
> + taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0
> +
> +2. Start the VM with 1 virtio device. Note: we need to add "disable-modern=false" to enable virtio 1.0::
> +
> + taskset -c 22-23 \
> + /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \
> + -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> + -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \
> + -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> +
> +
> +3. In the VM, run dpdk testpmd::
> +
> + ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> +
> + ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
> +
> + $ >set fwd mac
> +
> + $ >start tx_first
> +
> 4. Send traffic at line rate to virtio1 (MAC1=52:54:00:00:00:01) with VLAN ID=1000. Check the performance at different packet sizes (68, 128, 256, 512, 1024, 1280, 1518) and record it as the performance data. The result should be similar to that of virtio 0.95.
> \ No newline at end of file
> --
> 2.1.0
^ permalink raw reply [flat|nested] 3+ messages in thread
* Re: [dts] [PATCH] add 3 test plans for dpdk16.04
2016-03-18 6:59 ` Liu, Yong
@ 2016-03-23 1:47 ` Xu, Qian Q
0 siblings, 0 replies; 3+ messages in thread
From: Xu, Qian Q @ 2016-03-23 1:47 UTC (permalink / raw)
To: Liu, Yong, dts
Ok, will send it out later.
Thanks
Qian
-----Original Message-----
From: Liu, Yong
Sent: Friday, March 18, 2016 2:59 PM
To: Xu, Qian Q; dts@dpdk.org
Cc: Xu, Qian Q
Subject: RE: [dts] [PATCH] add 3 test plans for dpdk16.04
Hi Qian,
Please separate this patch, since those three test plans have no relation to each other.
> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Qian Xu
> Sent: Wednesday, March 16, 2016 9:38 AM
> To: dts@dpdk.org
> Cc: Xu, Qian Q
> Subject: [dts] [PATCH] add 3 test plans for dpdk16.04
>
> The 3 test plans have been reviewed with developers.
>
> Signed-off-by: Qian Xu <qian.q.xu@intel.com>
>
> diff --git a/test_plans/veb_switch_test_plan.rst
> b/test_plans/veb_switch_test_plan.rst
> new file mode 100644
> index 0000000..4629c21
> --- /dev/null
> +++ b/test_plans/veb_switch_test_plan.rst
> @@ -0,0 +1,268 @@
> +.. Copyright (c) <2016>, Intel Corporation
> + All rights reserved.
> +
> + Redistribution and use in source and binary forms, with or without
> + modification, are permitted provided that the following conditions
> + are met:
> +
> + - Redistributions of source code must retain the above copyright
> + notice, this list of conditions and the following disclaimer.
> +
> + - Redistributions in binary form must reproduce the above copyright
> + notice, this list of conditions and the following disclaimer in
> + the documentation and/or other materials provided with the
> + distribution.
> +
> + - Neither the name of Intel Corporation nor the names of its
> + contributors may be used to endorse or promote products derived
> + from this software without specific prior written permission.
> +
> + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> + FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> + COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> + INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> + (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
> + SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> + STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> + ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
> + OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +=====================================
> +VEB Switch and floating VEB Test Plan
> +=====================================
> +
> +VEB Switching Introduction
> +==========================
> +
> +IEEE EVB tutorial: http://www.ieee802.org/802_tutorials/2009-11/evb-
> tutorial-draft-20091116_v09.pdf
> +
> +Virtual Ethernet Bridge (VEB) - This is an IEEE EVB term. A VEB is a
> VLAN Bridge internal to Fortville that bridges the traffic of multiple
> VSIs over an internal virtual network.
> +
> +Virtual Ethernet Port Aggregator (VEPA) - This is an IEEE EVB term. A
> VEPA multiplexes the traffic of one or more VSIs onto a single Fortville
> Ethernet port. The biggest difference between a VEB and a VEPA is that a
> VEB can switch packets internally between VSIs, whereas a VEPA cannot.
> +
> +Virtual Station Interface (VSI) - This is an IEEE EVB term that defines
> the properties of a virtual machine's (or a physical machine's)
> connection to the network. Each downstream v-port on a Fortville VEB or
> VEPA defines a VSI. A standards-based definition of VSI properties
> enables network management tools to perform virtual machine migration and
> associated network re-configuration in a vendor-neutral manner.
> +
> +My understanding of VEB is that it's an in-NIC switch(MAC/VLAN), and it
> can support VF->VF, PF->VF, VF->PF packet forwarding according to the NIC
> internal switch. It's similar as Niantic's SRIOV switch.
> +
> +Floating VEB Introduction
> +=========================
> +
> +Floating VEB is based on VEB Switching. It will address 2 problems:
> +
> +Dependency on PF: When the physical port is link down, the functionality
> of the VEB/VEPA will not work normally. Even only data forwarding between
> the VF is required, one PF port will be wasted to create the related VEB.
> +
> +Ensure all the traffic from VF can only forwarding within the VFs
> connect to the floating VEB, cannot forward out of the NIC port.
> +
> +Prerequisites for VEB testing
> +=============================
> +
> +1. Get the pci device id of DUT, for example::
> +
> + ./dpdk_nic_bind.py --st
> +
> + 0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0
> drv=i40e unused=
> +
> +2.1 Host PF in kernel driver. Create 2 VFs from 1 PF with kernel driver,
> and set the VF MAC address at PF0::
> +
> + echo 2 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
> + ./dpdk_nic_bind.py --st
> +
> + 0000:81:02.0 'XL710/X710 Virtual Function' unused=
> + 0000:81:02.1 'XL710/X710 Virtual Function' unused=
> +
> + ip link set ens259f0 vf 0 mac 00:11:22:33:44:11
> + ip link set ens259f0 vf 1 mac 00:11:22:33:44:12
> +
> +2.2 Host PF in DPDK driver. Create 2VFs from 1 PF with dpdk driver.
> +
> + ./dpdk_nic_bind.py -b igb_uio 81:00.0
> + echo 2 >/sys/bus/pci/devices/0000:81:00.0/max_vfs
> + ./dpdk_nic_bind.py --st
> +
> +3. Detach VFs from the host, bind them to pci-stub driver::
> +
> + modprobe pci-stub
> +
> + using `lspci -nn|grep -i ethernet` got VF device id, for example
> "8086 154c",
> +
> + echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
> + echo 0000:81:02.0 > /sys/bus/pci/devices/0000:08:02.0/driver/unbind
> + echo 0000:81:02.0 > /sys/bus/pci/drivers/pci-stub/bind
> +
> + echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
> + echo 0000:81:02.1 > /sys/bus/pci/devices/0000:08:02.1/driver/unbind
> + echo 0000:81:02.1 > /sys/bus/pci/drivers/pci-stub/bind
> +
> +4. Lauch the VM with the VF PCI passthrough.
> +
> + taskset -c 18-19 qemu-system-x86_64 \
> + -mem-path /mnt/huge -mem-prealloc \
> + -enable-kvm -m 2048 -smp cores=2,sockets=1 -cpu host -name dpdk1-
> vm1 \
> + -device pci-assign,host=81:02.0 \
> + -drive file=/home/img/vm1.img \
> + -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:11:01 \
> + -localtime -vnc :22 -daemonize
> +
> +
> +Test Case1: VEB Switching Inter-VM VF-VF MAC switch
> +===================================================
> +
> +Summary: Kernel PF, then create 2VFs and 2VMs, assign one VF to one VM,
> say VF1 in VM1, VF2 in VM2. VFs in VMs are running dpdk testpmd, send
> traffic to VF1, and set the packet's DEST MAC to VF2, check if VF2 can
> receive the packets. Check Inter-VM VF-VF MAC switch.
> +
> +Details::
> +
> +1. Start VM1 with VF1, VM2 with VF2, see the prerequisite part.
> +2. In VM1, run testpmd::
> +
> + ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,00:11:22:33:44:12
> + testpmd>set mac fwd
> + testpmd>set promisc off all
> + testpmd>start
> +
> + In VM2, run testpmd::
> +
> + ./testpmd -c 0x3 -n 4 -- -i
> + testpmd>set mac fwd
> + testpmd>set promisc off all
> + testpmd>start
> +
> +
> +3. Send 100 packets to VF1's MAC address, check if VF2 can get 100
> packets. Check the packet content is not corrupted.
> +
> +Test Case2: VEB Switching Inter-VM VF-VF MAC/VLAN switch
> +========================================================
> +
> +Summary: Kernel PF, then create 2VFs and 2VMs, assign VF1 with VLAN=1 in
> VM1, VF2 with VLAN=2 in VM2. VFs in VMs are running dpdk testpmd, send
> traffic to VF1 with VLAN=1, then let it forwards to VF2, it should not
> work since they are not in the same VLAN; set VF2 with VLAN=1, then send
> traffic to VF1 with VLAN=1, and VF2 can receive the packets. Check inter-
> VM VF-VF MAC/VLAN switch.
> +
> +Details:
> +
> +1. Start VM1 with VF1, VM2 with VF2, see the prerequisite part.
> +
> +2. Set the VLAN id of VF1 and VF2::
> +
> + ip link set ens259f0 vf 0 vlan 1
> + ip link set ens259f0 vf 1 vlan 2
> +
> +3. In VM1, run testpmd::
> +
> + ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,00:11:22:33:44:12
> + testpmd>set mac fwd
> + testpmd>set promisc all off
> + testpmd>start
> +
> + In VM2, run testpmd::
> +
> + ./testpmd -c 0x3 -n 4 -- -i
> + testpmd>set mac fwd
> + testpmd>set promisc all off
> + testpmd>start
> +
> +
> +4. Send 100 packets with VF1's MAC address and VLAN=1, check if VF2
> can't get 100 packets since they are not in the same VLAN.
> +
> +5. Change the VLAN id of VF2::
> +
> + ip link set ens259f0 vf 1 vlan 1
> +
> +6. Send 100 packets with VF1's MAC address and VLAN=1, check if VF2 can
> get 100 packets since they are in the same VLAN now. Check the packet
> content is not corrupted.
> +
> +Test Case3: VEB Switching Inter-VM PF-VF MAC switch
> +===================================================
> +
> +Summary: DPDK PF, then create 1VF, assign VF1 to VM1, PF in the host
> running dpdk traffic, send traffic from PF to VF1, ensure PF->VF1(let VF1
> in promisc mode); send traffic from VF1 to PF, ensure VF1->PF can work.
> +
> +Details:
> +
> +1. Start VM1 with VF1, see the prerequisite part.
> +
> +3. In host, launch testpmd::
> +
> + ./testpmd -c 0xc0000 -n 4 -- -i
> + testpmd>set mac fwd
> + testpmd>set promisc all on
> + testpmd>start
> +
> + In VM1, run testpmd::
> +
> + ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,pf_mac_addr (Note: this will
> let VF1 forwards packets to PF)
> + testpmd>set mac fwd
> + testpmd>set promisc all on
> + testpmd>start
> +
> +4. Send 100 packets with VF1's MAC address, check if PF can get 100
> packets, so VF1->PF is working. Check the packet content is not corrupted.
> +
> +5. Remove "--eth-peer" in VM1 testpmd commands, then send 100 packets
> with PF's MAC address, check if VF1 can get 100 packets, so PF->VF1 is
> working. Check the packet content is not corrupted.
> +
> +
> +Test Case4: VEB Switching Inter-VM PF-VF/VF-VF MAC switch Performance
> +=====================================================================
> +
> +Performance testing, repeat Testcase1(VF-VF) and Testcase3(PF-VF) to
> check the performance at different sizes(64B--1518B and jumbo frame--
> 3000B) with 100% rate sending traffic.
> +
> +Test Case5: Floating VEB Inter-VM VF-VF
> +=======================================
> +
> +Summary: DPDK PF, then create 2VFs and 2VMs, assign one VF to one VM,
> say VF1 in VM1, VF2 in VM2, and make PF link down(the cable can be pluged
> out). VFs in VMs are running dpdk testpmd, send traffic to VF1, and set
> the packet's DEST MAC to VF2, check if VF2 can receive the packets. Check
> Inter-VM VF-VF MAC switch when PF is link down as well as up.
> +
> +Details:
> +
> +1. Start VM1 with VF1, VM2 with VF2, see the prerequisite part.
> +2. In the host, run testpmd with floating parameters and make the link
> down::
> +
> + ./testpmc -c 0xc0000 -n 4 --floating -- -i
> + testpmd> port stop all
> + testpmd> show port info all
> +
> +3. In VM1, run testpmd::
> +
> + ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,00:11:22:33:44:12
> + testpmd>set mac fwd
> + testpmd>set promisc off all
> + testpmd>start
> +
> + In VM2, run testpmd::
> +
> + ./testpmd -c 0x3 -n 4 -- -i
> + testpmd>set mac fwd
> + testpmd>set promisc off all
> + testpmd>start
> +
> +
> +4. Send 100 packets to VF1's MAC address, check if VF2 can get 100
> packets. Check the packet content is not corrupted. Also check the PF's
> port stats, and there should be no packets RX/TX at PF port.
> +
> +5. In the host, run testpmd with floating parameters and keep the link
> up, then do step3 and step4, PF should have no RX/TX packets even when
> link is up::
> +
> + ./testpmc -c 0xc0000 -n 4 --floating -- -i
> + testpmd> port start all
> + testpmd> show port info all
> +
> +
> +Test Case6: Floating VEB Inter-VM VF traffic can't be out of NIC
> +================================================================
> +
> +DPDK PF, then create 1VF, assign VF1 to VM1, send traffic from VF1 to
> outside world, then check outside world will not see any traffic.
> +
> +Details:
> +
> +1. Start VM1 with VF1, see the prerequisite part.
> +2. In the host, run testpmd with floating parameters.
> +
> + ./testpmc -c 0xc0000 -n 4 --floating -- -i
> +
> +3. In VM1, run testpmd, ::
> +
> + ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,pf_mac_addr
> + testpmd>set fwd txonly
> + testpmd>start
> +
> +
> +4. At the PF side, check the port stats to see whether there are any RX/TX
> packets, and also check the traffic generator side (e.g. IXIA ports or another
> port connected to the DUT port) to ensure that no packets are seen, e.g. with
> the capture sketch below.
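> +
> +A minimal Scapy sketch for the external check in step 4, assuming "tester0" is
> +the traffic generator port (or another DUT port) wired to the Fortville port
> +under test (the interface name is a placeholder)::
> +
> +    from scapy.all import sniff
> +
> +    # Capture for 10 seconds while VF1 is transmitting in txonly mode.
> +    # With floating VEB the traffic must not leave the NIC, so the count should be 0.
> +    pkts = sniff(iface="tester0", timeout=10)
> +    print("packets seen on the wire:", len(pkts))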
> +
> +
> +Test Case7: Floating VEB VF-VF Performance
> +==========================================
> +
> +Test VF-VF performance at different packet sizes (64B to 1518B, plus 3000B
> jumbo frames) with traffic sent at 100% rate.
> \ No newline at end of file
> diff --git a/test_plans/vhost_tso_test_plan.rst
> b/test_plans/vhost_tso_test_plan.rst
> new file mode 100644
> index 0000000..f2b46e7
> --- /dev/null
> +++ b/test_plans/vhost_tso_test_plan.rst
> @@ -0,0 +1,130 @@
> +.. Copyright (c) <2015>, Intel Corporation
> + All rights reserved.
> +
> + Redistribution and use in source and binary forms, with or without
> + modification, are permitted provided that the following conditions
> + are met:
> +
> + - Redistributions of source code must retain the above copyright
> + notice, this list of conditions and the following disclaimer.
> +
> + - Redistributions in binary form must reproduce the above copyright
> + notice, this list of conditions and the following disclaimer in
> + the documentation and/or other materials provided with the
> + distribution.
> +
> + - Neither the name of Intel Corporation nor the names of its
> + contributors may be used to endorse or promote products derived
> + from this software without specific prior written permission.
> +
> + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> + FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> + COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> + INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> + (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
> + SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> + STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> + ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
> + OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +===================
> +Vhost TSO Test Plan
> +===================
> +
> +This feature enables DPDK vhost TX offload (checksum and TSO): the offload
> work is delegated to the NIC, which improves performance. The feature also adds
> the needed negotiation between DPDK user-space vhost and virtio-net, so we
> verify DPDK vhost-user + virtio-net for TSO/checksum with the TCP/IP stack
> enabled. DPDK vhost + virtio-pmd is not covered by this plan, since virtio-pmd
> has no TCP/IP stack and virtio TSO is not enabled there, so it will not be
> tested.
> +
> +In this test plan, we use the vhost switch sample for testing.
> +For the VM2VM case, we only test vm2vm=1 (software switch), and do not test
> vm2vm=2 (hardware switch).
> +
> +Prerequisites:
> +==============
> +
> +Install iperf on both host and guests.
> +
> +
> +Test Case1: DPDK vhost user + virtio-net one VM fwd tso
> +=======================================================
> +
> +HW preparation: connect two ports directly. In our case, 81:00.0 (port1) and
> 81:00.1 (port2) are connected back to back. Port1 is bound to igb_uio for the
> vhost sample to use, while port2 stays on the kernel driver.
> +
> +SW preparation: Change one line of the vhost sample and rebuild::
> +
> + #In function virtio_tx_route(xxx)
> + m->vlan_tci = vlan_tag;
> + #changed to
> + m->vlan_tci = 1000;
> +
> +1. Launch the vhost sample with the commands below. --socket-mem is set for
> the vhost sample to use; make sure the socket where the PCI port resides has
> memory. In our case the PCI BDF is 81:00.0, so we assign memory to socket1. For
> the TSO/CSUM test, set "--mergeable 1 --tso 1 --csum 1"::
> +
> + taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n
> 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-
> copy 0 --vm2vm 0 --tso 1 --csum 1
> +
> +2. Launch VM1::
> +
> + taskset -c 21-22 \
> + qemu-system-x86_64 -name us-vhost-vm1 \
> + -cpu host -enable-kvm -m 1024 -object memory-backend-
> file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/dpdk-vm1.img \
> + -chardev socket,id=char0,path=/home/qxu10/vhost-tso-
> test/dpdk/vhost-net -netdev type=vhost-
> user,id=mynet1,chardev=char0,vhostforce \
> + -device virtio-net-
> pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,gso=on,guest_csum=on,gues
> t_tso4=on,guest_tso6=on,guest_ecn=on \
> + -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:01 -nographic
> +
> +3. On the host, configure port2; you should then see an interface called
> ens260f1.1000::
> +
> + ifconfig ens260f1
> + vconfig add ens260f1 1000
> + ifconfig ens260f1.1000 1.1.1.8
> +
> +4. In VM1, set the virtio IP and check connectivity::
> +
> + ifconfig ethX 1.1.1.2
> + ping 1.1.1.8 # make sure virtio and port2 can ping each other; the ARP
> table is then set up automatically.
> +
> +5. In the host, run `iperf -s -i 1`; in the guest, run `iperf -c 1.1.1.8 -i 1
> -t 60` and check whether 64K packets (size: 65160) appear. If 64K packets are
> seen, TSO is enabled; otherwise TSO is disabled. An optional cross-check is
> sketched after this list.
> +
> +6. In VM1, run `tcpdump -i ethX -n -e -vv` to check that the checksums are
> correct. You should not see any incorrect checksum output.
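> +
> +As an optional cross-check for the 64K segments in step 5, the small Scapy
> +sketch below can be run inside VM1 while the iperf stream is active (ethX is a
> +placeholder for the virtio interface name)::
> +
> +    from scapy.all import sniff
> +
> +    # TSO is working if some captured TCP segments are close to 64KB (e.g. 65160 bytes).
> +    pkts = sniff(iface="ethX", filter="tcp", timeout=10)
> +    big = [p for p in pkts if len(p) > 60000]
> +    print("large TSO segments captured:", len(big))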
> +
> +Test Case2: DPDK vhost user + virtio-net VM2VM=1 fwd tso
> +========================================================
> +
> +1. Launch the vhost sample with the commands below. --socket-mem is set for
> the vhost sample to use; make sure the socket where the PCI port resides has
> memory. In our case the PCI BDF is 81:00.0, so we assign memory to socket1. For
> the TSO/CSUM test, set "--mergeable 1 --tso 1 --csum 1 --vm2vm 1"::
> +
> + taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n
> 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-
> copy 0 --vm2vm 1 --tso 1 --csum 1
> +
> +2. Launch VM1 and VM2::
> +
> + taskset -c 21-22 \
> + qemu-system-x86_64 -name us-vhost-vm1 \
> + -cpu host -enable-kvm -m 1024 -object memory-backend-
> file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/dpdk-vm1.img \
> + -chardev socket,id=char0,path=/home/qxu10/vhost-tso-
> test/dpdk/vhost-net -netdev type=vhost-
> user,id=mynet1,chardev=char0,vhostforce \
> + -device virtio-net-
> pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,gso=on,guest_csum=on,gues
> t_tso4=on,guest_tso6=on,guest_ecn=on \
> + -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:01 -nographic
> +
> + taskset -c 23-24 \
> + qemu-system-x86_64 -name us-vhost-vm1 \
> + -cpu host -enable-kvm -m 1024 -object memory-backend-
> file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> + -chardev socket,id=char1,path=/home/qxu10/vhost-tso-
> test/dpdk/vhost-net -netdev type=vhost-
> user,id=mynet2,chardev=char1,vhostforce \
> + -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2 \
> + -netdev tap,id=ipvm1,ifname=tap4,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:02 -nographic
> +
> +3. In VM1, set the virtio IP and add a static ARP entry::
> +
> + ifconfig ethX 1.1.1.2
> + arp -s 1.1.1.8 52:54:00:00:00:02
> + arp # to check the arp table is complete and correct.
> +
> +4. In VM2, set the virtio IP and add a static ARP entry::
> +
> + ifconfig ethX 1.1.1.8
> + arp -s 1.1.1.2 52:54:00:00:00:01
> + arp # to check the arp table is complete and correct.
> +
> +5. Ensure virtio1 can ping virtio2. Then in VM1, run `iperf -s -i 1`; in VM2,
> run `iperf -c 1.1.1.2 -i 1 -t 60` and check whether 64K packets (size: 65160)
> appear. If 64K packets are seen, TSO is enabled; otherwise TSO is disabled.
> +
> +6. In VM1, run `tcpdump -i ethX -n -e -vv` and check that the checksums are correct.
> +
> +
> \ No newline at end of file
> diff --git a/test_plans/virtio_1.0_test_plan.rst
> b/test_plans/virtio_1.0_test_plan.rst
> index 8412eac..727991b 100644
> --- a/test_plans/virtio_1.0_test_plan.rst
> +++ b/test_plans/virtio_1.0_test_plan.rst
> @@ -1,261 +1,261 @@
> -.. Copyright (c) <2015>, Intel Corporation
> - All rights reserved.
> -
> - Redistribution and use in source and binary forms, with or without
> - modification, are permitted provided that the following conditions
> - are met:
> -
> - - Redistributions of source code must retain the above copyright
> - notice, this list of conditions and the following disclaimer.
> -
> - - Redistributions in binary form must reproduce the above copyright
> - notice, this list of conditions and the following disclaimer in
> - the documentation and/or other materials provided with the
> - distribution.
> -
> - - Neither the name of Intel Corporation nor the names of its
> - contributors may be used to endorse or promote products derived
> - from this software without specific prior written permission.
> -
> - THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> - "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> - LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> - FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> - COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> - INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> - (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
> - SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> - HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> - STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> - ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
> - OF THE POSSIBILITY OF SUCH DAMAGE.
> -
> -=================================
> -Virtio-1.0 Support Test Plan
> -=================================
> -
> -Virtio 1.0 is a new version of virtio. And the virtio 1.0 spec link is
> at http://docs.oasis-open.org/virtio/virtio/v1.0/virtio-v1.0.pdf. The
> major difference is at PCI layout. For testing virtio 1.0 pmd, we need
> test the basic RX/TX, different path(txqflags), mergeable on/off, and
> also test with virtio0.95 to ensure they can co-exist. Besides, we need
> test virtio 1.0's performance to ensure it has similar performance as
> virtio0.95.
> -
> -
> -Test Case1: test_func_vhost_user_virtio1.0-pmd with different txqflags
> -======================================================================
> -
> -Note: For virtio1.0 usage, we need use qemu version >2.4, such as 2.4.1
> or 2.5.0.
> -
> -1. Launch the Vhost sample by below commands, socket-mem is set for the
> vhost sample to use, need ensure that the PCI port located socket has the
> memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for
> socket1.::
> -
> - taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n
> 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-
> copy 0 --vm2vm 0
> -
> -2. Start VM with 1 virtio, note: we need add "disable-modern=false" to
> enable virtio 1.0.
> -
> - taskset -c 22-23 \
> - /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> - -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> - -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> - -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> - -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-
> modern=false \
> - -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> -
> -
> -3. In the VM, change the config file--common_linuxapp,
> "CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT=y"; Run dpdk testpmd in VM::
> -
> - ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> -
> - ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
> -
> - $ >set fwd mac
> -
> - $ >start tx_first
> -
> - We expect similar output as below, and see modern virtio pci
> detected.
> -
> - PMD: virtio_read_caps(): [98] skipping non VNDR cap id: 11
> - PMD: virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len:
> 0
> - PMD: virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len:
> 4194304
> - PMD: virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len:
> 4096
> - PMD: virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len:
> 4096
> - PMD: virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len:
> 4096
> - PMD: virtio_read_caps(): found modern virtio pci device.
> - PMD: virtio_read_caps(): common cfg mapped at: 0x7f2c61a83000
> - PMD: virtio_read_caps(): device cfg mapped at: 0x7f2c61a85000
> - PMD: virtio_read_caps(): isr cfg mapped at: 0x7f2c61a84000
> - PMD: virtio_read_caps(): notify base: 0x7f2c61a86000, notify off
> multiplier: 409
> 6
> - PMD: vtpci_init(): modern virtio pci detected.
> -
> -
> -4. Send traffic to virtio1(MAC1=52:54:00:00:00:01) with VLAN ID=1000.
> Check if virtio packet can be RX/TX and also check the TX packet size is
> same as the RX packet size.
> -
> -5. Also run the dpdk testpmd in VM with txqflags=0xf01 for the virtio
> pmd optimization usage::
> -
> - ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> -
> - ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags=0x0f01 --disable-hw-vlan
> -
> - $ >set fwd mac
> -
> - $ >start tx_first
> -
> -6. Send traffic to virtio1(MAC1=52:54:00:00:00:01) and VLAN ID=1000.
> Check if virtio packet can be RX/TX and also check the TX packet size is
> same as the RX packet size. Check the packet content is correct.
> -
> -Test Case2: test_func_vhost_user_virtio1.0-pmd for packet sequence check
> -========================================================================
> -
> -Note: For virtio1.0 usage, we need use qemu version >2.4, such as 2.4.1
> or 2.5.0.
> -
> -1. Launch the Vhost sample by below commands, socket-mem is set for the
> vhost sample to use, need ensure that the PCI port located socket has the
> memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for
> socket1.::
> -
> - taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n
> 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-
> copy 0 --vm2vm 0
> -
> -2. Start VM with 1 virtio, note: we need add "disable-modern=false" to
> enable virtio 1.0.
> -
> - taskset -c 22-23 \
> - /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> - -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> - -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> - -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> - -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-
> modern=false \
> - -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> -
> -
> -3. In the VM, change the config file--common_linuxapp,
> "CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT=y"; Run dpdk testpmd in VM::
> -
> - ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> -
> - ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
> -
> - $ >set fwd mac
> -
> - $ >start tx_first
> -
> - We expect similar output as below, and see modern virtio pci
> detected.
> -
> - PMD: virtio_read_caps(): [98] skipping non VNDR cap id: 11
> - PMD: virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len:
> 0
> - PMD: virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len:
> 4194304
> - PMD: virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len:
> 4096
> - PMD: virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len:
> 4096
> - PMD: virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len:
> 4096
> - PMD: virtio_read_caps(): found modern virtio pci device.
> - PMD: virtio_read_caps(): common cfg mapped at: 0x7f2c61a83000
> - PMD: virtio_read_caps(): device cfg mapped at: 0x7f2c61a85000
> - PMD: virtio_read_caps(): isr cfg mapped at: 0x7f2c61a84000
> - PMD: virtio_read_caps(): notify base: 0x7f2c61a86000, notify off
> multiplier: 409
> 6
> - PMD: vtpci_init(): modern virtio pci detected.
> -
> -
> -4. Send 100 packets at rate 25% at small packet(e.g: 70B) to the virtio
> with VLAN=1000, and insert the sequence number at byte offset 44 bytes.
> Make the sequence number starting from 00 00 00 00 and the step 1, first
> ensure no packet loss at IXIA, then check if the received packets have
> the same order as sending side.If out of order, then it's an issue.
> -
> -
> -Test Case3: test_func_vhost_user_virtio1.0-pmd with mergeable enabled
> -=====================================================================
> -
> -1. Launch the Vhost sample by below commands, socket-mem is set for the
> vhost sample to use, need ensure that the PCI port located socket has the
> memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for
> socket1.::
> -
> - taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n
> 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-
> copy 0 --vm2vm 0
> -
> -2. Start VM with 1 virtio, note: we need add "disable-modern=false" to
> enable virtio 1.0.
> -
> - taskset -c 22-23 \
> - /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> - -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> - -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> - -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> - -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-
> modern=false \
> - -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> -
> -
> -3. Run dpdk testpmd in VM::
> -
> - ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> -
> - ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan --max-pkt-len=9000
> -
> - $ >set fwd mac
> -
> - $ >start tx_first
> -
> -4. Send traffic to virtio1(MAC1=52:54:00:00:00:01) with VLAN ID=1000.
> Check if virtio packet can be RX/TX and also check the TX packet size is
> same as the RX packet size. Check packet size(64-1518) as well as the
> jumbo frame(3000,9000) can be RX/TX.
> -
> -
> -Test Case4: test_func_vhost_user_one-vm-virtio1.0-one-vm-virtio0.95
> -===================================================================
> -
> -1. Launch the Vhost sample by below commands, socket-mem is set for the
> vhost sample to use, need ensure that the PCI port located socket has the
> memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for
> socket1.::
> -
> - taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n
> 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-
> copy 0 --vm2vm 1
> -
> -2. Start VM1 with 1 virtio, note: we need add "disable-modern=false" to
> enable virtio 1.0.
> -
> - taskset -c 22-23 \
> - /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> - -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> - -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> - -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> - -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-
> modern=false \
> - -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> -
> -3. Start VM2 with 1 virtio, note:
> -
> - taskset -c 24-25 \
> - /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> - -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> - -smp cores=2,sockets=1 -drive file=/home/img/vm2.img \
> - -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet2,chardev=char0,vhostforce \
> - -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,disable-
> modern=true \
> - -netdev tap,id=ipvm2,ifname=tap4,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm2,id=net1,mac=00:00:00:00:10:02 -nographic
> -
> -3. Run dpdk testpmd in VM1 and VM2::
> -
> - VM1:
> -
> - ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> -
> - ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan --eth-
> peer=0,52:54:00:00:00:02
> -
> - $ >set fwd mac
> -
> - $ >start tx_first
> -
> - VM2:
> -
> - ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> -
> - ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
> -
> - $ >set fwd mac
> -
> - $ >start tx_first
> -
> -4. Send 100 packets at low rate to virtio1, and the expected flow is
> ixia-->NIC-->VHOST-->Virtio1-->Virtio2-->Vhost-->NIC->ixia port. Check
> the packet back at ixia port is content correct, no size change and
> payload change.
> -
> -Test Case5: test_perf_vhost_user_one-vm-virtio1.0-pmd
> -=====================================================
> -
> -Note: For virtio1.0 usage, we need use qemu version >2.4, such as 2.4.1
> or 2.5.0.
> -
> -1. Launch the Vhost sample by below commands, socket-mem is set for the
> vhost sample to use, need ensure that the PCI port located socket has the
> memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for
> socket1.::
> -
> - taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n
> 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-
> copy 0 --vm2vm 0
> -
> -2. Start VM with 1 virtio, note: we need add "disable-modern=false" to
> enable virtio 1.0.
> -
> - taskset -c 22-23 \
> - /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> - -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> - -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> - -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> - -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-
> modern=false \
> - -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> -
> -
> -3. In the VM, run dpdk testpmd in VM::
> -
> - ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> -
> - ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
> -
> - $ >set fwd mac
> -
> - $ >start tx_first
> -
> +.. Copyright (c) <2016>, Intel Corporation
> + All rights reserved.
> +
> + Redistribution and use in source and binary forms, with or without
> + modification, are permitted provided that the following conditions
> + are met:
> +
> + - Redistributions of source code must retain the above copyright
> + notice, this list of conditions and the following disclaimer.
> +
> + - Redistributions in binary form must reproduce the above copyright
> + notice, this list of conditions and the following disclaimer in
> + the documentation and/or other materials provided with the
> + distribution.
> +
> + - Neither the name of Intel Corporation nor the names of its
> + contributors may be used to endorse or promote products derived
> + from this software without specific prior written permission.
> +
> + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> + FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> + COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> + INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> + (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
> + SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> + STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> + ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
> + OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +=================================
> +Virtio-1.0 Support Test Plan
> +=================================
> +
> +Virtio 1.0 is a new version of virtio; the spec is at
> http://docs.oasis-open.org/virtio/virtio/v1.0/virtio-v1.0.pdf. The major
> difference is the PCI layout. To test the virtio 1.0 PMD, we need to test basic
> RX/TX, the different paths (txqflags), mergeable on/off, and also test together
> with virtio 0.95 to ensure the two can co-exist. Besides, we need to test
> virtio 1.0's performance to ensure it is similar to virtio 0.95.
> +
> +
> +Test Case1: test_func_vhost_user_virtio1.0-pmd with different txqflags
> +======================================================================
> +
> +Note: virtio 1.0 requires a QEMU version newer than 2.4, such as 2.4.1 or
> 2.5.0.
> +
> +1. Launch the vhost sample with the commands below. --socket-mem is set for
> the vhost sample to use; make sure the socket where the PCI port resides has
> memory. In our case the PCI BDF is 81:00.0, so we assign memory to socket1::
> +
> + taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n
> 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-
> copy 0 --vm2vm 0
> +
> +2. Start the VM with 1 virtio device; note that "disable-modern=false" is
> needed to enable virtio 1.0::
> +
> + taskset -c 22-23 \
> + /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> + -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> + -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-
> modern=false \
> + -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> +
> +
> +3. In the VM, set "CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT=y" in the config file
> common_linuxapp, then run dpdk testpmd in the VM::
> +
> + ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> +
> + ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
> +
> + $ >set fwd mac
> +
> + $ >start tx_first
> +
> + We expect output similar to the below, showing that a modern virtio pci
> device is detected.
> +
> + PMD: virtio_read_caps(): [98] skipping non VNDR cap id: 11
> + PMD: virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len:
> 0
> + PMD: virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len:
> 4194304
> + PMD: virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len:
> 4096
> + PMD: virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len:
> 4096
> + PMD: virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len:
> 4096
> + PMD: virtio_read_caps(): found modern virtio pci device.
> + PMD: virtio_read_caps(): common cfg mapped at: 0x7f2c61a83000
> + PMD: virtio_read_caps(): device cfg mapped at: 0x7f2c61a85000
> + PMD: virtio_read_caps(): isr cfg mapped at: 0x7f2c61a84000
> + PMD: virtio_read_caps(): notify base: 0x7f2c61a86000, notify off
> multiplier: 4096
> + PMD: vtpci_init(): modern virtio pci detected.
> +
> +
> +4. Send traffic to virtio1 (MAC1=52:54:00:00:00:01) with VLAN ID=1000, for
> example with the Scapy sketch after this list. Check that virtio packets can be
> RX/TX and that the TX packet size is the same as the RX packet size.
> +
> +5. Also run dpdk testpmd in the VM with txqflags=0x0f01 to exercise the
> optimized virtio pmd path::
> +
> + ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> +
> + ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags=0x0f01 --disable-hw-vlan
> +
> + $ >set fwd mac
> +
> + $ >start tx_first
> +
> +6. Send traffic to virtio1 (MAC1=52:54:00:00:00:01) with VLAN ID=1000. Check
> that virtio packets can be RX/TX, that the TX packet size is the same as the
> RX packet size, and that the packet content is correct.
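> +
> +A minimal Scapy sketch for generating the VLAN-tagged traffic in steps 4 and 6,
> +assuming the tester port facing the DUT is named "tester0" (a placeholder)::
> +
> +    from scapy.all import Ether, Dot1Q, IP, UDP, Raw, sendp
> +
> +    # VLAN ID 1000 as required by the test step; the MAC is virtio1's address.
> +    pkt = (Ether(dst="52:54:00:00:00:01") / Dot1Q(vlan=1000) /
> +           IP() / UDP() / Raw(b"X" * 64))
> +    sendp(pkt, iface="tester0", count=100)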
> +
> +Test Case2: test_func_vhost_user_virtio1.0-pmd for packet sequence check
> +========================================================================
> +
> +Note: virtio 1.0 requires a QEMU version newer than 2.4, such as 2.4.1 or
> 2.5.0.
> +
> +1. Launch the vhost sample with the commands below. --socket-mem is set for
> the vhost sample to use; make sure the socket where the PCI port resides has
> memory. In our case the PCI BDF is 81:00.0, so we assign memory to socket1::
> +
> + taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n
> 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-
> copy 0 --vm2vm 0
> +
> +2. Start the VM with 1 virtio device; note that "disable-modern=false" is
> needed to enable virtio 1.0::
> +
> + taskset -c 22-23 \
> + /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> + -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> + -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-
> modern=false \
> + -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> +
> +
> +3. In the VM, set "CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT=y" in the config file
> common_linuxapp, then run dpdk testpmd in the VM::
> +
> + ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> +
> + ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
> +
> + $ >set fwd mac
> +
> + $ >start tx_first
> +
> + We expect output similar to the below, showing that a modern virtio pci
> device is detected.
> +
> + PMD: virtio_read_caps(): [98] skipping non VNDR cap id: 11
> + PMD: virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len:
> 0
> + PMD: virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len:
> 4194304
> + PMD: virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len:
> 4096
> + PMD: virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len:
> 4096
> + PMD: virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len:
> 4096
> + PMD: virtio_read_caps(): found modern virtio pci device.
> + PMD: virtio_read_caps(): common cfg mapped at: 0x7f2c61a83000
> + PMD: virtio_read_caps(): device cfg mapped at: 0x7f2c61a85000
> + PMD: virtio_read_caps(): isr cfg mapped at: 0x7f2c61a84000
> + PMD: virtio_read_caps(): notify base: 0x7f2c61a86000, notify off
> multiplier: 4096
> + PMD: vtpci_init(): modern virtio pci detected.
> +
> +
> +4. Send 100 small packets (e.g. 70B) at 25% rate to the virtio device with
> VLAN=1000, and insert a sequence number at byte offset 44. Make the sequence
> number start from 00 00 00 00 with step 1. First ensure there is no packet loss
> at IXIA, then check whether the received packets have the same order as on the
> sending side; if they are out of order, it is an issue. A Scapy sketch for
> building such packets is shown below.
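> +
> +A Scapy sketch for building the sequence-numbered packets, assuming the tester
> +port is "tester0" (a placeholder) and that a 4-byte big-endian counter is
> +written at byte offset 44 of each roughly 70B frame::
> +
> +    import struct
> +    from scapy.all import Ether, Dot1Q, sendp
> +
> +    pkts = []
> +    for seq in range(100):
> +        base = bytes(Ether(dst="52:54:00:00:00:01") / Dot1Q(vlan=1000)) + b"\x00" * 52
> +        frame = base[:44] + struct.pack(">I", seq) + base[48:]   # sequence at offset 44
> +        pkts.append(Ether(frame))
> +    sendp(pkts, iface="tester0", inter=0.01)   # low rate; check ordering on the RX side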
> +
> +
> +Test Case3: test_func_vhost_user_virtio1.0-pmd with mergeable enabled
> +=====================================================================
> +
> +1. Launch the vhost sample with the commands below. --socket-mem is set for
> the vhost sample to use; make sure the socket where the PCI port resides has
> memory. In our case the PCI BDF is 81:00.0, so we assign memory to socket1::
> +
> + taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n
> 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-
> copy 0 --vm2vm 0
> +
> +2. Start the VM with 1 virtio device; note that "disable-modern=false" is
> needed to enable virtio 1.0::
> +
> + taskset -c 22-23 \
> + /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> + -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> + -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-
> modern=false \
> + -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> +
> +
> +3. Run dpdk testpmd in VM::
> +
> + ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> +
> + ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan --max-pkt-len=9000
> +
> + $ >set fwd mac
> +
> + $ >start tx_first
> +
> +4. Send traffic to virtio1 (MAC1=52:54:00:00:00:01) with VLAN ID=1000. Check
> that virtio packets can be RX/TX and that the TX packet size is the same as the
> RX packet size. Check that packet sizes 64-1518 as well as jumbo frames (3000,
> 9000) can be RX/TX; a sketch that sweeps the sizes is shown below.
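> +
> +A Scapy sketch for sweeping the packet sizes in step 4, assuming the tester
> +port is "tester0" (a placeholder) and that the requested frame size includes
> +the 4-byte Ethernet FCS added by the NIC::
> +
> +    from scapy.all import Ether, Dot1Q, IP, UDP, Raw, sendp
> +
> +    for size in (64, 128, 256, 512, 1024, 1518, 3000, 9000):
> +        hdr = Ether(dst="52:54:00:00:00:01") / Dot1Q(vlan=1000) / IP() / UDP()
> +        pad = max(0, size - len(hdr) - 4)        # reserve 4 bytes for the FCS
> +        sendp(hdr / Raw(b"\x00" * pad), iface="tester0", count=10)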
> +
> +
> +Test Case4: test_func_vhost_user_one-vm-virtio1.0-one-vm-virtio0.95
> +===================================================================
> +
> +1. Launch the vhost sample with the commands below. --socket-mem is set for
> the vhost sample to use; make sure the socket where the PCI port resides has
> memory. In our case the PCI BDF is 81:00.0, so we assign memory to socket1::
> +
> + taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n
> 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-
> copy 0 --vm2vm 1
> +
> +2. Start VM1 with 1 virtio device; note that "disable-modern=false" is needed
> to enable virtio 1.0::
> +
> + taskset -c 22-23 \
> + /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> + -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> + -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-
> modern=false \
> + -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> +
> +3. Start VM2 with 1 virtio 0.95 device (disable-modern=true)::
> +
> + taskset -c 24-25 \
> + /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> + -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/vm2.img \
> + -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet2,chardev=char0,vhostforce \
> + -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,disable-
> modern=true \
> + -netdev tap,id=ipvm2,ifname=tap4,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm2,id=net1,mac=00:00:00:00:10:02 -nographic
> +
> +4. Run dpdk testpmd in VM1 and VM2::
> +
> + VM1:
> +
> + ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> +
> + ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan --eth-
> peer=0,52:54:00:00:00:02
> +
> + $ >set fwd mac
> +
> + $ >start tx_first
> +
> + VM2:
> +
> + ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> +
> + ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
> +
> + $ >set fwd mac
> +
> + $ >start tx_first
> +
> +5. Send 100 packets at a low rate to virtio1; the expected flow is
> ixia-->NIC-->vhost-->virtio1-->virtio2-->vhost-->NIC-->ixia port. Check that
> the packets coming back to the IXIA port have the correct content, with no size
> or payload change.
> +
> +Test Case5: test_perf_vhost_user_one-vm-virtio1.0-pmd
> +=====================================================
> +
> +Note: virtio 1.0 requires a QEMU version newer than 2.4, such as 2.4.1 or
> 2.5.0.
> +
> +1. Launch the vhost sample with the commands below. --socket-mem is set for
> the vhost sample to use; make sure the socket where the PCI port resides has
> memory. In our case the PCI BDF is 81:00.0, so we assign memory to socket1::
> +
> + taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n
> 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-
> copy 0 --vm2vm 0
> +
> +2. Start the VM with 1 virtio device; note that "disable-modern=false" is
> needed to enable virtio 1.0::
> +
> + taskset -c 22-23 \
> + /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -
> name us-vhost-vm1 \
> + -cpu host -enable-kvm -m 2048 -object memory-backend-
> file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem
> -mem-prealloc \
> + -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \
> + -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-
> modern=false \
> + -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device
> rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
> +
> +
> +3. In the VM, run dpdk testpmd::
> +
> + ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> +
> + ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c
> 0x3 -n 4 -- -i --txqflags 0x0f00 --disable-hw-vlan
> +
> + $ >set fwd mac
> +
> + $ >start tx_first
> +
> 4. Send traffic at line rate to virtio1 (MAC1=52:54:00:00:00:01) with VLAN
> ID=1000. Check the performance at different packet sizes (68, 128, 256, 512,
> 1024, 1280, 1518) and record it as the performance data. The result should be
> similar to virtio 0.95.
> \ No newline at end of file
> --
> 2.1.0
end of thread, other threads:[~2016-03-23 1:47 UTC | newest]
Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-03-16 1:38 [dts] [PATCH] add 3 test plans for dpdk16.04 Qian Xu
2016-03-18 6:59 ` Liu, Yong
2016-03-23 1:47 ` Xu, Qian Q