From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Liu, Yong"
To: "Peng, Yuan" , "dts@dpdk.org"
CC: "Peng, Yuan"
Thread-Topic: [dts] [PATCH] test_plans: modify the format of veb switch test plan
Date: Wed, 4 Jan 2017 02:24:19 +0000
Message-ID: <86228AFD5BCD8E4EBFD2B90117B5E81E62D38928@SHSMSX103.ccr.corp.intel.com>
References: <1483062335-22388-1-git-send-email-yuan.peng@intel.com>
In-Reply-To: <1483062335-22388-1-git-send-email-yuan.peng@intel.com>
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Subject: Re: [dts] [PATCH] test_plans: modify the format of veb switch test plan
List-Id: test suite reviews and discussions

Thanks, applied.

-----Original Message-----
From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Peng Yuan
Sent: Friday, December 30, 2016 9:46 AM
To: dts@dpdk.org
Cc: Peng, Yuan
Subject: [dts] [PATCH] test_plans: modify the format of veb switch test plan

From: peng yuan

Signed-off-by: peng yuan

diff --git a/test_plans/veb_switch_test_plan.rst b/test_plans/veb_switch_test_plan.rst
index 7cb3790..ca23df6 100644
--- a/test_plans/veb_switch_test_plan.rst
+++ b/test_plans/veb_switch_test_plan.rst
@@ -1,5 +1,5 @@
 .. Copyright (c) <2016>, Intel Corporation
-   All rights reserved.
+   All rights reserved.
 
    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
@@ -37,232 +37,203 @@ VEB Switch and floating VEB Test Plan
 VEB Switching Introduction
 ==========================
 
-IEEE EVB tutorial: http://www.ieee802.org/802_tutorials/2009-11/evb-tutorial-draft-20091116_v09.pdf
+IEEE EVB tutorial:
+http://www.ieee802.org/802_tutorials/2009-11
+/evb-tutorial-draft-20091116_v09.pdf
 
-Virtual Ethernet Bridge (VEB) - This is an IEEE EVB term. A VEB is a VLAN Bridge internal to Fortville that bridges the traffic of multiple VSIs over an internal virtual network.
+Virtual Ethernet Bridge (VEB) - This is an IEEE EVB term. A VEB is a
+VLAN Bridge internal to Fortville that bridges the traffic of multiple
+VSIs over an internal virtual network.
 
-Virtual Ethernet Port Aggregator (VEPA) - This is an IEEE EVB term. A VEPA multiplexes the traffic of one or more VSIs onto a single Fortville Ethernet port. The biggest difference between a VEB and a VEPA is that a VEB can switch packets internally between VSIs, whereas a VEPA cannot.
+Virtual Ethernet Port Aggregator (VEPA) - This is an IEEE EVB term. A
+VEPA multiplexes the traffic of one or more VSIs onto a single
+Fortville Ethernet port. The biggest difference between a VEB and a
+VEPA is that a VEB can switch packets internally between VSIs, whereas a VEPA cannot.
 
-Virtual Station Interface (VSI) - This is an IEEE EVB term that defines the properties of a virtual machine's (or a physical machine's) connection to the network. Each downstream v-port on a Fortville VEB or VEPA defines a VSI. A standards-based definition of VSI properties enables network management tools to perform virtual machine migration and associated network re-configuration in a vendor-neutral manner.
+Virtual Station Interface (VSI) - This is an IEEE EVB term that defines
+the properties of a virtual machine's (or a physical machine's)
+connection to the network. Each downstream v-port on a Fortville VEB or
+VEPA defines a VSI. A standards-based definition of VSI properties
+enables network management tools to perform virtual machine migration
+and associated network re-configuration in a vendor-neutral manner.
 
-My understanding of VEB is that it's an in-NIC switch (MAC/VLAN), and it can support VF->VF, PF->VF, VF->PF packet forwarding according to the NIC internal switch. It's similar to Niantic's SR-IOV switch.
-
-Floating VEB Introduction
-=========================
-
-Floating VEB is based on VEB Switching. It will address 2 problems:
-
-Dependency on PF: When the physical port is link down, the functionality of the VEB/VEPA will not work normally. Even if only data forwarding between the VFs is required, one PF port will be wasted to create the related VEB.
-
-Ensure all the traffic from the VFs can only be forwarded within the VFs connected to the floating VEB, and cannot be forwarded out of the NIC port.
+My understanding of VEB is that it's an in-NIC switch (MAC/VLAN), and it
+can support VF->VF, PF->VF, VF->PF packet forwarding according to the
+NIC internal switch. It's similar to Niantic's SR-IOV switch.
 
 Prerequisites for VEB testing
 =============================
 
 1. Get the pci device id of DUT, for example::
 
-       ./dpdk_nic_bind.py --st
-
-       0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
+       ./dpdk-devbind.py --st
+       0000:05:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens785f0 drv=i40e
+       unused=
 
-2.1 Host PF in kernel driver. Create 2 VFs from 1 PF with kernel driver, and set the VF MAC address at PF0::
+2.1 Host PF in kernel driver. Create 2 VFs from 1 PF with kernel driver,
+    and set the VF MAC address at PF::
 
-       echo 2 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
-       ./dpdk_nic_bind.py --st
+       echo 2 > /sys/bus/pci/devices/0000\:05\:00.0/sriov_numvfs
+       ./dpdk-devbind.py --st
 
-       0000:81:02.0 'XL710/X710 Virtual Function' unused=
-       0000:81:02.1 'XL710/X710 Virtual Function' unused=
+       0000:05:02.0 'XL710/X710 Virtual Function' unused=
+       0000:05:02.1 'XL710/X710 Virtual Function' unused=
 
-       ip link set ens259f0 vf 0 mac 00:11:22:33:44:11
-       ip link set ens259f0 vf 1 mac 00:11:22:33:44:12
+       ip link set ens785f0 vf 0 mac 00:11:22:33:44:11
+       ip link set ens785f0 vf 1 mac 00:11:22:33:44:12
 
-2.2 Host PF in DPDK driver. Create 2 VFs from 1 PF with dpdk driver.
+2.2 Host PF in DPDK driver. Create 2 VFs from 1 PF with dpdk driver::
 
-       ./dpdk_nic_bind.py -b igb_uio 81:00.0
-       echo 2 >/sys/bus/pci/devices/0000:81:00.0/max_vfs
-       ./dpdk_nic_bind.py --st
-
-3. Detach VFs from the host, bind them to pci-stub driver::
+       ./dpdk-devbind.py -b igb_uio 05:00.0
+       echo 2 >/sys/bus/pci/devices/0000:05:00.0/max_vfs
+       ./dpdk-devbind.py --st
+       0000:05:02.0 'XL710/X710 Virtual Function' unused=i40evf,igb_uio
+       0000:05:02.1 'XL710/X710 Virtual Function' unused=i40evf,igb_uio
 
-       modprobe pci-stub
+3. Bind the VFs to dpdk driver::
 
-   using `lspci -nn|grep -i ethernet` got VF device id, for example "8086 154c",
+       ./tools/dpdk-devbind.py -b igb_uio 05:02.0 05:02.1
 
-       echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
-       echo 0000:81:02.0 > /sys/bus/pci/devices/0000:08:02.0/driver/unbind
-       echo 0000:81:02.0 > /sys/bus/pci/drivers/pci-stub/bind
+4. Reserve huge pages memory (before using DPDK)::
 
-       echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
-       echo 0000:81:02.1 > /sys/bus/pci/devices/0000:08:02.1/driver/unbind
-       echo 0000:81:02.1 > /sys/bus/pci/drivers/pci-stub/bind
+       echo 4096 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
+       mkdir /mnt/huge
+       mount -t hugetlbfs nodev /mnt/huge
 
-4. Launch the VM with the VF PCI passthrough.
-
-       taskset -c 18-19 qemu-system-x86_64 \
-       -mem-path /mnt/huge -mem-prealloc \
-       -enable-kvm -m 2048 -smp cores=2,sockets=1 -cpu host -name dpdk1-vm1 \
-       -device pci-assign,host=81:02.0 \
-       -drive file=/home/img/vm1.img \
-       -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:11:01 \
-       -localtime -vnc :22 -daemonize
-
 
-Test Case1: VEB Switching Inter-VM VF-VF MAC switch
+Test Case1: VEB Switching Inter VF-VF MAC switch
 ===================================================
 
-Summary: Kernel PF, then create 2 VFs and 2 VMs, assign one VF to one VM, say VF1 in VM1, VF2 in VM2. VFs in VMs are running dpdk testpmd, send traffic to VF1, and set the packet's DEST MAC to VF2, check if VF2 can receive the packets. Check Inter-VM VF-VF MAC switch.
-
+Summary: Kernel PF, then create 2 VFs. VFs running dpdk testpmd, send
+traffic to VF1, and set the packet's DEST MAC to VF2, check if VF2 can
+receive the packets. Check Inter VF-VF MAC switch.
+
 Details::
 
-1. Start VM1 with VF1, VM2 with VF2, see the prerequisite part.
-2. In VM1, run testpmd::
+1. In VF1, run testpmd::
 
-       ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,00:11:22:33:44:12
-       testpmd>set mac fwd
-       testpmd>set promisc off all
+       ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 --socket-mem 1024,1024
+       -w 05:02.0 --file-prefix=test1 -- -i --crc-strip --eth-peer=0,00:11:22:33:44:12
+       testpmd>set fwd mac
+       testpmd>set promisc all off
       testpmd>start
 
-   In VM2, run testpmd::
+   In VF2, run testpmd::
 
-       ./testpmd -c 0x3 -n 4 -- -i
-       testpmd>set mac fwd
-       testpmd>set promisc off all
+       ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xa -n 4 --socket-mem 1024,1024
+       -w 05:02.1 --file-prefix=test2 -- -i --crc-strip
+       testpmd>set fwd mac
+       testpmd>set promisc all off
       testpmd>start
 
 
-3. Send 100 packets to VF1's MAC address, check if VF2 can get 100 packets. Check the packet content is not corrupted.
+2. Send 100 packets to VF1's MAC address, check if VF2 can get 100 packets.
+   Check the packet content is not corrupted.
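+
+   For example, the traffic can be generated from the tester with scapy
+   (a sketch only; it assumes the tester port facing the PF is ens785f1,
+   the interface used in the Test Case2 example, so adjust it to the
+   actual setup)::
+
+       from scapy.all import Ether, IP, Raw, sendp
+
+       # 100 packets whose DEST MAC is VF1's address (set in the
+       # prerequisites); VF1's testpmd forwards them to VF2's MAC,
+       # and the VEB should switch them to VF2 internally.
+       pkt = Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*40)
+       sendp(pkt, count=100, iface="ens785f1")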
 
-Test Case2: VEB Switching Inter-VM VF-VF MAC/VLAN switch
+Test Case2: VEB Switching Inter VF-VF MAC/VLAN switch
 ========================================================
 
-Summary: Kernel PF, then create 2 VFs and 2 VMs, assign VF1 with VLAN=1 in VM1, VF2 with VLAN=2 in VM2. VFs in VMs are running dpdk testpmd, send traffic to VF1 with VLAN=1, then let it forward to VF2, it should not work since they are not in the same VLAN; set VF2 with VLAN=1, then send traffic to VF1 with VLAN=1, and VF2 can receive the packets. Check inter-VM VF-VF MAC/VLAN switch.
+Summary: Kernel PF, then create 2 VFs, assign VF1 with VLAN=1, VF2
+with VLAN=2. VFs are running dpdk testpmd, send traffic to VF1 with
+VLAN=1, then let it forward to VF2, it should not work since they are
+not in the same VLAN; set VF2 with VLAN=1, then send traffic to VF1
+with VLAN=1, and VF2 can receive the packets. Check inter VF-VF MAC/VLAN switch.
 
 Details:
+1. Set the VLAN id of VF1 and VF2::
 
-1. Start VM1 with VF1, VM2 with VF2, see the prerequisite part.
+       ip link set ens785f0 vf 0 vlan 1
+       ip link set ens785f0 vf 1 vlan 2
 
-2. Set the VLAN id of VF1 and VF2::
+2. In VF1, run testpmd::
 
-       ip link set ens259f0 vf 0 vlan 1
-       ip link set ens259f0 vf 1 vlan 2
-
-3. In VM1, run testpmd::
-
-       ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,00:11:22:33:44:12
-       testpmd>set mac fwd
+       ./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -w 0000:05:02.0
+       --file-prefix=test1 -- -i --crc-strip --eth-peer=0,00:11:22:33:44:12
+       testpmd>set fwd mac
       testpmd>set promisc all off
       testpmd>start
 
-   In VM2, run testpmd::
+   In VF2, run testpmd::
 
-       ./testpmd -c 0x3 -n 4 -- -i
-       testpmd>set mac fwd
+       ./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 0000:05:02.1
+       --file-prefix=test2 -- -i --crc-strip
+       testpmd>set fwd rxonly
       testpmd>set promisc all off
       testpmd>start
 
 
-4. Send 100 packets with VF1's MAC address and VLAN=1, check if VF2 can't get 100 packets since they are not in the same VLAN.
+3. Send 100 packets with VF1's MAC address and VLAN=1, check that VF2
+   can't get the 100 packets since they are not in the same VLAN.
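+
+   A sketch of the step 3 traffic with scapy (same frame as the step 5
+   example below; the tester port name ens785f1 is from that example)::
+
+       from scapy.all import Ether, Dot1Q, IP, Raw, sendp
+
+       # VLAN=1 frames to VF1's MAC: VF1 (vlan 1) receives and forwards
+       # them, but VF2 is still on vlan 2 and must not receive them.
+       pkt = Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP()/Raw('x'*40)
+       sendp(pkt, count=100, iface="ens785f1")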
 
 4. Change the VLAN id of VF2::
 
-       ip link set ens259f0 vf 1 vlan 1
+       ip link set ens785f0 vf 1 vlan 1
 
-6. Send 100 packets with VF1's MAC address and VLAN=1, check if VF2 can get 100 packets since they are in the same VLAN now. Check the packet content is not corrupted.
+5. Send 100 packets with VF1's MAC address and VLAN=1, check if VF2 can get
+   100 packets since they are in the same VLAN now. Check the packet
+   content is not corrupted::
+
+       sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP()/Raw('x'*40)],
+       iface="ens785f1")
+
+
-Test Case3: VEB Switching Inter-VM PF-VF MAC switch
+Test Case3: VEB Switching Inter PF-VF MAC switch
 ====================================================
 
-Summary: DPDK PF, then create 1 VF, assign VF1 to VM1, PF in the host running dpdk traffic, send traffic from PF to VF1, ensure PF->VF1 (let VF1 in promisc mode); send traffic from VF1 to PF, ensure VF1->PF can work.
+Summary: DPDK PF, then create 1 VF, PF in the host running dpdk testpmd,
+send traffic from PF to VF1, ensure PF->VF1 (let VF1 in promisc mode);
+send traffic from VF1 to PF, ensure VF1->PF can work.
 
 Details:
 
-1. Start VM1 with VF1, see the prerequisite part.
+1. vf->pf
+   In host, launch testpmd::
 
-3. In host, launch testpmd::
-
-       ./testpmd -c 0xc0000 -n 4 -- -i
-       testpmd>set mac fwd
-       testpmd>set promisc all on
+       ./testpmd -c 0x3 -n 4 -- -i
+       testpmd>set fwd rxonly
+       testpmd>set promisc all off
       testpmd>start
 
    In VM1, run testpmd::
 
-       ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,pf_mac_addr (Note: this will let VF1 forward packets to PF)
-       testpmd>set mac fwd
-       testpmd>set promisc all on
+       ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,pf_mac_addr
+       testpmd>set fwd txonly
+       testpmd>set promisc all off
       testpmd>start
-
-4. Send 100 packets with VF1's MAC address, check if PF can get 100 packets, so VF1->PF is working. Check the packet content is not corrupted.
-
-5. Remove "--eth-peer" in VM1 testpmd commands, then send 100 packets with PF's MAC address, check if VF1 can get 100 packets, so PF->VF1 is working. Check the packet content is not corrupted.
-
-
-Test Case4: VEB Switching Inter-VM PF-VF/VF-VF MAC switch Performance
-=====================================================================
-
-Performance testing, repeat Testcase1(VF-VF) and Testcase3(PF-VF) to check the performance at different sizes (64B--1518B and jumbo frame--3000B) with 100% rate sending traffic.
 
-Test Case5: Floating VEB Inter-VM VF-VF
-=======================================
+2. pf->vf
+   In host, launch testpmd::
 
-Summary: DPDK PF, then create 2 VFs and 2 VMs, assign one VF to one VM, say VF1 in VM1, VF2 in VM2, and make PF link down (the cable can be plugged out). VFs in VMs are running dpdk testpmd, send traffic to VF1, and set the packet's DEST MAC to VF2, check if VF2 can receive the packets. Check Inter-VM VF-VF MAC switch when PF is link down as well as up.
-
-Details:
-
-1. Start VM1 with VF1, VM2 with VF2, see the prerequisite part.
-2. In the host, run testpmd with floating parameters and make the link down::
-
-       ./testpmd -c 0xc0000 -n 4 --floating -- -i
-       testpmd> port stop all
-       testpmd> show port info all
-
-3. In VM1, run testpmd::
-
-       ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,00:11:22:33:44:12
-       testpmd>set mac fwd
-       testpmd>set promisc off all
-       testpmd>start
-
-   In VM2, run testpmd::
+       ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,vf1_mac_addr
+       testpmd>set fwd txonly
+       testpmd>set promisc all off
+       testpmd>start
 
-       ./testpmd -c 0x3 -n 4 -- -i
-       testpmd>set mac fwd
-       testpmd>set promisc off all
-       testpmd>start
+   In VM1, run testpmd::
 
-
-4. Send 100 packets to VF1's MAC address, check if VF2 can get 100 packets. Check the packet content is not corrupted. Also check the PF's port stats, and there should be no packets RX/TX at PF port.
+       ./testpmd -c 0x3 -n 4 -- -i
+       testpmd>mac_addr add 0 vf1_mac_addr
+       testpmd>set fwd rxonly
+       testpmd>set promisc all off
+       testpmd>start
 
-5. In the host, run testpmd with floating parameters and keep the link up, then do step3 and step4, PF should have no RX/TX packets even when link is up::
+3. tester->vf
 
-       ./testpmd -c 0xc0000 -n 4 --floating -- -i
-       testpmd> port start all
-       testpmd> show port info all
-
+4. Send 100 packets with PF's MAC address from VF, check if PF can get
+   100 packets, so VF1->PF is working. Check the packet content is not corrupted.
 
-Test Case6: Floating VEB Inter-VM VF traffic can't be out of NIC
-================================================================
-
-DPDK PF, then create 1VF, assign VF1 to VM1, send traffic from VF1 to outside world, then check outside world will not see any traffic.
-
-Details:
-
-1. Start VM1 with VF1, see the prerequisite part.
-2. In the host, run testpmd with floating parameters.
-
-       ./testpmd -c 0xc0000 -n 4 --floating -- -i
-
-3. In VM1, run testpmd::
-
-       ./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,pf_mac_addr
-       testpmd>set fwd txonly
-       testpmd>start
-
-4. At PF side, check the port stats to see if there is any RX/TX packets, and also check the traffic generator side (e.g. IXIA ports or another port connected to the DUT port) to ensure no packets.
+5. Send 100 packets with VF's MAC address from PF, check if VF1 can get
+   100 packets, so PF->VF1 is working. Check the packet content is not corrupted.
 
+6. Send 100 packets with VF's MAC address from tester, check if VF1 can get
+   100 packets, so tester->VF1 is working. Check the packet content is not
+   corrupted.
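+
+   A sketch of the step 6 traffic with scapy (the tester port name
+   ens785f1 is hypothetical here, and vf1_mac_addr stands for whatever
+   address was added with "mac_addr add" in step 2)::
+
+       from scapy.all import Ether, IP, Raw, sendp
+
+       # 100 packets addressed to the VF's MAC; with the PF in DPDK
+       # driver, tester->VF1 traffic should reach VF1 through the VEB.
+       vf1_mac_addr = "00:11:22:33:44:11"  # placeholder, use the real VF MAC
+       pkt = Ether(dst=vf1_mac_addr)/IP()/Raw('x'*40)
+       sendp(pkt, count=100, iface="ens785f1")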
+
 
-Test Case7: Floating VEB VF-VF Performance
-==========================================
-
-Testing VF-VF performance at different sizes (64B--1518B and jumbo frame--3000B) with 100% rate sending traffic.
\ No newline at end of file
+Test Case4: VEB Switching Inter PF-VF/VF-VF MAC switch Performance
+==================================================================
 
+Performance testing, repeat Testcase1(VF-VF) and Testcase3(PF-VF) to
+check the performance at different sizes (64B--1518B and jumbo
+frame--3000B) with 100% rate sending traffic.
-- 
2.5.0