From: sjiajiax <sunx.jiajia@intel.com>
To: dts@dpdk.org
Subject: [dts] [dts-v1 9/9] Add a suite to test SRIOV mirror with KVM
Date: Mon, 18 May 2015 13:07:26 +0800
Message-ID: <1431925646-1314-10-git-send-email-sunx.jiajia@intel.com>
In-Reply-To: <1431925646-1314-1-git-send-email-sunx.jiajia@intel.com>
Signed-off-by: sjiajiax <sunx.jiajia@intel.com>
---
conf/sriov_kvm.cfg | 77 +++
test_plans/sriov_kvm_test_plan.rst | 756 +++++++++++++++++++++
tests/TestSuite_sriov_kvm.py | 1291 ++++++++++++++++++++++++++++++++++++
3 files changed, 2124 insertions(+)
create mode 100644 conf/sriov_kvm.cfg
create mode 100644 test_plans/sriov_kvm_test_plan.rst
create mode 100644 tests/TestSuite_sriov_kvm.py
diff --git a/conf/sriov_kvm.cfg b/conf/sriov_kvm.cfg
new file mode 100644
index 0000000..4be7b16
--- /dev/null
+++ b/conf/sriov_kvm.cfg
@@ -0,0 +1,77 @@
+# vm configuration for pmd sriov case
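+#
+# Each [vmN] section below is read by the test suite when it creates the
+# corresponding VM, e.g. QEMUKvm(self.dut, 'vm0', 'sriov_kvm') loads [vm0].
+# The disk image paths, cpupin values and VNC display numbers are examples
+# and should be adjusted to the local test environment.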
+[vm0]
+cpu =
+ model=host,number=4,cpupin=5 6 7 8;
+disk =
+ file=/home/image/vdisk01-sriov-fc20.img;
+login =
+ user=root,password=tester;
+net =
+ type=nic,opt_vlan=0;
+ type=user,opt_vlan=0;
+monitor =
+ port=;
+qga =
+ enable=yes;
+vnc =
+ displayNum=1;
+daemon =
+ enable=yes;
+
+[vm1]
+cpu =
+ model=host,number=4,cpupin=9 10 11 12;
+disk =
+ file=/home/image/vdisk02-sriov-fc20.img;
+login =
+ user=root,password=tester;
+net =
+ type=nic,opt_vlan=1;
+ type=user,opt_vlan=1;
+monitor =
+ port=;
+qga =
+ enable=yes;
+vnc =
+ displayNum=2;
+daemon =
+ enable=yes;
+
+[vm2]
+cpu =
+ model=host,number=4,cpupin=13 14 15 16;
+disk =
+ file=/home/image/vdisk03-ivshmem-fc20.img;
+login =
+ user=root,password=tester;
+net =
+ type=nic,opt_vlan=3;
+ type=tap,opt_vlan=3,opt_br=br0;
+monitor =
+ port=;
+qga =
+ enable=yes;
+vnc =
+ displayNum=3;
+daemon =
+ enable=yes;
+
+[vm3]
+cpu =
+ model=host,number=4,cpupin=17 18 19 20;
+disk =
+ file=/home/image/vdisk04-ivshmem-fc20.img;
+login =
+ user=root,password=tester;
+net =
+ type=nic,opt_vlan=4;
+ type=tap,opt_vlan=4,opt_br=br0;
+monitor =
+ port=;
+qga =
+ enable=yes;
+vnc =
+ displayNum=4;
+daemon =
+ enable=yes;
+
diff --git a/test_plans/sriov_kvm_test_plan.rst b/test_plans/sriov_kvm_test_plan.rst
new file mode 100644
index 0000000..52dd0ca
--- /dev/null
+++ b/test_plans/sriov_kvm_test_plan.rst
@@ -0,0 +1,756 @@
+.. Copyright (c) <2013>, Intel Corporation
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ - Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+
+ - Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+
+ - Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ OF THE POSSIBILITY OF SUCH DAMAGE.
+
+===============================
+SRIOV and InterVM Communication
+===============================
+
+Some applications such as pipelining of virtual appliances and traffic
+mirroring to virtual appliances require the high performance InterVM
+communications.
+
+The testpmd application is used to configure traffic mirroring, PF VM receive
+mode, PFUTA hash table and control traffic to a VF for inter-VM communication.
+
+The 82599 supports four separate mirroring rules, each associated with a
+destination pool. Each rule is programmed with one of the four mirroring types:
+
+1. Pool mirroring: reflect all the packets received to a pool from the network.
+2. Uplink port mirroring: reflect all the traffic received from the network.
+3. Downlink port mirroring: reflect all the traffic transmitted to the
+ network.
+4. VLAN mirroring: reflect all the traffic received from the network
+ in a set of given VLANs (either from the network or from local VMs).
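+
+As a reference, a minimal sketch of the testpmd syntax used throughout this
+plan to program and clear such a rule (the rule id, mirror type, pool mask and
+destination pool shown here are placeholders) is::
+
+ PF testpmd-> set port 0 mirror-rule 0 pool-mirror 0x1 dst-pool 1 on
+ PF testpmd-> reset port 0 mirror-rule 0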
+
+
+Prerequisites for all 2VMs cases/Mirror 2VMs cases
+==================================================
+
+Create two VF interfaces, VF0 and VF1, from one PF interface and then attach
+them to VM0 and VM1. Suppose the PF is 0000:08:00.0. The commands below can be
+used to generate the 2 VFs and bind them to pci-stub::
+
+ ./tools/pci_unbind.py --bind=igb_uio 0000:08:00.0
+ echo 2 > /sys/bus/pci/devices/0000\:08\:00.0/max_vfs
+ echo "8086 10ed" > /sys/bus/pci/drivers/pci-stub/new_id
+ echo 0000:08:10.0 >/sys/bus/pci/devices/0000\:08\:10.0/driver/unbind
+ echo 0000:08:10.2 >/sys/bus/pci/devices/0000\:08\:10.2/driver/unbind
+ echo 0000:08:10.0 >/sys/bus/pci/drivers/pci-stub/bind
+ echo 0000:08:10.2 >/sys/bus/pci/drivers/pci-stub/bind
+
+Start the PF driver (testpmd) on the host, blacklisting the two VFs::
+
+ ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -b 0000:08:10.0 -b 0000:08:10.2 -- -i
+
+VM0 can be started with a command similar to the following::
+
+ qemu-system-x86_64 -name vm0 -enable-kvm -m 2048 -smp 4 -cpu host -drive file=/root/Downloads/vm0.img -net nic,macaddr=00:00:00:00:00:01 -net tap,script=/etc/qemu-ifup -device pci-assign,host=08:10.0 -vnc :1 --daemonize
+
+The /etc/qemu-ifup script, which needs to be created beforehand, can look like::
+
+ #!/bin/sh
+ set -x
+ switch=br0
+ if [ -n "$1" ];then
+ /usr/sbin/tunctl -u `whoami` -t $1
+ /sbin/ip link set $1 up
+ sleep 0.5s
+ /usr/sbin/brctl addif $switch $1
+ exit 0
+ else
+ echo "Error: no interface specified"
+ exit 1
+ fi
+
+Similarly, VM1 can be started with the following command::
+
+ qemu-system-x86_64 -name vm1 -enable-kvm -m 2048 -smp 4 -cpu host -drive file=/root/Downloads/vm1.img -net nic,macaddr=00:00:00:00:00:02 -net tap,script=/etc/qemu-ifup -device pci-assign,host=08:10.2 -vnc :4 -daemonize
+
+To run all common 2VM cases, start testpmd on VM0 and VM1 and enable packet
+forwarding on the VMs. Any case-specific prerequisites are set up within each
+case::
+
+ VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+ VF0 testpmd-> set fwd rxonly
+ VF0 testpmd-> start
+
+ VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+ VF1 testpmd-> set fwd mac
+ VF1 testpmd-> start
+
+Test Case1: InterVM communication test on 2VMs
+==============================================
+
+Set the VF0 destination MAC address to the VF1 MAC address, so that packets
+sent from VF0 are forwarded to VF1 and then sent out::
+
+ VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+ VF1 testpmd-> show port info 0
+ VF1 testpmd-> set fwd mac
+ VF1 testpmd-> start
+
+ VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- --eth-peer=0,"VF1 mac" -i
+ VF0 testpmd-> set fwd mac
+ VF0 testpmd-> start
+
+Send 10 packets with VF0 mac address and make sure the packets will be
+forwarded by VF1.
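+
+A minimal scapy sketch for generating such a flow from the tester (the
+destination MAC and interface name are placeholders) could be::
+
+ sendp([Ether(dst="VF0 mac")/IP()/UDP()/Raw(load="P"*18)],
+ iface="tester interface", count=10)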
+
+Test Case2: Mirror Traffic between 2VMs with Pool mirroring
+===========================================================
+
+Set up common 2VM prerequisites.
+
+Add one mirror rule that will mirror VM0 incoming traffic to VM1::
+
+ PF testpmd-> set port 0 mirror-rule 0 pool-mirror 0x1 dst-pool 1 on
+
+Send 10 packets to VM0 and verify that the packets have been mirrored to VM1
+and forwarded by it.
+
+After the test, reset the mirror rule::
+
+ PF testpmd-> reset port 0 mirror-rule 0
+
+
+Test Case3: Mirror Traffic between 2VMs with Uplink mirroring
+=============================================================
+
+Set up common 2VM prerequisites.
+
+Add one mirror rule that will mirror VM0 incoming traffic to VM1::
+
+ PF testpmd-> set port 0 mirror-rule 0 uplink-mirror dst-pool 1 on
+
+Send 10 packets to VM0 and verify that the packets have been mirrored to VM1
+and forwarded by it.
+
+After the test, reset the mirror rule::
+
+ PF testpmd-> reset port 0 mirror-rule 0
+
+Test Case4: Mirror Traffic between 2VMs with Downlink mirroring
+===============================================================
+
+Run testpmd on VM0 and VM1 and start traffic forward on the VM hosts::
+
+ VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+ VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+
+
+Add one mirror rule that will mirror VM0 outgoing traffic to VM1::
+
+ PF testpmd-> set port 0 mirror-rule 0 downlink-mirror dst-pool 1 on
+
+Make sure VM1 is in receive-only mode, have VM0 send 16 packets, and verify
+that the VM0 packets have been mirrored to VM1::
+
+ VF1 testpmd-> set fwd rxonly
+ VF1 testpmd-> start
+ VF0 testpmd-> start tx_first
+
+Note: do not let VF1 forward packets, since the downlink mirror would mirror
+the forwarded packets back again, creating an infinite loop.
+
+After the test, reset the mirror rule::
+
+ PF testpmd-> reset port 0 mirror-rule 0
+
+Test Case5: Mirror Traffic between VMs with Vlan mirroring
+==========================================================
+
+Set up common 2VM prerequisites.
+
+Add rx vlan-id 0 on VF0, then add one mirror rule that will mirror VM0
+incoming traffic with the specified VLAN to VM1::
+
+ PF testpmd-> rx_vlan add 0 port 0 vf 0x1
+ PF testpmd-> set port 0 mirror-rule 0 vlan-mirror 0 dst-pool 1 on
+
+Send 10 packets with vlan-id 0 and the VM0 MAC to VM0 and verify that the
+packets have been mirrored to VM1 and forwarded by it.
+
+After the test, reset the mirror rule::
+
+ PF testpmd-> reset port 0 mirror-rule 0
+
+Test Case6: Mirror Traffic between 2VMs with Vlan & Pool mirroring
+==================================================================
+
+Set up common 2VM prerequisites.
+
+Add rx vlan-id 3 on VF1 and add 2 mirror rules: one mirrors VM0 incoming
+traffic to VM1, the other mirrors VM1 VLAN incoming traffic to VM0::
+
+ PF testpmd-> rx_vlan add 3 port 0 vf 0x2
+ PF testpmd-> set port 0 mirror-rule 0 pool-mirror 0x1 dst-pool 1 on
+ PF testpmd-> set port 0 mirror-rule 1 vlan-mirror 3 dst-pool 0 on
+
+Send 2 flows one by one: first 10 packets with the VM0 MAC, then 100 packets
+with the VM1 VLAN and MAC. Verify that the first 10 packets have been mirrored
+to VM1, that the next 100 packets have been mirrored to VM0, and that the
+packets have been forwarded.
+
+After the test, reset the mirror rules::
+
+ PF testpmd-> reset port 0 mirror-rule 0
+ PF testpmd-> reset port 0 mirror-rule 1
+
+Test Case7: Mirror Traffic between 2VMs with Uplink & Downlink mirroring
+========================================================================
+
+Run testpmd on VM0 and VM1 and start traffic forward on the VM hosts::
+
+ VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+ VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+
+Add 2 mirror rules that will mirror VM0 outgoing and incoming traffic to VM1::
+
+ PF testpmd-> set port 0 mirror-rule 0 downlink-mirror dst-pool 1 on
+ PF testpmd-> set port 0 mirror-rule 1 uplink-mirror dst-pool 1 on
+
+Make sure VM1 is in receive-only mode, have VM0 send 16 packets first, and
+verify that the VM0 packets have been mirrored to VM1::
+
+ VF1 testpmd-> set fwd rxonly
+ VF1 testpmd-> start
+ VF0 testpmd-> start tx_first
+
+Note: do not let VF1 forward packets, since the downlink mirror would mirror
+the forwarded packets back again, creating an infinite loop.
+
+Send 10 packets with the VF0 MAC to VF0 from IXIA and verify that all packets
+received and transmitted by VF0 are mirrored to VF1::
+
+ VF0 testpmd-> stop
+ VF0 testpmd-> start
+
+After the test, reset the mirror rules::
+
+ PF testpmd-> reset port 0 mirror-rule 0
+ PF testpmd-> reset port 0 mirror-rule 1
+
+Test Case8: Mirror Traffic between 2VMs with Vlan & Pool & Uplink & Downlink mirroring
+======================================================================================
+
+Run testpmd on VM0 and VM1 and start traffic forward on the VM hosts::
+
+ VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+ VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+
+
+Add rx vlan-id 0 on VF1 and add 4 mirror rules::
+
+ PF testpmd-> reset port 0 mirror-rule 1
+ PF testpmd-> set port 0 mirror-rule 0 downlink-mirror dst-pool 1 on
+ PF testpmd-> set port 0 mirror-rule 1 uplink-mirror dst-pool 1 on
+ PF testpmd-> rx_vlan add 0 port 0 vf 0x2
+ PF testpmd-> set port 0 mirror-rule 2 vlan-mirror 0 dst-pool 0 on
+ PF testpmd-> set port 0 mirror-rule 3 pool-mirror 0x1 dst-pool 1 on
+
+Make sure VM1 is in receive-only mode, have VM0 send 16 packets first, and
+verify that the VM0 packets have been mirrored to VM1: VF1 RX 16 packets
+(downlink mirror)::
+
+ VF1 testpmd-> set fwd rxonly
+ VF1 testpmd-> start
+ VF0 testpmd-> start tx_first
+
+Note: do not let VF1 forward packets, since the downlink mirror would mirror
+the forwarded packets back again, creating an infinite loop.
+
+Send 1 packet with the VF0 MAC to VF0 from IXIA, check that VF0 receives 1
+packet and transmits 1 packet, and that VF1 receives 2 packets mirrored from
+VF0 (uplink/downlink/pool mirror)::
+
+ VF0 testpmd-> stop
+ VF0 testpmd-> set fwd mac
+ VF0 testpmd-> start
+
+Send 1 packet with the VM1 VLAN id and MAC, and verify that VF0 has 1 RX
+packet and 1 TX packet, and that VF1 has 2 packets (downlink mirror)::
+
+ VF0 testpmd-> stop
+ VF0 testpmd-> set fwd rxonly
+ VF0 testpmd-> start
+
+After the test, reset the mirror rules::
+
+ PF testpmd-> reset port 0 mirror-rule 0
+ PF testpmd-> reset port 0 mirror-rule 1
+ PF testpmd-> reset port 0 mirror-rule 2
+ PF testpmd-> reset port 0 mirror-rule 3
+
+
+Test Case9: Add Multi exact MAC address on VF
+=============================================
+
+Add an exact destination mac address on VF0::
+
+ PF testpmd-> mac_addr add port 0 vf 0 00:11:22:33:44:55
+
+Send 10 packets with dst mac 00:11:22:33:44:55 to VF0 and make sure VF0 will
+receive the packets.
+
+Add another exact destination mac address on VF0::
+
+ PF testpmd-> mac_addr add port 0 vf 0 00:55:44:33:22:11
+
+Send 10 packets with dst mac 00:55:44:33:22:11 to VF0 and make sure VF0 will
+receive the packets.
+
+After the test, restart the PF and VF testpmd instances to clear the exact MAC
+addresses: first quit the VF instances, then quit the PF instance.
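+
+A minimal quit sequence following that order (shown for illustration; adjust
+to the actual setup) would be::
+
+ VF0 testpmd-> quit
+ VF1 testpmd-> quit
+ PF testpmd-> quit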
+
+Test Case10: Enable/Disable one uta MAC address on VF
+=====================================================
+
+Enable PF promiscuous mode and enable VF0 to accept UTA packets::
+
+ PF testpmd-> set promisc 0 on
+ PF testpmd-> set port 0 vf 0 rxmode ROPE on
+
+Add a UTA destination MAC address on VF0::
+
+ PF testpmd-> set port 0 uta 00:11:22:33:44:55 on
+
+Send 10 packets with dst mac 00:11:22:33:44:55 to VF0 and make sure VF0 will
+receive the packets.
+
+Disable PF promiscuous mode, repeat step 3, and check that VF0 no longer accepts UTA packets::
+
+ PF testpmd-> set promisc 0 off
+ PF testpmd-> set port 0 vf 0 rxmode ROPE off
+
+Test Case11: Add Multi uta MAC addresses on VF
+==============================================
+
+Add 2 UTA destination MAC addresses on VF0::
+
+ PF testpmd-> set port 0 uta 00:55:44:33:22:11 on
+ PF testpmd-> set port 0 uta 00:55:44:33:22:66 on
+
+Send 2 flows, first 10 packets with dst mac 00:55:44:33:22:11, another 100
+packets with dst mac 00:55:44:33:22:66 to VF0 and make sure VF0 will receive
+all the packets.
+
+Test Case12: Add/Remove uta MAC address on VF
+=============================================
+
+Add one UTA destination MAC address on VF0::
+
+ PF testpmd-> set port 0 uta 00:55:44:33:22:11 on
+
+Send 10 packets with dst mac 00:55:44:33:22:11 to VF0 and make sure VF0 will
+receive the packets.
+
+Remove the uta destination mac address on VF0::
+
+ PF testpmd-> set port 0 uta 00:55:44:33:22:11 off
+
+Send 10 packets with dst mac 00:55:44:33:22:11 to VF0 and make sure VF0 will
+not receive the packets.
+
+Add the UTA destination MAC address on VF0 again::
+
+ PF testpmd-> set port 0 uta 00:55:44:33:22:11 on
+
+Send packets with dst mac 00:55:44:33:22:11 to VF0 and make sure VF0 will
+receive and forward them again. This step verifies that the on/off switch is
+working.
+
+Test Case13: Pause RX Queues
+============================
+
+Pause RX queue of VF0 then send 10 packets to VF0 and make sure VF0 will not
+receive the packets::
+
+ PF testpmd-> set port 0 vf 0 rx off
+
+Enable the RX queue of VF0, then send 10 packets to VF0 and make sure VF0 will
+receive the packets::
+
+ PF testpmd-> set port 0 vf 0 rx on
+
+Repeat the off/on sequence twice to check the switch capability and ensure
+that on/off works reliably.
+
+Test Case14: Pause TX Queues
+============================
+
+Pause TX queue of VF0 then send 10 packets to VF0 and make sure VF0 will not
+forward the packet::
+
+ PF testpmd-> set port 0 vf 0 tx off
+
+Enable the TX queue of VF0, then send 10 packets to VF0 and make sure VF0 will
+forward the packets::
+
+ PF testpmd-> set port 0 vf 0 tx on
+
+Repeat the off/on sequence twice to check the switch capability and ensure
+that on/off works reliably.
+
+Test Case15: Prevent Rx of Broadcast on VF
+==========================================
+
+Disable VF0 RX of broadcast packets, then send a broadcast packet to VF0 and
+make sure VF0 will not receive it::
+
+ PF testpmd-> set port 0 vf 0 rxmode BAM off
+
+Enable VF0 RX of broadcast packets, then send a broadcast packet to VF0 and
+make sure VF0 will receive and forward it::
+
+ PF testpmd-> set port 0 vf 0 rxmode BAM on
+
+Repeat the off/on sequence twice to check the switch capability and ensure
+that on/off works reliably.
+
+Test Case16: Negative input to commands
+=======================================
+
+Enter invalid commands on the PF/VF and make sure they are rejected::
+
+ 1. PF testpmd-> set port 0 vf 65 tx on
+ 2. PF testpmd-> set port 2 vf -1 tx off
+ 3. PF testpmd-> set port 0 vf 0 rx oneee
+ 4. PF testpmd-> set port 0 vf 0 rx offdd
+ 5. PF testpmd-> set port 0 vf 0 rx oneee
+ 6. PF testpmd-> set port 0 vf 64 rxmode BAM on
+ 7. PF testpmd-> set port 0 vf 64 rxmode BAM off
+ 8. PF testpmd-> set port 0 uta 00:11:22:33:44 on
+ 9. PF testpmd-> set port 7 uta 00:55:44:33:22:11 off
+ 10. PF testpmd-> set port 0 vf 34 rxmode ROPE on
+ 11. PF testpmd-> mac_addr add port 0 vf 65 00:55:44:33:22:11
+ 12. PF testpmd-> mac_addr add port 5 vf 0 00:55:44:88:22:11
+ 13. PF testpmd-> set port 0 mirror-rule 0 pool-mirror 65 dst-pool 1 on
+ 14. PF testpmd-> set port 0 mirror-rule 0xf uplink-mirror dst-pool 1 on
+ 15. PF testpmd-> set port 0 mirror-rule 2 vlan-mirror 9 dst-pool 1 on
+ 16. PF testpmd-> set port 0 mirror-rule 0 downlink-mirror 0xf dst-pool 2 off
+ 17. PF testpmd-> reset port 0 mirror-rule 4
+ 18. PF testpmd-> reset port 0xff mirror-rule 0
+
+Prerequisites for Scaling 4VFs per 1PF
+======================================
+
+Create 4 VF interfaces, VF0, VF1, VF2 and VF3, from one PF interface and then
+attach them to VM0, VM1, VM2 and VM3. Start the PF driver on the host and
+blacklist the VFs, which have already been attached to the VMs::
+
+ On PF ./tools/pci_unbind.py --bind=igb_uio 0000:08:00.0
+ echo 2 > /sys/bus/pci/devices/0000\:08\:00.0/max_vfs
+ ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -b 0000:08:10.0 -b 0000:08:10.2 -b 0000:08:10.4 -b 0000:08:10.6 -- -i
+
+To run all common 4VM cases, start testpmd on VM0, VM1, VM2 and VM3 and enable
+packet forwarding on the VMs. Any case-specific prerequisites are set up
+within each case::
+
+ VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+ VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+ VF2 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+ VF3 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+
+Test Case17: Scaling Pool Mirror on 4VFs
+========================================
+
+Make sure prerequisites for Scaling 4VFs per 1PF is set up.
+
+Add one mirror rule that will mirror VM0/VM1/VM2 incoming traffic to VM3::
+
+ PF testpmd-> set port 0 mirror-rule 0 pool-mirror 0x7 dst-pool 3 on
+ VF0 testpmd-> set fwd rxonly
+ VF0 testpmd-> start
+ VF1 testpmd-> set fwd rxonly
+ VF1 testpmd-> start
+ VF2 testpmd-> set fwd rxonly
+ VF2 testpmd-> start
+ VF3 testpmd-> set fwd rxonly
+ VF3 testpmd-> start
+
+Send 3 flows to VM0/VM1/VM2, one with the VM0 MAC, one with the VM1 MAC and
+one with the VM2 MAC, and verify that the packets have been mirrored to VM3.
+
+Reset mirror rule::
+
+ PF testpmd-> reset port 0 mirror-rule 0
+
+Set another 2 mirror rules that mirror VM0/VM1 incoming traffic to VM2 and VM3::
+
+ PF testpmd-> set port 0 mirror-rule 0 pool-mirror 0x3 dst-pool 2 on
+ PF testpmd-> set port 0 mirror-rule 1 pool-mirror 0x3 dst-pool 3 on
+
+Send 2 flows to VM0/VM1, one with the VM0 MAC and one with the VM1 MAC, and
+verify that the packets have been mirrored to VM2/VM3 and that VM2/VM3 have
+forwarded these packets.
+
+Reset mirror rule::
+
+ PF testpmd-> reset port 0 mirror-rule 0
+ PF testpmd-> reset port 0 mirror-rule 1
+
+Test Case18: Scaling Uplink Mirror on 4VFs
+==========================================
+
+Make sure prerequisites for Scaling 4VFs per 1PF is set up.
+
+Add 2 mirror rules that will mirror all incoming traffic to VM2 and VM3::
+
+ PF testpmd-> set port 0 mirror-rule 0 uplink-mirror dst-pool 2 on
+ PF testpmd-> set port 0 mirror-rule 1 uplink-mirror dst-pool 3 on
+ VF0 testpmd-> set fwd rxonly
+ VF0 testpmd-> start
+ VF1 testpmd-> set fwd rxonly
+ VF1 testpmd-> start
+ VF2 testpmd-> set fwd rxonly
+ VF2 testpmd-> start
+ VF3 testpmd-> set fwd rxonly
+ VF3 testpmd-> start
+
+Send 4 flows to VM0/VM1/VM2/VM3: one packet with the VM0 MAC, one with the VM1
+MAC, one with the VM2 MAC, and one with the VM3 MAC. Verify that the incoming
+packets have been mirrored to VM2 and VM3, so that VM2/VM3 each receive 4
+packets.
+
+Reset mirror rule::
+
+ PF testpmd-> reset port 0 mirror-rule 0
+ PF testpmd-> reset port 0 mirror-rule 1
+
+Test Case19: Scaling Downlink Mirror on 4VFs
+============================================
+
+Make sure prerequisites for scaling 4VFs per 1PF is set up.
+
+Add 2 mirror rules that will mirror all outgoing traffic to VM2 and VM3::
+
+ PF testpmd-> set port 0 mirror-rule 0 downlink-mirror dst-pool 2 on
+ PF testpmd-> set port 0 mirror-rule 1 downlink-mirror dst-pool 3 on
+ VF0 testpmd-> set fwd mac
+ VF0 testpmd-> start
+ VF1 testpmd-> set fwd mac
+ VF1 testpmd-> start
+ VF2 testpmd-> set fwd rxonly
+ VF2 testpmd-> start
+ VF3 testpmd-> set fwd rxonly
+ VF3 testpmd-> start
+
+Send 2 flows to VM0/VM1, one with the VM0 MAC and one with the VM1 MAC, and
+verify that VM0/VM1 forward these packets and that the VM0/VM1 outgoing
+packets have been mirrored to VM2 and VM3.
+
+Reset mirror rule::
+
+ PF testpmd-> reset port 0 mirror-rule 0
+ PF testpmd-> reset port 0 mirror-rule 1
+
+Test Case20: Scaling Vlan Mirror on 4VFs
+========================================
+
+Make sure prerequisites for scaling 4VFs per 1PF is set up.
+
+Add VLAN ids 1/2/3 on VF0/VF1/VF2 and one mirror rule that will mirror
+VM0/VM1/VM2 VLAN incoming traffic to VM3::
+
+ PF testpmd-> rx_vlan add 1 port 0 vf 0x1
+ PF testpmd-> rx_vlan add 2 port 0 vf 0x2
+ PF testpmd-> rx_vlan add 3 port 0 vf 0x4
+ PF testpmd-> set port 0 mirror-rule 0 vlan-mirror 1,2,3 dst-pool 3 on
+ VF0 testpmd-> set fwd mac
+ VF0 testpmd-> start
+ VF1 testpmd-> set fwd mac
+ VF1 testpmd-> start
+ VF2 testpmd-> set fwd mac
+ VF2 testpmd-> start
+ VF3 testpmd-> set fwd mac
+ VF3 testpmd-> start
+
+Send 3 flows to VM0/VM1/VM2, one with the VM0 MAC/VLAN id, one with the VM1
+MAC/VLAN id and one with the VM2 MAC/VLAN id, and verify that the packets have
+been mirrored to VM3 and that VM3 has forwarded these packets.
+
+Reset mirror rule::
+
+ PF testpmd-> reset port 0 mirror-rule 0
+
+Set another 2 mirror rules that mirror VM0/VM1 VLAN incoming traffic to VM2
+and VM3::
+
+ PF testpmd-> set port 0 mirror-rule 0 vlan-mirror 1 dst-pool 2 on
+ PF testpmd-> set port 0 mirror-rule 1 vlan-mirror 2 dst-pool 3 on
+
+Send 2 flows to VM0/VM1, one with the VM0 MAC/VLAN id and one with the VM1
+MAC/VLAN id, and verify that the packets have been mirrored to VM2 and VM3 and
+that VM2 and VM3 have forwarded these packets.
+
+Reset mirror rule::
+
+ PF testpmd-> reset port 0 mirror-rule 0
+ PF testpmd-> reset port 0 mirror-rule 1
+
+Test Case21: Scaling Vlan Mirror & Pool Mirror on 4VFs
+======================================================
+
+Make sure prerequisites for scaling 4VFs per 1PF is set up.
+
+Add 3 mirror rules: VM0/VM1 VLAN incoming traffic is mirrored to VM2, and
+VM0/VM1 pool traffic is mirrored to VM3::
+
+ PF testpmd-> rx_vlan add 1 port 0 vf 0x1
+ PF testpmd-> rx_vlan add 2 port 0 vf 0x2
+ PF testpmd-> set port 0 mirror-rule 0 vlan-mirror 1 dst-pool 2 on
+ PF testpmd-> set port 0 mirror-rule 1 vlan-mirror 2 dst-pool 2 on
+ PF testpmd-> set port 0 mirror-rule 2 pool-mirror 0x3 dst-pool 3 on
+ VF0 testpmd-> set fwd mac
+ VF0 testpmd-> start
+ VF1 testpmd-> set fwd mac
+ VF1 testpmd-> start
+ VF2 testpmd-> set fwd mac
+ VF2 testpmd-> start
+ VF3 testpmd-> set fwd mac
+ VF3 testpmd-> start
+
+Send 2 flows to VM0/VM1, one with the VM0 MAC/VLAN id and one with the VM1
+MAC/VLAN id, and verify that the packets have been mirrored to VM2 and VM3 and
+that VM2/VM3 have forwarded these packets.
+
+Reset mirror rule::
+
+ PF testpmd-> reset port 0 mirror-rule 0
+ PF testpmd-> reset port 0 mirror-rule 1
+ PF testpmd-> reset port 0 mirror-rule 2
+
+Set 2 mirror rules: VM0/VM1 VLAN incoming traffic is mirrored to VM2, and VM2
+traffic is mirrored to VM3::
+
+ PF testpmd-> set port 0 mirror-rule 0 vlan-mirror 1,2 dst-pool 2 on
+ PF testpmd-> set port 0 mirror-rule 2 pool-mirror 0x2 dst-pool 3 on
+
+Send 2 flows to VM0/VM1, one with the VM0 MAC/VLAN id and one with the VM1
+MAC/VLAN id, and verify that the packets have been mirrored to VM2, that the
+VM2 traffic has been mirrored to VM3, and that VM2 and VM3 have forwarded
+these packets.
+
+Reset mirror rule::
+
+ PF testpmd-> reset port 0 mirror-rule 0
+ PF testpmd-> reset port 0 mirror-rule 1
+ PF testpmd-> reset port 0 mirror-rule 2
+
+Test Case22: Scaling Uplink Mirror & Downlink Mirror on 4VFs
+============================================================
+
+Make sure prerequisites for scaling 4VFs per 1PF is set up.
+
+Add 2 mirror rules that mirror all incoming traffic to VM2 and all outgoing
+traffic to VM3. Make sure VM2 and VM3 are in rxonly mode::
+
+ PF testpmd-> set port 0 mirror-rule 0 uplink-mirror dst-pool 2 on
+ PF testpmd-> set port 0 mirror-rule 1 downlink-mirror dst-pool 3 on
+ VF0 testpmd-> set fwd mac
+ VF0 testpmd-> start
+ VF1 testpmd-> set fwd mac
+ VF1 testpmd-> start
+ VF2 testpmd-> set fwd rxonly
+ VF2 testpmd-> start
+ VF3 testpmd-> set fwd rxonly
+ VF3 testpmd-> start
+
+Send 2 flows to VM0/VM1, one with the VM0 MAC and one with the VM1 MAC, and
+make sure VM0/VM1 forward the packets. Verify that the incoming packets have
+been mirrored to VM2 and the outgoing packets have been mirrored to VM3.
+
+Reset mirror rule::
+
+ PF testpmd-> reset port 0 mirror-rule 0
+ PF testpmd-> reset port 0 mirror-rule 1
+
+Test Case23: Scaling Pool & Vlan & Uplink & Downlink Mirror on 4VFs
+===================================================================
+
+Make sure prerequisites for scaling 4VFs per 1PF is set up.
+
+Add mirror rules so that VM0 VLAN traffic is mirrored to VM1, all incoming
+traffic is mirrored to VM2, all outgoing traffic is mirrored to VM3, and all
+VM1 traffic is mirrored to VM0. Make sure VM2 and VM3 are in rxonly mode::
+
+ PF testpmd-> rx_vlan add 1 port 0 vf 0x1
+ PF testpmd-> set port 0 mirror-rule 0 vlan-mirror 1 dst-pool 1 on
+ PF testpmd-> set port 0 mirror-rule 1 pool-mirror 0x2 dst-pool 0 on
+ PF testpmd-> set port 0 mirror-rule 2 uplink-mirror dst-pool 2 on
+ PF testpmd-> set port 0 mirror-rule 3 downlink-mirror dst-pool 3 on
+ VF0 testpmd-> set fwd mac
+ VF0 testpmd-> start
+ VF1 testpmd-> set fwd mac
+ VF1 testpmd-> start
+ VF2 testpmd-> set fwd rxonly
+ VF2 testpmd-> start
+ VF3 testpmd-> set fwd rxonly
+ VF3 testpmd-> start
+
+Send 10 packets to VM0 with the VM0 MAC/VLAN id, and verify that the traffic
+is mirrored to VM1 and forwarded, that VM2 receives all incoming traffic
+mirrored, and that VM3 receives all outgoing traffic mirrored.
+
+Send 10 packets to VM1 with the VM1 MAC, and verify that the traffic is
+mirrored to VM0 and forwarded, that VM2 receives all incoming traffic
+mirrored, and that VM3 receives all outgoing traffic mirrored.
+
+Reset mirror rule::
+
+ PF testpmd-> reset port 0 mirror-rule 0
+ PF testpmd-> reset port 0 mirror-rule 1
+ PF testpmd-> reset port 0 mirror-rule 2
+ PF testpmd-> reset port 0 mirror-rule 3
+
+Test Case24: Scaling InterVM communication on 4VFs
+==================================================
+
+Set the VF0 destination MAC address to the VF1 MAC address, so that packets
+sent from VF0 are forwarded to VF1 and then sent out. Do the same for VF2 and
+VF3::
+
+ VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+ VF1 testpmd-> show port info 0
+ VF1 testpmd-> set fwd mac
+ VF1 testpmd-> start
+
+ VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- --eth-peer=0,"VF1 mac" -i
+ VF0 testpmd-> set fwd mac
+ VF0 testpmd-> start
+
+ VF3 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+ VF3 testpmd-> show port info 0
+ VF3 testpmd-> set fwd mac
+ VF3 testpmd-> start
+
+ VF2 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- --eth-peer=0,"VF3 mac" -i
+ VF2 testpmd-> set fwd mac
+ VF2 testpmd-> start
+
+Send 2 flows: one with the VF0 MAC address, verifying that the packets are
+forwarded by VF1, and another with the VF2 MAC address, verifying that the
+packets are forwarded by VF3.
+
+
diff --git a/tests/TestSuite_sriov_kvm.py b/tests/TestSuite_sriov_kvm.py
new file mode 100644
index 0000000..8109840
--- /dev/null
+++ b/tests/TestSuite_sriov_kvm.py
@@ -0,0 +1,1291 @@
+# <COPYRIGHT_TAG>
+
+"""
+DPDK Test suite.
+
+
+Test userland 10Gb PMD.
+
+"""
+
+import re
+import pdb
+import time
+
+import dts
+from qemu_kvm import QEMUKvm
+from test_case import TestCase
+
+from pmd_output import PmdOutput
+
+FRAME_SIZE_64 = 64
+VM_CORES_MASK = 'all'
+
+
+class TestSriovKvm(TestCase):
+
+ def set_up_all(self):
+ # port_mirror_ref = {port_id: rule_id_list}
+ # rule_id should be an integer and is incremented from the largest
+ # existing rule_id whenever a rule is added to a port successfully;
+ # test cases should not modify this dict directly.
+ # example:
+ # port_mirror_ref = {0: [1], 1: [3]}
+ self.port_mirror_ref = {}
+ self.dut_ports = self.dut.get_ports(self.nic)
+ self.verify(len(self.dut_ports) >= 1, "Insufficient ports")
+
+ self.vm0 = None
+ self.vm1 = None
+ self.vm2 = None
+ self.vm3 = None
+
+ def set_up(self):
+ self.setup_2vm_2pf_env_flag = 0
+
+ self.setup_2vm_2vf_env_flag = 0
+ self.setup_2vm_prerequisite_flag = 0
+
+ self.setup_4vm_4vf_env_flag = 0
+ self.setup_4vm_prerequisite_flag = 0
+
+ def get_stats(self, dut, portid, rx_tx):
+ """
+ Get packets number from port statistic
+ """
+
+ stats = dut.testpmd.get_pmd_stats(portid)
+
+ if rx_tx == "rx":
+ stats_result = [
+ stats['RX-packets'], stats['RX-missed'], stats['RX-bytes']]
+ elif rx_tx == "tx":
+ stats_result = [
+ stats['TX-packets'], stats['TX-errors'], stats['TX-bytes']]
+ else:
+ return None
+
+ return stats_result
+
+ def parse_ether_ip(self, dut, dut_ports, dest_port, **ether_ip):
+ """
+ dut: which you want to send packet to
+ dest_port: the port num must be the index of dut.get_ports()
+ ether_ip:
+ 'ether':
+ {
+ 'dest_mac':False
+ 'src_mac':"52:00:00:00:00:00"
+ }
+ 'dot1q':
+ {
+ 'vlan':1
+ }
+ 'ip':
+ {
+ 'dest_ip':"10.239.129.88"
+ 'src_ip':"10.239.129.65"
+ }
+ 'udp':
+ {
+ 'dest_port':53
+ 'src_port':53
+ }
+ """
+ ret_ether_ip = {}
+ ether = {}
+ dot1q = {}
+ ip = {}
+ udp = {}
+
+ try:
+ dut_dest_port = dut_ports[dest_port]
+ except Exception as e:
+ print e
+
+ tester_port = dut.ports_map[dut_dest_port]
+ if not ether_ip.get('ether'):
+ ether['dest_mac'] = dut.get_mac_address(dut_dest_port)
+ ether['src_mac'] = dut.tester.get_mac(tester_port)
+ else:
+ if not ether_ip['ether'].get('dest_mac'):
+ ether['dest_mac'] = dut.get_mac_address(dut_dest_port)
+ else:
+ ether['dest_mac'] = ether_ip['ether']['dest_mac']
+ if not ether_ip['ether'].get('src_mac'):
+ ether['src_mac'] = dut.tester.get_mac(tester_port)
+ else:
+ ether['src_mac'] = ether_ip["ether"]["src_mac"]
+
+ if not ether_ip.get('dot1q'):
+ pass
+ else:
+ if not ether_ip['dot1q'].get('vlan'):
+ dot1q['vlan'] = '1'
+ else:
+ dot1q['vlan'] = ether_ip['dot1q']['vlan']
+
+ if not ether_ip.get('ip'):
+ ip['dest_ip'] = "10.239.129.88"
+ ip['src_ip'] = "10.239.129.65"
+ else:
+ if not ether_ip['ip'].get('dest_ip'):
+ ip['dest_ip'] = "10.239.129.88"
+ else:
+ ip['dest_ip'] = ether_ip['ip']['dest_ip']
+ if not ether_ip['ip'].get('src_ip'):
+ ip['src_ip'] = "10.239.129.65"
+ else:
+ ip['src_ip'] = ether_ip['ip']['src_ip']
+
+ if not ether_ip.get('udp'):
+ udp['dest_port'] = 53
+ udp['src_port'] = 53
+ else:
+ if not ether_ip['udp'].get('dest_port'):
+ udp['dest_port'] = 53
+ else:
+ udp['dest_port'] = ether_ip['udp']['dest_port']
+ if not ether_ip['udp'].get('src_port'):
+ udp['src_port'] = 53
+ else:
+ udp['src_port'] = ether_ip['udp']['src_port']
+
+ ret_ether_ip['ether'] = ether
+ ret_ether_ip['dot1q'] = dot1q
+ ret_ether_ip['ip'] = ip
+ ret_ether_ip['udp'] = udp
+
+ return ret_ether_ip
+
+ def send_packet(self,
+ dut,
+ dut_ports,
+ dest_port,
+ src_port=False,
+ frame_size=FRAME_SIZE_64,
+ count=1,
+ invert_verify=False,
+ **ether_ip):
+ """
+ Send count packet to portid
+ dut: which you want to send packet to
+ dest_port: the port num must be the index of dut.get_ports()
+ count: 1 or 2 or 3 or ... or 'MANY'
+ if count is 'MANY', then set count=10000 and
+ send packets for 20 seconds.
+ ether_ip:
+ 'ether':
+ {
+ 'dest_mac':False
+ 'src_mac':"52:00:00:00:00:00"
+ }
+ 'dot1q':
+ {
+ 'vlan':1
+ }
+ 'ip':
+ {
+ 'dest_ip':"10.239.129.88"
+ 'src_ip':"10.239.129.65"
+ }
+ 'udp':
+ {
+ 'dest_port':53
+ 'src_port':53
+ }
+ """
+ during = 0
+ loop = 0
+ try:
+ count = int(count)
+ except ValueError as e:
+ if count == 'MANY':
+ during = 20
+ count = 1000 * 10
+ else:
+ raise e
+
+ gp0rx_pkts, gp0rx_err, gp0rx_bytes = [int(_)
+ for _ in self.get_stats(dut, dest_port, "rx")]
+ if not src_port:
+ itf = self.tester.get_interface(
+ dut.ports_map[dut_ports[dest_port]])
+ else:
+ itf = src_port
+
+ ret_ether_ip = self.parse_ether_ip(
+ dut,
+ dut_ports,
+ dest_port,
+ **ether_ip)
+
+ pktlen = frame_size - 18
+ padding = pktlen - 20
+
+ start = time.time()
+ while True:
+ self.tester.scapy_foreground()
+ self.tester.scapy_append(
+ 'nutmac="%s"' % ret_ether_ip['ether']['dest_mac'])
+ self.tester.scapy_append(
+ 'srcmac="%s"' % ret_ether_ip['ether']['src_mac'])
+
+ if ether_ip.get('dot1q'):
+ self.tester.scapy_append(
+ 'vlanvalue=%d' % int(ret_ether_ip['dot1q']['vlan']))
+ self.tester.scapy_append(
+ 'destip="%s"' % ret_ether_ip['ip']['dest_ip'])
+ self.tester.scapy_append(
+ 'srcip="%s"' % ret_ether_ip['ip']['src_ip'])
+ self.tester.scapy_append(
+ 'destport=%d' % ret_ether_ip['udp']['dest_port'])
+ self.tester.scapy_append(
+ 'srcport=%d' % ret_ether_ip['udp']['src_port'])
+ if not ret_ether_ip.get('dot1q'):
+ send_cmd = 'sendp([Ether(dst=nutmac, src=srcmac)/' + \
+ 'IP(dst=destip, src=srcip, len=%s)/' % pktlen + \
+ 'UDP(sport=srcport, dport=destport)/' + \
+ 'Raw(load="\x50"*%s)], ' % padding + \
+ 'iface="%s", count=%d)' % (itf, count)
+ else:
+ send_cmd = 'sendp([Ether(dst=nutmac, src=srcmac)/Dot1Q(vlan=vlanvalue)/' + \
+ 'IP(dst=destip, src=srcip, len=%s)/' % pktlen + \
+ 'UDP(sport=srcport, dport=destport)/' + \
+ 'Raw(load="\x50"*%s)], iface="%s", count=%d)' % (
+ padding, itf, count)
+ self.tester.scapy_append(send_cmd)
+
+ self.tester.scapy_execute()
+ loop += 1
+
+ now = time.time()
+ if (now - start) >= during:
+ break
+ time.sleep(.5)
+
+ p0rx_pkts, p0rx_err, p0rx_bytes = [int(_)
+ for _ in self.get_stats(dut, dest_port, "rx")]
+
+ p0rx_pkts -= gp0rx_pkts
+ p0rx_bytes -= gp0rx_bytes
+
+ if not invert_verify:
+ self.verify(p0rx_pkts >= count * loop,
+ "Data not received by port")
+ else:
+ self.verify(p0rx_pkts == 0 or
+ p0rx_pkts < count * loop,
+ "Data received by port, but should not.")
+ return count * loop
+
+ def setup_2vm_2pf_env(self):
+ p0 = self.dut_ports[0]
+ p1 = self.dut_ports[1]
+
+ self.port0 = self.dut.ports_info[p0]['port']
+ self.port0.unbind_driver()
+ self.port0_pci = self.dut.ports_info[p0]['pci']
+
+ self.port1 = self.dut.ports_info[p1]['port']
+ self.port1.unbind_driver()
+ self.port1_pci = self.dut.ports_info[p1]['pci']
+
+ vf0_prop = {'prop_host': self.port0_pci}
+ vf1_prop = {'prop_host': self.port1_pci}
+
+ # set up VM0 ENV
+ self.vm0 = QEMUKvm(self.dut, 'vm0', 'sriov_kvm')
+ self.vm0.set_vm_device(driver='pci-assign', **vf0_prop)
+ self.vm_dut_0 = self.vm0.start()
+
+ # set up VM1 ENV
+ self.vm1 = QEMUKvm(self.dut, 'vm1', 'sriov_kvm')
+ self.vm1.set_vm_device(driver='pci-assign', **vf1_prop)
+ self.vm_dut_1 = self.vm1.start()
+
+ self.setup_2vm_2pf_env_flag = 1
+
+ def destroy_2vm_2pf_env(self):
+ self.vm_dut_0.close()
+ self.vm_dut_0.logger.logger_exit()
+ self.vm0.stop()
+ self.port0.bind_driver('igb_uio')
+ self.vm0 = None
+
+ self.vm_dut_1.close()
+ self.vm_dut_1.logger.logger_exit()
+ self.vm1.stop()
+ self.port1.bind_driver('igb_uio')
+ self.vm1 = None
+
+ self.setup_2vm_2pf_env_flag = 0
+
+ def setup_2vm_2vf_env(self, driver='igb_uio'):
+ self.used_dut_port = self.dut_ports[0]
+
+ self.dut.generate_sriov_vfs_by_port(
+ self.used_dut_port, 2, driver=driver)
+ self.sriov_vfs_port = self.dut.ports_info[
+ self.used_dut_port]['vfs_port']
+
+ try:
+
+ for port in self.sriov_vfs_port:
+ port.bind_driver('pci-stub')
+
+ time.sleep(1)
+
+ vf0_prop = {'prop_host': self.sriov_vfs_port[0].pci}
+ vf1_prop = {'prop_host': self.sriov_vfs_port[1].pci}
+
+ for port_id in self.dut_ports:
+ if port_id == self.used_dut_port:
+ continue
+ port = self.dut.ports_info[port_id]['port']
+ port.bind_driver()
+
+ if driver == 'igb_uio':
+ # start testpmd with the two VFs on the host
+ self.host_testpmd = PmdOutput(self.dut)
+ eal_param = '-b %(vf0)s -b %(vf1)s' % {'vf0': self.sriov_vfs_port[0].pci,
+ 'vf1': self.sriov_vfs_port[1].pci}
+ self.host_testpmd.start_testpmd(
+ "1S/2C/2T", eal_param=eal_param)
+
+ # set up VM0 ENV
+ self.vm0 = QEMUKvm(self.dut, 'vm0', 'sriov_kvm')
+ self.vm0.set_vm_device(driver='pci-assign', **vf0_prop)
+ self.vm_dut_0 = self.vm0.start()
+ if self.vm_dut_0 is None:
+ raise Exception("Set up VM0 ENV failed!")
+
+ # set up VM1 ENV
+ self.vm1 = QEMUKvm(self.dut, 'vm1', 'sriov_kvm')
+ self.vm1.set_vm_device(driver='pci-assign', **vf1_prop)
+ self.vm_dut_1 = self.vm1.start()
+ if self.vm_dut_1 is None:
+ raise Exception("Set up VM1 ENV failed!")
+
+ self.setup_2vm_2vf_env_flag = 1
+ except Exception as e:
+ self.destroy_2vm_2vf_env()
+ raise Exception(e)
+
+ def destroy_2vm_2vf_env(self):
+ if getattr(self, 'vm_dut_0', None):
+ self.vm_dut_0.close()
+ self.vm_dut_0.logger.logger_exit()
+ if getattr(self, 'vm0', None):
+ self.vm0.stop()
+ self.vm0 = None
+
+ if getattr(self, 'vm_dut_1', None):
+ self.vm_dut_1.close()
+ self.vm_dut_1.logger.logger_exit()
+ if getattr(self, 'vm1', None):
+ self.vm1.stop()
+ self.vm1 = None
+
+ if getattr(self, 'host_testpmd', None):
+ self.host_testpmd.execute_cmd('quit', '# ')
+ self.host_testpmd = None
+
+ if getattr(self, 'used_dut_port', None):
+ self.dut.destroy_sriov_vfs_by_port(self.used_dut_port)
+ port = self.dut.ports_info[self.used_dut_port]['port']
+ port.bind_driver('igb_uio')
+ self.used_dut_port = None
+
+ for port_id in self.dut_ports:
+ port = self.dut.ports_info[port_id]['port']
+ port.bind_driver('igb_uio')
+
+ self.setup_2vm_2vf_env_flag = 0
+
+ def setup_4vm_4vf_env(self, driver='igb_uio'):
+ self.used_dut_port = self.dut_ports[0]
+
+ self.dut.generate_sriov_vfs_by_port(
+ self.used_dut_port, 4, driver=driver)
+ self.sriov_vfs_port = self.dut.ports_info[self.used_dut_port]['vfs_port']
+
+ try:
+ for port in self.sriov_vfs_port:
+ port.bind_driver('pci-stub')
+
+ time.sleep(1)
+
+ vf0_prop = {'prop_host': self.sriov_vfs_port[0].pci}
+ vf1_prop = {'prop_host': self.sriov_vfs_port[1].pci}
+ vf2_prop = {'prop_host': self.sriov_vfs_port[2].pci}
+ vf3_prop = {'prop_host': self.sriov_vfs_port[3].pci}
+
+ for port_id in self.dut_ports:
+ if port_id == self.used_dut_port:
+ continue
+ port = self.dut.ports_info[port_id]['port']
+ port.bind_driver()
+
+ if driver == 'igb_uio':
+ # start testpmd with the four VFs on the host
+ self.host_testpmd = PmdOutput(self.dut)
+ eal_param = '-b %(vf0)s -b %(vf1)s -b %(vf2)s -b %(vf3)s' % \
+ {'vf0': self.sriov_vfs_port[0].pci,
+ 'vf1': self.sriov_vfs_port[1].pci,
+ 'vf2': self.sriov_vfs_port[2].pci,
+ 'vf3': self.sriov_vfs_port[3].pci}
+ self.host_testpmd.start_testpmd(
+ "1S/2C/2T", eal_param=eal_param)
+
+ self.vm0 = QEMUKvm(self.dut, 'vm0', 'sriov_kvm')
+ self.vm0.set_vm_device(driver='pci-assign', **vf0_prop)
+ self.vm_dut_0 = self.vm0.start()
+ if self.vm_dut_0 is None:
+ raise Exception("Set up VM0 ENV failed!")
+
+ self.vm1 = QEMUKvm(self.dut, 'vm1', 'sriov_kvm')
+ self.vm1.set_vm_device(driver='pci-assign', **vf1_prop)
+ self.vm_dut_1 = self.vm1.start()
+ if self.vm_dut_1 is None:
+ raise Exception("Set up VM1 ENV failed!")
+
+ self.vm2 = QEMUKvm(self.dut, 'vm2', 'sriov_kvm')
+ self.vm2.set_vm_device(driver='pci-assign', **vf2_prop)
+ self.vm_dut_2 = self.vm2.start()
+ if self.vm_dut_2 is None:
+ raise Exception("Set up VM2 ENV failed!")
+
+ self.vm3 = QEMUKvm(self.dut, 'vm3', 'sriov_kvm')
+ self.vm3.set_vm_device(driver='pci-assign', **vf3_prop)
+ self.vm_dut_3 = self.vm3.start()
+ if self.vm_dut_3 is None:
+ raise Exception("Set up VM3 ENV failed!")
+
+ self.setup_4vm_4vf_env_flag = 1
+ except Exception as e:
+ self.destroy_4vm_4vf_env()
+ raise Exception(e)
+
+ def destroy_4vm_4vf_env(self):
+ if getattr(self, 'vm_dut_0', None):
+ self.vm_dut_0.close()
+ self.vm_dut_0.logger.logger_exit()
+ if getattr(self, 'vm0', None):
+ self.vm0.stop()
+ self.vm0 = None
+
+ if getattr(self, 'vm_dut_1', None):
+ self.vm_dut_1.close()
+ self.vm_dut_1.logger.logger_exit()
+ if getattr(self, 'vm1', None):
+ self.vm1.stop()
+ self.vm1 = None
+
+ if getattr(self, 'vm_dut_2', None):
+ self.vm_dut_2.close()
+ self.vm_dut_2.logger.logger_exit()
+ if getattr(self, 'vm2', None):
+ self.vm2.stop()
+ self.vm2 = None
+
+ if getattr(self, 'vm_dut_3', None):
+ self.vm_dut_3.close()
+ self.vm_dut_3.logger.logger_exit()
+ if getattr(self, 'vm3', None):
+ self.vm3.stop()
+ self.vm3 = None
+
+ if getattr(self, 'host_testpmd', None):
+ self.host_testpmd.execute_cmd('stop')
+ self.host_testpmd.execute_cmd('quit', '# ')
+ self.host_testpmd = None
+
+ if getattr(self, 'used_dut_port', None):
+ self.dut.destroy_sriov_vfs_by_port(self.used_dut_port)
+ port = self.dut.ports_info[self.used_dut_port]['port']
+ port.bind_driver('igb_uio')
+ self.used_dut_port = None
+
+ for port_id in self.dut_ports:
+ port = self.dut.ports_info[port_id]['port']
+ port.bind_driver('igb_uio')
+
+ self.setup_4vm_4vf_env_flag = 0
+
+ def transform_integer(self, value):
+ try:
+ value = int(value)
+ except ValueError as e:
+ raise Exception("Value not integer,but is " + type(value))
+ return value
+
+ def make_port_new_ruleid(self, port):
+ port = self.transform_integer(port)
+ if port not in self.port_mirror_ref.keys():
+ max_rule_id = 0
+ else:
+ rule_ids = sorted(self.port_mirror_ref[port])
+ if rule_ids:
+ max_rule_id = rule_ids[-1] + 1
+ else:
+ max_rule_id = 0
+ return max_rule_id
+
+ def add_port_ruleid(self, port, rule_id):
+ port = self.transform_integer(port)
+ rule_id = self.transform_integer(rule_id)
+
+ if port not in self.port_mirror_ref.keys():
+ self.port_mirror_ref[port] = [rule_id]
+ else:
+ self.verify(rule_id not in self.port_mirror_ref[port],
+ "Rule id [%d] has been repeated, please check!" % rule_id)
+ self.port_mirror_ref[port].append(rule_id)
+
+ def remove_port_ruleid(self, port, rule_id):
+ port = self.transform_integer(port)
+ rule_id = self.transform_integer(rule_id)
+ if port not in self.port_mirror_ref.keys():
+ pass
+ else:
+ if rule_id not in self.port_mirror_ref[port]:
+ pass
+ else:
+ self.port_mirror_ref[port].remove(rule_id)
+ if not self.port_mirror_ref[port]:
+ self.port_mirror_ref.pop(port)
+
+ def set_port_mirror_rule(self, port, mirror_name, rule_detail):
+ """
+ Set the mirror rule for specified port.
+ """
+ port = self.transform_integer(port)
+
+ rule_id = self.make_port_new_ruleid(port)
+
+ mirror_rule_cmd = "set port %d mirror-rule %d %s %s" % \
+ (port, rule_id, mirror_name, rule_detail)
+ out = self.dut.send_expect("%s" % mirror_rule_cmd, "testpmd> ")
+ self.verify('Bad arguments' not in out, "Set port %d %s failed!" %
+ (port, mirror_name))
+
+ self.add_port_ruleid(port, rule_id)
+ return rule_id
+
+ def set_port_pool_mirror(self, port, pool_mirror_rule):
+ """
+ Set the pool mirror for specified port.
+ """
+ return self.set_port_mirror_rule(port, 'pool-mirror-up', pool_mirror_rule)
+
+ def set_port_vlan_mirror(self, port, vlan_mirror_rule):
+ """
+ Set the vlan mirror for specified port.
+ """
+ return self.set_port_mirror_rule(port, 'vlan-mirror', vlan_mirror_rule)
+
+ def set_port_uplink_mirror(self, port, uplink_mirror_rule):
+ """
+ Set the uplink mirror for specified port.
+ """
+ return self.set_port_mirror_rule(port, 'uplink-mirror', uplink_mirror_rule)
+
+ def set_port_downlink_mirror(self, port, downlink_mirror_rule):
+ """
+ Set the downlink mirror for specified port.
+ """
+ return self.set_port_mirror_rule(port, 'downlink-mirror', downlink_mirror_rule)
+
+ def reset_port_mirror_rule(self, port, rule_id):
+ """
+ Reset the pool mirror for specified port.
+ """
+ port = self.transform_integer(port)
+ rule_id = self.transform_integer(rule_id)
+
+ mirror_rule_cmd = "reset port %d mirror-rule %d" % (port, rule_id)
+ out = self.dut.send_expect("%s" % mirror_rule_cmd, "testpmd> ")
+ self.verify("Bad arguments" not in out,
+ "Reset port %d mirror rule failed!")
+
+ self.remove_port_ruleid(port, rule_id)
+
+ def reset_port_all_mirror_rule(self, port):
+ """
+ Reset all mirror rules of specified port.
+ """
+ port = self.transform_integer(port)
+
+ if port not in self.port_mirror_ref.keys():
+ pass
+ else:
+ for rule_id in self.port_mirror_ref[port]:
+ self.reset_port_mirror_rule(port, rule_id)
+
+ def setup_two_vm_common_prerequisite(self):
+ self.vm0_dut_ports = self.vm_dut_0.get_ports('any')
+ self.vm0_testpmd = PmdOutput(self.vm_dut_0)
+ self.vm0_testpmd.start_testpmd(VM_CORES_MASK)
+ self.vm0_testpmd.execute_cmd('set fwd rxonly')
+ self.vm0_testpmd.execute_cmd('start')
+
+ self.vm1_dut_ports = self.vm_dut_1.get_ports('any')
+ self.vm1_testpmd = PmdOutput(self.vm_dut_1)
+ self.vm1_testpmd.start_testpmd(VM_CORES_MASK)
+ self.vm1_testpmd.execute_cmd('set fwd mac')
+ self.vm1_testpmd.execute_cmd('start')
+
+ self.setup_2vm_prerequisite_flag = 1
+
+ def destroy_two_vm_common_prerequisite(self):
+ self.vm0_testpmd.execute_cmd('stop')
+ self.vm0_testpmd.execute_cmd('quit', '# ')
+ self.vm0_testpmd = None
+ self.vm0_dut_ports = None
+
+ self.vm1_testpmd.execute_cmd('stop')
+ self.vm1_testpmd.execute_cmd('quit', '# ')
+ self.vm1_testpmd = None
+ self.vm1_dut_ports = None
+
+ self.setup_2vm_prerequisite_flag = 0
+
+ def stop_test_setup_two_vm_pf_env(self):
+ self.setup_2vm_2pf_env()
+
+ out = self.vm_dut_0.send_expect("ifconfig", '# ')
+ print out
+ out = self.vm_dut_0.send_expect("lspci -nn | grep -i eth", '# ')
+ print out
+
+ out = self.vm_dut_1.send_expect("ifconfig", '# ')
+ print out
+ out = self.vm_dut_1.send_expect("lspci -nn | grep -i eth", '# ')
+ print out
+
+ self.destroy_2vm_2pf_env()
+
+ def test_two_vms_intervm_communication(self):
+ self.setup_2vm_2vf_env()
+
+ self.vm0_dut_ports = self.vm_dut_0.get_ports('any')
+ self.vm1_dut_ports = self.vm_dut_1.get_ports('any')
+ port_id_0 = 0
+ packet_num = 10
+
+ self.vm1_testpmd = PmdOutput(self.vm_dut_1)
+ self.vm1_testpmd.start_testpmd(VM_CORES_MASK)
+ vf1_mac = self.vm1_testpmd.get_port_mac(port_id_0)
+ self.vm1_testpmd.execute_cmd('set fwd mac')
+ self.vm1_testpmd.execute_cmd('start')
+
+ self.vm0_testpmd = PmdOutput(self.vm_dut_0)
+ self.vm0_testpmd.start_testpmd(
+ VM_CORES_MASK, "--eth-peer=0,%s" % vf1_mac)
+ vf0_mac = self.vm0_testpmd.get_port_mac(port_id_0)
+ self.vm0_testpmd.execute_cmd('set fwd mac')
+ self.vm0_testpmd.execute_cmd('start')
+
+ self.setup_2vm_prerequisite_flag = 1
+ time.sleep(2)
+
+ vm1_start_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+ self.send_packet(
+ self.vm_dut_0, self.vm0_dut_ports, port_id_0, count=packet_num)
+ vm1_end_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+
+ self.verify(
+ vm1_end_stats["TX-packets"] - vm1_start_stats["TX-packets"] == packet_num,
+ "VM1 transmit packets failed when sending packets to VM0")
+
+ def calculate_stats(self, start_stats, end_stats):
+ ret_stats = {}
+ for key in start_stats.keys():
+ try:
+ start_stats[key] = int(start_stats[key])
+ end_stats[key] = int(end_stats[key])
+ except (TypeError, ValueError):
+ ret_stats[key] = end_stats[key]
+ continue
+ ret_stats[key] = end_stats[key] - start_stats[key]
+ return ret_stats
+
+ def test_two_vms_pool_mirror(self):
+ self.setup_2vm_2vf_env()
+ self.setup_two_vm_common_prerequisite()
+
+ port_id_0 = 0
+ packet_num = 10
+
+ rule_id = self.set_port_pool_mirror(port_id_0, '0x1 dst-pool 1 on')
+ vm1_start_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+ self.send_packet(
+ self.vm_dut_0, self.vm0_dut_ports, port_id_0, count=packet_num)
+ vm1_end_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+
+ vm1_ret_stats = self.calculate_stats(vm1_start_stats, vm1_end_stats)
+
+ self.verify(vm1_ret_stats['RX-packets'] == packet_num and
+ vm1_ret_stats['TX-packets'] == packet_num,
+ "Pool mirror failed between VM0 and VM1!")
+
+ self.reset_port_mirror_rule(port_id_0, rule_id)
+
+ def test_two_vms_uplink_mirror(self):
+ self.setup_2vm_2vf_env()
+ self.setup_two_vm_common_prerequisite()
+
+ port_id_0 = 0
+ packet_num = 10
+
+ rule_id = self.set_port_uplink_mirror(port_id_0, 'dst-pool 1 on')
+ vm1_start_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+ self.send_packet(
+ self.vm_dut_0, self.vm0_dut_ports, port_id_0, count=packet_num)
+ vm1_end_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+
+ vm1_ret_stats = self.calculate_stats(vm1_start_stats, vm1_end_stats)
+
+ self.verify(vm1_ret_stats['RX-packets'] == packet_num and
+ vm1_ret_stats['TX-packets'] == packet_num,
+ "Uplink mirror failed between VM0 and VM1!")
+
+ self.reset_port_mirror_rule(port_id_0, rule_id)
+
+ def test_two_vms_downlink_mirror(self):
+ self.setup_2vm_2vf_env()
+ self.setup_two_vm_common_prerequisite()
+
+ self.vm0_testpmd.execute_cmd('stop')
+ self.vm1_testpmd.execute_cmd('stop')
+
+ port_id_0 = 0
+
+ rule_id = self.set_port_downlink_mirror(port_id_0, 'dst-pool 1 on')
+
+ self.vm1_testpmd.execute_cmd('set fwd rxonly')
+ self.vm1_testpmd.execute_cmd('start')
+ vm1_start_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+ vm0_start_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ self.vm0_testpmd.execute_cmd('start tx_first')
+ vm0_end_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ vm1_end_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+
+ vm0_ret_stats = self.calculate_stats(vm0_start_stats, vm0_end_stats)
+ vm1_ret_stats = self.calculate_stats(vm1_start_stats, vm1_end_stats)
+
+ self.verify(vm1_ret_stats['RX-packets'] == vm0_ret_stats['TX-packets'],
+ "Downlink mirror failed between VM0 and VM1!")
+
+ self.reset_port_mirror_rule(port_id_0, rule_id)
+
+ def test_two_vms_vlan_mirror(self):
+ self.setup_2vm_2vf_env()
+ self.setup_two_vm_common_prerequisite()
+
+ port_id_0 = 0
+ vlan_id = 0
+ vf_mask = '0x1'
+ packet_num = 10
+
+ self.host_testpmd.execute_cmd(
+ 'rx_vlan add %d port %d vf %s' % (vlan_id, port_id_0, vf_mask))
+ rule_id = self.set_port_vlan_mirror(port_id_0, '0 dst-pool 1 on')
+
+ vm1_start_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+ ether_ip = {}
+ ether_ip['dot1q'] = {'vlan': '%d' % vlan_id}
+ self.send_packet(
+ self.vm_dut_0,
+ self.vm0_dut_ports,
+ port_id_0,
+ count=packet_num,
+ **ether_ip)
+ vm1_end_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+
+ vm1_ret_stats = self.calculate_stats(vm1_start_stats, vm1_end_stats)
+
+ self.verify(vm1_ret_stats['RX-packets'] == packet_num and
+ vm1_ret_stats['TX-packets'] == packet_num,
+ "Vlan mirror failed between VM0 and VM1!")
+
+ self.reset_port_mirror_rule(port_id_0, rule_id)
+
+ def test_two_vms_vlan_and_pool_mirror(self):
+ self.setup_2vm_2vf_env()
+ self.setup_two_vm_common_prerequisite()
+
+ port_id_0 = 0
+ vlan_id = 3
+ vf_mask = '0x2'
+ packet_num = 10
+
+ self.host_testpmd.execute_cmd(
+ 'rx_vlan add %d port %d vf %s' % (vlan_id, port_id_0, vf_mask))
+ self.set_port_pool_mirror(port_id_0, '0x1 dst-pool 1 on')
+ self.set_port_vlan_mirror(port_id_0, '%d dst-pool 0 on' % vlan_id)
+
+ vm1_start_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+ self.send_packet(
+ self.vm_dut_0,
+ self.vm0_dut_ports,
+ port_id_0,
+ count=packet_num)
+ vm1_end_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+
+ vm1_ret_stats = self.calculate_stats(vm1_start_stats, vm1_end_stats)
+
+ self.verify(vm1_ret_stats['RX-packets'] == packet_num and
+ vm1_ret_stats['TX-packets'] == packet_num,
+ "Pool mirror failed between VM0 and VM1 when set vlan and pool mirror!")
+
+ vm0_start_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ ether_ip = {}
+ ether_ip['dot1q'] = {'vlan': '%d' % vlan_id}
+ self.send_packet(
+ self.vm_dut_1,
+ self.vm1_dut_ports,
+ port_id_0,
+ count=10 *
+ packet_num,
+ **ether_ip)
+ vm0_end_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+
+ vm0_ret_stats = self.calculate_stats(vm0_start_stats, vm0_end_stats)
+
+ self.verify(vm0_ret_stats['RX-packets'] == 10 * packet_num,
+ "Vlan mirror failed between VM0 and VM1 when set vlan and pool mirror!")
+
+ self.reset_port_all_mirror_rule(port_id_0)
+
+ def test_two_vms_uplink_and_downlink_mirror(self):
+ self.setup_2vm_2vf_env()
+ self.setup_two_vm_common_prerequisite()
+
+ self.vm0_testpmd.execute_cmd('stop')
+ self.vm1_testpmd.execute_cmd('stop')
+
+ port_id_0 = 0
+ packet_num = 10
+
+ self.set_port_downlink_mirror(port_id_0, 'dst-pool 1 on')
+ self.set_port_uplink_mirror(port_id_0, 'dst-pool 1 on')
+
+ self.vm1_testpmd.execute_cmd('set fwd rxonly')
+ self.vm1_testpmd.execute_cmd('start')
+ vm1_start_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+ vm0_start_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ self.vm0_testpmd.execute_cmd('start tx_first')
+ vm0_end_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ vm1_end_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+
+ vm0_ret_stats = self.calculate_stats(vm0_start_stats, vm0_end_stats)
+ vm1_ret_stats = self.calculate_stats(vm1_start_stats, vm1_end_stats)
+
+ self.verify(vm1_ret_stats['RX-packets'] == vm0_ret_stats['TX-packets'],
+ "Downlink mirror failed between VM0 and VM1 " +
+ "when set uplink and downlink mirror!")
+
+ self.vm0_testpmd.execute_cmd('stop')
+ self.vm0_testpmd.execute_cmd('set fwd mac')
+ self.vm0_testpmd.execute_cmd('start')
+
+ vm1_start_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+ self.send_packet(
+ self.vm_dut_0,
+ self.vm0_dut_ports,
+ port_id_0,
+ count=packet_num)
+ vm1_end_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+
+ vm1_ret_stats = self.calculate_stats(vm1_start_stats, vm1_end_stats)
+
+ self.verify(vm1_ret_stats['RX-packets'] == 2 * packet_num,
+ "Uplink and down link mirror failed between VM0 and VM1 " +
+ "when set uplink and downlink mirror!")
+
+ self.reset_port_all_mirror_rule(port_id_0)
+
+ def test_two_vms_vlan_and_pool_and_uplink_and_downlink(self):
+ self.setup_2vm_2vf_env()
+ self.setup_two_vm_common_prerequisite()
+
+ self.vm0_testpmd.execute_cmd('stop')
+ self.vm1_testpmd.execute_cmd('stop')
+
+ port_id_0 = 0
+ vlan_id = 3
+ vf_mask = '0x2'
+ packet_num = 1
+
+ self.set_port_downlink_mirror(port_id_0, 'dst-pool 1 on')
+ self.set_port_uplink_mirror(port_id_0, 'dst-pool 1 on')
+ self.host_testpmd.execute_cmd("rx_vlan add %d port %d vf %s" %
+ (vlan_id, port_id_0, vf_mask))
+ self.set_port_vlan_mirror(port_id_0, '%d dst-pool 0 on' % vlan_id)
+ self.set_port_pool_mirror(port_id_0, '0x1 dst-pool 1 on')
+
+ self.vm1_testpmd.execute_cmd('set fwd rxonly')
+ self.vm1_testpmd.execute_cmd('start')
+ vm1_start_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+ vm0_start_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ self.vm0_testpmd.execute_cmd('start tx_first')
+ vm0_end_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ vm1_end_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+
+ vm0_ret_stats = self.calculate_stats(vm0_start_stats, vm0_end_stats)
+ vm1_ret_stats = self.calculate_stats(vm1_start_stats, vm1_end_stats)
+
+ self.verify(vm1_ret_stats['RX-packets'] == vm0_ret_stats['TX-packets'],
+ "Downlink mirror failed between VM0 and VM1 " +
+ "when set vlan, pool, uplink and downlink mirror!")
+
+ self.vm0_testpmd.execute_cmd('stop')
+ self.vm0_testpmd.execute_cmd('set fwd mac')
+ self.vm0_testpmd.execute_cmd('start')
+ vm0_start_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ vm1_start_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+ self.send_packet(
+ self.vm_dut_0,
+ self.vm0_dut_ports,
+ port_id_0,
+ count=packet_num)
+ vm0_end_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ vm1_end_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+
+ vm0_ret_stats = self.calculate_stats(vm0_start_stats, vm0_end_stats)
+ vm1_ret_stats = self.calculate_stats(vm1_start_stats, vm1_end_stats)
+
+ self.verify(vm0_ret_stats['RX-packets'] == packet_num and
+ vm0_ret_stats['TX-packets'] == packet_num and
+ vm1_ret_stats['RX-packets'] == 2 * packet_num,
+ "Uplink and downlink mirror failed between VM0 and VM1 " +
+ "when set vlan, pool, uplink and downlink mirror!")
+
+ self.vm0_testpmd.execute_cmd('stop')
+ self.vm0_testpmd.execute_cmd('set fwd mac')
+ self.vm0_testpmd.execute_cmd('start')
+
+ ether_ip = {}
+ ether_ip['dot1q'] = {'vlan': '%d' % vlan_id}
+ vm1_start_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+ vm0_start_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ self.send_packet(
+ self.vm_dut_1,
+ self.vm1_dut_ports,
+ port_id_0,
+ count=packet_num,
+ **ether_ip)
+ vm0_end_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ vm1_end_stats = self.vm1_testpmd.get_pmd_stats(port_id_0)
+
+ vm0_ret_stats = self.calculate_stats(vm0_start_stats, vm0_end_stats)
+ vm1_ret_stats = self.calculate_stats(vm1_start_stats, vm1_end_stats)
+
+ self.verify(vm0_ret_stats['RX-packets'] == packet_num and
+ vm0_ret_stats['TX-packets'] == packet_num and
+ vm1_ret_stats['RX-packets'] == 2 * packet_num,
+ "Vlan and downlink mirror failed between VM0 and VM1 " +
+ "when set vlan, pool, uplink and downlink mirror!")
+
+ self.reset_port_all_mirror_rule(port_id_0)
+
+ def test_two_vms_add_multi_exact_mac_on_vf(self):
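+ # Add two exact MAC addresses to VF0 from the PF and verify that VM0
+ # receives packets destined to each of them.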
+ self.setup_2vm_2vf_env()
+ self.setup_two_vm_common_prerequisite()
+
+ port_id_0 = 0
+ vf_num = 0
+ packet_num = 10
+
+ for vf_mac in ["00:11:22:33:44:55", "00:55:44:33:22:11"]:
+ self.host_testpmd.execute_cmd("mac_addr add port %d vf %d %s" %
+ (port_id_0, vf_num, vf_mac))
+
+ vm0_start_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ ether_ip = {}
+ ether_ip['ether'] = {'dest_mac': '%s' % vf_mac}
+ self.send_packet(
+ self.vm_dut_0,
+ self.vm0_dut_ports,
+ port_id_0,
+ count=packet_num,
+ **ether_ip)
+ vm0_end_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+
+ vm0_ret_stats = self.calculate_stats(
+ vm0_start_stats, vm0_end_stats)
+
+ self.verify(vm0_ret_stats['RX-packets'] == packet_num,
+ "Add exact MAC %s failed btween VF0 and VF1" % vf_mac +
+ "when add multi exact MAC address on VF!")
+
+ def test_two_vms_enable_or_disable_one_uta_mac_on_vf(self):
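+ # Enable promiscuous mode, unicast hash (ROPE) receive mode and a uta
+ # MAC entry on VF0, then verify VM0 receives packets to that MAC;
+ # after disabling promiscuous mode and ROPE, the same packets must be
+ # dropped.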
+ self.setup_2vm_2vf_env()
+ self.setup_two_vm_common_prerequisite()
+
+ port_id_0 = 0
+ vf_mac = "00:11:22:33:44:55"
+ packet_num = 10
+
+ self.host_testpmd.execute_cmd('set promisc %d on' % port_id_0)
+ self.host_testpmd.execute_cmd(
+ 'set port %d vf 0 rxmode ROPE on' % port_id_0)
+ self.host_testpmd.execute_cmd(
+ 'set port %d vf 1 rxmode ROPE off' % port_id_0)
+ self.host_testpmd.execute_cmd(
+ 'set port %d uta %s on' % (port_id_0, vf_mac))
+
+ vm0_start_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ ether_ip = {}
+ ether_ip['ether'] = {'dest_mac': '%s' % vf_mac}
+ self.send_packet(self.vm_dut_0, self.vm0_dut_ports, port_id_0,
+ count=packet_num, **ether_ip)
+ vm0_end_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+
+ vm0_ret_stats = self.calculate_stats(vm0_start_stats, vm0_end_stats)
+
+ self.verify(vm0_ret_stats['RX-packets'] == packet_num,
+ "Enable one uta MAC failed between VM0 and VM1 " +
+ "when enable or disable one uta MAC address on VF!")
+
+ self.host_testpmd.execute_cmd('set promisc %d off' % port_id_0)
+ self.host_testpmd.execute_cmd(
+ 'set port %d vf 0 rxmode ROPE off' % port_id_0)
+
+ vm0_start_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ ether_ip = {}
+ ether_ip['ether'] = {'dest_mac': '%s' % vf_mac}
+ self.send_packet(self.vm_dut_0, self.vm0_dut_ports, port_id_0,
+ count=packet_num, invert_verify=True, **ether_ip)
+ vm0_end_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+
+ vm0_ret_stats = self.calculate_stats(vm0_start_stats, vm0_end_stats)
+
+ self.verify(vm0_ret_stats['RX-packets'] == 0,
+ "Disable one uta MAC failed between VM0 and VM1 " +
+ "when enable or disable one uta MAC address on VF!")
+
+ def test_two_vms_add_multi_uta_mac_on_vf(self):
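+ # Register two MAC addresses in the unicast table array (uta) on the
+ # PF and verify VM0 receives packets addressed to each of them.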
+ self.setup_2vm_2vf_env()
+ self.setup_two_vm_common_prerequisite()
+
+ port_id_0 = 0
+ packet_num = 10
+
+ for vf_mac in ["00:55:44:33:22:11", "00:55:44:33:22:66"]:
+ self.host_testpmd.execute_cmd("set port %d uta %s on" %
+ (port_id_0, vf_mac))
+ self.host_testpmd.execute_cmd("set port %d uta %s on" %
+ (port_id_0, vf_mac))
+
+ for vf_mac in ["00:55:44:33:22:11", "00:55:44:33:22:66"]:
+ vm0_start_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ ether_ip = {}
+ ether_ip['ether'] = {'dest_mac': '%s' % vf_mac}
+ self.send_packet(self.vm_dut_0, self.vm0_dut_ports,
+ port_id_0, count=packet_num, **ether_ip)
+ vm0_end_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+
+ vm0_ret_stats = self.calculate_stats(
+ vm0_start_stats, vm0_end_stats)
+
+ self.verify(vm0_ret_stats['RX-packets'] == packet_num,
+ "Add MULTI uta MAC %s failed between VM0 and VM1 " % vf_mac +
+ "when add multi uta MAC address on VF!")
+
+ def test_two_vms_add_or_remove_uta_mac_on_vf(self):
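+ # Toggle a single uta MAC entry on/off/on and verify VM0 receives the
+ # packets while the entry is present and drops them while it is not.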
+ self.setup_2vm_2vf_env()
+ self.setup_two_vm_common_prerequisite()
+
+ port_id_0 = 0
+ vf_mac = "00:55:44:33:22:11"
+ packet_num = 10
+
+ for switch in ['on', 'off', 'on']:
+ self.host_testpmd.execute_cmd("set port %d uta %s %s" %
+ (port_id_0, vf_mac, switch))
+
+ vm0_start_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ ether_ip = {}
+ ether_ip['ether'] = {'dest_mac': '%s' % vf_mac}
+ if switch == 'on':
+ self.send_packet(self.vm_dut_0, self.vm0_dut_ports,
+ port_id_0, count=packet_num, **ether_ip)
+ else:
+ self.send_packet(self.vm_dut_0, self.vm0_dut_ports, port_id_0,
+ count=packet_num, invert_verify=True, **ether_ip)
+ vm0_end_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+
+ vm0_ret_stats = self.calculate_stats(
+ vm0_start_stats, vm0_end_stats)
+
+ if switch == 'on':
+ self.verify(vm0_ret_stats['RX-packets'] == packet_num,
+ "Add MULTI uta MAC %s failed between VM0 and VM1 " % vf_mac +
+ "when add or remove multi uta MAC address on VF!")
+ else:
+ self.verify(vm0_ret_stats['RX-packets'] == 0,
+ "Remove MULTI uta MAC %s failed between VM0 and VM1 " % vf_mac +
+ "when add or remove multi uta MAC address on VF!")
+
+ def test_two_vms_pause_rx_queues(self):
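+ # Toggle VF0 RX on/off/on from the PF and verify VM0 receives packets
+ # only while RX is enabled.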
+ self.setup_2vm_2vf_env()
+ self.setup_two_vm_common_prerequisite()
+
+ port_id_0 = 0
+ packet_num = 10
+
+ for switch in ['on', 'off', 'on']:
+ self.host_testpmd.execute_cmd("set port %d vf 0 rx %s" %
+ (port_id_0, switch))
+
+ vm0_start_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ if switch == 'on':
+ self.send_packet(self.vm_dut_0, self.vm0_dut_ports,
+ port_id_0, count=packet_num)
+ else:
+ self.send_packet(self.vm_dut_0, self.vm0_dut_ports, port_id_0,
+ count=packet_num, invert_verify=True)
+ vm0_end_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+
+ vm0_ret_stats = self.calculate_stats(
+ vm0_start_stats, vm0_end_stats)
+
+ if switch == 'on':
+ self.verify(vm0_ret_stats['RX-packets'] == packet_num,
+ "Enable RX queues failed between VM0 and VM1 " +
+ "when enable or pause RX queues on VF!")
+ else:
+ self.verify(vm0_ret_stats['RX-packets'] == 0,
+ "Pause RX queues failed between VM0 and VM1 " +
+ "when enable or pause RX queues on VF!")
+
+ def test_two_vms_pause_tx_queues(self):
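+ # With VM0 in mac forwarding mode, toggle VF0 TX on/off/on from the
+ # PF and verify VM0 forwards packets only while TX is enabled.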
+ self.setup_2vm_2vf_env()
+ self.setup_two_vm_common_prerequisite()
+
+ self.vm0_testpmd.execute_cmd("stop")
+ self.vm0_testpmd.execute_cmd("set fwd mac")
+ self.vm0_testpmd.execute_cmd("start")
+
+ port_id_0 = 0
+ packet_num = 10
+
+ for switch in ['on', 'off', 'on']:
+ self.host_testpmd.execute_cmd("set port %d vf 0 tx %s" %
+ (port_id_0, switch))
+
+ vm0_start_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ self.send_packet(
+ self.vm_dut_0,
+ self.vm0_dut_ports,
+ port_id_0,
+ count=packet_num)
+ vm0_end_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+
+ vm0_ret_stats = self.calculate_stats(
+ vm0_start_stats, vm0_end_stats)
+
+ if switch == 'on':
+ self.verify(vm0_ret_stats['TX-packets'] == packet_num,
+ "Enable TX queues failed between VM0 and VM1 " +
+ "when enable or pause TX queues on VF!")
+ else:
+ self.verify(vm0_ret_stats['TX-packets'] == 0,
+ "Pause TX queues failed between VM0 and VM1 " +
+ "when enable or pause TX queues on VF!")
+
+ def test_two_vms_prevent_rx_broadcast_on_vf(self):
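+ # Toggle broadcast accept mode (BAM) on VF0 and verify VM0 receives
+ # broadcast packets only while BAM is enabled.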
+ self.setup_2vm_2vf_env()
+ self.setup_two_vm_common_prerequisite()
+
+ port_id_0 = 0
+ vf_mac = "FF:FF:FF:FF:FF:FF"
+ packet_num = 10
+
+ for switch in ['on', 'off', 'on']:
+ self.host_testpmd.execute_cmd("set port %d vf 0 rxmode BAM %s" %
+ (port_id_0, switch))
+
+ vm0_start_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+ ether_ip = {}
+ ether_ip['ether'] = {'dest_mac': '%s' % vf_mac}
+ if switch == 'on':
+ self.send_packet(self.vm_dut_0, self.vm0_dut_ports, port_id_0,
+ count=packet_num, **ether_ip)
+ else:
+ self.send_packet(self.vm_dut_0, self.vm0_dut_ports, port_id_0,
+ count=packet_num, invert_verify=True, **ether_ip)
+ vm0_end_stats = self.vm0_testpmd.get_pmd_stats(port_id_0)
+
+ vm0_ret_stats = self.calculate_stats(
+ vm0_start_stats, vm0_end_stats)
+
+ if switch == 'on':
+ self.verify(vm0_ret_stats['RX-packets'] == packet_num,
+ "Enable RX broadcast failed between VM0 and VM1 " +
+ "when enable or disable RX queues on VF!")
+ else:
+ self.verify(vm0_ret_stats['RX-packets'] == 0,
+ "Disable RX broadcast failed between VM0 and VM1 " +
+ "when enable or pause TX queues on VF!")
+
+ def test_two_vms_negative_input_commands(self):
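+ # Feed the PF testpmd a set of malformed VF and mirror-rule commands
+ # and check that every one of them is rejected with an error message.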
+ self.setup_2vm_2vf_env()
+ self.setup_two_vm_common_prerequisite()
+
+ for command in ["set port 0 vf 65 tx on",
+ "set port 2 vf -1 tx off",
+ "set port 0 vf 0 rx oneee",
+ "set port 0 vf 0 rx offdd",
+ "set port 0 vf 64 rxmode BAM on",
+ "set port 0 vf 64 rxmode BAM off",
+ "set port 0 uta 00:11:22:33:44 on",
+ "set port 7 uta 00:55:44:33:22:11 off",
+ "set port 0 vf 34 rxmode ROPE on",
+ "mac_addr add port 0 vf 65 00:55:44:33:22:11",
+ "mac_addr add port 5 vf 0 00:55:44:88:22:11",
+ "set port 0 mirror-rule 0xf uplink-mirror dst-pool 1 on",
+ "set port 0 mirror-rule 2 vlan-mirror 9 dst-pool 1 on",
+ "set port 0 mirror-rule 0 downlink-mirror 0xf dst-pool 2 off",
+ "reset port 0 mirror-rule 4",
+ "reset port 0xff mirror-rule 0"]:
+ output = self.host_testpmd.execute_cmd(command)
+ error = False
+
+ for error_regx in [r'Bad', r'bad', r'failed', r'-[0-9]+', r'error', r'Invalid']:
+ ret_regx = re.search(error_regx, output)
+ if ret_regx and ret_regx.group():
+ error = True
+ break
+ self.verify(
+ error, "Execute command '%s' successfully, it should be failed!" % command)
+
+ def tear_down(self):
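+ # Clean up whatever per-case environment the test actually set up,
+ # based on the setup flags.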
+ if self.setup_2vm_prerequisite_flag == 1:
+ self.destroy_two_vm_common_prerequisite()
+ if self.setup_2vm_2vf_env_flag == 1:
+ self.destroy_2vm_2vf_env()
+
+ if self.setup_2vm_2pf_env_flag == 1:
+ self.destroy_2vm_2pf_env()
+
+ if self.setup_4vm_prerequisite_flag == 1:
+ self.destroy_four_vm_common_prerequisite()
+ if self.setup_4vm_4vf_env_flag == 1:
+ self.destroy_4vm_4vf_env()
+
+ def tear_down_all(self):
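+ # Stop any VMs that are still running and release the SRIOV VFs on
+ # every DUT port.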
+ if getattr(self, 'vm0', None):
+ self.vm0.stop()
+ if getattr(self, 'vm1', None):
+ self.vm1.stop()
+ if getattr(self, 'vm2', None):
+ self.vm2.stop()
+ if getattr(self, 'vm3', None):
+ self.vm3.stop()
+
+ for port_id in self.dut_ports:
+ self.dut.destroy_sriov_vfs_by_port(port_id)
--
1.9.0