* [dts] [PATCH] tests: Add VF packet drop and performance test
@ 2016-01-19 8:07 Yulong Pei
2016-02-01 8:36 ` Liu, Yong
0 siblings, 1 reply; 4+ messages in thread
From: Yulong Pei @ 2016-01-19 8:07 UTC (permalink / raw)
To: dts
1. vf_perf.cfg: VM settings and QEMU parameters.
2. vf_perf_test_plan.rst: test plan describing the test cases.
3. TestSuite_vf_perf.py: test case implementation following the test plan.
Signed-off-by: Yulong Pei <yulong.pei@intel.com>
---
conf/vf_perf.cfg | 105 ++++++++++++++++++++
test_plans/vf_perf_test_plan.rst | 179 ++++++++++++++++++++++++++++++++++
tests/TestSuite_vf_perf.py | 201 +++++++++++++++++++++++++++++++++++++++
3 files changed, 485 insertions(+)
create mode 100644 conf/vf_perf.cfg
create mode 100644 test_plans/vf_perf_test_plan.rst
create mode 100644 tests/TestSuite_vf_perf.py
diff --git a/conf/vf_perf.cfg b/conf/vf_perf.cfg
new file mode 100644
index 0000000..986d289
--- /dev/null
+++ b/conf/vf_perf.cfg
@@ -0,0 +1,105 @@
+# QEMU options
+# name
+# name: vm0
+#
+# enable_kvm
+# enable: [yes | no]
+#
+# cpu
+# model: [host | core2duo | ...]
+# usage:
+# choose model value from the command
+# qemu-system-x86_64 -cpu help
+# number: '4' #number of vcpus
+# cpupin: '3 4 5 6' # host cpu list
+#
+# mem
+# size: 1024
+#
+# disk
+# file: /path/to/image/test.img
+#
+# net
+# type: [nic | user | tap | bridge | ...]
+# nic
+# opt_vlan: 0
+# note: Default is 0.
+# opt_macaddr: 00:00:00:00:01:01
+# note: if creating a nic, it's better to specify a MAC address,
+# otherwise a random one will be generated.
+# opt_model:["e1000" | "virtio" | "i82551" | ...]
+# note: Default is e1000.
+# opt_name: 'nic1'
+# opt_addr: ''
+# note: PCI cards only.
+# opt_vectors:
+# note: This option currently only affects virtio cards.
+# user
+# opt_vlan: 0
+# note: default is 0.
+# opt_hostfwd: [tcp|udp]:[hostaddr]:hostport-[guestaddr]:guestport
+# note: If not specified, it will be set automatically.
+# tap
+# opt_vlan: 0
+# note: default is 0.
+# opt_br: br0
+# note: if choosing tap, specify the bridge name,
+# otherwise it defaults to br0.
+# opt_script: QEMU_IFUP_PATH
+# note: if not specified, default is self.QEMU_IFUP_PATH.
+# opt_downscript: QEMU_IFDOWN_PATH
+# note: if not specified, default is self.QEMU_IFDOWN_PATH.
+#
+# device
+# driver: [pci-assign | virtio-net-pci | ...]
+# pci-assign
+# prop_host: 08:00.0
+# prop_addr: 00:00:00:00:01:02
+# virtio-net-pci
+# prop_netdev: mynet1
+# prop_id: net1
+# prop_mac: 00:00:00:00:01:03
+# prop_bus: pci.0
+# prop_addr: 0x3
+#
+# monitor
+# port: 6061
+# note: if adding a monitor to the vm, specify
+# this port, else a free port on the host
+# machine will be chosen.
+#
+# qga
+# enable: [yes | no]
+#
+# serial_port
+# enable: [yes | no]
+#
+# vnc
+# displayNum: 1
+# note: you can choose a number not used on the host.
+#
+# daemon
+# enable: 'yes'
+# note:
+# By default the VM starts daemonized.
+# Starting it on stdin is not supported yet.
+
+# vm configuration for pmd sriov case
+[vm0]
+cpu =
+ model=host,number=4,cpupin=5 6 7 8;
+disk =
+ file=/home/image/sriov-fc20-1.img;
+login =
+ user=root,password=tester;
+net =
+ type=nic,opt_vlan=0;
+ type=user,opt_vlan=0;
+monitor =
+ port=;
+qga =
+ enable=yes;
+vnc =
+ displayNum=1;
+daemon =
+ enable=yes;
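For reference, the `key=value,...;` option strings used throughout the `[vm0]` section above can be decoded with a small parser like the sketch below. The helper name is hypothetical and only illustrates the layout; the framework loads these sections through its own config machinery.

```python
# Illustrative sketch: split a DTS-style VM option string such as
# "model=host,number=4,cpupin=5 6 7 8;" into one dict per ';'-separated
# entry, with ','-separated key=value pairs inside each entry.

def parse_vm_option(value):
    entries = []
    for chunk in value.split(';'):
        chunk = chunk.strip()
        if not chunk:
            continue
        params = {}
        for pair in chunk.split(','):
            key, _, val = pair.partition('=')
            params[key.strip()] = val.strip()
        entries.append(params)
    return entries

print(parse_vm_option('model=host,number=4,cpupin=5 6 7 8;'))
```

Multi-entry values such as `net = type=nic,opt_vlan=0; type=user,opt_vlan=0;` simply yield one dict per entry.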
diff --git a/test_plans/vf_perf_test_plan.rst b/test_plans/vf_perf_test_plan.rst
new file mode 100644
index 0000000..059e1bd
--- /dev/null
+++ b/test_plans/vf_perf_test_plan.rst
@@ -0,0 +1,179 @@
+.. Copyright (c) <2015>, Intel Corporation
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ - Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+
+ - Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+
+ - Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Test Case 1: Measure packet loss with kernel PF & dpdk VF
+=========================================================
+
+1. Get the PCI device ID of the DUT, for example,
+
+./dpdk_nic_bind.py --st
+
+0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
+
+2. Create 2 VFs from 1 PF,
+
+echo 2 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
+./dpdk_nic_bind.py --st
+
+0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=igb_uio
+0000:81:02.0 'XL710/X710 Virtual Function' unused=
+0000:81:02.1 'XL710/X710 Virtual Function' unused=
+
+3. Detach the VFs from the host and bind them to the pci-stub driver,
+
+virsh nodedev-detach pci_0000_81_02_0;
+virsh nodedev-detach pci_0000_81_02_1;
+
+./dpdk_nic_bind.py --st
+
+0000:81:02.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
+0000:81:02.1 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
+
+4. Pass VFs 81:02.0 & 81:02.1 through to vm0, then start vm0,
+
+/usr/bin/qemu-system-x86_64 -name vm0 -enable-kvm \
+-cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \
+-device pci-assign,host=81:02.0,id=pt_0 \
+-device pci-assign,host=81:02.1,id=pt_1
+
+5. Log in to vm0 and get the VFs' PCI device IDs there (assume they are 00:06.0 & 00:07.0). Bind them to the igb_uio driver,
+then start testpmd and set it to mac forward mode,
+
+./tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0 00:07.0
+./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 -- -i --portmask=0x3 --txqflags=0
+
+testpmd> set fwd mac
+testpmd> start
+
+6. Use the Ixia traffic generator to send 64-byte packets to the VF at 10% of line rate, and verify that the packet loss ratio is < 0.0001.
+
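The pass criterion in step 6 reduces to simple arithmetic: the loss ratio is (tx - rx) / tx and must stay below 1e-4. A minimal sketch follows; the helper name is illustrative, since the suite itself relies on the tester's traffic_generator_loss() result.

```python
# Illustrative arithmetic for the step-6 pass criterion: the ratio of
# lost to transmitted packets must be below 0.0001 (1 in 10,000).

def loss_ratio(tx_packets, rx_packets):
    if tx_packets <= 0:
        raise ValueError("no packets transmitted")
    return (tx_packets - rx_packets) / float(tx_packets)

# 500 frames lost out of 10,000,000 sent is a 5e-05 ratio, which passes.
assert loss_ratio(10_000_000, 9_999_500) < 0.0001
```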
+Test Case 2: Measure performance with kernel PF & dpdk VF
+=========================================================
+
+1. Set up the test environment as in Test Case 1, steps 1-5.
+
+2. Measure the maximum RFC2544 throughput for the following packet sizes,
+
+frameSizes = [64, 128, 256, 512, 1024, 1280, 1518]
+
+The output format should be as below, with figures given in mpps:
+
++------------+-------+
+| Size\Cores | all |
++------------+-------+
+| 64-byte | |
++------------+-------+
+| 128-byte | |
++------------+-------+
+| 256-byte | |
++------------+-------+
+| 512-byte | |
++------------+-------+
+| 1024-byte | |
++------------+-------+
+| 1280-byte | |
++------------+-------+
+| 1518-byte | |
++------------+-------+
+
+
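RFC2544 throughput is conventionally found by a zero-loss binary search over the offered rate, repeated per frame size. The sketch below shows only the search idea; `send_at_rate` is a purely hypothetical stand-in for the Ixia call.

```python
# Hedged sketch of an RFC2544-style zero-loss rate search: lower the
# offered rate when loss is observed, raise it when none is, until the
# search window is narrower than the desired precision.

def rfc2544_search(send_at_rate, lo=0.0, hi=100.0, precision=0.1):
    """Return the highest % of line rate observed with zero loss."""
    best = lo
    while hi - lo > precision:
        rate = (lo + hi) / 2.0
        lost = send_at_rate(rate)   # packets lost at this offered rate
        if lost == 0:
            best = lo = rate        # zero loss: search higher
        else:
            hi = rate               # loss seen: search lower
    return best

# A device that starts dropping above 40% of line rate converges near 40:
rate = rfc2544_search(lambda r: 0 if r <= 40.0 else 1)
assert abs(rate - 40.0) < 0.1
```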
+Test Case 3: Measure performance with dpdk PF & dpdk VF
+=======================================================
+
+1. Get the PCI device ID of the DUT and bind the port to the igb_uio driver, for example,
+
+./dpdk_nic_bind.py --st
+
+0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
+
+./dpdk_nic_bind.py --bind=igb_uio 81:00.0
+
+2. Create 2 VFs from 1 PF,
+
+echo 2 > /sys/bus/pci/devices/0000\:81\:00.0/max_vfs
+./dpdk_nic_bind.py --st
+
+0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=igb_uio
+0000:81:02.0 'XL710/X710 Virtual Function' unused=
+0000:81:02.1 'XL710/X710 Virtual Function' unused=
+
+3. Detach the VFs from the host and bind them to the pci-stub driver,
+
+./dpdk_nic_bind.py --bind=pci-stub 81:02.0 81:02.1
+./dpdk_nic_bind.py --st
+
+0000:81:02.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
+0000:81:02.1 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
+
+4. Start testpmd on the host with PF 81:00.0,
+
+./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 81:00.0 -- -i --portmask=0x1 --txqflags=0
+
+5. Pass VFs 81:02.0 & 81:02.1 through to vm0, then start vm0,
+
+/usr/bin/qemu-system-x86_64 -name vm0 -enable-kvm \
+-cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \
+-device pci-assign,host=81:02.0,id=pt_0 \
+-device pci-assign,host=81:02.1,id=pt_1
+
+6. Log in to vm0 and get the VFs' PCI device IDs there (assume they are 00:06.0 & 00:07.0). Bind them to the igb_uio driver,
+then start testpmd and set it to mac forward mode,
+
+./tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0 00:07.0
+./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 -- -i --portmask=0x3 --txqflags=0
+
+testpmd> set fwd mac
+testpmd> start
+
+7. Measure the maximum RFC2544 throughput for the following packet sizes,
+
+frameSizes = [64, 128, 256, 512, 1024, 1280, 1518]
+
+The output format should be as below, with figures given in mpps:
+
++------------+-------+
+| Size\Cores | all |
++------------+-------+
+| 64-byte | |
++------------+-------+
+| 128-byte | |
++------------+-------+
+| 256-byte | |
++------------+-------+
+| 512-byte | |
++------------+-------+
+| 1024-byte | |
++------------+-------+
+| 1280-byte | |
++------------+-------+
+| 1518-byte | |
++------------+-------+
diff --git a/tests/TestSuite_vf_perf.py b/tests/TestSuite_vf_perf.py
new file mode 100644
index 0000000..c95293f
--- /dev/null
+++ b/tests/TestSuite_vf_perf.py
@@ -0,0 +1,201 @@
+# <COPYRIGHT_TAG>
+
+import re
+import time
+
+import dts
+from qemu_kvm import QEMUKvm
+from test_case import TestCase
+from pmd_output import PmdOutput
+from etgen import IxiaPacketGenerator
+
+VM_CORES_MASK = 'all'
+
+class TestVfPerf(TestCase, IxiaPacketGenerator):
+
+ def set_up_all(self):
+
+ self.tester.extend_external_packet_generator(TestVfPerf, self)
+
+ self.dut_ports = self.dut.get_ports(self.nic)
+ self.verify(len(self.dut_ports) >= 1, "Insufficient ports")
+
+ self.core_configs = []
+ self.core_configs.append({'cores': 'all', 'pps': {}})
+
+ self.vm0 = None
+
+ def set_up(self):
+
+ self.setup_2vf_1vm_env_flag = 0
+
+ def setup_2vf_1vm_env(self, driver='default'):
+
+ self.used_dut_port = self.dut_ports[0]
+ self.dut.generate_sriov_vfs_by_port(self.used_dut_port, 2, driver=driver)
+ self.sriov_vfs_port = self.dut.ports_info[self.used_dut_port]['vfs_port']
+
+ try:
+
+ for port in self.sriov_vfs_port:
+ print port.pci
+ port.bind_driver('pci-stub')
+
+ time.sleep(1)
+ vf0_prop = {'opt_host': self.sriov_vfs_port[0].pci}
+ vf1_prop = {'opt_host': self.sriov_vfs_port[1].pci}
+
+ for port_id in self.dut_ports:
+ if port_id == self.used_dut_port:
+ continue
+ port = self.dut.ports_info[port_id]['port']
+ port.bind_driver()
+
+ if driver == 'igb_uio':
+ self.host_testpmd = PmdOutput(self.dut)
+ eal_param = '-b %(vf0)s -b %(vf1)s' % {'vf0': self.sriov_vfs_port[0].pci,
+ 'vf1': self.sriov_vfs_port[1].pci}
+ self.host_testpmd.start_testpmd("1S/2C/2T", eal_param=eal_param)
+
+ # set up VM0 ENV
+ self.vm0 = QEMUKvm(self.dut, 'vm0', 'vf_perf')
+ self.vm0.set_vm_device(driver='pci-assign', **vf0_prop)
+ self.vm0.set_vm_device(driver='pci-assign', **vf1_prop)
+ self.vm_dut_0 = self.vm0.start()
+ if self.vm_dut_0 is None:
+ raise Exception("Set up VM0 ENV failed!")
+
+ self.setup_2vf_1vm_env_flag = 1
+ except Exception as e:
+ self.destroy_2vf_1vm_env()
+ raise Exception(e)
+
+ def destroy_2vf_1vm_env(self):
+ if getattr(self, 'vm0', None):
+ self.vm0_testpmd.execute_cmd('stop')
+ self.vm0_testpmd.execute_cmd('quit', '# ')
+ self.vm0_testpmd = None
+ self.vm0_dut_ports = None
+ self.vm_dut_0 = None
+ self.vm0.stop()
+ self.vm0 = None
+
+ if getattr(self, 'host_testpmd', None):
+ self.host_testpmd.execute_cmd('quit', '# ')
+ self.host_testpmd = None
+
+ if getattr(self, 'used_dut_port', None):
+ self.dut.destroy_sriov_vfs_by_port(self.used_dut_port)
+ port = self.dut.ports_info[self.used_dut_port]['port']
+ port.bind_driver()
+ self.used_dut_port = None
+
+ for port_id in self.dut_ports:
+ port = self.dut.ports_info[port_id]['port']
+ port.bind_driver()
+
+ self.setup_2vf_1vm_env_flag = 0
+
+ def test_perf_kernel_pf_dpdk_vf_packet_loss(self):
+
+ self.setup_2vf_1vm_env(driver='')
+
+ self.vm0_dut_ports = self.vm_dut_0.get_ports('any')
+ port_id_0 = 0
+ self.vm0_testpmd = PmdOutput(self.vm_dut_0)
+ self.vm0_testpmd.start_testpmd(VM_CORES_MASK)
+ self.vm0_testpmd.execute_cmd('show port info all')
+ pmd0_vf0_mac = self.vm0_testpmd.get_port_mac(port_id_0)
+ self.vm0_testpmd.execute_cmd('set fwd mac')
+ self.vm0_testpmd.execute_cmd('start')
+
+ time.sleep(2)
+
+ tx_port = self.tester.get_local_port(self.dut_ports[0])
+ rx_port = tx_port
+ dst_mac = pmd0_vf0_mac
+ src_mac = self.tester.get_mac(tx_port)
+
+ self.tester.scapy_append('dmac="%s"' % dst_mac)
+ self.tester.scapy_append('smac="%s"' % src_mac)
+ self.tester.scapy_append('flows = [Ether(src=smac, dst=dmac)/IP(len=46)/UDP(len=26)/("X"*18)]')
+ self.tester.scapy_append('wrpcap("test.pcap", flows)')
+ self.tester.scapy_execute()
+
+ loss, _, _ = self.tester.traffic_generator_loss([(tx_port, rx_port, "test.pcap")], 10, delay=180)
+
+        self.verify(loss < 0.0001, "Excessive packet loss when sending 64-byte packets at 10% line rate")
+
+ def measure_vf_performance(self, driver='default'):
+
+ if driver == 'igb_uio':
+ self.setup_2vf_1vm_env(driver='igb_uio')
+ else:
+ self.setup_2vf_1vm_env(driver='')
+
+ self.vm0_dut_ports = self.vm_dut_0.get_ports('any')
+ port_id_0 = 0
+ self.vm0_testpmd = PmdOutput(self.vm_dut_0)
+ self.vm0_testpmd.start_testpmd(VM_CORES_MASK)
+ self.vm0_testpmd.execute_cmd('show port info all')
+ pmd0_vf0_mac = self.vm0_testpmd.get_port_mac(port_id_0)
+ self.vm0_testpmd.execute_cmd('set fwd mac')
+ self.vm0_testpmd.execute_cmd('start')
+
+ time.sleep(2)
+
+ frameSizes = [64, 128, 256, 512, 1024, 1280, 1518]
+
+ for config in self.core_configs:
+ self.dut.kill_all()
+ cores = self.dut.get_core_list(config['cores'])
+
+ tx_port = self.tester.get_local_port(self.dut_ports[0])
+ rx_port = tx_port
+ dst_mac = pmd0_vf0_mac
+ src_mac = self.tester.get_mac(tx_port)
+
+ global size
+ for size in frameSizes:
+ self.tester.scapy_append('dmac="%s"' % dst_mac)
+ self.tester.scapy_append('smac="%s"' % src_mac)
+ self.tester.scapy_append('flows = [Ether(src=smac, dst=dmac)/("X"*%d)]' % (size - 18))
+ self.tester.scapy_append('wrpcap("test.pcap", flows)')
+ self.tester.scapy_execute()
+ tgenInput = []
+ tgenInput.append((tx_port, rx_port, "test.pcap"))
+ _, pps = self.tester.traffic_generator_throughput(tgenInput)
+ config['pps'][size] = pps
+
+ for n in range(len(self.core_configs)):
+ for size in frameSizes:
+            self.verify(
+                self.core_configs[n]['pps'][size] != 0, "No traffic detected")
+
+ # Print results
+ dts.results_table_add_header(['Frame size'] + [n['cores'] for n in self.core_configs])
+ for size in frameSizes:
+ dts.results_table_add_row([size] + [n['pps'][size] for n in self.core_configs])
+ dts.results_table_print()
+
+ def test_perf_kernel_pf_dpdk_vf_performance(self):
+
+ self.measure_vf_performance(driver='')
+
+ def test_perf_dpdk_pf_dpdk_vf_performance(self):
+
+ self.measure_vf_performance(driver='igb_uio')
+
+ def tear_down(self):
+
+ if self.setup_2vf_1vm_env_flag == 1:
+ self.destroy_2vf_1vm_env()
+
+ def tear_down_all(self):
+
+ if getattr(self, 'vm0', None):
+ self.vm0.stop()
+
+ for port_id in self.dut_ports:
+ self.dut.destroy_sriov_vfs_by_port(port_id)
+
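A note on the `size - 18` payload arithmetic in measure_vf_performance() above: the Ether() header contributes 14 bytes and the frame checksum another 4, so 18 bytes are subtracted from the requested frame size to compute the "X" padding. A quick sanity sketch:

```python
# Why the suite pads with "X" * (size - 18): a 14-byte Ethernet header
# plus the 4-byte frame checksum leaves (size - 18) bytes of payload.

ETHER_HDR_LEN = 14
ETHER_CRC_LEN = 4

def payload_len(frame_size):
    return frame_size - ETHER_HDR_LEN - ETHER_CRC_LEN

frame_sizes = [64, 128, 256, 512, 1024, 1280, 1518]
for size in frame_sizes:
    assert payload_len(size) + ETHER_HDR_LEN + ETHER_CRC_LEN == size

print(payload_len(64))   # a 64-byte frame carries a 46-byte payload
```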
--
2.1.0
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [dts] [PATCH] tests: Add VF packet drop and performance test
2016-01-19 8:07 [dts] [PATCH] tests: Add VF packet drop and performance test Yulong Pei
@ 2016-02-01 8:36 ` Liu, Yong
0 siblings, 0 replies; 4+ messages in thread
From: Liu, Yong @ 2016-02-01 8:36 UTC (permalink / raw)
To: Yulong Pei, dts
Hi Yulong,
Some questions on the VF performance test plan.
BTW, as I know, the l3fwd case focuses on performance validation
and has been optimized for throughput.
On 01/19/2016 04:07 PM, Yulong Pei wrote:
> [...]
> +6. using ixia traffic generator to send 64 bytes packet with 10% line rate to VF, verify packet loss < 0.0001.
> +
Should we also validate packet sequence and content integrity?
> +Test Case 2: Measure performace with kernel PF & dpdk VF
> +========================================================
> +
> +1. setup test environment as Test Case 1, step 1-5.
> +
> +2. Measure maximum RFC2544 performance throughput for the following packet sizes,
> +
> +frameSizes = [64, 128, 256, 512, 1024, 1280, 1518]
> +
> +The output format should be as below, with figures given in mpps:
> +
We need also measure performance of multiple queues.
> ++------------+-------+
> +| Size\Cores | all |
> ++------------+-------+
> +| 64-byte | |
> ++------------+-------+
> +| 128-byte | |
> ++------------+-------+
> +| 256-byte | |
> ++------------+-------+
> +| 512-byte | |
> ++------------+-------+
> +| 1024-byte | |
> ++------------+-------+
> +| 1280-byte | |
> ++------------+-------+
> +| 1518-byte | |
> ++------------+-------+
> +
> +
> +Test Case 3: Measure performace with dpdk PF & dpdk VF
> +======================================================
> +
> +1. got the pci device id of DUT and bind it to igb_uio driver, for example,
> +
> +./dpdk_nic_bind.py --st
> +
> +0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
> +
> +./dpdk_nic_bind.py --bind=igb_uio 81:00.0
> +
> +2. create 2 VFs from 1 PF,
> +
> +echo 2 > /sys/bus/pci/devices/0000\:81\:00.0/max_vfs
> +./dpdk_nic_bind.py --st
> +
> +0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=igb_uio
> +0000:81:02.0 'XL710/X710 Virtual Function' unused=
> +0000:81:02.1 'XL710/X710 Virtual Function' unused=
> +
> +3. detach VFs from the host, bind them to pci-stub driver,
> +
> +./dpdk_nic_bind.py --bind=pci-stub 81:02.0 81:02.1
> +./dpdk_nic_bind.py --st
> +
> +0000:81:02.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
> +0000:81:02.1 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
> +
> +4. bind PF 81:00.0 to testpmd and start it on the host,
> +
> +./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 81:00.0 -- -i --portmask=0x1 --txqflags=0
> +
> +5. passthrough VFs 81:02.0 & 81:02.1 to vm0, start vm0,
> +
> +/usr/bin/qemu-system-x86_64 -name vm0 -enable-kvm \
> +-cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \
> +-device pci-assign,host=81:02.0,id=pt_0 \
> +-device pci-assign,host=81:02.1,id=pt_1
> +
> +6. login vm0, got VFs pci device id in vm0, assume they are 00:06.0 & 00:07.0, bind them to igb_uio driver,
> +and then start testpmd, set it in mac forward mode,
> +
> +./tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0 00:07.0
> +./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 -- -i --portmask=0x3 --txqflags=0
> +
> +testpmd> set fwd mac
> +testpmd> start
> +
> +7. Measure maximum RFC2544 performance throughput for the following packet sizes,
> +
> +frameSizes = [64, 128, 256, 512, 1024, 1280, 1518]
> +
> +The output format should be as below, with figures given in mpps:
> +
> ++------------+-------+
> +| Size\Cores | all |
> ++------------+-------+
> +| 64-byte | |
> ++------------+-------+
> +| 128-byte | |
> ++------------+-------+
> +| 256-byte | |
> ++------------+-------+
> +| 512-byte | |
> ++------------+-------+
> +| 1024-byte | |
> ++------------+-------+
> +| 1280-byte | |
> ++------------+-------+
> +| 1518-byte | |
> ++------------+-------+
> diff --git a/tests/TestSuite_vf_perf.py b/tests/TestSuite_vf_perf.py
> new file mode 100644
> index 0000000..c95293f
> --- /dev/null
> +++ b/tests/TestSuite_vf_perf.py
> @@ -0,0 +1,201 @@
> +# <COPYRIGHT_TAG>
> +
> +import re
> +import time
> +
> +import dts
> +from qemu_kvm import QEMUKvm
> +from test_case import TestCase
> +from pmd_output import PmdOutput
> +from etgen import IxiaPacketGenerator
> +
> +VM_CORES_MASK = 'all'
> +
> +class TestVfPerf(TestCase, IxiaPacketGenerator):
> +
> + def set_up_all(self):
> +
> + self.tester.extend_external_packet_generator(TestVfPerf, self)
> +
> + self.dut_ports = self.dut.get_ports(self.nic)
> + self.verify(len(self.dut_ports) >= 1, "Insufficient ports")
> +
> + self.core_configs = []
> + self.core_configs.append({'cores': 'all', 'pps': {}})
> +
> + self.vm0 = None
> +
> + def set_up(self):
> +
> + self.setup_2vf_1vm_env_flag = 0
> +
> + def setup_2vf_1vm_env(self, driver='default'):
> +
> + self.used_dut_port = self.dut_ports[0]
> + self.dut.generate_sriov_vfs_by_port(self.used_dut_port, 2, driver=driver)
> + self.sriov_vfs_port = self.dut.ports_info[self.used_dut_port]['vfs_port']
> +
> + try:
> +
> + for port in self.sriov_vfs_port:
> + print port.pci
> + port.bind_driver('pci-stub')
> +
> + time.sleep(1)
> + vf0_prop = {'opt_host': self.sriov_vfs_port[0].pci}
> + vf1_prop = {'opt_host': self.sriov_vfs_port[1].pci}
> +
> + for port_id in self.dut_ports:
> + if port_id == self.used_dut_port:
> + continue
> + port = self.dut.ports_info[port_id]['port']
> + port.bind_driver()
> +
> + if driver == 'igb_uio':
> + self.host_testpmd = PmdOutput(self.dut)
> + eal_param = '-b %(vf0)s -b %(vf1)s' % {'vf0': self.sriov_vfs_port[0].pci,
> + 'vf1': self.sriov_vfs_port[1].pci}
> + self.host_testpmd.start_testpmd("1S/2C/2T", eal_param=eal_param)
> +
> + # set up VM0 ENV
> + self.vm0 = QEMUKvm(self.dut, 'vm0', 'vf_perf')
> + self.vm0.set_vm_device(driver='pci-assign', **vf0_prop)
> + self.vm0.set_vm_device(driver='pci-assign', **vf1_prop)
> + self.vm_dut_0 = self.vm0.start()
> + if self.vm_dut_0 is None:
> + raise Exception("Set up VM0 ENV failed!")
> +
> + self.setup_2vf_1vm_env_flag = 1
> + except Exception as e:
> + self.destroy_2vf_1vm_env()
> + raise Exception(e)
> +
> + def destroy_2vf_1vm_env(self):
> + if getattr(self, 'vm0', None):
> + self.vm0_testpmd.execute_cmd('stop')
> + self.vm0_testpmd.execute_cmd('quit', '# ')
> + self.vm0_testpmd = None
> + self.vm0_dut_ports = None
> + self.vm_dut_0 = None
> + self.vm0.stop()
> + self.vm0 = None
> +
> + if getattr(self, 'host_testpmd', None):
> + self.host_testpmd.execute_cmd('quit', '# ')
> + self.host_testpmd = None
> +
> + if getattr(self, 'used_dut_port', None):
> + self.dut.destroy_sriov_vfs_by_port(self.used_dut_port)
> + port = self.dut.ports_info[self.used_dut_port]['port']
> + port.bind_driver()
> + self.used_dut_port = None
> +
> + for port_id in self.dut_ports:
> + port = self.dut.ports_info[port_id]['port']
> + port.bind_driver()
> +
> + self.setup_2vf_1vm_env_flag = 0
> +
> + def test_perf_kernel_pf_dpdk_vf_packet_loss(self):
> +
> + self.setup_2vf_1vm_env(driver='')
> +
> + self.vm0_dut_ports = self.vm_dut_0.get_ports('any')
> + port_id_0 = 0
> + self.vm0_testpmd = PmdOutput(self.vm_dut_0)
> + self.vm0_testpmd.start_testpmd(VM_CORES_MASK)
> + self.vm0_testpmd.execute_cmd('show port info all')
> + pmd0_vf0_mac = self.vm0_testpmd.get_port_mac(port_id_0)
> + self.vm0_testpmd.execute_cmd('set fwd mac')
> + self.vm0_testpmd.execute_cmd('start')
> +
> + time.sleep(2)
> +
> + tx_port = self.tester.get_local_port(self.dut_ports[0])
> + rx_port = tx_port
> + dst_mac = pmd0_vf0_mac
> + src_mac = self.tester.get_mac(tx_port)
> +
> + self.tester.scapy_append('dmac="%s"' % dst_mac)
> + self.tester.scapy_append('smac="%s"' % src_mac)
> + self.tester.scapy_append('flows = [Ether(src=smac, dst=dmac)/IP(len=46)/UDP(len=26)/("X"*18)]')
> + self.tester.scapy_append('wrpcap("test.pcap", flows)')
> + self.tester.scapy_execute()
> +
> + loss, _, _ = self.tester.traffic_generator_loss([(tx_port, rx_port, "test.pcap")], 10, delay=180)
> +
> + self.verify(loss < 0.0001, "Excessive packet loss when sending 64 bytes packet with 10% line rate")
> +
> + def measure_vf_performance(self, driver='default'):
> +
> + if driver == 'igb_uio':
> + self.setup_2vf_1vm_env(driver='igb_uio')
> + else:
> + self.setup_2vf_1vm_env(driver='')
> +
> + self.vm0_dut_ports = self.vm_dut_0.get_ports('any')
> + port_id_0 = 0
> + self.vm0_testpmd = PmdOutput(self.vm_dut_0)
> + self.vm0_testpmd.start_testpmd(VM_CORES_MASK)
> + self.vm0_testpmd.execute_cmd('show port info all')
> + pmd0_vf0_mac = self.vm0_testpmd.get_port_mac(port_id_0)
> + self.vm0_testpmd.execute_cmd('set fwd mac')
> + self.vm0_testpmd.execute_cmd('start')
> +
> + time.sleep(2)
> +
> + frameSizes = [64, 128, 256, 512, 1024, 1280, 1518]
> +
> + for config in self.core_configs:
> + self.dut.kill_all()
> + cores = self.dut.get_core_list(config['cores'])
> +
> + tx_port = self.tester.get_local_port(self.dut_ports[0])
> + rx_port = tx_port
> + dst_mac = pmd0_vf0_mac
> + src_mac = self.tester.get_mac(tx_port)
> +
> + global size
> + for size in frameSizes:
> + self.tester.scapy_append('dmac="%s"' % dst_mac)
> + self.tester.scapy_append('smac="%s"' % src_mac)
> + self.tester.scapy_append('flows = [Ether(src=smac, dst=dmac)/("X"*%d)]' % (size - 18))
> + self.tester.scapy_append('wrpcap("test.pcap", flows)')
> + self.tester.scapy_execute()
> + tgenInput = []
> + tgenInput.append((tx_port, rx_port, "test.pcap"))
> + _, pps = self.tester.traffic_generator_throughput(tgenInput)
> + config['pps'][size] = pps
> +
> + for n in range(len(self.core_configs)):
> + for size in frameSizes:
> + self.verify(
> + self.core_configs[n]['pps'][size] != 0, "No traffic detected")
> +
> + # Print results
> + dts.results_table_add_header(['Frame size'] + [n['cores'] for n in self.core_configs])
> + for size in frameSizes:
> + dts.results_table_add_row([size] + [n['pps'][size] for n in self.core_configs])
> + dts.results_table_print()
> +
> + def test_perf_kernel_pf_dpdk_vf_performance(self):
> +
> + self.measure_vf_performance(driver='')
> +
> + def test_perf_dpdk_pf_dpdk_vf_performance(self):
> +
> + self.measure_vf_performance(driver='igb_uio')
> +
> + def tear_down(self):
> +
> + if self.setup_2vf_1vm_env_flag == 1:
> + self.destroy_2vf_1vm_env()
> +
> + def tear_down_all(self):
> +
> + if getattr(self, 'vm0', None):
> + self.vm0.stop()
> +
> + for port_id in self.dut_ports:
> + self.dut.destroy_sriov_vfs_by_port(port_id)
> +
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [dts] [PATCH] tests: Add VF packet drop and performance test
2016-02-29 8:48 Yulong Pei
@ 2016-03-02 5:39 ` Liu, Yong
0 siblings, 0 replies; 4+ messages in thread
From: Liu, Yong @ 2016-03-02 5:39 UTC (permalink / raw)
To: Yulong Pei, dts
Hi Yulong,
On 02/29/2016 04:48 PM, Yulong Pei wrote:
> 1.vf_perf.cfg: vm setting and qemu parameters.
> 2.vf_perf_test_plan.rst: test plan, describe test cases.
> 3.TestSuite_vf_perf.py: implement test cases according to the test plan.
> 4.using l3fwd to measure performance instead of testpmd.
>
> Signed-off-by: Yulong Pei <yulong.pei@intel.com>
> ---
> conf/vf_perf.cfg | 107 +++++++++++++++++++
> test_plans/vf_perf_test_plan.rst | 182 ++++++++++++++++++++++++++++++++
> tests/TestSuite_vf_perf.py | 222 +++++++++++++++++++++++++++++++++++++++
> 3 files changed, 511 insertions(+)
> create mode 100644 conf/vf_perf.cfg
> create mode 100644 test_plans/vf_perf_test_plan.rst
> create mode 100644 tests/TestSuite_vf_perf.py
>
> diff --git a/conf/vf_perf.cfg b/conf/vf_perf.cfg
> new file mode 100644
> index 0000000..36fac55
> --- /dev/null
> +++ b/conf/vf_perf.cfg
> @@ -0,0 +1,107 @@
> +# QEMU options
> +# name
> +# name: vm0
> +#
> +# enable_kvm
> +# enable: [yes | no]
> +#
> +# cpu
> +# model: [host | core2duo | ...]
> +# usage:
> +# choose model value from the command
> +# qemu-system-x86_64 -cpu help
> +# number: '4' #number of vcpus
> +# cpupin: '3 4 5 6' # host cpu list
> +#
> +# mem
> +# size: 1024
> +#
> +# disk
> +# file: /path/to/image/test.img
> +#
> +# net
> +# type: [nic | user | tap | bridge | ...]
> +# nic
> +# opt_vlan: 0
> +# note: Default is 0.
> +# opt_macaddr: 00:00:00:00:01:01
> +# note: if creating a nic, it's better to specify a MAC,
> +# else it will get a random one.
> +# opt_model:["e1000" | "virtio" | "i82551" | ...]
> +# note: Default is e1000.
> +# opt_name: 'nic1'
> +# opt_addr: ''
> +# note: PCI cards only.
> +# opt_vectors:
> +# note: This option currently only affects virtio cards.
> +# user
> +# opt_vlan: 0
> +# note: default is 0.
> +# opt_hostfwd: [tcp|udp]:[hostaddr]:hostport-[guestaddr]:guestport
> +# note: If not specified, it will be set automatically.
> +# tap
> +# opt_vlan: 0
> +# note: default is 0.
> +# opt_br: br0
> +# note: if choosing tap, need to specify bridge name,
> +# else it will be br0.
> +# opt_script: QEMU_IFUP_PATH
> +# note: if not specified, default is self.QEMU_IFUP_PATH.
> +# opt_downscript: QEMU_IFDOWN_PATH
> +# note: if not specified, default is self.QEMU_IFDOWN_PATH.
> +#
> +# device
> +# driver: [pci-assign | virtio-net-pci | ...]
> +# pci-assign
> +# prop_host: 08:00.0
> +# prop_addr: 00:00:00:00:01:02
> +# virtio-net-pci
> +# prop_netdev: mynet1
> +# prop_id: net1
> +# prop_mac: 00:00:00:00:01:03
> +# prop_bus: pci.0
> +# prop_addr: 0x3
> +#
> +# monitor
> +# port: 6061
> +# note: if adding monitor to vm, need to specify
> +# this port, else it will get a free port
> +# on the host machine.
> +#
> +# qga
> +# enable: [yes | no]
> +#
> +# serial_port
> +# enable: [yes | no]
> +#
> +# vnc
> +# displayNum: 1
> +# note: you can choose a number not used on the host.
> +#
> +# daemon
> +# enable: 'yes'
> +# note:
> +# By default the VM starts daemonized.
> +# Starting it on stdin is not supported yet.
> +
> +# vm configuration for pmd sriov case
> +[vm0]
> +cpu =
> + model=host,number=4,cpupin=5 6 7 8;
For better performance, cpupin should be on the same socket as the NIC.
I suggest removing the core setting here and generating it dynamically in the test case.
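A minimal sketch of what that dynamic generation could look like. `read_numa_node` uses the standard sysfs attribute; `pick_cpupin` and the `socket_cpus` mapping are hypothetical helpers, not existing DTS APIs:

```python
def read_numa_node(pci_addr, sysfs_root="/sys/bus/pci/devices"):
    """Return the NUMA node of a PCI device such as '0000:81:00.0'."""
    try:
        with open("%s/%s/numa_node" % (sysfs_root, pci_addr)) as f:
            node = int(f.read().strip())
        return node if node >= 0 else 0
    except IOError:
        return 0  # attribute missing (no NUMA info): fall back to node 0

def pick_cpupin(socket_cpus, numa_node, count):
    """Pick `count` host CPUs from the socket that matches the NIC.

    socket_cpus: dict mapping numa node -> list of cpu ids (from lscpu/sysfs).
    Returns a space-separated string in the cfg file's cpupin format.
    """
    cpus = socket_cpus.get(numa_node, [])
    # Reserving cpu 0 and friends is left to the caller; take the first N here.
    return " ".join(str(c) for c in cpus[:count])
```

The resulting string can then be substituted into the `cpupin=` field of the VM cpu setting at suite set-up time.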
> +disk =
> + file=/home/image/sriov-fc20-1.img;
> +login =
> + user=root,password=tester;
> +net =
> + type=nic,opt_vlan=0;
> + type=user,opt_vlan=0;
> +monitor =
> + port=;
> +qga =
> + enable=yes;
> +vnc =
> + displayNum=4;
> +daemon =
> + enable=yes;
> +qemu =
> + path=/usr/local/qemu-2.4.0/x86_64-softmmu/qemu-system-x86_64;
> diff --git a/test_plans/vf_perf_test_plan.rst b/test_plans/vf_perf_test_plan.rst
> new file mode 100644
> index 0000000..53238ce
> --- /dev/null
> +++ b/test_plans/vf_perf_test_plan.rst
> @@ -0,0 +1,182 @@
> +.. Copyright (c) <2015>, Intel Corporation
> + All rights reserved.
> +
> + Redistribution and use in source and binary forms, with or without
> + modification, are permitted provided that the following conditions
> + are met:
> +
> + - Redistributions of source code must retain the above copyright
> + notice, this list of conditions and the following disclaimer.
> +
> + - Redistributions in binary form must reproduce the above copyright
> + notice, this list of conditions and the following disclaimer in
> + the documentation and/or other materials provided with the
> + distribution.
> +
> + - Neither the name of Intel Corporation nor the names of its
> + contributors may be used to endorse or promote products derived
> + from this software without specific prior written permission.
> +
> + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> + FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> + COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> + INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> + (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
> + SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> + STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> + ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
> + OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +Test Case 1: Measure packet loss with kernel PF & dpdk VF
> +=========================================================
> +
> +1. Get the PCI device ID of the DUT, for example,
> +
> +./dpdk_nic_bind.py --st
> +
> +0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
> +
> +2. create 2 VFs from 1 PF,
> +
> +echo 2 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
> +./dpdk_nic_bind.py --st
> +
> +0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=igb_uio
> +0000:81:02.0 'XL710/X710 Virtual Function' unused=
> +0000:81:02.1 'XL710/X710 Virtual Function' unused=
> +
> +3. detach VFs from the host, bind them to pci-stub driver,
> +
> +virsh nodedev-detach pci_0000_81_02_0;
> +virsh nodedev-detach pci_0000_81_02_1;
> +
> +./dpdk_nic_bind.py --st
> +
> +0000:81:02.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
> +0000:81:02.1 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
> +
> +4. passthrough VFs 81:02.0 & 81:02.1 to vm0, start vm0,
> +
> +/usr/bin/qemu-system-x86_64 -name vm0 -enable-kvm \
> +-cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \
> +-device pci-assign,host=81:02.0,id=pt_0 \
> +-device pci-assign,host=81:02.1,id=pt_1
> +
When starting the VM, it's better to use hugepage-backed memory; it helps performance.
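For example, the relevant QEMU flags (real options; sketched here as a small helper that could feed the qemu command line in step 4) would be:

```python
def hugepage_mem_opts(size_mb, path="/dev/hugepages"):
    """Build QEMU arguments for hugepage-backed guest memory.

    Uses the standard memory-backend-file object; `path` must point at a
    mounted hugetlbfs, and enough hugepages must be reserved beforehand.
    """
    return ("-m %dM -object memory-backend-file,id=mem,size=%dM,"
            "mem-path=%s,share=on -numa node,memdev=mem -mem-prealloc"
            % (size_mb, size_mb, path))
```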
> +5. Log in to vm0 and get the VFs' PCI device IDs there (assume they are 00:06.0 & 00:07.0); bind them to the igb_uio driver,
> +and then start testpmd in MAC forward mode,
> +
> +./tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0 00:07.0
> +./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 -- -i --portmask=0x3 --txqflags=0
> +
This uses only one queue for tx/rx; we also need to cover multi-queue performance.
> +testpmd> set fwd mac
> +testpmd> start
> +
> +6. Use the IXIA traffic generator to send 64-byte packets at 10% line rate to the VF; verify packet loss < 0.0001.
> +
I think we also need to measure the throughput with zero packet loss.
Like the RFC2544 function in l3fwd, we can get the throughput for both the
0.0001 and the zero packet-loss rates.
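A rough sketch of that search, assuming a `measure_loss(rate_percent)` hook into the traffic generator (hypothetical; the real DTS/IXIA call differs):

```python
def rfc2544_throughput(measure_loss, line_rate_pps, allowed_loss=0.0,
                       precision=0.1, max_iters=20):
    """Binary-search the highest offered rate (% of line rate) whose
    measured loss does not exceed `allowed_loss`.

    measure_loss: callable taking a rate percentage and returning the
    observed loss ratio at that rate.
    Returns (rate_percent, pps).
    """
    lo, hi, best = 0.0, 100.0, 0.0
    for _ in range(max_iters):
        if hi - lo <= precision:
            break
        mid = (lo + hi) / 2.0
        if measure_loss(mid) <= allowed_loss:
            best, lo = mid, mid      # passed: try a higher rate
        else:
            hi = mid                 # failed: back off
    return best, line_rate_pps * best / 100.0
```

Running it with `allowed_loss=0.0` gives the zero-loss throughput and with `allowed_loss=0.0001` the 0.0001 figure, so both numbers come out of the same loop.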
> +7. Use the IXIA traffic generator to send 64-byte packets at 100% line rate to the VF; verify packet loss < 0.0001.
> +
> +Test Case 2: Measure performance with kernel PF & dpdk VF
> +=========================================================
> +
> +1. setup test environment as Test Case 1, step 1-5.
> +
> +2. Measure maximum RFC2544 performance throughput for the following packet sizes,
> +
> +frameSizes = [64, 128, 256, 512, 1024, 1280, 1518]
> +
> +The output format should be as below, with figures given in mpps:
> +
> ++------------+--------+
> +| Size\Cores |1S/4C/1T|
> ++------------+--------+
> +| 64-byte | |
> ++------------+--------+
> +| 128-byte | |
> ++------------+--------+
> +| 256-byte | |
> ++------------+--------+
> +| 512-byte | |
> ++------------+--------+
> +| 1024-byte | |
> ++------------+--------+
> +| 1280-byte | |
> ++------------+--------+
> +| 1518-byte | |
> ++------------+--------+
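For context when filling in that table, the theoretical line rate per frame size on a 10GbE link follows from the 20 extra wire bytes per frame (7B preamble + 1B SFD + 12B inter-frame gap):

```python
def line_rate_pps(frame_size, link_bps=10 * 10**9):
    # frame_size already includes the 4-byte CRC; add 20B preamble/IFG.
    return link_bps / ((frame_size + 20) * 8.0)

for size in [64, 128, 256, 512, 1024, 1280, 1518]:
    print("%4d bytes: %7.3f Mpps" % (size, line_rate_pps(size) / 1e6))
# 64-byte frames top out at about 14.881 Mpps on 10GbE.
```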
> +
> +
> +Test Case 3: Measure performance with dpdk PF & dpdk VF
> +=======================================================
Not sure whether we need to cover this; please check with the stakeholders.
> +
> +1. Get the PCI device ID of the DUT and bind it to the igb_uio driver, for example,
> +
> +./dpdk_nic_bind.py --st
> +
> +0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
> +
> +./dpdk_nic_bind.py --bind=igb_uio 81:00.0
> +
> +2. create 2 VFs from 1 PF,
> +
> +echo 2 > /sys/bus/pci/devices/0000\:81\:00.0/max_vfs
> +./dpdk_nic_bind.py --st
> +
> +0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=igb_uio
> +0000:81:02.0 'XL710/X710 Virtual Function' unused=
> +0000:81:02.1 'XL710/X710 Virtual Function' unused=
> +
> +3. detach VFs from the host, bind them to pci-stub driver,
> +
> +./dpdk_nic_bind.py --bind=pci-stub 81:02.0 81:02.1
> +./dpdk_nic_bind.py --st
> +
> +0000:81:02.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
> +0000:81:02.1 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
> +
> +4. bind PF 81:00.0 to testpmd and start it on the host,
> +
> +./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 81:00.0 -- -i --portmask=0x1 --txqflags=0
> +
> +5. passthrough VFs 81:02.0 & 81:02.1 to vm0, start vm0,
> +
> +/usr/bin/qemu-system-x86_64 -name vm0 -enable-kvm \
> +-cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \
> +-device pci-assign,host=81:02.0,id=pt_0 \
> +-device pci-assign,host=81:02.1,id=pt_1
> +
> +6. Log in to vm0 and get the VFs' PCI device IDs there (assume they are 00:06.0 & 00:07.0); bind them to the igb_uio driver,
> +and then start testpmd in MAC forward mode,
> +
> +./tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0 00:07.0
> +./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 -- -i --portmask=0x3 --txqflags=0
> +
> +testpmd> set fwd mac
> +testpmd> start
> +
> +7. Measure maximum RFC2544 performance throughput for the following packet sizes,
> +
> +frameSizes = [64, 128, 256, 512, 1024, 1280, 1518]
> +
> +The output format should be as below, with figures given in mpps:
> +
> ++------------+--------+
> +| Size\Cores |1S/4C/1T|
> ++------------+--------+
> +| 64-byte | |
> ++------------+--------+
> +| 128-byte | |
> ++------------+--------+
> +| 256-byte | |
> ++------------+--------+
> +| 512-byte | |
> ++------------+--------+
> +| 1024-byte | |
> ++------------+--------+
> +| 1280-byte | |
> ++------------+--------+
> +| 1518-byte | |
> ++------------+--------+
> +
> diff --git a/tests/TestSuite_vf_perf.py b/tests/TestSuite_vf_perf.py
> new file mode 100644
> index 0000000..2fccc59
> --- /dev/null
> +++ b/tests/TestSuite_vf_perf.py
> @@ -0,0 +1,222 @@
> +# <COPYRIGHT_TAG>
> +
> +import re
> +import time
> +
> +import dts
> +from qemu_kvm import QEMUKvm
> +from test_case import TestCase
> +from pmd_output import PmdOutput
> +from etgen import IxiaPacketGenerator
> +
> +VM_CORES_MASK = 'all'
> +
> +class TestVfPerf(TestCase, IxiaPacketGenerator):
> +
> + def set_up_all(self):
> +
> + self.tester.extend_external_packet_generator(TestVfPerf, self)
> +
> + self.dut_ports = self.dut.get_ports(self.nic)
> + self.verify(len(self.dut_ports) >= 1, "Insufficient ports")
> +
> + self.core_configs = []
> + self.core_configs.append({'cores': '1S/4C/1T', 'pps': {}})
> +
> + self.vm0 = None
> + self.vf0_mac = "00:12:34:56:78:01"
> +
> +
> + def set_up(self):
> +
> + self.setup_2vf_1vm_env_flag = 0
> +
> + def setup_2vf_1vm_env(self, driver='default'):
> +
> + self.used_dut_port = self.dut_ports[0]
> + self.dut.generate_sriov_vfs_by_port(self.used_dut_port, 2, driver=driver)
> + self.sriov_vfs_port = self.dut.ports_info[self.used_dut_port]['vfs_port']
> +
> + if driver != 'igb_uio':
> + pf_itf = self.dut.ports_info[0]['port'].get_interface_name()
> + self.dut.send_expect("ip link set %s vf 0 mac %s" %(pf_itf, self.vf0_mac), "#")
> +
> + try:
> +
> + for port in self.sriov_vfs_port:
> + print port.pci
> + port.bind_driver('pci-stub')
> +
> + time.sleep(1)
> + vf0_prop = {'opt_host': self.sriov_vfs_port[0].pci}
> + vf1_prop = {'opt_host': self.sriov_vfs_port[1].pci}
> +
> + for port_id in self.dut_ports:
> + if port_id == self.used_dut_port:
> + continue
> + port = self.dut.ports_info[port_id]['port']
> + port.bind_driver()
> +
> + if driver == 'igb_uio':
> + self.host_testpmd = PmdOutput(self.dut)
> + eal_param = '-b %(vf0)s -b %(vf1)s' % {'vf0': self.sriov_vfs_port[0].pci,
> + 'vf1': self.sriov_vfs_port[1].pci}
> + self.host_testpmd.start_testpmd("1S/2C/2T", eal_param=eal_param)
> +
> + # set up VM0 ENV
> + self.vm0 = QEMUKvm(self.dut, 'vm0', 'vf_perf')
> + self.vm0.set_vm_device(driver='pci-assign', **vf0_prop)
> + self.vm0.set_vm_device(driver='pci-assign', **vf1_prop)
> + self.vm_dut_0 = self.vm0.start()
> + if self.vm_dut_0 is None:
> + raise Exception("Set up VM0 ENV failed!")
> +
> + self.setup_2vf_1vm_env_flag = 1
> + except Exception as e:
> + self.destroy_2vf_1vm_env()
> + raise Exception(e)
> +
> + def destroy_2vf_1vm_env(self):
> +
> + if getattr(self, 'vm0_testpmd', None):
> + self.vm0_testpmd.execute_cmd('stop')
> + self.vm0_testpmd.execute_cmd('quit', '# ')
> + self.vm0_testpmd = None
> +
> + if getattr(self, 'vm0', None):
> + self.vm0_dut_ports = None
> + self.vm_dut_0 = None
> + self.vm0.stop()
> + self.vm0 = None
> +
> + if getattr(self, 'host_testpmd', None):
> + self.host_testpmd.execute_cmd('quit', '# ')
> + self.host_testpmd = None
> +
> + if getattr(self, 'used_dut_port', None):
> + self.dut.destroy_sriov_vfs_by_port(self.used_dut_port)
> + port = self.dut.ports_info[self.used_dut_port]['port']
> + port.bind_driver()
> + self.used_dut_port = None
> +
> + for port_id in self.dut_ports:
> + self.dut.destroy_sriov_vfs_by_port(port_id)
> + port = self.dut.ports_info[port_id]['port']
> + port.bind_driver()
> +
> + self.setup_2vf_1vm_env_flag = 0
> +
> + def test_perf_kernel_pf_dpdk_vf_packet_loss(self):
> +
> + self.setup_2vf_1vm_env(driver='')
> +
> + self.vm0_dut_ports = self.vm_dut_0.get_ports('any')
> + port_id_0 = 0
> + self.vm0_testpmd = PmdOutput(self.vm_dut_0)
> + self.vm0_testpmd.start_testpmd(VM_CORES_MASK)
> + self.vm0_testpmd.execute_cmd('show port info all')
> + pmd0_vf0_mac = self.vm0_testpmd.get_port_mac(port_id_0)
> + self.vm0_testpmd.execute_cmd('set fwd mac')
> + self.vm0_testpmd.execute_cmd('start')
> +
> + time.sleep(2)
> +
> + tx_port = self.tester.get_local_port(self.dut_ports[0])
> + rx_port = tx_port
> + dst_mac = pmd0_vf0_mac
> + src_mac = self.tester.get_mac(tx_port)
> +
> + self.tester.scapy_append('dmac="%s"' % dst_mac)
> + self.tester.scapy_append('smac="%s"' % src_mac)
> + self.tester.scapy_append('flows = [Ether(src=smac, dst=dmac)/IP(len=46)/UDP(len=26)/("X"*18)]')
> + self.tester.scapy_append('wrpcap("test.pcap", flows)')
> + self.tester.scapy_execute()
> +
> + loss, _, _ = self.tester.traffic_generator_loss([(tx_port, rx_port, "test.pcap")], 10, delay=180)
> +
> + self.verify(loss < 0.0001, "Excessive packet loss when sending 64-byte packets at 10% line rate")
> +
> + loss, _, _ = self.tester.traffic_generator_loss([(tx_port, rx_port, "test.pcap")], 100, delay=180)
> +
> + self.verify(loss < 0.0001, "Excessive packet loss when sending 64-byte packets at 100% line rate")
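As a sanity check on the flow built above: the scapy layers sum to a 60-byte frame, and the NIC's 4-byte CRC brings it to the 64 bytes the test targets:

```python
# Byte budget for the 64-byte test frame (values from the scapy line above).
ETHER_HDR, IP_HDR, UDP_HDR, PAYLOAD, CRC = 14, 20, 8, 18, 4

ip_total = IP_HDR + UDP_HDR + PAYLOAD      # matches IP(len=46)
udp_total = UDP_HDR + PAYLOAD              # matches UDP(len=26)
wire_frame = ETHER_HDR + ip_total + CRC    # 64 bytes on the wire

assert ip_total == 46 and udp_total == 26 and wire_frame == 64
```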
> +
> +
> + def measure_vf_performance(self, driver='default'):
> +
> + if driver == 'igb_uio':
> + self.setup_2vf_1vm_env(driver='igb_uio')
> + else:
> + self.setup_2vf_1vm_env(driver='')
> +
> + if driver == 'igb_uio':
> + self.vm0_dut_ports = self.vm_dut_0.get_ports('any')
> + port_id_0 = 0
> + self.vm0_testpmd = PmdOutput(self.vm_dut_0)
> + self.vm0_testpmd.start_testpmd(VM_CORES_MASK)
> + self.vm0_testpmd.execute_cmd('show port info all')
> + self.vf0_mac = self.vm0_testpmd.get_port_mac(port_id_0)
> + self.vm0_testpmd.execute_cmd('stop')
> + self.vm0_testpmd.execute_cmd('quit', '# ')
> + self.vm0_testpmd = None
> +
> + out = self.vm_dut_0.build_dpdk_apps('examples/l3fwd')
> + self.verify("Error" not in out, "compilation error 1")
> + self.verify("No such file" not in out, "compilation error 2")
> +
> + cmdline = "./examples/l3fwd/build/l3fwd -c 0xf -n 4 -- -p 0x3 --config '(0,0,0),(0,1,2),(1,0,1),(1,1,3)' "
> + self.vm_dut_0.send_expect(cmdline, "L3FWD:", 120)
> +
> + time.sleep(10)
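The `--config` argument above maps each (port, queue) pair to an lcore; if the test later varies the core list per config, the string could be generated instead of hard-coded:

```python
def l3fwd_config(triples):
    """Render l3fwd's --config value from (port, queue, lcore) triples."""
    return ",".join("(%d,%d,%d)" % t for t in triples)

cfg = l3fwd_config([(0, 0, 0), (0, 1, 2), (1, 0, 1), (1, 1, 3)])
# cfg matches the literal used in the command line above.
```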
> +
> + frameSizes = [64, 128, 256, 512, 1024, 1280, 1518]
> +
> + for config in self.core_configs:
> + self.dut.kill_all()
> + cores = self.dut.get_core_list(config['cores'])
> +
> + tx_port = self.tester.get_local_port(self.dut_ports[0])
> + rx_port = tx_port
> + dst_mac = self.vf0_mac
> + src_mac = self.tester.get_mac(tx_port)
> +
> + for size in frameSizes:
> + self.tester.scapy_append('dmac="%s"' % dst_mac)
> + self.tester.scapy_append('smac="%s"' % src_mac)
> + self.tester.scapy_append('flows = [Ether(src=smac, dst=dmac)/IP(src="1.2.3.4",dst="11.100.0.1")/UDP()/("X"*%d)]' % (size - 46))
> + self.tester.scapy_append('wrpcap("test.pcap", flows)')
> + self.tester.scapy_execute()
> + tgenInput = []
> + tgenInput.append((tx_port, rx_port, "test.pcap"))
> + _, pps = self.tester.traffic_generator_throughput(tgenInput)
> + config['pps'][size] = pps
> +
> + #Stop l3fwd
> + self.vm_dut_0.send_expect("^C", "#")
> +
> + for n in range(len(self.core_configs)):
> + for size in frameSizes:
> + self.verify(
> + self.core_configs[n]['pps'][size] != 0, "No traffic detected")
> +
> + # Print results
> + dts.results_table_add_header(['Frame size'] + [n['cores'] for n in self.core_configs])
> + for size in frameSizes:
> + dts.results_table_add_row([size] + [n['pps'][size] for n in self.core_configs])
> + dts.results_table_print()
> +
> + def test_perf_kernel_pf_dpdk_vf_performance(self):
> +
> + self.measure_vf_performance(driver='')
> +
> + def test_perf_dpdk_pf_dpdk_vf_performance(self):
> +
> + self.measure_vf_performance(driver='igb_uio')
> +
> + def tear_down(self):
> +
> + if self.setup_2vf_1vm_env_flag == 1:
> + self.destroy_2vf_1vm_env()
> +
> + def tear_down_all(self):
> + pass
^ permalink raw reply [flat|nested] 4+ messages in thread
* [dts] [PATCH] tests: Add VF packet drop and performance test
@ 2016-02-29 8:48 Yulong Pei
2016-03-02 5:39 ` Liu, Yong
0 siblings, 1 reply; 4+ messages in thread
From: Yulong Pei @ 2016-02-29 8:48 UTC (permalink / raw)
To: dts
1.vf_perf.cfg: vm setting and qemu parameters.
2.vf_perf_test_plan.rst: test plan, describe test cases.
3.TestSuite_vf_perf.py: implement test cases according to the test plan.
4.using l3fwd to measure performance instead of testpmd.
Signed-off-by: Yulong Pei <yulong.pei@intel.com>
---
conf/vf_perf.cfg | 107 +++++++++++++++++++
test_plans/vf_perf_test_plan.rst | 182 ++++++++++++++++++++++++++++++++
tests/TestSuite_vf_perf.py | 222 +++++++++++++++++++++++++++++++++++++++
3 files changed, 511 insertions(+)
create mode 100644 conf/vf_perf.cfg
create mode 100644 test_plans/vf_perf_test_plan.rst
create mode 100644 tests/TestSuite_vf_perf.py
diff --git a/conf/vf_perf.cfg b/conf/vf_perf.cfg
new file mode 100644
index 0000000..36fac55
--- /dev/null
+++ b/conf/vf_perf.cfg
@@ -0,0 +1,107 @@
+# QEMU options
+# name
+# name: vm0
+#
+# enable_kvm
+# enable: [yes | no]
+#
+# cpu
+# model: [host | core2duo | ...]
+# usage:
+# choose model value from the command
+# qemu-system-x86_64 -cpu help
+# number: '4' #number of vcpus
+# cpupin: '3 4 5 6' # host cpu list
+#
+# mem
+# size: 1024
+#
+# disk
+# file: /path/to/image/test.img
+#
+# net
+# type: [nic | user | tap | bridge | ...]
+# nic
+# opt_vlan: 0
+# note: Default is 0.
+# opt_macaddr: 00:00:00:00:01:01
+# note: if creating a nic, it's better to specify a MAC,
+# else it will get a random one.
+# opt_model:["e1000" | "virtio" | "i82551" | ...]
+# note: Default is e1000.
+# opt_name: 'nic1'
+# opt_addr: ''
+# note: PCI cards only.
+# opt_vectors:
+# note: This option currently only affects virtio cards.
+# user
+# opt_vlan: 0
+# note: default is 0.
+# opt_hostfwd: [tcp|udp]:[hostaddr]:hostport-[guestaddr]:guestport
+# note: If not specified, it will be set automatically.
+# tap
+# opt_vlan: 0
+# note: default is 0.
+# opt_br: br0
+# note: if choosing tap, need to specify bridge name,
+# else it will be br0.
+# opt_script: QEMU_IFUP_PATH
+# note: if not specified, default is self.QEMU_IFUP_PATH.
+# opt_downscript: QEMU_IFDOWN_PATH
+# note: if not specified, default is self.QEMU_IFDOWN_PATH.
+#
+# device
+# driver: [pci-assign | virtio-net-pci | ...]
+# pci-assign
+# prop_host: 08:00.0
+# prop_addr: 00:00:00:00:01:02
+# virtio-net-pci
+# prop_netdev: mynet1
+# prop_id: net1
+# prop_mac: 00:00:00:00:01:03
+# prop_bus: pci.0
+# prop_addr: 0x3
+#
+# monitor
+# port: 6061
+# note: if adding monitor to vm, need to specify
+# this port, else it will get a free port
+# on the host machine.
+#
+# qga
+# enable: [yes | no]
+#
+# serial_port
+# enable: [yes | no]
+#
+# vnc
+# displayNum: 1
+# note: you can choose a number not used on the host.
+#
+# daemon
+# enable: 'yes'
+# note:
+# By default the VM starts daemonized.
+# Starting it on stdin is not supported yet.
+
+# vm configuration for pmd sriov case
+[vm0]
+cpu =
+ model=host,number=4,cpupin=5 6 7 8;
+disk =
+ file=/home/image/sriov-fc20-1.img;
+login =
+ user=root,password=tester;
+net =
+ type=nic,opt_vlan=0;
+ type=user,opt_vlan=0;
+monitor =
+ port=;
+qga =
+ enable=yes;
+vnc =
+ displayNum=4;
+daemon =
+ enable=yes;
+qemu =
+ path=/usr/local/qemu-2.4.0/x86_64-softmmu/qemu-system-x86_64;
diff --git a/test_plans/vf_perf_test_plan.rst b/test_plans/vf_perf_test_plan.rst
new file mode 100644
index 0000000..53238ce
--- /dev/null
+++ b/test_plans/vf_perf_test_plan.rst
@@ -0,0 +1,182 @@
+.. Copyright (c) <2015>, Intel Corporation
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ - Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+
+ - Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+
+ - Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Test Case 1: Measure packet loss with kernel PF & dpdk VF
+=========================================================
+
+1. Get the PCI device ID of the DUT, for example,
+
+./dpdk_nic_bind.py --st
+
+0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
+
+2. create 2 VFs from 1 PF,
+
+echo 2 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
+./dpdk_nic_bind.py --st
+
+0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=igb_uio
+0000:81:02.0 'XL710/X710 Virtual Function' unused=
+0000:81:02.1 'XL710/X710 Virtual Function' unused=
+
+3. detach VFs from the host, bind them to pci-stub driver,
+
+virsh nodedev-detach pci_0000_81_02_0;
+virsh nodedev-detach pci_0000_81_02_1;
+
+./dpdk_nic_bind.py --st
+
+0000:81:02.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
+0000:81:02.1 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
+
+4. passthrough VFs 81:02.0 & 81:02.1 to vm0, start vm0,
+
+/usr/bin/qemu-system-x86_64 -name vm0 -enable-kvm \
+-cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \
+-device pci-assign,host=81:02.0,id=pt_0 \
+-device pci-assign,host=81:02.1,id=pt_1
+
+5. Log in to vm0 and get the VFs' PCI device IDs there (assume they are 00:06.0 & 00:07.0); bind them to the igb_uio driver,
+and then start testpmd in MAC forward mode,
+
+./tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0 00:07.0
+./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 -- -i --portmask=0x3 --txqflags=0
+
+testpmd> set fwd mac
+testpmd> start
+
+6. Use the IXIA traffic generator to send 64-byte packets at 10% line rate to the VF; verify packet loss < 0.0001.
+
+7. Use the IXIA traffic generator to send 64-byte packets at 100% line rate to the VF; verify packet loss < 0.0001.
+
+Test Case 2: Measure performance with kernel PF & dpdk VF
+=========================================================
+
+1. setup test environment as Test Case 1, step 1-5.
+
+2. Measure maximum RFC2544 performance throughput for the following packet sizes,
+
+frameSizes = [64, 128, 256, 512, 1024, 1280, 1518]
+
+The output format should be as below, with figures given in mpps:
+
++------------+--------+
+| Size\Cores |1S/4C/1T|
++------------+--------+
+| 64-byte | |
++------------+--------+
+| 128-byte | |
++------------+--------+
+| 256-byte | |
++------------+--------+
+| 512-byte | |
++------------+--------+
+| 1024-byte | |
++------------+--------+
+| 1280-byte | |
++------------+--------+
+| 1518-byte | |
++------------+--------+
+
+
+Test Case 3: Measure performance with dpdk PF & dpdk VF
+=======================================================
+
+1. Get the PCI device ID of the DUT and bind it to the igb_uio driver, for example,
+
+./dpdk_nic_bind.py --st
+
+0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
+
+./dpdk_nic_bind.py --bind=igb_uio 81:00.0
+
+2. create 2 VFs from 1 PF,
+
+echo 2 > /sys/bus/pci/devices/0000\:81\:00.0/max_vfs
+./dpdk_nic_bind.py --st
+
+0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=igb_uio
+0000:81:02.0 'XL710/X710 Virtual Function' unused=
+0000:81:02.1 'XL710/X710 Virtual Function' unused=
+
+3. detach VFs from the host, bind them to pci-stub driver,
+
+./dpdk_nic_bind.py --bind=pci-stub 81:02.0 81:02.1
+./dpdk_nic_bind.py --st
+
+0000:81:02.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
+0000:81:02.1 'XL710/X710 Virtual Function' if= drv=pci-stub unused=igb_uio
+
+4. bind PF 81:00.0 to testpmd and start it on the host,
+
+./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 81:00.0 -- -i --portmask=0x1 --txqflags=0
+
+5. passthrough VFs 81:02.0 & 81:02.1 to vm0, start vm0,
+
+/usr/bin/qemu-system-x86_64 -name vm0 -enable-kvm \
+-cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \
+-device pci-assign,host=81:02.0,id=pt_0 \
+-device pci-assign,host=81:02.1,id=pt_1
+
+6. Log in to vm0 and get the VFs' PCI device IDs there (assume they are 00:06.0 & 00:07.0); bind them to the igb_uio driver,
+and then start testpmd in MAC forward mode,
+
+./tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0 00:07.0
+./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 -- -i --portmask=0x3 --txqflags=0
+
+testpmd> set fwd mac
+testpmd> start
+
+7. Measure maximum RFC2544 performance throughput for the following packet sizes,
+
+frameSizes = [64, 128, 256, 512, 1024, 1280, 1518]
+
+The output format should be as below, with figures given in mpps:
+
++------------+--------+
+| Size\Cores |1S/4C/1T|
++------------+--------+
+| 64-byte | |
++------------+--------+
+| 128-byte | |
++------------+--------+
+| 256-byte | |
++------------+--------+
+| 512-byte | |
++------------+--------+
+| 1024-byte | |
++------------+--------+
+| 1280-byte | |
++------------+--------+
+| 1518-byte | |
++------------+--------+
+
diff --git a/tests/TestSuite_vf_perf.py b/tests/TestSuite_vf_perf.py
new file mode 100644
index 0000000..2fccc59
--- /dev/null
+++ b/tests/TestSuite_vf_perf.py
@@ -0,0 +1,222 @@
+# <COPYRIGHT_TAG>
+
+import re
+import time
+
+import dts
+from qemu_kvm import QEMUKvm
+from test_case import TestCase
+from pmd_output import PmdOutput
+from etgen import IxiaPacketGenerator
+
+VM_CORES_MASK = 'all'
+
+class TestVfPerf(TestCase, IxiaPacketGenerator):
+
+ def set_up_all(self):
+
+ self.tester.extend_external_packet_generator(TestVfPerf, self)
+
+ self.dut_ports = self.dut.get_ports(self.nic)
+ self.verify(len(self.dut_ports) >= 1, "Insufficient ports")
+
+ self.core_configs = []
+ self.core_configs.append({'cores': '1S/4C/1T', 'pps': {}})
+
+ self.vm0 = None
+ self.vf0_mac = "00:12:34:56:78:01"
+
+
+ def set_up(self):
+
+ self.setup_2vf_1vm_env_flag = 0
+
+ def setup_2vf_1vm_env(self, driver='default'):
+
+ self.used_dut_port = self.dut_ports[0]
+ self.dut.generate_sriov_vfs_by_port(self.used_dut_port, 2, driver=driver)
+ self.sriov_vfs_port = self.dut.ports_info[self.used_dut_port]['vfs_port']
+
+ if driver != 'igb_uio':
+ pf_itf = self.dut.ports_info[0]['port'].get_interface_name()
+ self.dut.send_expect("ip link set %s vf 0 mac %s" %(pf_itf, self.vf0_mac), "#")
+
+ try:
+
+ for port in self.sriov_vfs_port:
+ print port.pci
+ port.bind_driver('pci-stub')
+
+ time.sleep(1)
+ vf0_prop = {'opt_host': self.sriov_vfs_port[0].pci}
+ vf1_prop = {'opt_host': self.sriov_vfs_port[1].pci}
+
+ for port_id in self.dut_ports:
+ if port_id == self.used_dut_port:
+ continue
+ port = self.dut.ports_info[port_id]['port']
+ port.bind_driver()
+
+ if driver == 'igb_uio':
+ self.host_testpmd = PmdOutput(self.dut)
+ eal_param = '-b %(vf0)s -b %(vf1)s' % {'vf0': self.sriov_vfs_port[0].pci,
+ 'vf1': self.sriov_vfs_port[1].pci}
+ self.host_testpmd.start_testpmd("1S/2C/2T", eal_param=eal_param)
+
+ # set up VM0 ENV
+ self.vm0 = QEMUKvm(self.dut, 'vm0', 'vf_perf')
+ self.vm0.set_vm_device(driver='pci-assign', **vf0_prop)
+ self.vm0.set_vm_device(driver='pci-assign', **vf1_prop)
+ self.vm_dut_0 = self.vm0.start()
+ if self.vm_dut_0 is None:
+ raise Exception("Set up VM0 ENV failed!")
+
+ self.setup_2vf_1vm_env_flag = 1
+ except Exception as e:
+ self.destroy_2vf_1vm_env()
+ raise Exception(e)
+
+ def destroy_2vf_1vm_env(self):
+
+ if getattr(self, 'vm0_testpmd', None):
+ self.vm0_testpmd.execute_cmd('stop')
+ self.vm0_testpmd.execute_cmd('quit', '# ')
+ self.vm0_testpmd = None
+
+ if getattr(self, 'vm0', None):
+ self.vm0_dut_ports = None
+ self.vm_dut_0 = None
+ self.vm0.stop()
+ self.vm0 = None
+
+ if getattr(self, 'host_testpmd', None):
+ self.host_testpmd.execute_cmd('quit', '# ')
+ self.host_testpmd = None
+
+ if getattr(self, 'used_dut_port', None):
+ self.dut.destroy_sriov_vfs_by_port(self.used_dut_port)
+ port = self.dut.ports_info[self.used_dut_port]['port']
+ port.bind_driver()
+ self.used_dut_port = None
+
+ for port_id in self.dut_ports:
+ self.dut.destroy_sriov_vfs_by_port(port_id)
+ port = self.dut.ports_info[port_id]['port']
+ port.bind_driver()
+
+ self.setup_2vf_1vm_env_flag = 0
+
+ def test_perf_kernel_pf_dpdk_vf_packet_loss(self):
+
+ self.setup_2vf_1vm_env(driver='')
+
+ self.vm0_dut_ports = self.vm_dut_0.get_ports('any')
+ port_id_0 = 0
+ self.vm0_testpmd = PmdOutput(self.vm_dut_0)
+ self.vm0_testpmd.start_testpmd(VM_CORES_MASK)
+ self.vm0_testpmd.execute_cmd('show port info all')
+ pmd0_vf0_mac = self.vm0_testpmd.get_port_mac(port_id_0)
+ self.vm0_testpmd.execute_cmd('set fwd mac')
+ self.vm0_testpmd.execute_cmd('start')
+
+ time.sleep(2)
+
+ tx_port = self.tester.get_local_port(self.dut_ports[0])
+ rx_port = tx_port
+ dst_mac = pmd0_vf0_mac
+ src_mac = self.tester.get_mac(tx_port)
+
+ self.tester.scapy_append('dmac="%s"' % dst_mac)
+ self.tester.scapy_append('smac="%s"' % src_mac)
+ self.tester.scapy_append('flows = [Ether(src=smac, dst=dmac)/IP(len=46)/UDP(len=26)/("X"*18)]')
+ self.tester.scapy_append('wrpcap("test.pcap", flows)')
+ self.tester.scapy_execute()
+
+ loss, _, _ = self.tester.traffic_generator_loss([(tx_port, rx_port, "test.pcap")], 10, delay=180)
+
+ self.verify(loss < 0.0001, "Excessive packet loss when sending 64-byte packets at 10% line rate")
+
+ loss, _, _ = self.tester.traffic_generator_loss([(tx_port, rx_port, "test.pcap")], 100, delay=180)
+
+ self.verify(loss < 0.0001, "Excessive packet loss when sending 64-byte packets at 100% line rate")
+
+
+ def measure_vf_performance(self, driver='default'):
+
+ if driver == 'igb_uio':
+ self.setup_2vf_1vm_env(driver='igb_uio')
+ else:
+ self.setup_2vf_1vm_env(driver='')
+
+ if driver == 'igb_uio':
+ self.vm0_dut_ports = self.vm_dut_0.get_ports('any')
+ port_id_0 = 0
+ self.vm0_testpmd = PmdOutput(self.vm_dut_0)
+ self.vm0_testpmd.start_testpmd(VM_CORES_MASK)
+ self.vm0_testpmd.execute_cmd('show port info all')
+ self.vf0_mac = self.vm0_testpmd.get_port_mac(port_id_0)
+ self.vm0_testpmd.execute_cmd('stop')
+ self.vm0_testpmd.execute_cmd('quit', '# ')
+ self.vm0_testpmd = None
+
+ out = self.vm_dut_0.build_dpdk_apps('examples/l3fwd')
+ self.verify("Error" not in out, "l3fwd compilation failed")
+ self.verify("No such file" not in out, "l3fwd build output missing")
+
+ cmdline = "./examples/l3fwd/build/l3fwd -c 0xf -n 4 -- -p 0x3 --config '(0,0,0),(0,1,2),(1,0,1),(1,1,3)' "
+ self.vm_dut_0.send_expect(cmdline, "L3FWD:", 120)
+
+ time.sleep(10)
+
+ frameSizes = [64, 128, 256, 512, 1024, 1280, 1518]
+
+ for config in self.core_configs:
+ self.dut.kill_all()
+ cores = self.dut.get_core_list(config['cores'])
+
+ tx_port = self.tester.get_local_port(self.dut_ports[0])
+ rx_port = tx_port
+ dst_mac = self.vf0_mac
+ src_mac = self.tester.get_mac(tx_port)
+
+ for size in frameSizes:
+ self.tester.scapy_append('dmac="%s"' % dst_mac)
+ self.tester.scapy_append('smac="%s"' % src_mac)
+ self.tester.scapy_append('flows = [Ether(src=smac, dst=dmac)/IP(src="1.2.3.4",dst="11.100.0.1")/UDP()/("X"*%d)]' % (size - 46))
+ self.tester.scapy_append('wrpcap("test.pcap", flows)')
+ self.tester.scapy_execute()
+ tgenInput = []
+ tgenInput.append((tx_port, rx_port, "test.pcap"))
+ _, pps = self.tester.traffic_generator_throughput(tgenInput)
+ config['pps'][size] = pps
+
+ #Stop l3fwd
+ self.vm_dut_0.send_expect("^C", "#")
+
+ for n in range(len(self.core_configs)):
+ for size in frameSizes:
+ self.verify(
+ self.core_configs[n]['pps'][size] != 0, "No traffic detected")
+
+ # Print results
+ dts.results_table_add_header(['Frame size'] + [n['cores'] for n in self.core_configs])
+ for size in frameSizes:
+ dts.results_table_add_row([size] + [n['pps'][size] for n in self.core_configs])
+ dts.results_table_print()
+
+ def test_perf_kernel_pf_dpdk_vf_performance(self):
+
+ self.measure_vf_performance(driver='')
+
+ def test_perf_dpdk_pf_dpdk_vf_performance(self):
+
+ self.measure_vf_performance(driver='igb_uio')
+
+ def tear_down(self):
+
+ if self.setup_2vf_1vm_env_flag == 1:
+ self.destroy_2vf_1vm_env()
+
+ def tear_down_all(self):
+ pass
--
2.1.0
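A note for reviewers: the `'("X"*%d)' % (size - 46)` expression in measure_vf_performance() sizes the scapy payload so the on-wire frame matches each entry of frameSizes. A minimal standalone sketch of that arithmetic (the `payload_len` helper name is illustrative, not part of the patch), assuming the usual Ethernet(14) + IPv4(20) + UDP(8) header plus 4-byte CRC framing:

```python
# Frame on the wire = Ethernet (14) + IPv4 (20) + UDP (8) + payload + CRC (4),
# so the scapy payload must be frame_size - 46 bytes, matching the patch.
ETHER_HDR = 14
IP_HDR = 20
UDP_HDR = 8
CRC = 4
OVERHEAD = ETHER_HDR + IP_HDR + UDP_HDR + CRC  # 46 bytes total

def payload_len(frame_size):
    """Payload bytes needed so the wire frame is exactly frame_size."""
    return frame_size - OVERHEAD

if __name__ == "__main__":
    for size in [64, 128, 256, 512, 1024, 1280, 1518]:
        print("%4d-byte frame -> %4d payload bytes" % (size, payload_len(size)))
```

For the 64-byte case this also agrees with the fixed flow in the packet-loss test: IP(len=46)/UDP(len=26)/("X"*18).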
2016-01-19 8:07 [dts] [PATCH] tests: Add VF packet drop and performance test Yulong Pei
2016-02-01 8:36 ` Liu, Yong
2016-02-29 8:48 Yulong Pei
2016-03-02 5:39 ` Liu, Yong