* [dts] [PATCH 2/3] Add test plan for vm power management feature
From: Yong Liu @ 2015-07-09 3:36 UTC (permalink / raw)
To: dts
From: Marvin Liu <yong.liu@intel.com>
Signed-off-by: Marvin Liu <yong.liu@intel.com>
diff --git a/test_plans/vm_power_manager_test_plan.rst b/test_plans/vm_power_manager_test_plan.rst
new file mode 100644
index 0000000..dff8197
--- /dev/null
+++ b/test_plans/vm_power_manager_test_plan.rst
@@ -0,0 +1,304 @@
+.. Copyright (c) <2015>, Intel Corporation
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ - Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+
+ - Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+
+ - Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ OF THE POSSIBILITY OF SUCH DAMAGE.
+
+===================
+VM Power Management
+===================
+
+This test plan covers the test and validation of the VM Power Management
+feature of DPDK 1.8.
+
+The VM Power Manager uses a hint-based mechanism by which a VM can
+communicate its current processing requirements to a host-based governor.
+By mapping the VM's virtual CPUs to physical CPUs, the Power Manager can
+then decide, according to some policy, which power state the physical CPUs
+should transition to.
+
+The VM Agent shall be able to send the following hints to the host:
+- Scale frequency down
+- Scale frequency up
+- Reduce frequency to min
+- Increase frequency to max
+
+The Power Manager is responsible for enabling the Linux userspace power
+governor and interacting with it via its sysfs entries to get and set CPU
+frequencies.
+
+The Power Manager manages the following sysfs file handles for each
+core (<n>); a minimal read/write sketch follows the list:
+- /sys/devices/system/cpu/cpu<n>/cpufreq/scaling_governor
+- /sys/devices/system/cpu/cpu<n>/cpufreq/scaling_available_frequencies
+- /sys/devices/system/cpu/cpu<n>/cpufreq/scaling_cur_freq
+- /sys/devices/system/cpu/cpu<n>/cpufreq/scaling_setspeed
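+
+For reference, a minimal Python sketch of how these sysfs entries can be
+read and written (this is not part of the DPDK sample application; it
+assumes root privileges on the host and that the userspace governor is
+active):
+
+    SYSFS = "/sys/devices/system/cpu/cpu{n}/cpufreq/{entry}"
+
+    def read_entry(core, entry):
+        with open(SYSFS.format(n=core, entry=entry)) as f:
+            return f.read().strip()
+
+    def set_frequency(core, freq_khz):
+        # a frequency write is only honored by the userspace governor
+        with open(SYSFS.format(n=core, entry="scaling_setspeed"), "w") as f:
+            f.write(str(freq_khz))
+
+    # example: step core 1 down to the next lower available frequency
+    freqs = sorted(int(f) for f in
+                   read_entry(1, "scaling_available_frequencies").split())
+    cur = int(read_entry(1, "scaling_cur_freq"))
+    lower = [f for f in freqs if f < cur]
+    if lower:
+        set_frequency(1, lower[-1])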
+
+Prerequisites
+=============
+1. Hardware:
+ - CPU: Haswell or IVB (Crown Pass)
+ - NIC: Niantic 82599
+
+2. BIOS:
+ - Enable VT-d and VT-x
+ - Enable Enhanced Intel SpeedStep(R) Tech
+ - Disable Intel(R) Turbo Boost Technology
+ - Enable Processor C3
+ - Enable Processor C6
+ - Enable Intel(R) Hyper-Threading Tech
+
+3. OS and Kernel:
+ - Fedora 20
+ - Enable kernel features: hugepages, UIO, IOMMU, KVM
+ - Enable Intel IOMMU on the kernel command line
+ - Disable SELinux
+
+4. Virtualization:
+ - QEMU emulator version 1.6.1
+ - libvirtd (libvirt) 1.1.3.5
+ - Add a virtio-serial port
+
+5. IXIA Traffic Generator Configuration
+ The LPM table used for packet routing is:
+
+ +---------+------------------------+------+
+ | Entry # | LPM prefix (IP/length) | Port |
+ +---------+------------------------+------+
+ | 0       | 1.1.1.0/24             | P0   |
+ +---------+------------------------+------+
+ | 1       | 2.1.1.0/24             | P1   |
+ +---------+------------------------+------+
+
+
+ The flows should be configured and started by the traffic generator.
+
+ +------+---------+------------+---------+------+-------+--------+
+ | Flow | Traffic | IPv4 | IPv4 | Port | Port | L4 |
+ | | Gen. | Src. | Dst. | Src. | Dest. | Proto. |
+ | | Port | Address | Address | | | |
+ +------+---------+------------+---------+------+-------+--------+
+ | 1 | TG0 | 0.0.0.0 | 2.1.1.0 | any | any | UDP |
+ +------+---------+------------+---------+------+-------+--------+
+ | 2 | TG1 | 0.0.0.0 | 1.1.1.0 | any | any | UDP |
+ +------+---------+------------+---------+------+-------+--------+
+
+
+
+Test Case 1: VM Power Management Channel
+========================================
+1. Configure the VM XML to pin vCPUs to physical CPUs:
+
+ <vcpu placement='static'>5</vcpu>
+ <cputune>
+ <vcpupin vcpu='0' cpuset='1'/>
+ <vcpupin vcpu='1' cpuset='2'/>
+ <vcpupin vcpu='2' cpuset='3'/>
+ <vcpupin vcpu='3' cpuset='4'/>
+ <vcpupin vcpu='4' cpuset='5'/>
+ </cputune>
+
+2. Configure VM XML to set up virtio serial ports
+
+ Create a temporary folder for the vm_power channel sockets:
+
+ mkdir /tmp/powermonitor
+
+ Set up one virtio serial port for each vCPU in the VM; a sketch for
+ generating these channel definitions follows the XML snippet below.
+
+ <channel type='unix'>
+ <source mode='bind' path='/tmp/powermonitor/<vm_name>.<channel_num>'/>
+ <target type='virtio' name='virtio.serial.port.poweragent.<channel_num>'/>
+ <address type='virtio-serial' controller='0' bus='0' port='4'/>
+ </channel>
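+
+ When the channel definitions are generated programmatically, the
+ accompanying test suite (patch 3/3 in this series) builds one channel per
+ vCPU through LibvirtKvm.add_vm_virtio_serial_channel(); a self-contained
+ sketch that prints equivalent XML (vm_name and vcpu_num are assumptions
+ matching the example above) is:
+
+     # one virtio serial channel per vCPU, following the
+     # /tmp/powermonitor/<vm_name>.<channel_num> naming convention
+     vm_name = "vm0"   # assumed libvirt domain name
+     vcpu_num = 5      # matches <vcpu placement='static'>5</vcpu>
+     channel_xml = (
+         "<channel type='unix'>\n"
+         "  <source mode='bind' path='/tmp/powermonitor/{vm}.{num}'/>\n"
+         "  <target type='virtio' name='virtio.serial.port.poweragent.{num}'/>\n"
+         "</channel>")
+     for channel_num in range(vcpu_num):
+         print(channel_xml.format(vm=vm_name, num=channel_num))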
+
+3. Run the power manager sample application on the host:
+
+ ./build/vm_power_mgr -c 0x3 -n 4
+
+4. Start the VM and run guest_vm_power_mgr:
+
+ guest_vm_power_mgr -c 0x1f -n 4 -- -i
+
+5. Add the VM on the host and check that vm_power_mgr can read the CPU frequency:
+
+ vmpower> add_vm <vm_name>
+ vmpower> add_channels <vm_name> all
+ vmpower> show_cpu_freq <core_num>
+
+6. Check that the vCPU/CPU mapping is detected correctly:
+
+ vmpower> show_vm <vm_name>
+ VM:
+ vCPU Refresh: 1
+ Channels 5
+ [0]: /tmp/powermonitor/<vm_name>.0, status = 1
+ [1]: /tmp/powermonitor/<vm_name>.1, status = 1
+ [2]: /tmp/powermonitor/<vm_name>.2, status = 1
+ [3]: /tmp/powermonitor/<vm_name>.3, status = 1
+ [4]: /tmp/powermonitor/<vm_name>.4, status = 1
+ Virtual CPU(s): 5
+ [0]: Physical CPU Mask 0x2
+ [1]: Physical CPU Mask 0x4
+ [2]: Physical CPU Mask 0x8
+ [3]: Physical CPU Mask 0x10
+ [4]: Physical CPU Mask 0x20
+
+7. Run guest_vm_power_mgr in the VM:
+
+ guest_cli/build/guest_vm_power_mgr -c 0x1f -n 4
+
+ Check that the monitor channels for all cores have been connected.
+
+Test Case 2: VM Power Management Numa
+=====================================
+1. Get core and socket information with cpu_layout.py:
+
+ ./tools/cpu_layout.py
+
+2. Configure the VM XML to pin vCPUs to CPUs on socket 1.
+3. Repeat steps 3-7 of Test Case 1 sequentially.
+4. Check that the vCPU/CPU mapping is detected correctly.
+
+Test Case 3: VM Scale CPU Frequency Down
+========================================
+1. Set up the VM power management environment.
+2. Send a CPU frequency scale-down hint to the host:
+
+ vmpower(guest)> set_cpu_freq 0 down
+
+3. Verify that the frequency of the physical CPU has been scaled down
+ correctly (the output can be parsed as in the sketch after this list):
+
+ vmpower> show_cpu_freq 1
+ Core 1 frequency: 2700000
+
+4. Check that the frequencies of the other CPUs are not affected by the change above.
+5. Check that other VMs still work correctly (if they use different CPUs).
+6. Repeat steps 2-5 several times.
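+
+ Step 3 above can be automated by parsing the show_cpu_freq output; the
+ accompanying test suite (patch 3/3 in this series) does this with a regex
+ similar to the sketch below (session stands in for the DTS send_expect
+ helper used throughout the suite):
+
+     import re
+
+     def get_cpu_frequency(session, core_id):
+         # parse "Core <n> frequency: <freq>" from the vmpower> console
+         out = session.send_expect("show_cpu_freq %d" % core_id, "vmpower>")
+         m = re.search(r"Core (\d+) frequency: (\d+)", out)
+         return int(m.group(2)) if m else -1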
+
+
+Test Case 4: VM Scale CPU Frequency Up
+======================================
+1. Set up the VM power management environment.
+2. Send a CPU frequency scale-up hint to the host:
+
+ vmpower(guest)> set_cpu_freq 0 up
+
+3. Verify that the frequency of the physical CPU has been scaled up correctly:
+
+ vmpower> show_cpu_freq 1
+ Core 1 frequency: 2800000
+
+4. Check that the frequencies of the other CPUs are not affected by the change above.
+5. Check that other VMs still work correctly (if they use different CPUs).
+6. Repeat steps 2-5 several times.
+
+Test Case 5: VM Scale CPU Frequency to Min
+==========================================
+1. Set up the VM power management environment.
+2. Send a scale-to-minimum hint to the host:
+
+ vmpower(guest)> set_cpu_freq 0 min
+
+3. Verify that the frequency of the physical CPU has been scaled to the minimum correctly:
+
+ vmpower> show_cpu_freq 1
+ Core 1 frequency: 1200000
+
+4. Check that the frequencies of the other CPUs are not affected by the change above.
+5. Check that other VMs still work correctly (if they use different CPUs).
+
+Test Case 6: VM Scale CPU Frequency to Max
+==========================================
+1. Set up the VM power management environment.
+2. Send a scale-to-maximum hint to the host:
+
+ vmpower(guest)> set_cpu_freq 0 max
+
+3. Verify that the frequency of the physical CPU has been scaled to the maximum correctly:
+
+ vmpower> show_cpu_freq 1
+ Core 1 frequency: 2800000
+
+4. Check that the frequencies of the other CPUs are not affected by the change above.
+5. Check that other VMs still work correctly (if they use different CPUs).
+
+Test Case 7: VM Power Management Multi VMs
+==========================================
+1. Set up the VM power management environment for VM1.
+2. Set up the VM power management environment for VM2.
+3. Run the power manager sample application on the host:
+
+ ./build/vm_power_mgr -c 0x3 -n 4
+
+4. Start VM1 and VM2.
+5. Add VM1 on the host and check that vm_power_mgr can read the CPU frequency:
+
+ vmpower> add_vm <vm1_name>
+ vmpower> add_channels <vm1_name> all
+ vmpower> show_cpu_freq <core_num>
+
+6. Add VM2 on the host and check that vm_power_mgr can read the CPU frequency:
+
+ vmpower> add_vm <vm2_name>
+ vmpower> add_channels <vm2_name> all
+ vmpower> show_cpu_freq <core_num>
+
+7. Run Test Cases 3-6 and check that the CPU frequencies of VM1 and VM2 can be modified through guest_cli.
+8. Power off VM2 and remove it from the host vm_power_mgr:
+
+ vmpower> rm_vm <vm2_name>
+
+Test Case 8: VM l3fwd-power Latency
+===================================
+1. Connect two physical ports to IXIA.
+2. Start the VM and run l3fwd-power:
+
+ l3fwd-power -c 6 -n 4 -- -p 0x3 --config
+ '(P0,0,C{1.1.0}),(P1,0,C{1.2.0})'
+
+3. Configure the packet flows in IxNetwork.
+4. Start sending packets from IXIA and check the received packets and the latency.
+5. Record the latency for frame size 128.
+6. Compare the latency value with that of the plain l3fwd sample.
+
+Test Case 9: VM l3fwd-power Performance
+=======================================
+Start the VM and run l3fwd-power:
+
+ l3fwd-power -c 6 -n 4 -- -p 0x3 --config
+ '(P0,0,C{1.1.0}),(P1,0,C{1.2.0})'
+
+Vary the input traffic line rate from 0 to 100% in order to observe the
+CPU frequency changes.
+
+The test report should provide the throughput measurements (in Mpps and as
+a % of the line rate for 2x NIC ports) and the CPU frequency, as listed in
+the table below; a sketch for deriving the % of line rate follows the table.
+
+ +---------------+---------------+-----------+
+ | Tx % linerate | Rx % linerate | CPU freq  |
+ +---------------+---------------+-----------+
+ | 0 | | |
+ +---------------+---------------+-----------+
+ | 20 | | |
+ +---------------+---------------+-----------+
+ | 40 | | |
+ +---------------+---------------+-----------+
+ | 60 | | |
+ +---------------+---------------+-----------+
+ | 80 | | |
+ +---------------+---------------+-----------+
+ | 100 | | |
+ +---------------+---------------+-----------+
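+
+For reference, the Rx % of line rate column can be derived from the
+measured packet rate; a minimal sketch assuming 10GbE ports and 64-byte
+frames (the frame size the accompanying suite transmits for this case):
+
+    def percent_of_linerate(mpps, frame_size=64, ports=2, link_gbps=10):
+        # each frame occupies frame_size + 20 bytes (preamble + IFG) on the wire
+        wirespeed_mpps = ports * link_gbps * 1000.0 / ((frame_size + 20) * 8)
+        return mpps * 100.0 / wirespeed_mpps
+
+    # 64-byte frames at 10GbE line rate arrive at ~14.88 Mpps per port
+    print(percent_of_linerate(14.88, ports=1))   # ~100.0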
--
1.9.3
* [dts] [PATCH 3/3] Add test suite for vm power management feature
From: Yong Liu @ 2015-07-09 3:36 UTC (permalink / raw)
To: dts
From: Marvin Liu <yong.liu@intel.com>
Signed-off-by: Marvin Liu <yong.liu@intel.com>
diff --git a/tests/TestSuite_vm_power_manager.py b/tests/TestSuite_vm_power_manager.py
new file mode 100644
index 0000000..f06d245
--- /dev/null
+++ b/tests/TestSuite_vm_power_manager.py
@@ -0,0 +1,430 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""
+DPDK Test suite.
+VM power manager test suite.
+"""
+
+import re
+import dts
+from test_case import TestCase
+from etgen import IxiaPacketGenerator
+from settings import HEADER_SIZE
+from qemu_libvirt import LibvirtKvm
+
+
+class TestVmPowerManager(TestCase, IxiaPacketGenerator):
+
+ def set_up_all(self):
+ """
+ Run at the start of each test suite.
+ """
+ self.dut_ports = self.dut.get_ports(self.nic)
+ self.verify(len(self.dut_ports) >= 2,
+ "Not enough ports for " + self.nic)
+
+ # create temporary folder for power monitor
+ self.dut.send_expect("mkdir -p /tmp/powermonitor", "# ")
+ self.dut.send_expect("chmod 777 /tmp/powermonitor", "# ")
+ # compile vm power manager
+ out = self.dut.build_dpdk_apps("./examples/vm_power_manager")
+ self.verify("Error" not in out, "Compilation error")
+ self.verify("No such" not in out, "Compilation error")
+
+ # map between host vcpu and guest vcpu
+ self.vcpu_map = []
+ # start vm
+ self.vm_name = "vm0"
+ self.vm = LibvirtKvm(self.dut, self.vm_name, self.suite)
+ channels = [
+ {'path': '/tmp/powermonitor/%s.0' %
+ self.vm_name, 'name': 'virtio.serial.port.poweragent.0'},
+ {'path': '/tmp/powermonitor/%s.1' %
+ self.vm_name, 'name': 'virtio.serial.port.poweragent.1'},
+ {'path': '/tmp/powermonitor/%s.2' %
+ self.vm_name, 'name': 'virtio.serial.port.poweragent.2'},
+ {'path': '/tmp/powermonitor/%s.3' %
+ self.vm_name, 'name': 'virtio.serial.port.poweragent.3'}
+ ]
+ for channel in channels:
+ self.vm.add_vm_virtio_serial_channel(**channel)
+
+ self.vm_dut = self.vm.start()
+
+ # get the host CPUs that the VM's vCPUs are pinned to
+ cpus = self.vm.get_vm_cpu()
+ self.vcpu_map = cpus[:]
+ self.core_num = len(cpus)
+
+ # build guest cli
+ out = self.vm_dut.build_dpdk_apps(
+ "examples/vm_power_manager/guest_cli")
+ self.verify("Error" not in out, "Compilation error")
+ self.verify("No such" not in out, "Compilation error")
+
+ self.vm_power_dir = "./examples/vm_power_manager/"
+ mgr_cmd = self.vm_power_dir + "build/vm_power_mgr -c 0x3 -n 4"
+ out = self.dut.send_expect(mgr_cmd, "vmpower>", 120)
+ self.verify("Initialized successfully" in out,
+ "Power manager failed to initialized")
+ self.dut.send_expect("add_vm %s" % self.vm_name, "vmpower>")
+ self.dut.send_expect("add_channels %s all" % self.vm_name, "vmpower>")
+ vm_info = self.dut.send_expect("show_vm %s" % self.vm_name, "vmpower>")
+
+ # performance measure
+ self.frame_sizes = [128]
+ self.perf_rates = [0, 20, 40, 60, 80, 100]
+ self.def_framesize = 64
+
+ def set_up(self):
+ """
+ Run before each test case.
+ """
+ pass
+
+ def test_managment_channel(self):
+ """
+ Check power monitor channel connection
+ """
+ # check Channels and vcpus
+ guest_cmd = self.vm_power_dir + \
+ "guest_cli/build/guest_vm_power_mgr -c 0xf -n 4 -- -i"
+ out = self.vm_dut.send_expect(guest_cmd, "vmpower\(guest\)>", 120)
+ self.verify("now connected" in out,
+ "Power manager guest failed to connect")
+ self.vm_dut.send_expect("quit", "# ")
+
+ def get_cpu_frequency(self, core_id):
+ cpu_regex = ".*\nCore (\d+) frequency: (\d+)"
+ out = self.dut.send_expect("show_cpu_freq %s" % core_id, "vmpower>")
+ m = re.match(cpu_regex, out)
+ freq = -1
+ if m:
+ freq = int(m.group(2))
+
+ return freq
+
+ def test_vm_power_managment_freqdown(self):
+ """
+ Check host cpu frequency can scale down in VM
+ """
+ guest_cmd = self.vm_power_dir + \
+ "guest_cli/build/guest_vm_power_mgr -c 0xf -n 4 -- -i"
+ out = self.vm_dut.send_expect(guest_cmd, "vmpower\(guest\)>", 120)
+ self.verify("now connected" in out,
+ "Power manager guest failed to connect")
+
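+ # scale every vCPU to max first so each core has headroom to step down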
+ for vcpu in range(self.core_num):
+ self.vm_dut.send_expect(
+ "set_cpu_freq %d max" % vcpu, "vmpower\(guest\)>")
+
+ for vcpu in range(self.core_num):
+ # map between host cpu and guest cpu
+ ori_freq = self.get_cpu_frequency(self.vcpu_map[vcpu])
+ # get cpu frequencies range
+ freqs = self.get_cpu_freqs(vcpu)
+
+ for loop in range(len(freqs)-1):
+ # connect vm power host and guest
+ self.vm_dut.send_expect(
+ "set_cpu_freq %d down" % vcpu, "vmpower\(guest\)>")
+ cur_freq = self.get_cpu_frequency(self.vcpu_map[vcpu])
+ print dts.GREEN("After freqency down, freq is %d\n" % cur_freq)
+ self.verify(
+ ori_freq > cur_freq, "Cpu freqenecy can not scale down")
+ ori_freq = cur_freq
+
+ self.vm_dut.send_expect("quit", "# ")
+
+ def test_vm_power_managment_frequp(self):
+ """
+ Check host cpu frequency can scale up in VM
+ """
+ guest_cmd = self.vm_power_dir + \
+ "guest_cli/build/guest_vm_power_mgr -c 0xf -n 4 -- -i"
+ out = self.vm_dut.send_expect(guest_cmd, "vmpower\(guest\)>", 120)
+ self.verify("now connected" in out,
+ "Power manager guest failed to connect")
+
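+ # scale every vCPU to min first so each core has headroom to step up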
+ for vcpu in range(self.core_num):
+ self.vm_dut.send_expect(
+ "set_cpu_freq %d min" % vcpu, "vmpower\(guest\)>")
+
+ for vcpu in range(self.core_num):
+ ori_freq = self.get_cpu_frequency(self.vcpu_map[vcpu])
+ # get cpu frequencies range
+ freqs = self.get_cpu_freqs(vcpu)
+ for loop in range(len(freqs)-1):
+ self.vm_dut.send_expect(
+ "set_cpu_freq %d up" % vcpu, "vmpower\(guest\)>")
+ cur_freq = self.get_cpu_frequency(self.vcpu_map[vcpu])
+ print dts.GREEN("After freqency up, freq is %d\n" % cur_freq)
+ self.verify(
+ cur_freq > ori_freq, "Cpu freqenecy can not scale up")
+ ori_freq = cur_freq
+
+ self.vm_dut.send_expect("quit", "# ")
+
+ def test_vm_power_managment_freqmax(self):
+ """
+ Check host cpu frequency can scale to max in VM
+ """
+ guest_cmd = self.vm_power_dir + \
+ "guest_cli/build/guest_vm_power_mgr -c 0xf -n 4 -- -i"
+ out = self.vm_dut.send_expect(guest_cmd, "vmpower\(guest\)>", 120)
+ self.verify("now connected" in out,
+ "Power manager guest failed to connect")
+
+ max_freq_path = "cat /sys/devices/system/cpu/cpu%s/cpufreq/" + \
+ "cpuinfo_max_freq"
+ for vcpu in range(self.core_num):
+ self.vm_dut.send_expect(
+ "set_cpu_freq %d max" % vcpu, "vmpower\(guest\)>")
+ freq = self.get_cpu_frequency(self.vcpu_map[vcpu])
+
+ out = self.dut.alt_session.send_expect(
+ max_freq_path % self.vcpu_map[vcpu], "# ")
+ max_freq = int(out)
+
+ self.verify(freq == max_freq, "Cpu max frequency not correct")
+ print dts.GREEN("After freqency max, freq is %d\n" % max_freq)
+ self.vm_dut.send_expect("quit", "# ")
+
+ def test_vm_power_managment_freqmin(self):
+ """
+ Check host cpu frequency can scale to min in VM
+ """
+ guest_cmd = self.vm_power_dir + \
+ "guest_cli/build/guest_vm_power_mgr -c 0xf -n 4 -- -i"
+ out = self.vm_dut.send_expect(guest_cmd, "vmpower\(guest\)>", 120)
+ self.verify("now connected" in out,
+ "Power manager guest failed to connect")
+
+ min_freq_path = "cat /sys/devices/system/cpu/cpu%s/cpufreq/" + \
+ "cpuinfo_min_freq"
+ for vcpu in range(self.core_num):
+ self.vm_dut.send_expect(
+ "set_cpu_freq %d min" % vcpu, "vmpower\(guest\)>")
+ freq = self.get_cpu_frequency(self.vcpu_map[vcpu])
+
+ out = self.dut.alt_session.send_expect(
+ min_freq_path % self.vcpu_map[vcpu], "# ")
+ min_freq = int(out)
+
+ self.verify(freq == min_freq, "Cpu min frequency not correct")
+ print dts.GREEN("After freqency min, freq is %d\n" % min_freq)
+ self.vm_dut.send_expect("quit", "# ")
+
+ def test_vm_power_multivms(self):
+ """
+ Check power management channel connected in multiple VMs
+ """
+ vm_name = "vm1"
+ vm2 = LibvirtKvm(self.dut, vm_name, self.suite)
+ channels = [
+ {'path': '/tmp/powermonitor/%s.0' %
+ vm_name, 'name': 'virtio.serial.port.poweragent.0'},
+ {'path': '/tmp/powermonitor/%s.1' %
+ vm_name, 'name': 'virtio.serial.port.poweragent.1'},
+ {'path': '/tmp/powermonitor/%s.2' %
+ vm_name, 'name': 'virtio.serial.port.poweragent.2'},
+ {'path': '/tmp/powermonitor/%s.3' %
+ vm_name, 'name': 'virtio.serial.port.poweragent.3'}
+ ]
+ for channel in channels:
+ vm2.add_vm_virtio_serial_channel(**channel)
+ vm2_dut = vm2.start()
+
+ self.dut.send_expect("add_vm %s" % vm_name, "vmpower>")
+ self.dut.send_expect("add_channels %s all" % vm_name, "vmpower>")
+ vm_info = self.dut.send_expect("show_vm %s" % vm_name, "vmpower>")
+
+ out = vm2_dut.build_dpdk_apps("examples/vm_power_manager/guest_cli")
+ self.verify("Error" not in out, "Compilation error")
+ self.verify("No such" not in out, "Compilation error")
+
+ guest_cmd = self.vm_power_dir + \
+ "guest_cli/build/guest_vm_power_mgr -c 0xf -n 4 -- -i"
+ out = vm2_dut.send_expect(guest_cmd, "vmpower\(guest\)>", 120)
+ self.verify("now connected" in out,
+ "Power manager guest failed to connect")
+ vm2_dut.send_expect("quit", "# ")
+ vm2.stop()
+
+ def test_perf_vmpower_latency(self):
+ """
+ Measure packet latency in VM
+ """
+ latency_header = ['Frame Size', 'Max latency', 'Min latency',
+ 'Avg latency']
+
+ dts.results_table_add_header(latency_header)
+
+ rx_port = self.dut_ports[0]
+ tx_port = self.dut_ports[1]
+
+ # build l3fwd-power
+ out = self.vm_dut.send_expect("make -C examples/l3fwd-power", "# ")
+ self.verify("Error" not in out, "compilation error 1")
+ self.verify("No such file" not in out, "compilation error 2")
+ # start l3fwd-power
+ l3fwd_app = "./examples/l3fwd-power/build/l3fwd-power"
+
+ cmd = l3fwd_app + " -c 6 -n 4 -- -p 0x3 --config " + \
+ "'(0,0,1),(1,0,2)'"
+
+ self.vm_dut.send_expect(cmd, "L3FWD_POWER: entering main loop")
+
+ for frame_size in self.frame_sizes:
+ # Prepare traffic flow
+ payload_size = frame_size - HEADER_SIZE['udp'] - \
+ HEADER_SIZE['ip'] - HEADER_SIZE['eth']
+ dmac = self.dut.get_mac_address(self.dut_ports[0])
+ flow = 'Ether(dst="%s")/IP(dst="2.1.1.0")/UDP()' % dmac + \
+ '/Raw("X"*%d)' % payload_size
+ self.tester.scapy_append('wrpcap("vmpower.pcap", [%s])' % flow)
+ self.tester.scapy_execute()
+
+ tgen_input = []
+ tgen_input.append((self.tester.get_local_port(rx_port),
+ self.tester.get_local_port(tx_port),
+ "vmpower.pcap"))
+ # run traffic generator
+ [latency] = self.tester.traffic_generator_latency(tgen_input)
+ print latency
+ table_row = [frame_size, latency['max'], latency['min'],
+ latency['average']]
+ dts.results_table_add_row(table_row)
+
+ dts.results_table_print()
+
+ self.vm_dut.kill_all()
+
+ def test_perf_vmpower_frequency(self):
+ """
+ Measure how the CPU frequency fluctuates with the workload
+ """
+ table_header = ['Tx linerate%', 'Rx linerate%', 'Cpu freq']
+
+ dts.results_table_add_header(table_header)
+
+ rx_port = self.dut_ports[0]
+ tx_port = self.dut_ports[1]
+
+ # build l3fwd-power
+ out = self.vm_dut.send_expect("make -C examples/l3fwd-power", "# ")
+ self.verify("Error" not in out, "compilation error 1")
+ self.verify("No such file" not in out, "compilation error 2")
+ # start l3fwd-power
+ l3fwd_app = "./examples/l3fwd-power/build/l3fwd-power"
+
+ cmd = l3fwd_app + " -c 6 -n 4 -- -p 0x3 --config " + \
+ "'(0,0,1),(1,0,2)'"
+
+ self.vm_dut.send_expect(cmd, "L3FWD_POWER: entering main loop")
+
+ for rate in self.perf_rates:
+ # Prepare traffic flow
+ payload_size = self.def_framesize - HEADER_SIZE['udp'] - \
+ HEADER_SIZE['ip'] - HEADER_SIZE['eth']
+ dmac = self.dut.get_mac_address(self.dut_ports[0])
+ flow = 'Ether(dst="%s")/IP(dst="2.1.1.0")/UDP()' % dmac + \
+ '/Raw("X"*%d)' % payload_size
+ self.tester.scapy_append('wrpcap("vmpower.pcap", [%s])' % flow)
+ self.tester.scapy_execute()
+
+ tgen_input = []
+ tgen_input.append((self.tester.get_local_port(rx_port),
+ self.tester.get_local_port(tx_port),
+ "vmpower.pcap"))
+
+ # register hook function for current cpu frequency
+ self.hook_transmissoin_func = self.get_freq_in_transmission
+ self.tester.extend_external_packet_generator(TestVmPowerManager,
+ self)
+ # run the traffic generator for 20 seconds so the frequency stabilizes
+ _, pps = self.tester.traffic_generator_throughput(tgen_input,
+ rate,
+ delay=20)
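+ # convert pps to Mpps and the kHz frequency reading to GHz for the report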
+ pps /= 1000000.0
+ freq = self.cur_freq / 1000000.0
+ wirespeed = self.wirespeed(self.nic, self.def_framesize, 1)
+ pct = pps * 100 / wirespeed
+ table_row = [rate, pct, freq]
+ dts.results_table_add_row(table_row)
+
+ dts.results_table_print()
+
+ self.vm_dut.kill_all()
+
+ def get_freq_in_transmission(self):
+ self.cur_freq = self.get_cpu_frequency(self.vcpu_map[1])
+ print dts.GREEN("Current cpu frequency %d" % self.cur_freq)
+
+ def get_max_freq(self, core_num):
+ freq_path = "cat /sys/devices/system/cpu/cpu%d/cpufreq/" + \
+ "cpuinfo_max_freq"
+
+ out = self.dut.alt_session.send_expect(freq_path % core_num, "# ")
+ freq = int(out)
+ return freq
+
+ def get_min_freq(self, core_num):
+ freq_path = "cat /sys/devices/system/cpu/cpu%d/cpufreq/" + \
+ "cpuinfo_min_freq"
+
+ out = self.dut.alt_session.send_expect(freq_path % core_num, "# ")
+ freq = int(out)
+ return freq
+
+ def get_cpu_freqs(self, core_num):
+ freq_path = "cat /sys/devices/system/cpu/cpu%d/cpufreq/" + \
+ "scaling_available_frequencies"
+
+ out = self.dut.alt_session.send_expect(freq_path % core_num, "# ")
+ freqs = out.split()
+ return freqs
+
+ def tear_down(self):
+ """
+ Run after each test case.
+ """
+ self.vm_dut.send_expect("quit", "# ")
+ pass
+
+ def tear_down_all(self):
+ """
+ Run after each test suite.
+ """
+ self.dut.send_expect("quit", "# ")
+ self.vm.stop()
+ pass
--
1.9.3