* [dts] [PATCH V1 2/3] test_plans/runtime_vf_queue_number_maxinum: update test plan
2019-10-12 9:57 [dts] [PATCH V1 1/3] framework/pmd_output:fix a bug in execute_cmd Haiyang Zhao
@ 2019-10-12 9:57 ` Haiyang Zhao
2019-10-12 9:57 ` [dts] [PATCH V1 3/3] tests: add runtime_vf_queue_number_maxinum Haiyang Zhao
2019-10-21 2:23 ` [dts] [PATCH V1 1/3] framework/pmd_output:fix a bug in execute_cmd Tu, Lijuan
2 siblings, 0 replies; 4+ messages in thread
From: Haiyang Zhao @ 2019-10-12 9:57 UTC (permalink / raw)
To: dts; +Cc: Haiyang Zhao
*.add descriptions of the related terms.
*.split a new test case out of the old one.
*.modify the old case and add more description.
Signed-off-by: Haiyang Zhao <haiyangx.zhao@intel.com>
---
.../runtime_vf_queue_number_maxinum_test_plan.rst | 103 ++++++++++++++-------
1 file changed, 68 insertions(+), 35 deletions(-)
diff --git a/test_plans/runtime_vf_queue_number_maxinum_test_plan.rst b/test_plans/runtime_vf_queue_number_maxinum_test_plan.rst
index 212cff3..1e287e4 100644
--- a/test_plans/runtime_vf_queue_number_maxinum_test_plan.rst
+++ b/test_plans/runtime_vf_queue_number_maxinum_test_plan.rst
@@ -43,6 +43,34 @@ Feature Description
see runtime_vf_queue_number_test_plan.rst
+- Hardware maximum queues
+ The datasheet xl710-10-40-controller-datasheet2017.pdf states on page 10:
+ "The 710 series supports up to 1536 LQPs that can be assigned to PFs or VFs as needed".
+
+ For a four-port Fortville NIC, each port has 384 queues,
+ so the total queue number is 384 * 4 = 1536.
+ For a two-port Fortville NIC, each port has 768 queues,
+ so the total queue number is 768 * 2 = 1536.
+
+- Queues used by the PF
+ According to the i40e driver source code, at initialization the driver allocates
+ 1 queue for the FDIR function and 64 queues for the PF (each PF supports up to 64 queues).
+ So the PF uses 64 + 1 = 65 queues.
+
+- Reserved queues per VF
+ By default the firmware reserves 4 queues for each VF. When the requested queue
+ number exceeds 4, queues have to be reallocated from the remaining queues, and
+ the reserved queues generally can't be reused.
+
+- Max reserved queues per VF
+ The reserved queue number can be modified with the testpmd parameter "queue-num-per-vf".
+ The VF queue number must be a power of 2 and equal to or less than 16.
+
+ A four-port NIC can create 32 VFs per PF, so max reserved queues per VF = (384 - 65) / 32 = 9.96875,
+ and the max value that can be set is queue-num-per-vf=8.
+ A two-port NIC can create 64 VFs per PF, so max reserved queues per VF = (768 - 65) / 64 = 10.984375,
+ and the max value that can be set is queue-num-per-vf=8, as the sketch below illustrates.
+
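+ A minimal sketch of this computation (illustrative only; the helper
+ name is hypothetical and not part of any DPDK or DTS API)::
+
+     import math
+
+     def max_queue_num_per_vf(queues_per_port, vfs_per_pf, pf_queues=65):
+         # queues available for VF reservation once the PF takes its 64 + 1
+         per_vf = (queues_per_port - pf_queues) / float(vfs_per_pf)
+         # round down to a power of 2, capped at the per-VF maximum of 16
+         return min(16, 2 ** int(math.log(per_vf, 2)))
+
+     assert max_queue_num_per_vf(384, 32) == 8   # four-port NIC
+     assert max_queue_num_per_vf(768, 64) == 8   # two-port NIC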
+
Prerequisites
=============
@@ -58,8 +86,8 @@ Prerequisites
3. Scenario:
DPDK PF + DPDK VF
-Test case 1: set VF max queue number with max VFs on one PF port
-================================================================
+Set up scenario
+===============
1. Set up max VFs from one PF with DPDK driver
Create 32 vfs on four ports fortville NIC::
@@ -74,60 +102,65 @@ Test case 1: set VF max queue number with max VFs on one PF port
./usertools/dpdk-devbind.py -b vfio-pci 05:02.0 05:05.7
-2. Set VF max queue number to 16::
-
- ./testpmd -c f -n 4 -w 05:00.0,queue-num-per-vf=16 \
- --file-prefix=test1 --socket-mem 1024,1024 -- -i
- PF port failed to started with "i40e_pf_parameter_init():
- Failed to allocate 577 queues, which exceeds the hardware maximum 384"
- If create 64 vfs, the maximum is 768.
-3. Set VF max queue number to 8::
+Test case 1: VF consume max queue number on one PF port
+================================================================
+1. Start the PF testpmd::
- ./testpmd -c f -n 4 -w 05:00.0,queue-num-per-vf=8 \
- --file-prefix=test1 --socket-mem 1024,1024 -- -i
+ ./testpmd -c f -n 4 -w 05:00.0 --file-prefix=test1 \
+ --socket-mem 1024,1024 -- -i
-4. Start the two VFs testpmd with "--rxq=8 --txq=8" and "--rxq=6 --txq=6"::
+2. Start two VF testpmd instances to consume the maximum number of queues.
+ Set '--rxq=16 --txq=16' for the first testpmd.
+ A four-port NIC can start (384 - 65 - 32 * 4)/16 = int(11.9375) = 11 VFs on one PF,
+ and the remaining queues are 384 - 65 - 32 * 4 - 11 * 16 = 15.
+ A two-port NIC can start (768 - 65 - 64 * 4)/16 = int(27.9375) = 27 VFs on one PF,
+ and the remaining queues are 768 - 65 - 64 * 4 - 27 * 16 = 15.
+ The driver allocates queues as a power of 2, and the queue number must be equal
+ to or less than 16, so the second VF testpmd can only start with '--rxq=8 --txq=8'
+ (see the sketch after the commands below)::
- ./testpmd -c 0xf0 -n 4 -w 05:02.0 --file-prefix=test2 \
- --socket-mem 1024,1024 -- -i --rxq=8 --txq=8
+ ./testpmd -c 0xf0 -n 4 -w 05:02.0 -w 05:02.1 -w 05:02.2 -w... --file-prefix=test2 \
+ --socket-mem 1024,1024 -- -i --rxq=16 --txq=16
./testpmd -c 0xf00 -n 4 -w 05:05.7 --file-prefix=test3 \
- --socket-mem 1024,1024 -- -i --rxq=6 --txq=6
+ --socket-mem 1024,1024 -- -i --rxq=8 --txq=8
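+
+ The arithmetic above can be checked with a short sketch (illustrative
+ only; the helper name is hypothetical)::
+
+     import math
+
+     def split_queues(queues_per_port, vfs_per_pf, pf_queues=65, reserved_per_vf=4):
+         # queues left after the PF and the per-VF firmware reservations
+         left = queues_per_port - pf_queues - vfs_per_pf * reserved_per_vf
+         full_vfs = left // 16      # VFs that can take the full 16 queues
+         leftover = left % 16       # what remains for one smaller VF
+         # the driver rounds a queue request down to a power of 2
+         second_vf = 2 ** int(math.log(leftover, 2)) if leftover else 0
+         return full_vfs, second_vf
+
+     assert split_queues(384, 32) == (11, 8)   # four-port Fortville
+     assert split_queues(768, 64) == (27, 8)   # two-port Fortville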
- Check the Max possible RX queues and TX queues of the two VFs are both 8::
+ Check the Max possible RX queues and TX queues of the two VFs are both 16::
testpmd> show port info all
- Max possible RX queues: 8
- Max possible TX queues: 8
+ Max possible RX queues: 16
+ Max possible TX queues: 16
Start forwarding, you can see the actual queue number
VF0::
testpmd> start
- RX queues=8 - RX desc=128 - RX free threshold=32
- TX queues=8 - TX desc=512 - TX free threshold=32
+ RX queues=16 - RX desc=128 - RX free threshold=32
+ TX queues=16 - TX desc=512 - TX free threshold=32
VF1::
testpmd> start
- RX queues=6 - RX desc=128 - RX free threshold=32
- TX queues=6 - TX desc=512 - TX free threshold=32
+ RX queues=8 - RX desc=128 - RX free threshold=32
+ TX queues=8 - TX desc=512 - TX free threshold=32
- Modify the queue number of VF1::
+3. Send 256 packets to VF0 and VF1, and make sure the packets are distributed
+ to all the queues.
- testpmd> stop
- testpmd> port stop all
- testpmd> port config all rxq 8
- testpmd> port config all txq 7
- testpmd> port start all
+Test case 2: set max queue number per vf on one pf port
+================================================================
+1. Start the PF testpmd with VF max queue number 16.
+ As the feature description above describes, the max value of queue-num-per-vf
+ is 8 for both two-port and four-port Fortville NICs, so setting 16 is expected
+ to fail::
- Start forwarding, you can see the VF1 actual queue number is 8 and 7::
+ ./testpmd -c f -n 4 -w 05:00.0,queue-num-per-vf=16 --file-prefix=test1 \
+ --socket-mem 1024,1024 -- -i
- testpmd> start
- RX queues=8 - RX desc=128 - RX free threshold=32
- TX queues=7 - TX desc=512 - TX free threshold=32
+ The PF port fails to start, with "i40e_pf_parameter_init():
+ Failed to allocate 577 queues, which exceeds the hardware maximum 384"
+ (32 VFs * 16 queues + the PF's 65 queues = 577). If 64 VFs are created,
+ the hardware maximum in the message is 768.
+
+ The testpmd should not crash.
-5. Send 256 packets to VF0 and VF1, make sure packets can be distributed
- to all the queues.
--
1.8.3.1
* [dts] [PATCH V1 3/3] tests: add runtime_vf_queue_number_maxinum
2019-10-12 9:57 [dts] [PATCH V1 1/3] framework/pmd_output:fix a bug in execute_cmd Haiyang Zhao
2019-10-12 9:57 ` [dts] [PATCH V1 2/3] test_plans/runtime_vf_queue_number_maxinum: update test plan Haiyang Zhao
@ 2019-10-12 9:57 ` Haiyang Zhao
2019-10-21 2:23 ` [dts] [PATCH V1 1/3] framework/pmd_output:fix a bug in execute_cmd Tu, Lijuan
2 siblings, 0 replies; 4+ messages in thread
From: Haiyang Zhao @ 2019-10-12 9:57 UTC (permalink / raw)
To: dts; +Cc: Haiyang Zhao
*.add new test suite runtime_vf_queue_number_maxinum.
Signed-off-by: Haiyang Zhao <haiyangx.zhao@intel.com>
---
tests/TestSuite_runtime_vf_queue_number_maxinum.py | 301 +++++++++++++++++++++
1 file changed, 301 insertions(+)
create mode 100644 tests/TestSuite_runtime_vf_queue_number_maxinum.py
diff --git a/tests/TestSuite_runtime_vf_queue_number_maxinum.py b/tests/TestSuite_runtime_vf_queue_number_maxinum.py
new file mode 100644
index 0000000..ae41944
--- /dev/null
+++ b/tests/TestSuite_runtime_vf_queue_number_maxinum.py
@@ -0,0 +1,301 @@
+# BSD LICENSE
+#
+# Copyright(c) <2019> Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# 'AS IS' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+'''
+DPDK Test suite.
+Test the runtime VF maximum queue number on Fortville NICs.
+'''
+
+import time
+import re
+import math
+from test_case import TestCase
+from pmd_output import PmdOutput
+
+
+class TestRuntimeVfQnMaxinum(TestCase):
+ supported_vf_driver = ['igb_uio', 'vfio-pci']
+ rss_key = '6EA6A420D5138E712433B813AE45B3C4BECB2B405F31AD6C331835372D15E2D5E49566EE0ED1962AFA1B7932F3549520FD71C75E'
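+ # max number of -w (whitelist) entries handed to one testpmd instance;
+ # longer VF lists are split across an extra testpmd session (vf3)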
+ max_white_per_testpmd = 18
+
+ def set_up_all(self):
+ self.verify(self.nic in ["fortville_eagle", "fortville_spirit", "fortville_25g", "fortpark_TLV"],
+ "Only supported by Fortville")
+ self.dut_ports = self.dut.get_ports(self.nic)
+ self.verify(len(self.dut_ports) >= 1, 'Insufficient ports')
+ self.src_intf = self.tester.get_interface(self.tester.get_local_port(0))
+ self.src_mac = self.tester.get_mac(self.tester.get_local_port(0))
+ self.dst_mac = self.dut.get_mac_address(0)
+ self.pf_pci = self.dut.ports_info[self.dut_ports[0]]['pci']
+ self.used_dut_port = self.dut_ports[0]
+ self.pmdout = PmdOutput(self.dut)
+ self.setup_test_env('igb_uio')
+
+ def set_up(self):
+ """
+ Run before each test case.
+ """
+ pass
+
+ def setup_test_env(self, driver='default'):
+ """
+ Bind fortville nic to DPDK PF, and create 32/64 vfs on it.
+ Start testpmd based on the created vfs.
+ """
+ if self.nic in ['fortville_eagle']:
+ self.dut.generate_sriov_vfs_by_port(self.used_dut_port, 32, driver=driver)
+ elif self.nic in ['fortville_25g', 'fortville_spirit', 'fortpark_TLV']:
+ self.dut.generate_sriov_vfs_by_port(self.used_dut_port, 64, driver=driver)
+
+ self.sriov_vfs_port_0 = self.dut.ports_info[self.used_dut_port]['vfs_port']
+
+ # set vf assign method and vf driver
+ self.vf_driver = self.get_suite_cfg()['vf_driver']
+ if self.vf_driver is None:
+ self.vf_driver = 'vfio-pci'
+ self.verify(self.vf_driver in self.supported_vf_driver, "Unsupported vf driver")
+
+ for port in self.sriov_vfs_port_0:
+ port.bind_driver(self.vf_driver)
+ self.vf1_session = self.dut.new_session()
+ self.vf2_session = self.dut.new_session()
+ self.pf_pmdout = PmdOutput(self.dut)
+ self.vf1_pmdout = PmdOutput(self.dut, self.vf1_session)
+ self.vf2_pmdout = PmdOutput(self.dut, self.vf2_session)
+
+ def destroy_test_env(self):
+ if getattr(self, 'pf_pmdout', None):
+ self.pf_pmdout.execute_cmd('quit', '# ')
+ self.pf_pmdout = None
+
+ if getattr(self, 'vf1_pmdout', None):
+ self.vf1_pmdout.execute_cmd('quit', '# ', timeout=200)
+ self.vf1_pmdout = None
+ if getattr(self, 'vf1_session', None):
+ self.dut.close_session(self.vf1_session)
+
+ if getattr(self, 'vf2_pmdout', None):
+ self.vf2_pmdout.execute_cmd('quit', '# ')
+ self.vf2_pmdout = None
+ if getattr(self, 'vf2_session', None):
+ self.dut.close_session(self.vf2_session)
+
+ if getattr(self, 'vf3_pmdout', None):
+ self.vf3_pmdout.execute_cmd('quit', '# ', timeout=150)
+ self.vf3_pmdout = None
+ if getattr(self, 'vf3_session', None):
+ self.dut.close_session(self.vf3_session)
+
+ # reset used port's sriov
+ if getattr(self, 'used_dut_port', None):
+ self.dut.destroy_sriov_vfs_by_port(self.used_dut_port)
+ port = self.dut.ports_info[self.used_dut_port]['port']
+ port.bind_driver()
+ self.used_dut_port = None
+
+ def send_packet(self, dest_mac, itf, count):
+ """
+ Sends packets.
+ """
+ self.tester.scapy_foreground()
+ time.sleep(2)
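+ # vary the src/dst IP of every packet so that RSS hashes the flows
+ # onto different RX queues of the receiving VFs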
+ for i in range(count):
+ quotient = i // 254
+ remainder = i % 254
+ packet = r'sendp([Ether(dst="{0}", src=get_if_hwaddr("{1}"))/IP(src="10.0.{2}.{3}", ' \
+ r'dst="192.168.{2}.{3}")],iface="{4}")'.format(dest_mac, itf, quotient, remainder + 1, itf)
+ self.tester.scapy_append(packet)
+ self.tester.scapy_execute()
+ time.sleep(2)
+
+ def test_vf_consume_max_queues_on_one_pf(self):
+ """
+ Test case 1: VF consume max queue number on one PF port.
+ A four-port Fortville NIC has 384 queues per port, and a two-port
+ Fortville NIC has 768 queues per port.
+ The PF uses 65 queues on each port, and the firmware reserves 4 queues
+ for each VF; when the requested queue number exceeds 4, queues must be
+ reallocated from the remaining queues, and the reserved queues generally
+ can't be reused.
+ """
+ pf_eal_param = '-w {} --file-prefix=test1 --socket-mem 1024,1024'.format(self.pf_pci)
+ self.pf_pmdout.start_testpmd(self.pf_pmdout.default_cores, eal_param=pf_eal_param)
+ vf1_white_index = 0
+ vf1_white_list = ''
+ vf2_queue_number = 0
+ vf3_white_index = 0
+ vf3_white_list = ''
+ if self.nic in ['fortville_eagle']:
+ left_queues = 384 - 65 - 32 * 4
+ vf1_white_index = left_queues // 16
+ vf2_queue_number = left_queues % 16
+ elif self.nic in ['fortville_25g', 'fortville_spirit', 'fortpark_TLV']:
+ left_queues = 768 - 65 - 64 * 4
+ vf1_white_index = left_queues // 16
+ vf2_queue_number = left_queues % 16
+
+ # The white list max length is 18
+ if vf1_white_index > self.max_white_per_testpmd:
+ vf3_white_index = vf1_white_index % self.max_white_per_testpmd
+ vf1_white_index = vf1_white_index - vf3_white_index
+ self.vf3_session = self.dut.new_session()
+ self.vf3_pmdout = PmdOutput(self.dut, self.vf3_session)
+
+ self.logger.info('vf2_queue_number: {}, vf3_white_index: {}'.format(vf2_queue_number, vf3_white_index))
+
+ if vf2_queue_number > 0:
+ # The driver will alloc queues as power of 2, and queue must be equal or less than 16
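+ # e.g. the 15 left-over queues round down to pow(2, 3) = 8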
+ vf2_queue_number = pow(2, int(math.log(vf2_queue_number, 2)))
+
+ # testing found that if the VF list is not sorted, the vf2 testpmd cannot start
+ vf_pcis = []
+ for vf in self.sriov_vfs_port_0:
+ vf_pcis.append(vf.pci)
+ vf_pcis.sort()
+ for pci in vf_pcis[:vf1_white_index]:
+ vf1_white_list = vf1_white_list + '-w {} '.format(pci)
+ for pci in vf_pcis[vf1_white_index:vf1_white_index+vf3_white_index]:
+ vf3_white_list = vf3_white_list + '-w {} '.format(pci)
+
+ self.logger.info('vf1 white list: {}'.format(vf1_white_list))
+ self.logger.info('vf3_white_list: {}'.format(vf3_white_list))
+ self.logger.info('vf2_queue_number: {}'.format(vf2_queue_number))
+
+ vf1_eal_param = '{} --file-prefix=vf1 --socket-mem 1024,1024'.format(vf1_white_list)
+ self.start_testpmd_multiple_times(self.vf1_pmdout, '--rxq=16 --txq=16', vf1_eal_param)
+
+ if vf3_white_index > 0:
+ vf3_eal_param = '{} --file-prefix=vf3 --socket-mem 1024,1024'.format(vf3_white_list)
+ self.start_testpmd_multiple_times(self.vf3_pmdout, '--rxq=16 --txq=16', vf3_eal_param)
+
+ if vf2_queue_number > 0:
+ vf2_eal_param = '-w {} --file-prefix=vf2 --socket-mem 1024,1024'.format(
+ vf_pcis[vf1_white_index+vf3_white_index])
+ self.vf2_pmdout.start_testpmd(self.vf2_pmdout.default_cores, param='--rxq={0} --txq={0}'.format(
+ vf2_queue_number), eal_param=vf2_eal_param)
+
+ # Check the Max possible RX queues and TX queues of the two VFs
+ vf1_out = self.vf1_pmdout.execute_cmd('show port info all')
+ self.verify('Max possible RX queues: 16' in vf1_out, 'vf1 max RX queues is not 16')
+ if vf2_queue_number > 0:
+ vf2_out = self.vf2_pmdout.execute_cmd('show port info all')
+ self.verify('Max possible RX queues: 16' in vf2_out, 'vf2 max RX queues is not 16')
+ if vf3_white_index > 0:
+ vf3_out = self.vf3_pmdout.execute_cmd('show port info all')
+ self.verify('Max possible RX queues: 16' in vf3_out, 'vf3 max RX queues is not 16')
+
+ # check the actual queue number
+ vf1_out = self.vf1_pmdout.execute_cmd('start')
+ self.verify('RX queue number: 16 Tx queue number: 16' in vf1_out, 'vf1 actual RX/TX queues is not 16')
+ if vf2_queue_number > 0:
+ vf2_out = self.vf2_pmdout.execute_cmd('start')
+ self.verify('port 0: RX queue number: {0} Tx queue number: {0}'.format(vf2_queue_number) in vf2_out,
+ 'vf2 actual RX/TX queues is not {}'.format(vf2_queue_number))
+ if vf3_white_index > 0:
+ vf3_out = self.vf3_pmdout.execute_cmd('start')
+ self.verify('RX queue number: 16 Tx queue number: 16' in vf3_out,
+ 'vf3 actual RX/TX queues is not 16')
+
+ # Set all the ports share a same rss-hash key in testpmd vf1, vf3
+ for i in range(vf1_white_index):
+ self.vf1_pmdout.execute_cmd('port config {} rss-hash-key ipv4 {}'.format(i, self.rss_key))
+
+ for i in range(vf3_white_index):
+ self.vf3_pmdout.execute_cmd('port config {} rss-hash-key ipv4 {}'.format(i, self.rss_key))
+
+ # send packets to vf1/vf2, and check all the queues could receive packets
+ # as set promisc on, packets send by tester could be received by both vf1 and vf2
+ self.vf1_pmdout.execute_cmd('set promisc all on')
+ if vf2_queue_number > 0:
+ self.vf2_pmdout.execute_cmd('set promisc all on')
+ if vf3_white_index > 0:
+ self.vf3_pmdout.execute_cmd('set promisc all on')
+
+ self.send_packet('00:11:22:33:44:55', self.src_intf, 256)
+ vf1_out = self.vf1_pmdout.execute_cmd('stop')
+ if vf2_queue_number > 0:
+ vf2_out = self.vf2_pmdout.execute_cmd('stop')
+ if vf3_white_index > 0:
+ vf3_out = self.vf3_pmdout.execute_cmd('stop')
+
+ # check all queues in vf1 can receive packets
+ for i in range(16):
+ for j in range(vf1_white_index):
+ self.verify('Forward Stats for RX Port={:>2d}/Queue={:>2d}'.format(j, i) in vf1_out,
+ 'Testpmd vf1 port {}, queue {} did not receive packet'.format(j, i))
+ for m in range(vf3_white_index):
+ self.verify('Forward Stats for RX Port={:>2d}/Queue={:>2d}'.format(m, i) in vf3_out,
+ 'Testpmd vf3 port {}, queue {} did not receive packet'.format(m, i))
+
+ # check all queues in vf2 can receive packets
+ for i in range(vf2_queue_number):
+ self.verify('Forward Stats for RX Port= 0/Queue={:>2d}'.format(i) in vf2_out,
+ 'Testpmd vf2 queue {} did not receive packet'.format(i))
+
+ def test_set_max_queue_per_vf_on_one_pf(self):
+ """
+ Test case 2: set max queue number per vf on one pf port
+ Testpmd should not crash.
+ """
+ # test queue-num-per-vf exceeding the hardware maximum
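+ # with 32 VFs: 32 * 16 + 65 = 577 > 384 hardware queues, so the PF
+ # testpmd should report the error but keep working (with 64 VFs the
+ # hardware maximum in the message is 768)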
+ pf_eal_param = '-w {},queue-num-per-vf=16 --file-prefix=test1 --socket-mem 1024,1024'.format(self.pf_pci)
+ out = self.pf_pmdout.start_testpmd(self.pf_pmdout.default_cores, eal_param=pf_eal_param)
+ self.verify('exceeds the hardware maximum' in out, 'queue-num-per-vf is not limited by the hardware maximum')
+ out = self.pf_pmdout.execute_cmd('start')
+ self.verify('io packet forwarding' in out, 'testpmd not work normally')
+
+ def start_testpmd_multiple_times(self, pmdout, param, eal_param, retry_times=3):
+ # There is a bug where starting testpmd with multiple VFs fails on the
+ # first try; it was fixed in commit dbda2092deb5ee5988449330c6e28e9d1fb97c19.
+ while retry_times:
+ try:
+ pmdout.start_testpmd(pmdout.default_cores, param=param, eal_param=eal_param)
+ break
+ except Exception as e:
+ self.logger.info('start testpmd occurred exception: {}'.format(e))
+ retry_times = retry_times - 1
+ time.sleep(1)
+ self.logger.info('retrying testpmd start, {} attempts left'.format(retry_times))
+
+ def tear_down(self):
+ if getattr(self, 'pf_pmdout', None):
+ self.pf_pmdout.execute_cmd('quit', '# ')
+
+ if getattr(self, 'vf1_pmdout', None):
+ self.vf1_pmdout.execute_cmd('quit', '# ', timeout=200)
+
+ if getattr(self, 'vf2_pmdout', None):
+ self.vf2_pmdout.execute_cmd('quit', '# ')
+
+ if getattr(self, 'vf3_pmdout', None):
+ self.vf3_pmdout.execute_cmd('quit', '# ', timeout=150)
+
+ def tear_down_all(self):
+ self.destroy_test_env()
--
1.8.3.1