test suite reviews and discussions
* [dts] [PATCH V1 0/2] pmd_stacked_bonded: upload test plan and automation script
@ 2018-06-06  5:37 yufengx.mo
  2018-06-06  5:37 ` [dts] [PATCH V1 1/2] pmd_stacked_bonded: upload test plan yufengx.mo
  2018-06-06  5:37 ` [dts] [PATCH V1 2/2] pmd_stacked_bonded: upload automation script yufengx.mo
  0 siblings, 2 replies; 4+ messages in thread
From: yufengx.mo @ 2018-06-06  5:37 UTC (permalink / raw)
  To: dts; +Cc: yufengmx

From: yufengmx <yufengx.mo@intel.com>

*. pmd_stacked_bonded test plan
*. pmd_stacked_bonded automation script

yufengmx (2):
  pmd_stacked_bonded: upload test plan
  pmd_stacked_bonded: upload automation script

 test_plans/pmd_stacked_bonded_test_plan.rst |  340 ++++++
 tests/TestSuite_pmd_stacked_bonded.py       | 1593 +++++++++++++++++++++++++++
 2 files changed, 1933 insertions(+)
 create mode 100644 test_plans/pmd_stacked_bonded_test_plan.rst
 create mode 100644 tests/TestSuite_pmd_stacked_bonded.py

-- 
1.9.3


* [dts] [PATCH V1 1/2] pmd_stacked_bonded: upload test plan
  2018-06-06  5:37 [dts] [PATCH V1 0/2] pmd_stacked_bonded: upload test plan and automation script yufengx.mo
@ 2018-06-06  5:37 ` yufengx.mo
  2018-06-06  5:37 ` [dts] [PATCH V1 2/2] pmd_stacked_bonded: upload automation script yufengx.mo
  1 sibling, 0 replies; 4+ messages in thread
From: yufengx.mo @ 2018-06-06  5:37 UTC (permalink / raw)
  To: dts; +Cc: yufengmx

From: yufengmx <yufengx.mo@intel.com>


This test plan is for the pmd stacked bonded feature.

Stacked bonding allows bonded devices to be stacked so that two or more bonded
devices can be bonded into one master bonded device.

Signed-off-by: yufengmx <yufengx.mo@intel.com>
---
 test_plans/pmd_stacked_bonded_test_plan.rst | 340 ++++++++++++++++++++++++++++
 1 file changed, 340 insertions(+)
 create mode 100644 test_plans/pmd_stacked_bonded_test_plan.rst

diff --git a/test_plans/pmd_stacked_bonded_test_plan.rst b/test_plans/pmd_stacked_bonded_test_plan.rst
new file mode 100644
index 0000000..554af1a
--- /dev/null
+++ b/test_plans/pmd_stacked_bonded_test_plan.rst
@@ -0,0 +1,340 @@
+.. Copyright (c) <2017>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Stacked Bonded
+==============
+
+The demand arises from a discussion with a prospective customer for a 100G NIC
+based on RRC. The customer already uses Mellanox 100G NICs. The Mellanox 100G
+NIC provides a full x16 PCIe interface, so the host sees a single netdev that
+corresponds directly to the 100G Ethernet port. In their current system they
+bond multiple 100G NICs together using the DPDK bonding API in their
+application. They are interested in an alternative source for the 100G NIC and
+are in conversation with Silicom, who ship a 100G RRC-based NIC (something
+like Boulder Rapids). The issue they have with the RRC NIC is that it presents
+as two PCIe interfaces (netdevs) instead of one. If DPDK bonding could operate
+at a first level on the two RRC netdevs to present a single netdev, the
+application could then bond multiple of these bonded interfaces to implement
+NIC bonding.
+
+Prerequisites for Bonding
+=========================
+
+*. hardware configuration
+  All link ports of tester/DUT should run at the same data rate and support
+  full-duplex. Slave-down testing needs at least four ports; the other test
+  cases can run on two ports.
+
+  testing hardware configuration
+  ==============================
+  NIC/DUT/TESTER port requirements:
+  - DUT: 4 NIC ports.
+  - TESTER: 4 NIC ports.
+
+  Port connections between TESTER and DUT
+       TESTER                                DUT
+               physical link             logical link
+     .--------.            .------------------------------------------.
+     | portA0 | <--------> | portB0 <---> .--------.                  |
+     |        |            |              | bond 0 | <-----> .------. |
+     | portA1 | <--------> | portB1 <---> '--------'         |      | |
+     |        |            |                                 |bond 2| |
+     | portA2 | <--------> | portB2 <---> .--------.         |      | |
+     |        |            |              | bond 1 | <-----> '------' |
+     | portA3 | <--------> | portB3 <---> '--------'                  |
+     '--------'            '------------------------------------------'
+
+Test Case : basic behavior
+==========================
+A bonded device can be added to another bonded device. This is supported by
+the following modes:
+ - balance-rr 0
+ - active-backup 1
+ - balance-xor 2
+ - broadcast 3
+ - balance-tlb 5
+
+There are two limitations when creating a master bonding:
+
+ - Total depth of nesting is limited to two levels
+ - 802.3ad mode is not supported if one or more slaves is a bond device
+
+*. add the same device twice and check that the error is handled correctly
+*. add the second-level bonded device to another new bonded device and check
+   that an error message occurs
+*. stacked bonding is forbidden in mode 4
+*. the queue configuration of the master bonding and of each slave is the same
+
+steps:
+*. bind two ports
+
+    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
+
+*. boot up testpmd
+
+    ./testpmd -c 0x6 -n 4  -- -i --tx-offloads=0xXXXX
+
+*. create first bonded device and add one slave, check bond 2 config status
+
+    testpmd> port stop all
+    testpmd> create bonded device <mode> 0
+    testpmd> add bonding slave 0 2
+    testpmd> show bonding config 2
+
+*. create second bonded device and add one slave, check bond 3 config status
+
+    testpmd> create bonded device <mode> 0
+    testpmd> add bonding slave 1 3
+    testpmd> show bonding config 3
+
+*. create the third bonded device and add the first/second bonded ports as its
+   slaves. Check that the slaves are added successfully. Stacked bonding is
+   forbidden in mode 4, so mode 4 will fail to add a bonded device as a slave.
+
+    testpmd> create bonded device <mode> 0
+    testpmd> add bonding slave 2 4
+    testpmd> add bonding slave 3 4
+    testpmd> show bonding config 4
+
+*. start all bonded device ports
+
+    testpmd> port start all
+    testpmd> start
+
+*. close testpmd
+
+    testpmd> stop
+    testpmd> quit
+
+Test Case : active-backup stacked bonded rx traffic
+===================================================
+Set up the stacked bonded configuration in testpmd on the DUT, send TCP
+packets with scapy and check the packet statistics.
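+
+A minimal scapy sketch of such a transmission is shown below; the interface
+name and the destination MAC are placeholders, and the automation script
+builds equivalent packets using the real DUT port MAC addresses.
+
+    from scapy.all import Ether, IP, TCP, sendp
+
+    pkt = (Ether(dst="52:54:00:00:00:00", src="52:00:00:00:00:00") /
+           IP(src="10.239.129.65", dst="10.239.129.88") /
+           TCP(sport=61, dport=53))
+    sendp(pkt, iface="tester_port0", count=100)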
+
+steps:
+*. bind two ports
+
+    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
+
+*. boot up testpmd
+
+    ./testpmd -c 0x6 -n 4  -- -i --tx-offloads=0xXXXX
+
+*. create first bonded device and add one port as slave
+
+    testpmd> port stop all
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 0 2
+
+*. create second bonded device and add one port as a slave
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 1 3
+
+*. create the third bonded device and add the first/second bonded ports as its
+   slaves. Check that the slaves are added successfully.
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 2 4
+    testpmd> add bonding slave 3 4
+    testpmd> show bonding config 4
+
+*. start all bonded device ports
+
+    testpmd> port start all
+    testpmd> start
+
+*. send 100 packets to port 0 and port 1
+
+*. check that the first/second bonded devices each receive 100 packets and the
+   third bonded device receives 200 packets
+
+*. close testpmd
+
+    testpmd> stop
+    testpmd> quit
+
+Test Case : active-backup stacked bonded rx traffic with slave down
+===================================================================
+Set up the stacked bonded configuration in testpmd on the DUT and set one
+slave of each first-level bonded device to down status. Send TCP packets with
+scapy and check the packet statistics.
+
+steps:
+*. bind four ports
+
+    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2> \
+                                               <pci address 3> <pci address 4>
+
+*. boot up testpmd
+
+    ./testpmd -c 0x6 -n 4  -- -i --tx-offloads=0xXXXX
+
+*. create first bonded device and add two ports as slaves, set port 1 down
+
+    testpmd> port stop all
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 0 4
+    testpmd> add bonding slave 1 4
+    testpmd> port stop 1
+
+*. create second bonded device and add two ports as slaves, set port 3 down
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 2 5
+    testpmd> add bonding slave 3 5
+    testpmd> port stop 3
+
+*. create the third bonded device and add the first/second bonded ports as its
+   slaves. Check that the slaves are added successfully.
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 4 6
+    testpmd> add bonding slave 5 6
+    testpmd> show bonding config 6
+
+*. start all bonded device ports
+
+    testpmd> port start all
+    testpmd> start
+
+*. send 100 packets to port 0 and port 2
+
+*. check that the first/second bonded devices each receive 100 packets and the
+   third bonded device receives 200 packets
+
+*. close testpmd
+
+    testpmd> stop
+    testpmd> quit
+
+Test Case : balance-xor stacked bonded rx traffic
+=================================================
+Set up the stacked bonded configuration in testpmd on the DUT, send TCP
+packets with scapy and check the packet statistics.
+
+steps:
+*. bind two ports
+
+    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
+
+*. boot up testpmd, stop all ports
+
+    ./testpmd -c 0x6 -n 4  -- -i --tx-offloads=0xXXXX
+    testpmd> port stop all
+
+*. create first bonded device and add one port as a slave
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 0 2
+
+*. create second bonded device and add one port as a slave
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 1 3
+
+*. create the third bonded device and add the first/second bonded ports as its
+   slaves. Check that the slaves are added successfully.
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 2 4
+    testpmd> add bonding slave 3 4
+    testpmd> show bonding config 4
+
+*. start all bonded device ports
+
+    testpmd> port start all
+    testpmd> start
+
+*. send 100 packets to port 0 and port 1
+
+*. check that the first/second bonded devices each receive 100 packets and the
+   third bonded device receives 200 packets
+
+    testpmd> show port stats all
+
+*. close testpmd
+
+    testpmd> stop
+    testpmd> quit
+
+Test Case : balance-xor stacked bonded rx traffic with slave down
+=================================================================
+Set up the stacked bonded configuration in testpmd on the DUT and set one
+slave of each first-level bonded device to down status. Send TCP packets with
+scapy and check the packet statistics.
+
+steps:
+*. bind four ports
+
+    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2> \
+                                               <pci address 3> <pci address 4>
+
+*. boot up testpmd
+
+    ./testpmd -c 0x6 -n 4  -- -i --tx-offloads=0xXXXX
+
+*. create first bonded device and add two ports as slaves, set port 1 down
+
+    testpmd> port stop all
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 0 4
+    testpmd> add bonding slave 1 4
+    testpmd> port stop 1
+
+*. create second bonded device and add two ports as slaves, set port 3 down
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 2 5
+    testpmd> add bonding slave 3 5
+    testpmd> port stop 3
+
+*. create the third bonded device and add the first/second bonded ports as its
+   slaves. Check that the slaves are added successfully.
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 4 6
+    testpmd> add bonding slave 5 6
+    testpmd> show bonding config 6
+
+*. start all bonded device ports
+
+    testpmd> port start all
+    testpmd> start
+
+*. send 100 packets to port 0 and port 2
+
+*. check that the first/second bonded devices each receive 100 packets and the
+   third bonded device receives 200 packets
+
+    testpmd> show port stats all
+
+*. close testpmd
+
+    testpmd> stop
+    testpmd> quit
-- 
1.9.3


* [dts] [PATCH V1 2/2] pmd_stacked_bonded: upload automation script
  2018-06-06  5:37 [dts] [PATCH V1 0/2] pmd_stacked_bonded: upload test plan and automation script yufengx.mo
  2018-06-06  5:37 ` [dts] [PATCH V1 1/2] pmd_stacked_bonded: upload test plan yufengx.mo
@ 2018-06-06  5:37 ` yufengx.mo
  2019-01-21  7:17   ` Chen, Zhaoyan
  1 sibling, 1 reply; 4+ messages in thread
From: yufengx.mo @ 2018-06-06  5:37 UTC (permalink / raw)
  To: dts; +Cc: yufengmx

From: yufengmx <yufengx.mo@intel.com>


This automation script is for the pmd stacked bonded feature.

Stacked bonding allows bonded devices to be stacked so that two or more bonded
devices can be bonded into one master bonded device.

Signed-off-by: yufengmx <yufengx.mo@intel.com>
---
 tests/TestSuite_pmd_stacked_bonded.py | 1593 +++++++++++++++++++++++++++++++++
 1 file changed, 1593 insertions(+)
 create mode 100644 tests/TestSuite_pmd_stacked_bonded.py

diff --git a/tests/TestSuite_pmd_stacked_bonded.py b/tests/TestSuite_pmd_stacked_bonded.py
new file mode 100644
index 0000000..4596c58
--- /dev/null
+++ b/tests/TestSuite_pmd_stacked_bonded.py
@@ -0,0 +1,1593 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2018 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+import traceback
+import os
+import time
+import re
+import random
+from socket import htons, htonl
+
+import utils
+from test_case import TestCase
+from exception import TimeoutException, VerifyFailure
+from settings import TIMEOUT
+from pmd_output import PmdOutput
+
+SOCKET_0 = 0
+SOCKET_1 = 1
+
+MODE_ROUND_ROBIN = 0
+MODE_ACTIVE_BACKUP = 1
+MODE_XOR_BALANCE = 2
+MODE_BROADCAST = 3
+MODE_LACP = 4
+MODE_TLB_BALANCE = 5
+MODE_ALB_BALANCE = 6
+
+FRAME_SIZE_64 = 64
+
+class TestBondingStacked(TestCase):
+
+    def get_static_ip_configs(self):
+        S_MAC_IP_PORT = [('52:00:00:00:00:00', '10.239.129.65', 61),
+                         ('52:00:00:00:00:01', '10.239.129.66', 62),
+                         ('52:00:00:00:00:02', '10.239.129.67', 63)]
+        return S_MAC_IP_PORT
+    #
+    # On tester platform, packet transmission
+    #
+    def get_stats(self, portid, flow):
+        """
+        get testpmd port statistic
+        """
+        _portid = int(portid) if isinstance(portid, (str, unicode)) else portid
+        info = self.testpmd.get_pmd_stats(_portid)
+        _kwd = ["-packets", "-missed", "-bytes"]
+        kwd = map(lambda x: flow.upper() + x, _kwd) 
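+        # e.g. flow='rx' yields the stat keys
+        # ['RX-packets', 'RX-missed', 'RX-bytes']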
+        result = [int(info[item]) for item in kwd]
+
+        return result
+
+    def get_all_stats(self, unbound_port, rx_tx, bond_port, **slaves):
+        """
+        Get all the port stats which the testpmd can display.
+        Parameters:
+        : unbound_port: pmd port id
+        : rx_tx: unbond port stat 'rx' or 'tx'
+        : bond_port: bonding port
+        """
+        pkt_now = {}
+
+        if rx_tx == 'rx':
+            bond_stat = 'tx'
+        else:
+            bond_stat = 'rx'
+
+        if unbound_port: # if unbound_port has not been set, ignore this
+            pkt_now[unbound_port] = [int(_) for _ in self.get_stats(
+                                                            unbound_port, 
+                                                            rx_tx)]
+
+        pkt_now[bond_port] = [int(_) for _ in self.get_stats(bond_port,
+                                                             bond_stat)]
+        for slave in slaves['active']:
+            pkt_now[slave] = [int(_) for _ in self.get_stats(slave, bond_stat)]
+        for slave in slaves['inactive']:
+            pkt_now[slave] = [int(_) for _ in self.get_stats(slave, bond_stat)]
+
+        return pkt_now
+
+    def parse_ether_ip(self, dst_port, **ether_ip):
+        """
+        ether_ip:
+            'ether':
+                   'dst_mac':False
+                   'src_mac':"52:00:00:00:00:00"
+            'dot1q': 'vlan':1
+            'ip':  'dst_ip':"10.239.129.88"
+                   'src_ip':"10.239.129.65"
+            'udp': 'dst_port':53
+                   'src_port':53
+        """
+        ret_ether_ip = {}
+        ether = {}
+        dot1q = {}
+        ip = {}
+        udp = {}
+
+        try:
+            dut_dst_port = self.dut_ports[dst_port]
+        except Exception, e:
+            dut_dst_port = dst_port
+
+        if not ether_ip.get('ether'):
+            ether['dst_mac'] = self.dut.get_mac_address(dut_dst_port)
+            ether['src_mac'] = "52:00:00:00:00:00"
+        else:
+            if not ether_ip['ether'].get('dst_mac'):
+                ether['dst_mac'] = self.dut.get_mac_address(dut_dst_port)
+            else:
+                ether['dst_mac'] = ether_ip['ether']['dst_mac']
+            if not ether_ip['ether'].get('src_mac'):
+                ether['src_mac'] = "52:00:00:00:00:00"
+            else:
+                ether['src_mac'] = ether_ip["ether"]["src_mac"]
+
+        if not ether_ip.get('dot1q'):
+            pass
+        else:
+            if not ether_ip['dot1q'].get('vlan'):
+                dot1q['vlan'] = '1'
+            else:
+                dot1q['vlan'] = ether_ip['dot1q']['vlan']
+
+        if not ether_ip.get('ip'):
+            ip['dst_ip'] = "10.239.129.88"
+            ip['src_ip'] = "10.239.129.65"
+        else:
+            if not ether_ip['ip'].get('dst_ip'):
+                ip['dst_ip'] = "10.239.129.88"
+            else:
+                ip['dst_ip'] = ether_ip['ip']['dst_ip']
+            if not ether_ip['ip'].get('src_ip'):
+                ip['src_ip'] = "10.239.129.65"
+            else:
+                ip['src_ip'] = ether_ip['ip']['src_ip']
+
+        if not ether_ip.get('udp'):
+            udp['dst_port'] = 53
+            udp['src_port'] = 53
+        else:
+            if not ether_ip['udp'].get('dst_port'):
+                udp['dst_port'] = 53
+            else:
+                udp['dst_port'] = ether_ip['udp']['dst_port']
+            if not ether_ip['udp'].get('src_port'):
+                udp['src_port'] = 53
+            else:
+                udp['src_port'] = ether_ip['udp']['src_port']
+
+        ret_ether_ip['ether'] = ether
+        ret_ether_ip['dot1q'] = dot1q
+        ret_ether_ip['ip'] = ip
+        ret_ether_ip['udp'] = udp
+
+        return ret_ether_ip
+
+    def config_tester_port(self, port_name, status):
+        """
+        Do some operations to the network interface port, 
+        such as "up" or "down".
+        """
+        if self.tester.get_os_type() == 'freebsd':
+            self.tester.admin_ports(port_name, status)
+        else:
+            eth = self.tester.get_interface(port_name)
+            self.tester.admin_ports_linux(eth, status)
+        time.sleep(5)
+
+    def config_tester_port_by_number(self, number, status):
+        # stop slave link by force 
+        cmds = [["port stop %d"%number, '']]
+        self.execute_testpmd_cmd(cmds)
+        # stop peer port on tester
+        port_name = self.tester.get_local_port(self.dut_ports[number])
+        self.config_tester_port( port_name, status)
+        time.sleep(5)
+        cur_status = self.get_port_info(number, 'link_status')
+        self.logger.info("port {0} is [{1}]".format(number, cur_status))
+        if cur_status != status:
+            self.logger.warning("expected status is [{0}]".format(status))
+    
+    def send_packet(self, dst_port, src_port=False, frame_size=FRAME_SIZE_64,
+                    count=1, invert_verify=False, **ether_ip):
+        """
+        Send count packet to portid
+        count: 1 or 2 or 3 or ... or 'MANY'
+               if count is 'MANY', then set count=1000,
+               send packets during 5 seconds.
+        ether_ip:
+            'ether': 'dst_mac':False
+                     'src_mac':"52:00:00:00:00:00"
+            'dot1q': 'vlan':1
+            'ip':   'dst_ip':"10.239.129.88"
+                    'src_ip':"10.239.129.65"
+            'udp':  'dst_port':53
+                    'src_port':53
+        """
+        during = 0
+        loop = 0
+
+        try:
+            count = int(count)
+        except ValueError as e:
+            if count == 'MANY':
+                during = 5
+                count = 1000
+            else:
+                raise e
+
+        if not src_port:
+            gp0rx_pkts, gp0rx_err, gp0rx_bytes = \
+                [int(_) for _ in self.get_stats(self.dut_ports[dst_port],
+                                                "rx")]
+            itf = self.tester.get_interface(
+                        self.tester.get_local_port(self.dut_ports[dst_port]))
+            os.system("ifconfig {0} up".format(itf))
+            # temp = os.system("ifconfig {0} up".format(itf))
+        else:
+            gp0rx_pkts, gp0rx_err, gp0rx_bytes = \
+                [int(_) for _ in self.get_stats(dst_port, "rx")]
+            itf = src_port
+
+        time.sleep(2)
+        ret_ether_ip = self.parse_ether_ip(dst_port, **ether_ip)
+
+        pktlen = frame_size - 18
+        padding = pktlen - 20
+
+        start = time.time()
+        while True:
+            self.tester.scapy_foreground()
+            append = self.tester.scapy_append
+            append('nutmac="%s"' % ret_ether_ip['ether']['dst_mac'])
+            append('srcmac="%s"' % ret_ether_ip['ether']['src_mac'])
+
+            if ether_ip.get('dot1q'):
+                append('vlanvalue=%d' % ret_ether_ip['dot1q']['vlan'])
+            append('destip="%s"' % ret_ether_ip['ip']['dst_ip'])
+            append('srcip="%s"' % ret_ether_ip['ip']['src_ip'])
+            append('destport=%d' % ret_ether_ip['udp']['dst_port'])
+            append('srcport=%d' % ret_ether_ip['udp']['src_port'])
+            if not ret_ether_ip.get('dot1q'):
+                packet = "/".join(["Ether(dst=nutmac, src=srcmac)", 
+                                  "IP(dst=destip, src=srcip, len=%s)", 
+                                  "UDP(sport=srcport, dport=destport)",
+                                  "Raw(load='\x50'*%s)"])
+                cmd = 'sendp([{0}], iface="%s", count=%d)'.format(packet)
+                append(cmd % (pktlen, padding, itf, count))
+            else:
+                packet = "/".join(["Ether(dst=nutmac, src=srcmac)",
+                                   "Dot1Q(vlan=vlanvalue)", 
+                                  "IP(dst=destip, src=srcip, len=%s)", 
+                                  "UDP(sport=srcport, dport=destport)",
+                                  "Raw(load='\x50'*%s)"])
+                cmd = 'sendp([{0}], iface="%s", count=%d)'.format(packet)
+                append(cmd % (pktlen, padding, itf, count))
+
+            self.tester.scapy_execute()
+            loop += 1
+
+            now = time.time()
+            if (now - start) >= during:
+                break
+        time.sleep(.5)
+
+        if not src_port:
+            p0rx_pkts, p0rx_err, p0rx_bytes = \
+               [int(_) for _ in self.get_stats(self.dut_ports[dst_port], "rx")]
+        else:
+            p0rx_pkts, p0rx_err, p0rx_bytes = \
+                [int(_) for _ in self.get_stats(dst_port, "rx")]
+
+        p0rx_pkts -= gp0rx_pkts
+        p0rx_bytes -= gp0rx_bytes
+
+        if invert_verify:
+            LACP_MESSAGE_SIZE = 128
+            msg = ("port <{0}> Data received by port <{1}>, "
+                   "but should not.").format(itf, dst_port)
+            self.verify(p0rx_pkts == 0 or
+                        p0rx_bytes / p0rx_pkts == LACP_MESSAGE_SIZE,
+                        msg)
+            msg = "port {0}  <-|  |-> port {1} is ok".format(itf,
+                                                            dst_port)
+            self.logger.info(msg)
+        else:
+            msg = "port <{0}> Data not received by port <{1}>".format(itf,
+                                                                   dst_port)
+            self.verify(p0rx_pkts >= count * loop,
+                        msg)
+            msg = "port {0}  <----> port {1} is ok".format( itf, dst_port)
+            self.logger.info(msg)
+        return count * loop
+
+    def send_default_packet_to_slave(self, unbound_port, bond_port, 
+                                     pkt_count=100, **slaves):
+        """
+        Send packets to the slaves and calculate the slaves' RX packets
+        and unbond port TX packets.
+        Parameters:
+        : unbound_port: the unbonded port id
+        : bond_port: the bonded device port id
+        """
+        pkt_orig = {}
+        pkt_now = {}
+        temp_count = 0
+        summary = 0
+        results = []
+        #---------------------------
+        # send to slave ports
+        pkt_orig = self.get_all_stats(unbound_port, 'tx', bond_port, **slaves)
+        self.logger.info("send packet to active slave ports")
+        for slave in slaves['active']:
+            try:
+                temp_count = self.send_packet(self.dut_ports[int(slave)],
+                                              False,
+                                              FRAME_SIZE_64, pkt_count)
+                summary += temp_count
+            except Exception as e:
+                results.append(e)
+            finally:
+                pass
+        #---------------------------
+        if slaves['inactive'] and False:
+            self.logger.info("send packet to inactive slave ports")
+            for slave in slaves['inactive']:
+                try:
+                    self.send_packet(self.dut_ports[int(slave)], False, 
+                                     FRAME_SIZE_64, pkt_count, True)
+                except Exception as e:
+                    results.append(e)
+                finally:
+                    pass
+
+        if results:
+            for item in results:
+                self.logger.error(item)
+            raise VerifyFailure("send_default_packet_to_slave failed")
+        
+        pkt_now = self.get_all_stats(unbound_port, 'tx', bond_port, **slaves)
+
+        for key in pkt_now:
+            for num in [0, 1, 2]:
+                pkt_now[key][num] -= pkt_orig[key][num]
+
+        return pkt_now, summary
+
+    def send_default_packet_to_unbound_port(self, unbound_port, bond_port, 
+                                            pkt_count=300, **slaves):
+        """
+        Send packets to the unbound port and calculate unbound port RX packets
+        and the slaves' TX packets.
+        Parameters:
+        : unbound_port: the unbonded port id
+        : bond_port: the bonded device port id
+        """
+        pkt_orig = {}
+        pkt_now = {}
+        summary = 0
+
+        # send to unbonded device
+        pkt_orig = self.get_all_stats(unbound_port, 'rx', bond_port, **slaves)
+        summary = self.send_packet(unbound_port, False,
+                                   FRAME_SIZE_64, pkt_count)
+        pkt_now = self.get_all_stats(unbound_port, 'rx', bond_port, **slaves)
+
+        for key in pkt_now:
+            for num in [0, 1, 2]:
+                pkt_now[key][num] -= pkt_orig[key][num]
+
+        return pkt_now, summary
+
+    def send_customized_packet_to_unbound_port(self, unbound_port, bond_port,
+                                               policy, vlan_tag=False, 
+                                               pkt_count=100, **slaves):
+        """
+        Verify that transmitting the packets correctly in the XOR mode.
+        Parameters:
+        : unbound_port: the unbonded port id
+        : bond_port: the bonded device port id
+        : policy: 'L2' , 'L23' or 'L34'
+        : vlan_tag: False or True
+        """
+        pkt_orig = {}
+        pkt_now = {}
+        summary = 0
+        temp_count = 0
+
+        # send to unbound_port
+        pkt_orig = self.get_all_stats(unbound_port, 'rx', bond_port, **slaves)
+        dest_mac = self.dut.get_mac_address(self.dut_ports[unbound_port])
+        dest_ip = "10.239.129.88"
+        dest_port = 53
+
+        self.dst_pkt_configs = [dest_mac, dest_ip, dest_port]
+
+        ether_ip = {}
+        ether = {}
+        ip = {}
+        udp = {}
+
+        ether['dst_mac'] = False
+        ip['dst_ip'] = dest_ip
+        udp['dst_port'] = dest_port
+        if vlan_tag:
+            dot1q = {}
+            dot1q['vlan'] = random.randint(1, 50)
+            ether_ip['dot1q'] = dot1q
+        
+        ether_ip['ether'] = ether
+        ether_ip['ip'] = ip
+        ether_ip['udp'] = udp
+
+        source = self.get_static_ip_configs()
+
+        for src_mac, src_ip, src_port in source:
+            ether_ip['ether']['src_mac'] = src_mac
+            ether_ip['ip']['src_ip'] = src_ip
+            ether_ip['udp']['src_port'] = src_port
+            temp_count = self.send_packet(unbound_port, False, FRAME_SIZE_64, 
+                                          pkt_count, False, **ether_ip)
+            summary += temp_count
+        pkt_now = self.get_all_stats(unbound_port, 'rx', bond_port, **slaves)
+        
+        for key in pkt_now:
+            for num in [0, 1, 2]:
+                pkt_now[key][num] -= pkt_orig[key][num]
+
+        return pkt_now, summary
+
+    #
+    # On dut, dpdk testpmd
+    #
+    def preset_testpmd(self, core_mask, options=''):
+        self.testpmd.start_testpmd( core_mask, param=' '.join(options))
+        self.execute_testpmd_cmd(self.preset_testpmd_cmds)
+        self.preset_testpmd_cmds = list()
+        time.sleep(1)
+
+    def execute_testpmd_cmd(self, cmds):
+        if len(cmds) == 0:
+            return
+        for item in cmds:
+            expected_str = item[1] or 'testpmd> '
+            if len(item) == 3:
+                self.testpmd.execute_cmd(item[0], expected_str, int(item[2]))
+            else:
+                self.testpmd.execute_cmd(item[0], expected_str)
+        time.sleep(2)
+
+    def start_testpmd(self, eal_option=None):
+        if self.testpmd_status == 'running':
+            return
+        # link eal option and testpmd options
+        offload = '0x1fbf' if self.driver == 'i40e' else '0x2203f'
+        options = ["--tx-offloads={0}".format(offload)]
+#         options = " ".join([eal_option, options]) if eal_option else ''
+        # boot up testpmd
+        hw_mask = '1S/2C/1T'
+        self.preset_testpmd_cmds = [[' ', ''], # used to resolve lsc event
+                                    ['port stop all', '']]
+        self.preset_testpmd(hw_mask, options)
+        self.testpmd_status = 'running'
+
+    def stop_testpmd(self):
+        time.sleep(1)
+        testpmd_cmds =[['port stop all', ''],
+                       ['stop', ''],]
+        self.execute_testpmd_cmd(testpmd_cmds)
+        time.sleep(1)
+
+    def close_testpmd(self):
+        if self.testpmd_status == 'close':
+            return
+        self.stop_testpmd()
+        time.sleep(1)
+        self.testpmd.quit()
+        self.testpmd_status = 'close'
+    # 
+    # On dut, dpdk bonding
+    #
+    def get_value_from_str(self, key_str, regx_str, string):
+        """
+        Get some values from the given string by the regular expression.
+        """
+        if isinstance(key_str, (unicode, str)):
+            pattern = r"(?<=%s)%s" % (key_str, regx_str)
+            s = re.compile(pattern)
+            res = s.search(string)
+            if res is None:
+                msg = "{0} hasn't match anything".format(key_str)
+                self.logger.warning(msg)
+                return ' '
+            else:
+                return res.group(0)
+        elif isinstance(key_str, (list, tuple)):
+            for key in key_str:
+                pattern = r"(?<=%s)%s" % (key, regx_str)
+                s = re.compile(pattern)
+                res = s.search(string)
+                if res is not None:
+                    return res.group(0)
+            else:
+                self.logger.warning("all key_str hasn't match anything")
+                return ' '
+
+    def _get_detail_from_port_info(self, port_id, args):
+        """
+        Get the detail info from the output of pmd cmd 
+            'show port info <port num>'.
+        """
+        port = port_id
+        key_str, regx_str = args
+        out = self.dut.send_expect("show port info %d" % port, "testpmd> ")
+        find_value = self.get_value_from_str(key_str, regx_str, out)
+        return find_value
+
+    def get_detail_from_port_info(self, port_id, args):
+        if isinstance(args[0], (list, tuple)):
+            return [self._get_detail_from_port_info(port_id, sub_args)
+                        for sub_args in args]
+        else:
+            return self._get_detail_from_port_info(port_id, args)
+
+    def get_port_info(self, port_id, info_type):
+        '''
+        Get the specified port information by its output message format
+        '''
+        info_set = {
+        'mac':              ["MAC address: ", "([0-9A-F]{2}:){5}[0-9A-F]{2}"],
+        'connect_socket':   ["Connect to socket: ", "\d+"],
+        'memory_socket':    ["memory allocation on the socket: ", "\d+"],
+        'link_status':      ["Link status: ", "\S+"],
+        'link_speed':       ["Link speed: ", "\d+"],
+        'link_duplex':      ["Link duplex: ", "\S+"],
+        'promiscuous_mode': ["Promiscuous mode: ", "\S+"],
+        'allmulticast_mode':["Allmulticast mode: ", "\S+"],
+        'vlan_offload':     [
+                        ["strip ", "\S+"],
+                        ['filter', "\S+"],
+                        ['qinq\(extend\) ', "\S+"]],
+        'queue_config':     [
+                        ["Max possible RX queues: ", "\d+"],
+                        ['Max possible number of RXDs per queue: ', "\d+"],
+                        ['Min possible number of RXDs per queue: ', "\d+"],
+                        ["Max possible TX queues: ", "\d+"],
+                        ['Max possible number of TXDs per queue: ', "\d+"],
+                        ['Min possible number of TXDs per queue: ', "\d+"],]
+        }
+
+        if info_type in info_set.keys():
+            return self.get_detail_from_port_info(port_id, info_set[info_type])
+        else:
+            return None
+
+    def get_bonding_config(self, config_content, args):
+        """
+        Get info by executing the command "show bonding config".
+        """
+        key_str, regx_str = args
+        find_value = self.get_value_from_str(key_str, regx_str, config_content)
+        return find_value
+
+    def get_info_from_bond_config(self, config_content, args):
+        """
+        Get the active slaves of the bonding device which you choose.
+        """
+        search_args = args if isinstance(args[0], (list, tuple)) else [args]
+        for search_arg in search_args:
+            try:
+                info = self.get_bonding_config(config_content, search_arg)
+                break
+            except Exception as e:
+                self.logger.info(e)
+            finally:
+                pass
+        else:
+            info = None
+
+        return info
+
+    def get_bonding_info(self, bond_port, info_types):
+        '''
+        Get the specified port information by its output message format
+        '''
+        info_set = {
+            'mode':          ["Bonding mode: ", "\d*"],
+            'balance_policy':["Balance Xmit Policy: ", "\S+"],
+            'slaves':        [["Slaves \(\d\): \[", "\d*( \d*)*"],
+                              ["Slaves: \[", "\d*( \d*)*"]],
+            'active_slaves': [["Active Slaves \(\d\): \[", "\d*( \d*)*"],
+                              ["Acitve Slaves: \[", "\d*( \d*)*"]],
+            'primary':       ["Primary: \[", "\d*"]}
+        # get all config information
+        config_content = self.dut.send_expect(
+                                    "show bonding config %d" % bond_port,
+                                    "testpmd> ")
+        if isinstance(info_types, (list, tuple)):
+            query_values = []
+            for info_type in info_types:
+                if info_type in info_set.keys():
+                    find_value = self.get_info_from_bond_config(
+                                                        config_content,
+                                                        info_set[info_type])
+                    if info_type in ['active_slaves', 'slaves']:
+                        find_value = [value for value in find_value.split(' ')
+                                            if value]
+                else:
+                    find_value = None
+                query_values.append(find_value)
+            return query_values
+        else:
+            info_type = info_types
+            if info_type in info_set.keys():
+                find_value = self.get_info_from_bond_config(
+                                                        config_content,
+                                                        info_set[info_type])
+                if info_type in ['active_slaves', 'slaves']:
+                    find_value = [value for value in find_value.split(' ') 
+                                        if value]
+                return find_value
+            else:
+                return None
+
+    def get_active_slaves(self, primary_slave, bond_port):
+        self.config_tester_port_by_number(primary_slave, "down")
+        primary_port = self.get_bonding_info(bond_port, 'primary')
+        active_slaves = self.get_bonding_info(bond_port, 'active_slaves')
+        if active_slaves and primary_port in active_slaves:
+            active_slaves.remove(primary_port)
+        else:
+            msg = "primary port <{0}> isn't in active slaves list".format(
+                                                                primary_port)
+            self.logger.error(msg)
+            raise VerifyFailure(msg)
+
+        return primary_port, active_slaves
+
+    def create_bonded_device(self, mode=0, socket=0, verify_detail=False):
+        """
+        Create a bonding device with the parameters you specified.
+        """
+        cmd = "create bonded device %d %d" % (mode, socket)
+        out = self.dut.send_expect(cmd, "testpmd> ")
+        self.verify("Created new bonded device" in out,
+                    "Create bonded device on mode [%d] socket [%d] failed" % (
+                                                                    mode,
+                                                                    socket))
+        bond_port = self.get_value_from_str([
+            "Created new bonded device net_bond_testpmd_[\d] on \(port ",
+            "Created new bonded device net_bonding_testpmd_[\d] on \(port "],
+            "\d+", out)
+        bond_port = int(bond_port)
+
+        if verify_detail:
+            out = self.dut.send_expect("show bonding config %d" % bond_port,
+                                       "testpmd> ")
+            self.verify("Bonding mode: %d" % mode in out,
+                        "Bonding mode display error when create bonded device")
+            self.verify("Slaves: []" in out,
+                        "Slaves display error when create bonded device")
+            self.verify("Active Slaves: []" in out,
+                    "Active Slaves display error when create bonded device")
+            self.verify("Primary: []" not in out,
+                        "Primary display error when create bonded device")
+
+            out = self.dut.send_expect("show port info %d" % bond_port,
+                                       "testpmd> ")
+            self.verify("Connect to socket: %d" % socket in out,
+                        "Bonding port connect socket error")
+            self.verify("Link status: down" in out,
+                        "Bonding port default link status error")
+            self.verify("Link speed: 0 Mbps" in out,
+                        "Bonding port default link speed error")
+
+        return bond_port
+
+    def start_ports(self, port='all'):
+        """
+        Start a port which the testpmd can see.
+        """
+        timeout = 12 if port == 'all' else 5
+        # expect a bare space first so lsc event messages do not interfere
+        # with normal prompt detection
+        self.dut.send_expect("port start %s" % str(port), " ", timeout)
+        self.dut.send_expect(" ", "testpmd> ", timeout)
+
+    def add_slave(self, bond_port, invert_verify=False,
+                  expected_str='', *slave_ports):
+        """
+        Add the ports into the bonding device as slaves.
+        """
+        if len(slave_ports) <= 0:
+            utils.RED("No port exist when add slave to bonded device")
+        for slave_id in slave_ports:
+            out = self.dut.send_expect(
+                        "add bonding slave %d %d" % (slave_id, bond_port),
+                        "testpmd> ")
+            if expected_str:
+                self.verify(expected_str in out,
+                            "message <{0}> is missiong".format(expected_str))
+            slaves = self.get_bonding_info(bond_port, 'slaves')
+            if not invert_verify:
+                self.verify(str(slave_id) in slaves,
+                            "Add port as bonding slave failed")
+            else:
+                self.verify(str(slave_id) not in slaves,
+                        "Add port as bonding slave successfully,should fail")
+
+    def remove_slaves(self, bond_port, invert_verify=False, *slave_port):
+        """
+        Remove the specified slave port from the bonding device.
+        """
+        if len(slave_port) <= 0:
+            utils.RED("No port exist when remove slave from bonded device")
+        for slave_id in slave_port:
+            cmd = "remove bonding slave %d %d" % (int(slave_id), bond_port)
+            self.dut.send_expect(cmd, "testpmd> ")
+            slaves = self.get_bonding_info(bond_port, 'slaves')
+            if not invert_verify:
+                self.verify(str(slave_id) not in slaves,
+                            "Remove slave to fail from bonding device")
+            else:
+                self.verify(str(slave_id) in slaves,
+                            ("Remove slave successfully from "
+                             "bonding device,should be failed"))
+
+    def remove_all_slaves(self, bond_port):
+        """
+        Remove all slaves of the specified bonded device.
+        """
+        all_slaves = self.get_bonding_info(bond_port, 'slaves')
+        all_slaves = all_slaves.split()
+        if len(all_slaves) == 0:
+            pass
+        else:
+            self.remove_slaves(bond_port, False, *all_slaves)
+
+    def set_primary_slave(self, bond_port, slave_port, invert_verify=False):
+        """
+        Set the primary slave for the bonding device.
+        """
+        cmd = "set bonding primary %d %d" % (slave_port, bond_port)
+        self.dut.send_expect(cmd, "testpmd> ")
+        out = self.get_bonding_info(bond_port, 'primary')
+        if not invert_verify:
+            self.verify(str(slave_port) in out,
+                        "Set bonding primary port failed")
+        else:
+            self.verify(str(slave_port) not in out,
+                        ("Set bonding primary port successfully, "
+                         "should not success"))
+
+    def set_bonding_mode(self, bond_port, mode):
+        """
+        Set the mode for the bonding device.
+        """
+        cmd = "set bonding mode %d %d" % (mode, bond_port)
+        self.dut.send_expect(cmd, "testpmd> ")
+        mode_value = self.get_bonding_info(bond_port, 'mode')
+        self.verify(str(mode) in mode_value, "Set bonding mode failed")
+
+    def set_bonding_mac(self, bond_port, mac):
+        """
+        Set the MAC for the bonding device.
+        """
+        cmd = "set bonding mac_addr %s %s" % (bond_port, mac)
+        self.dut.send_expect(cmd, "testpmd> ")
+        new_mac = self.get_port_mac(bond_port)
+        self.verify(new_mac == mac, "Set bonding mac failed")
+
+    def set_bonding_balance_policy(self, bond_port, policy):
+        """
+        Set the balance transmit policy for the bonding device.
+        """
+        cmd = "set bonding balance_xmit_policy %d %s" % (bond_port, policy)
+        self.dut.send_expect(cmd, "testpmd> ")
+        new_policy = self.get_bonding_info(bond_port, 'balance_policy')
+        policy = "BALANCE_XMIT_POLICY_LAYER" + policy.lstrip('l')
+        self.verify(new_policy == policy, "Set bonding balance policy failed")
+
+    def check_stacked_device_queue_config(self, *devices):
+        '''
+        # check if master bonding/each slaves queue configuration is the same.
+        '''
+        master = self.get_port_info(devices[0], 'queue_config')
+        for port_id in devices[1:]:
+            config = self.get_port_info(port_id, 'queue_config')
+            if cmp(config, master) == 0:
+                continue
+            msg = ("slave bonded port [{0}] is "
+                   "different to top bonded port [{1}]").format(port_id,
+                                                                devices[0])
+            raise VerifyFailure('check_stacked_device_queue_config ' + msg)
+
+    def set_stacked_bonded(self, slaveGrpOne, slaveGrpTwo, bond_mode):
+        '''
+        set stacked bonded mode for the specified bonding mode
+        '''
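+        # returns [bond_port_1, bond_port_2, bond_port_master]; with two
+        # physical slaves bound, testpmd typically numbers these 2, 3 and 4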
+        specified_socket = SOCKET_0
+        # create bonded device 1, add slaves in it
+        bond_port_1 = self.create_bonded_device(bond_mode, specified_socket)
+        self.add_slave(bond_port_1, False, '', *slaveGrpOne)
+        # create bonded device 2, add slaves in it
+        bond_port_2 = self.create_bonded_device(bond_mode, specified_socket)
+        self.add_slave(bond_port_2, False, '', *slaveGrpTwo)
+        # create bonded device 3, which is the top bonded device
+        bond_port_master = self.create_bonded_device(bond_mode,
+                                                     specified_socket)
+        # add the first bonded device as a slave of the master bonded device
+        # and check its config status
+        self.add_slave(bond_port_master, False, '', *[bond_port_1])
+        # add the second bonded device as a slave of the master bonded device
+        # and check its config status
+        self.add_slave(bond_port_master, False, '', *[bond_port_2])
+        # check if master bonding/each slaves queue configuration is the same.
+        self.check_stacked_device_queue_config(*[bond_port_master,
+                                                 bond_port_1, bond_port_2])
+        # start all ports
+        #self.start_ports()
+
+        return [bond_port_1, bond_port_2, bond_port_master]
+
+    def set_third_stacked_bonded(self, bond_port, bond_mode):
+        '''
+        set third level stacked bonded to check stacked limitation
+        '''
+        specified_socket = 0
+        third_bond_port = self.create_bonded_device(bond_mode, 
+                                                    specified_socket)
+        expected_str = 'Too many levels of bonding'
+        try:
+            self.add_slave(third_bond_port, True, expected_str, *[bond_port])
+        except Exception as e:
+            self.logger.warning(e)
+        finally:
+            pass
+
+    def duplicate_add_stacked_bonded(self, bond_port_1, bond_port_2,
+                                     bond_port_master):
+        '''
+        check if adding stacked bonded duplicately is forbidden
+        '''
+        # check exception process
+        expected_str = 'Slave device is already a slave of a bonded device'
+        # try to add the first bonded device to the master again and check
+        # the expected error message
+        self.add_slave(bond_port_master, False, expected_str, *[bond_port_1])
+        # try to add the second bonded device to the master again and check
+        # the expected error message
+        self.add_slave(bond_port_master, False, expected_str, *[bond_port_2])
+    
+    def preset_stacked_bonded(self, slaveGrpOne, slaveGrpTwo, bond_mode):
+        bond_port_1, bond_port_2, bond_port_master = self.set_stacked_bonded(
+                                                                slaveGrpOne,
+                                                                slaveGrpTwo,
+                                                                bond_mode)
+        # set test pmd
+        cmds = [["port stop all", '']]
+        portList = [slaveGrpOne[0],
+                    slaveGrpTwo[0],
+                    bond_port_1,
+                    bond_port_2,
+                    bond_port_master]
+        cmds += [["set portlist " + ",".join([str(port)
+                                              for port in portList]), '']]
+        cmds +=[["port start all", ' ', 15],
+                ["start", '']]
+        self.execute_testpmd_cmd(cmds)
+        time.sleep(5)
+
+        return bond_port_1, bond_port_2, bond_port_master
+
+    def preset_normal_bonded(self, bond_mode):
+        '''
+        set up the stacked bonded devices and start forwarding in testpmd
+        '''
+        slaveGrpOne = self.slaveGrpOne
+        slaveGrpTwo = self.slaveGrpTwo
+        
+        bond_port_1, bond_port_2, bond_port_master = self.set_stacked_bonded(
+                                                                 slaveGrpOne,
+                                                                 slaveGrpTwo,
+                                                                 bond_mode)
+        # set test pmd
+        cmds = [] 
+        cmds += [["port stop all", '']]
+        portList = [slaveGrpOne[0],
+                    slaveGrpTwo[0],
+                    bond_port_1,
+                    bond_port_2,
+                    bond_port_master]
+        cmds += [["set portlist " + ",".join([str(port) for port in portList]),
+                  '']]
+        cmds +=[["port start all", ' ', 15],
+                ["start", '']]
+        self.execute_testpmd_cmd(cmds)
+        time.sleep(5)
+
+    def check_packet_transmission(self, unbound_port, bond_port_master,
+                                  **slaves):
+        pkt_count = 100
+        #  send 100 packet to bonded device 2 and check bond 2/4 statistics
+        pkt_now, summary = self.send_default_packet_to_slave(
+                                                unbound_port, bond_port_master,
+                                                pkt_count=pkt_count, **slaves)
+
+        #  send 100 packet to bonded device 3 and check bond 3/4 statistics
+        pkt_now, summary = self.send_default_packet_to_slave(
+                                                unbound_port, bond_port_master,
+                                                pkt_count=pkt_count, **slaves)
+
+    def convert_mac_str_into_int(self, mac_str):
+        """
+        Translate the MAC type from the string into the int.
+        """
+        mac_hex = '0x'
+        for mac_part in mac_str.split(':'):
+            mac_hex += mac_part
+        return int(mac_hex, 16)
+
+    def mac_hash(self, dst_mac, src_mac):
+        """
+        Generate the hash value with the source and destination MAC.
+        """
+        dst_port_mac = self.convert_mac_str_into_int(dst_mac)
+        src_port_mac = self.convert_mac_str_into_int(src_mac)
+        src_xor_dest = dst_port_mac ^ src_port_mac
+        xor_value_1 = src_xor_dest >> 32
+        xor_value_2 = (src_xor_dest >> 16) ^ (xor_value_1 << 16)
+        xor_value_3 = src_xor_dest ^ (xor_value_1 << 32) ^ (xor_value_2 << 16)
+        return htons(xor_value_1 ^ xor_value_2 ^ xor_value_3)
+
+    def translate_ip_str_into_int(self, ip_str):
+        """
+        Translate the IP type from the string into the int.
+        """
+        ip_part_list = ip_str.split('.')
+        ip_part_list.reverse()
+        num = 0
+        ip_int = 0
+        for ip_part in ip_part_list:
+            ip_part_int = int(ip_part) << (num * 8)
+            ip_int += ip_part_int
+            num += 1
+        return ip_int
+
+    def ipv4_hash(self, dst_ip, src_ip):
+        """
+        Generate the hash value with the source and destination IP.
+        """
+        dst_ip_int = self.translate_ip_str_into_int(dst_ip)
+        src_ip_int = self.translate_ip_str_into_int(src_ip)
+        return htonl(dst_ip_int ^ src_ip_int)
+
+    def udp_hash(self, dst_port, src_port):
+        """
+        Generate the hash value with the source and destination port.
+        """
+        return htons(dst_port ^ src_port)
+
+    def policy_and_slave_hash(self, policy, **slaves):
+        """
+        Generate the hash value by the policy and active slave number.
+        : policy:'L2' , 'L23' or 'L34'
+        """
+        source = self.get_static_ip_configs()
+
+        dst_mac, dst_ip, dst_port = self.dst_pkt_configs
+
+        hash_values = []
+        if len(slaves['active']) == 0:
+            return hash_values
+
+        for src_mac, src_ip, src_port in source:
+            if policy == "L2":
+                hash_value = self.mac_hash(dst_mac, src_mac)
+            elif policy == "L23":
+                hash_value = self.mac_hash(dst_mac, src_mac) ^ \
+                             self.ipv4_hash(dst_ip, src_ip)
+            else:
+                hash_value = self.ipv4_hash(dst_ip, src_ip) ^ \
+                             self.udp_hash(dst_port, src_port)
+
+            if policy in ("L23", "L34"):
+                hash_value ^= hash_value >> 16
+            hash_value ^= hash_value >> 8
+            hash_value = hash_value % len(slaves['active'])
+            hash_values.append(hash_value)
+
+        return hash_values
+
+    def slave_map_hash(self, port, order_ports):
+        """
+        Find the hash value by the given slave port id.
+        """
+        if len(order_ports) == 0:
+            return None
+        else:
+            #order_ports = order_ports.split()
+            return order_ports.index(str(port))
+
+    def verify_active_backup_rx(self, unbound_port, bond_port, **slaves):
+        """
+        Verify the RX packets are all correct in the active-backup mode.
+        Parameters:
+        : unbound_port: the unbonded port id
+        : bond_port: the bonded device port id
+        """
+        pkt_count = 100
+        pkt_now = {}
+        
+        self.logger.info('verify_active_backup_rx')
+        slave_num = len(slaves['active'])
+        active_flag = 1 if slave_num != 0 else 0
+        pkt_now, summary = self.send_default_packet_to_slave(
+                                                         unbound_port,
+                                                         bond_port,
+                                                         pkt_count=pkt_count,
+                                                       **slaves)
+        results = []
+        try:
+            self.verify(pkt_now[bond_port][0] == pkt_count * slave_num,
+                        "Not correct RX pkt on bond port in mode 1")
+        except Exception as e:
+            results.append(e)
+        finally:
+            pass
+
+        try:
+            if unbound_port:
+                self.verify(
+                        pkt_now[unbound_port][0] == pkt_count * active_flag,
+                        "Not correct TX pkt on unbound port in mode 1")
+        except Exception as e:
+            results.append(e)
+        finally:
+            pass
+
+        try:
+            # check inactive slaves statistics
+            for slave in slaves['inactive']:
+                self.verify(pkt_now[slave][0] == 0,
+                            "Not correct RX pkt on inactive port in mode 1")
+        except Exception as e:
+            results.append(e)
+        finally:
+            pass
+
+        try:
+            # check active slaves statistics
+            for slave in slaves['active']:
+                self.verify(pkt_now[slave][0] == pkt_count, 
+                            "Not correct RX pkt on active port in mode 1")
+        except Exception as e:
+            results.append(e)
+        finally:
+            pass
+
+        if results:
+            for item in results:
+                self.logger.error(item)
+            raise VerifyFailure("verify_active_backup_rx exception")
+
+        return pkt_now, summary
+
+    def verify_stacked_active_backup_rx(self, master_bonded, slave_bonded,
+                                        **slaves):
+        """
+        Verify the RX packets are all correct in the active-backup mode with
+        stacked bonded devices. The second level bonded device's statistics
+        should be the sum of the slave bonded devices' statistics.
+        """
+        pkt_count = 100
+        pkt_now = {}
+
+        self.logger.info('verify_stacked_active_backup_rx')
+        slave_num = len(slaves['active'])
+        active_flag = 1 if slave_num != 0 else 0
+        pkt_now, summary = self.send_default_packet_to_slave(
+                                                         None,
+                                                         slave_bonded,
+                                                         pkt_count=pkt_count,
+                                                       **slaves)
+        results = []
+        try:
+            self.verify(pkt_now[slave_bonded][0] == pkt_count * slave_num, 
+                    "Not correct RX pkt on bond port in mode 1")
+        except Exception as e:
+            results.append(e)
+        finally:
+            pass
+
+        try:
+            if master_bonded:
+                self.verify(
+                    pkt_now[master_bonded][0] == pkt_count * active_flag,
+                    "Not correct TX pkt on unbound port in mode 1")
+            pass
+        except Exception as e:
+            results.append(e)
+        finally:
+            pass
+
+        try:
+            # check inactive slaves statistics
+            for slave in slaves['inactive']:
+                self.verify(pkt_now[slave][0] == 0, 
+                            "Not correct RX pkt on inactive port in mode 1")
+        except Exception as e:
+            results.append(e)
+        finally:
+            pass
+
+        try:
+            # check active slaves statistics
+            for slave in slaves['active']:
+                self.verify(pkt_now[slave][0] == pkt_count, 
+                            "Not correct RX pkt on active port in mode 1")
+        except Exception as e:
+            results.append(e)
+        finally:
+            pass
+
+        if results:
+            for item in results:
+                self.logger.error(item)
+            raise VerifyFailure("verify_stacked_active_backup_rx exception")
+
+        return pkt_now, summary
+
+    def verify_xor_rx(self, unbound_port, bond_port, **slaves):
+        """
+        Verify that the packets are received correctly in the XOR mode.
+        Parameters:
+        : unbound_port: the unbonded port id
+        : bond_port: the bonded device port id
+        """
+        pkt_count = 100
+        pkt_now = {}
+
+        pkt_now, summary = self.send_default_packet_to_slave(
+                                                         unbound_port,
+                                                         bond_port,
+                                                         pkt_count=pkt_count,
+                                                         **slaves)
+
+        for slave in slaves['active']:
+            self.verify(pkt_now[slave][0] == pkt_count,
+                        "Active slave has wrong RX packet count in XOR mode")
+        for slave in slaves['inactive']:
+            self.verify(pkt_now[slave][0] == 0,
+                        "Inactive slave has wrong RX packet count in XOR mode")
+
+        if unbound_port:
+            self.verify(
+                pkt_now[unbound_port][0] == pkt_count * len(slaves['active']),
+                "Unbonded device have error TX packet in XOR")
+
+    def verify_xor_tx(self, unbound_port, bond_port, policy,
+                      vlan_tag=False, **slaves):
+        """
+        Verify that transmitting the packets correctly in the XOR mode.
+        Parameters:
+        : unbound_port: the unbonded port id
+        : bond_port: the bonded device port id
+        : policy: 'L2', 'L23' or 'L34'
+        : vlan_tag: False or True
+        """
+        pkt_count = 100
+        pkt_now = {}
+        try:
+            pkt_now, summary = self.send_customized_packet_to_unbound_port(
+                                                        unbound_port,
+                                                        bond_port,
+                                                        policy,
+                                                        vlan_tag=vlan_tag,
+                                                        pkt_count=pkt_count,
+                                                        **slaves)
+        except Exception as e:
+            self.logger.error(traceback.format_exc())
+        finally:
+            pass
+
+        hash_values = self.policy_and_slave_hash(policy, **slaves)
+        order_ports = self.get_bonding_info(bond_port, 'active_slaves')
+        for slave in slaves['active']:
+            slave_map_hash = self.slave_map_hash(slave, order_ports)
+            self.verify(
+            pkt_now[slave][0] == pkt_count * hash_values.count(slave_map_hash),
+            "XOR load balance transmit error on the link up port")
+        for slave in slaves['inactive']:
+            self.verify(
+            pkt_now[slave][0] == 0,
+            "XOR load balance transmit error on the link down port")
+
+    @property
+    def driver(self):
+        return self.kdriver
+
+    #
+    # Test cases.
+    #
+    def set_up_all(self):
+        """
+        Run before each test suite
+        """
+        self.verify('bsdapp' not in self.target,
+                    "Bonding is not supported on FreeBSD")
+        self.dut_ports = self.dut.get_ports()
+
+        self.port_mask = utils.create_mask(self.dut_ports)
+
+        #self.verify(len(self.dut_ports) >= 4, "Insufficient ports")
+        self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
+
+        self.all_cores_mask = utils.create_mask(self.dut.get_core_list("all"))
+
+        # separate ports into two group
+        sep_index = len(self.dut_ports)/2
+        self.slaveGrpOne = self.dut_ports[:sep_index]
+        self.slaveGrpTwo = self.dut_ports[sep_index:]
+        # testpmd initialization
+        self.testpmd = PmdOutput(self.dut)
+        self.testpmd_status = 'close'
+        self.tester_bond = "bond0"
+        self.dst_pkt_configs = []
+
+    def tear_down_all(self):
+        """
+        Run after each test suite.
+        """
+        pass
+
+    def set_up(self):
+        """
+        Run before each test case.
+        """
+        pass
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+
+    def backup_check_stacked_bonded_traffic(self):
+        '''
+        check stacked bonded packet reception in active-backup mode
+        '''
+        self.logger.info("begin checking stacked bonded transmission")
+        self.start_testpmd()
+        slaveGrpOne = self.slaveGrpOne
+        slaveGrpTwo = self.slaveGrpTwo
+        bond_port_1, bond_port_2, bond_port_master = \
+                                                self.preset_stacked_bonded(
+                                                            slaveGrpOne,
+                                                            slaveGrpTwo,
+                                                            MODE_ACTIVE_BACKUP)
+        results = []
+        # check first bonded device
+        try:
+            self.logger.info('check first bonded device')
+            slaves = {}
+            slaves['active'] = slaveGrpOne
+            slaves['inactive'] = []
+            self.verify_stacked_active_backup_rx(None, bond_port_1, **slaves)
+        except Exception as e:
+            results.append(e)
+        finally:
+            pass
+
+        try:
+            # check second bonded device
+            self.logger.info('check second bonded device')
+            slaves = {}
+            slaves['active'] = slaveGrpTwo
+            slaves['inactive'] = []
+            self.verify_stacked_active_backup_rx(None, bond_port_2, **slaves)
+        except Exception as e:
+            results.append(e)
+        finally:
+            pass
+
+        pkt_now, summary = 0, 0
+        try:
+            self.logger.info('check master bonded device')
+            # check top bonded device
+            slaves = {}
+            slaves['active'] = slaveGrpOne + slaveGrpTwo
+            slaves['inactive'] = []
+            pkt_now, summary = self.verify_stacked_active_backup_rx(None,
+                                                    bond_port_master, **slaves)
+        except Exception as e:
+            results.append(e)
+        finally:
+            pass
+
+        self.close_testpmd()
+        if results:
+            for item in results:
+                self.logger.error(item)
+            msg = "backup_check_stacked_bonded_traffic exception"
+            raise VerifyFailure(msg)
+        else:
+            return pkt_now, summary
+
+    def backup_check_stacked_one_slave_down(self):
+        self.logger.info("begin checking backup stacked one slave down")
+        self.start_testpmd()
+        #-------------------------------
+        # boot up testpmd
+        mode = MODE_ACTIVE_BACKUP
+        slaveGrpOne = self.slaveGrpOne
+        slaveGrpTwo = self.slaveGrpTwo
+        bond_port_1, bond_port_2, bond_port_master = \
+                                        self.preset_stacked_bonded(slaveGrpOne,
+                                                                   slaveGrpTwo,
+                                                                   mode)
+        results = []
+        # check packet transmission when one slave of first level bonded device
+        #-------------------------------
+        pkt_now, summary = 0, 0
+        try:
+            # shut down one slave port of the first level bonded device
+            primary_slave = slaveGrpOne[0]
+            self.logger.info("down one slave port of bond_port_1")
+            self.config_tester_port_by_number(primary_slave, "down")
+            primary_port = self.get_bonding_info(bond_port_1, 'primary')
+            msg = "down status slave hasn't been remove from active slaves"
+            self.verify(primary_port != primary_slave, msg)
+            #------------------------------------------------------
+            slaves = {}
+            primary_port, active_slaves = self.get_active_slaves(primary_slave,
+                                                                 bond_port_1)
+            slaves['active'] = [primary_port]
+            slaves['active'].extend(active_slaves)
+            slaves['inactive'] = [self.dut_ports[primary_slave]]
+            pkt_now, summary = self.verify_stacked_active_backup_rx(
+                                                            None, 
+                                                            bond_port_master,
+                                                            **slaves)
+        except Exception as e:
+            results.append(e)
+        finally:
+            self.config_tester_port_by_number(primary_slave, "up")
+
+        self.close_testpmd()
+        if results:
+            for item in results:
+                self.logger.error(item)
+            raise VerifyFailure("backup_check_stacked_one_slave_down exception")
+
+    def check_stacked_xor_rx(self):
+        results = []
+        self.logger.info("begin checking xor stacked transmission")
+        self.start_testpmd()
+        #-------------------------------------
+        #-------------------------------
+        # boot up testpmd
+        try:
+            mode = MODE_XOR_BALANCE
+            slaveGrpOne = self.slaveGrpOne
+            slaveGrpTwo = self.slaveGrpTwo
+            bond_port_1, bond_port_2, bond_port_master = \
+                self.preset_stacked_bonded( slaveGrpOne, slaveGrpTwo, mode)
+            slaves = {}
+            slaves['active'] = slaveGrpOne + slaveGrpTwo
+            slaves['inactive'] = []
+
+            self.verify_xor_rx(None, bond_port_master, **slaves)
+        except Exception as e:
+            results.append(e)
+            self.logger.error(traceback.format_exc())
+        finally:
+            self.close_testpmd()
+        #-------------------------------------
+        if results:
+            for item in results:
+                self.logger.error(item)
+            raise VerifyFailure("check_stacked_xor_rx exception !")
+
+    def xor_check_stacked_rx_one_slave_down(self):
+        """
+        Verify that packets are received correctly in the XOR mode
+        when any one slave of the bonded device is link down.
+        """
+        results = []
+        self.logger.info("begin xor_check_stacked_rx_one_slave_down")
+        #-------------------------------
+        # boot up testpmd
+        self.start_testpmd()
+        try:
+            #-------------------------------
+            mode = MODE_XOR_BALANCE
+            slaveGrpOne = self.slaveGrpOne
+            #slaveGrpTwo = self.slaveGrpTwo[:-1]
+            slaveGrpTwo = self.slaveGrpTwo
+            bond_port_1, bond_port_2, bond_port_master = \
+                    self.preset_stacked_bonded( slaveGrpOne,
+                                                slaveGrpTwo,
+                                                mode)
+            primary_slave = slaveGrpOne[0]
+            self.logger.info("down one slave port of bond_port_1")
+            self.config_tester_port_by_number(primary_slave, "down")
+            primary_port = self.get_bonding_info(bond_port_1, 'primary')
+            msg = "down status slave hasn't been remove from active slaves"
+            self.verify(primary_port != primary_slave, msg)
+            #-------------------------
+            # add first bonded 
+            primary_port, active_slaves = self.get_active_slaves(primary_slave,
+                                                                 bond_port_1)
+            slaves = {}
+            slaves['active'] = [primary_port]
+            slaves['active'].extend(active_slaves)
+            # add second bonded 
+            primary_slave = slaveGrpOne[0]
+            primary_port_2, active_slaves_2 = self.get_active_slaves(
+                                                            primary_slave,
+                                                            bond_port_2)
+            slaves['active'] += [primary_port_2]
+            slaves['active'] += active_slaves_2
+            if primary_slave in slaves['active']:
+                slaves['active'].remove(primary_slave)
+                msg = "{0} should not be in active slaves list".format(
+                                                                primary_slave)
+                self.logger.warning(msg)
+            
+            slaves['inactive'] = []#[primary_slave]
+#             self.verify_xor_tx(primary_slave,
+#                                bond_port_master,
+#                                "L2", False,
+#                                **slaves)
+            self.verify_xor_rx(None, bond_port_master, **slaves)
+        except Exception as e:
+            results.append(e)
+            self.logger.error(traceback.format_exc())
+        finally:
+            self.config_tester_port_by_number(primary_slave, "up")
+            self.close_testpmd()
+
+        if results:
+            for item in results:
+                self.logger.error(item)
+            raise VerifyFailure("xor_check_stacked_rx_one_slave_down failed")
+        else:
+            #return pkt_now, summary
+            pass
+
+    def test_basic_behav(self):
+        '''
+        Allow a bonded device to be added to another bonded device.
+        There are two limitations when creating a master bonding:
+
+         - the total depth of nesting is limited to two levels,
+         - 802.3ad mode is not supported if one or more slaves is a bond device.
+
+        Note: 802.3ad mode cannot be used on a stacked bond device.
+
+        This case is aimed at testing basic stacked bonded device commands;
+        see the illustrative testpmd sequence in the comments below.
+        '''
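+        # Illustrative testpmd sequence driven by set_stacked_bonded()
+        # (port ids are examples only):
+        #   create bonded device <mode> 0           -> bond A, bond B, master
+        #   add bonding slave <phys port> <bond A>   (likewise for bond B)
+        #   add bonding slave <bond A> <master>
+        #   add bonding slave <bond B> <master>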
+        #------------------------------------------------
+        # check stacked bonded status, except mode 4 (802.3ad)
+        mode_list =[MODE_ROUND_ROBIN,
+                    MODE_ACTIVE_BACKUP,
+                    MODE_XOR_BALANCE,
+                    MODE_BROADCAST,
+                    MODE_TLB_BALANCE,
+                    MODE_ALB_BALANCE]
+        slaveGrpOne = self.slaveGrpOne
+        slaveGrpTwo = self.slaveGrpTwo
+        check_result = []
+        for bond_mode in mode_list:
+            self.logger.info("begin mode <{0}> checking".format(bond_mode))
+            # boot up testpmd
+            self.start_testpmd()
+            try:
+                self.logger.info("check bonding mode <{0}>".format(bond_mode))
+                # set up stacked bonded status
+                bond_port_1, bond_port_2, bond_port_master = \
+                    self.set_stacked_bonded(slaveGrpOne, slaveGrpTwo,
+                                            bond_mode)
+                # check duplicate add slave
+                self.duplicate_add_stacked_bonded(bond_port_1, bond_port_2,
+                                                  bond_port_master)
+                # check stacked limitation
+                self.set_third_stacked_bonded(bond_port_master, bond_mode)
+                # quit testpmd, it is not supported to reset testpmd
+                self.logger.info("mode <{0}> done !".format(bond_mode))
+                check_result.append([bond_mode, None])
+            except Exception as e:
+                check_result.append([bond_mode, e])
+                self.logger.error(e)
+            finally:
+                self.close_testpmd()
+                time.sleep(5)
+        #------------
+        # 802.3ad mode is not supported 
+        # if one or more slaves is a bond device
+        # so it should raise an exception
+        msg = ''
+        try:
+            # boot up testpmd
+            self.start_testpmd()
+            # set up stacked bonded status
+            self.set_stacked_bonded(slaveGrpOne, slaveGrpTwo, MODE_LACP)
+            # quit testpmd, it is not supported to reset testpmd
+            msg = ("802.3ad mode hasn't been forbidden to "
+                   "use stacked bonded setting")
+            check_result.append([MODE_LACP, msg])
+        except Exception as e:
+            check_result.append([MODE_LACP, None])
+        finally:
+            self.close_testpmd()
+
+        exception_count = 0
+        for bond_mode, e in check_result:
+            msg = "mode <{0}>".format(bond_mode)
+            if e:
+                self.logger.info(msg)
+                self.logger.error(e)
+                exception_count += 1
+            else:
+                self.logger.info(msg + ' done !')
+        # if some checking item is failed, raise exception
+        if exception_count:
+            raise VerifyFailure('some test items failed')
+        else:
+            self.logger.info('all test items have done !')
+        self.close_testpmd()
+
+    def test_mode_backup_rx(self):
+        """
+        Verify receiving and transmitting the packets correctly
+        in the active-backup mode.
+        """
+        stacked_stats = self.backup_check_stacked_bonded_traffic()
+
+    def test_mode_backup_one_slave_down(self):
+        """
+        Verify that packets are received and transmitted correctly
+        in the active-backup mode, when bringing any one slave of 
+        the bonding device link down.
+        """
+        if len(self.dut_ports) >= 4:
+            self.backup_check_stacked_one_slave_down()
+        else:
+            msg = "ports less than 2, ignore stacked one slave down check"
+            self.logger.warning(msg)
+
+    def test_mode_xor_rx(self):
+        """
+        Verify that packets are received correctly in the XOR mode.
+        """
+        self.check_stacked_xor_rx()
+
+    def test_mode_xor_rx_one_slave_down(self):
+        """
+        Verify that packets are received correctly in the XOR mode
+        when bringing any one slave of the bonding device link down.
+        """
+        if len(self.dut_ports) >= 4:
+            self.xor_check_stacked_rx_one_slave_down()
+        else:
+            msg = "ports less than 2, ignore stacked one slave down check"
+            self.logger.warning(msg)
+
-- 
1.9.3

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [dts] [PATCH V1 2/2] pmd_stacked_bonded: upload automation script
  2018-06-06  5:37 ` [dts] [PATCH V1 2/2] pmd_stacked_bonded: upload automation script yufengx.mo
@ 2019-01-21  7:17   ` Chen, Zhaoyan
  0 siblings, 0 replies; 4+ messages in thread
From: Chen, Zhaoyan @ 2019-01-21  7:17 UTC (permalink / raw)
  To: dts; +Cc: Mo, YufengX, Tu, Lijuan, Chen, Zhaoyan

Yufeng,

- Could you move the hard-coded IP/MAC/port addresses to constants and put them at the beginning of the file? See the sketch below.
- Could you extract the common functions used by the bonding tests into a common file and share them between the bonding test suites?
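
  For example (illustrative only -- the constant names below are placeholders,
  the values are the ones currently hard-coded in the script), the addresses
  could sit at module level next to the existing MODE_* constants:

      DEF_SRC_MAC  = "52:00:00:00:00:00"
      DEF_DST_IP   = "10.239.129.88"
      DEF_SRC_IP   = "10.239.129.65"
      DEF_UDP_PORT = 53
      # source flows used by the balance-policy hash checks
      S_MAC_IP_PORT = [
          ('52:00:00:00:00:00', '10.239.129.65', 61),
          ('52:00:00:00:00:01', '10.239.129.66', 62),
          ('52:00:00:00:00:02', '10.239.129.67', 63),
      ]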


Regards,
Zhaoyan Chen


> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of yufengx.mo@intel.com
> Sent: Wednesday, June 6, 2018 1:38 PM
> To: dts@dpdk.org
> Cc: Mo, YufengX <yufengx.mo@intel.com>
> Subject: [dts] [PATCH V1 2/2] pmd_stacked_bonded: upload automation script
> 
> From: yufengmx <yufengx.mo@intel.com>
> 
> 
> This automation script is for pmd stacked bonded feature.
> 
> Allow bonded devices to be stacked to allow two or more bonded devices to be
> bonded into one master bonded device
> 
> Signed-off-by: yufengmx <yufengx.mo@intel.com>
> ---
>  tests/TestSuite_pmd_stacked_bonded.py | 1593
> +++++++++++++++++++++++++++++++++
>  1 file changed, 1593 insertions(+)
>  create mode 100644 tests/TestSuite_pmd_stacked_bonded.py
> 
> diff --git a/tests/TestSuite_pmd_stacked_bonded.py
> b/tests/TestSuite_pmd_stacked_bonded.py
> new file mode 100644
> index 0000000..4596c58
> --- /dev/null
> +++ b/tests/TestSuite_pmd_stacked_bonded.py
> @@ -0,0 +1,1593 @@
> +# BSD LICENSE
> +#
> +# Copyright(c) 2010-2018 Intel Corporation. All rights reserved.
> +# All rights reserved.
> +#
> +# Redistribution and use in source and binary forms, with or without
> +# modification, are permitted provided that the following conditions
> +# are met:
> +#
> +#   * Redistributions of source code must retain the above copyright
> +#     notice, this list of conditions and the following disclaimer.
> +#   * Redistributions in binary form must reproduce the above copyright
> +#     notice, this list of conditions and the following disclaimer in
> +#     the documentation and/or other materials provided with the
> +#     distribution.
> +#   * Neither the name of Intel Corporation nor the names of its
> +#     contributors may be used to endorse or promote products derived
> +#     from this software without specific prior written permission.
> +#
> +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> CONTRIBUTORS
> +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> FOR
> +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> COPYRIGHT
> +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> INCIDENTAL,
> +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
> ANY
> +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
> USE
> +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +import traceback
> +import os
> +import time
> +import re
> +import random
> +from socket import htons, htonl
> +
> +import utils
> +from test_case import TestCase
> +from exception import TimeoutException, VerifyFailure
> +from settings import TIMEOUT
> +from pmd_output import PmdOutput
> +
> +SOCKET_0 = 0
> +SOCKET_1 = 1
> +
> +MODE_ROUND_ROBIN = 0
> +MODE_ACTIVE_BACKUP = 1
> +MODE_XOR_BALANCE = 2
> +MODE_BROADCAST = 3
> +MODE_LACP = 4
> +MODE_TLB_BALANCE = 5
> +MODE_ALB_BALANCE = 6
> +
> +FRAME_SIZE_64 = 64
> +
> +class TestBondingStacked(TestCase):
> +
> +    def get_static_ip_configs(self):
> +        S_MAC_IP_PORT = [('52:00:00:00:00:00', '10.239.129.65', 61),
> +                         ('52:00:00:00:00:01', '10.239.129.66', 62),
> +                         ('52:00:00:00:00:02', '10.239.129.67', 63)]
> +        return S_MAC_IP_PORT
> +    #
> +    # On tester platform, packet transmission
> +    #
> +    def get_stats(self, portid, flow):
> +        """
> +        get testpmd port statistic
> +        """
> +        _portid = int(portid) if isinstance(portid, (str, unicode)) else portid
> +        info = self.testpmd.get_pmd_stats(_portid)
> +        _kwd = ["-packets", "-missed", "-bytes"]
> +        kwd = map(lambda x: flow.upper() + x, _kwd)
> +        result = [int(info[item]) for item in kwd]
> +
> +        return result
> +
> +    def get_all_stats(self, unbound_port, rx_tx, bond_port, **slaves):
> +        """
> +        Get all the port stats which the testpmd can display.
> +        Parameters:
> +        : unbound_port: pmd port id
> +        : rx_tx: unbond port stat 'rx' or 'tx'
> +        : bond_port: bonding port
> +        """
> +        pkt_now = {}
> +
> +        if rx_tx == 'rx':
> +            bond_stat = 'tx'
> +        else:
> +            bond_stat = 'rx'
> +
> +        if unbound_port: # if unbound_port has not been set, ignore this
> +            pkt_now[unbound_port] = [int(_) for _ in self.get_stats(
> +                                                            unbound_port,
> +                                                            rx_tx)]
> +
> +        pkt_now[bond_port] = [int(_) for _ in self.get_stats(bond_port,
> +                                                             bond_stat)]
> +        for slave in slaves['active']:
> +            pkt_now[slave] = [int(_) for _ in self.get_stats(slave, bond_stat)]
> +        for slave in slaves['inactive']:
> +            pkt_now[slave] = [int(_) for _ in self.get_stats(slave, bond_stat)]
> +
> +        return pkt_now
> +
> +    def parse_ether_ip(self, dst_port, **ether_ip):
> +        """
> +        ether_ip:
> +            'ether':
> +                   'dst_mac':False
> +                   'src_mac':"52:00:00:00:00:00"
> +            'dot1q': 'vlan':1
> +            'ip':  'dst_ip':"10.239.129.88"
> +                   'src_ip':"10.239.129.65"
> +            'udp': 'dst_port':53
> +                   'src_port':53
> +        """
> +        ret_ether_ip = {}
> +        ether = {}
> +        dot1q = {}
> +        ip = {}
> +        udp = {}
> +
> +        try:
> +            dut_dst_port = self.dut_ports[dst_port]
> +        except Exception, e:
> +            dut_dst_port = dst_port
> +
> +        if not ether_ip.get('ether'):
> +            ether['dst_mac'] = self.dut.get_mac_address(dut_dst_port)
> +            ether['src_mac'] = "52:00:00:00:00:00"
> +        else:
> +            if not ether_ip['ether'].get('dst_mac'):
> +                ether['dst_mac'] = self.dut.get_mac_address(dut_dst_port)
> +            else:
> +                ether['dst_mac'] = ether_ip['ether']['dst_mac']
> +            if not ether_ip['ether'].get('src_mac'):
> +                ether['src_mac'] = "52:00:00:00:00:00"
> +            else:
> +                ether['src_mac'] = ether_ip["ether"]["src_mac"]
> +
> +        if not ether_ip.get('dot1q'):
> +            pass
> +        else:
> +            if not ether_ip['dot1q'].get('vlan'):
> +                dot1q['vlan'] = '1'
> +            else:
> +                dot1q['vlan'] = ether_ip['dot1q']['vlan']
> +
> +        if not ether_ip.get('ip'):
> +            ip['dst_ip'] = "10.239.129.88"
> +            ip['src_ip'] = "10.239.129.65"
> +        else:
> +            if not ether_ip['ip'].get('dst_ip'):
> +                ip['dst_ip'] = "10.239.129.88"
> +            else:
> +                ip['dst_ip'] = ether_ip['ip']['dst_ip']
> +            if not ether_ip['ip'].get('src_ip'):
> +                ip['src_ip'] = "10.239.129.65"
> +            else:
> +                ip['src_ip'] = ether_ip['ip']['src_ip']
> +
> +        if not ether_ip.get('udp'):
> +            udp['dst_port'] = 53
> +            udp['src_port'] = 53
> +        else:
> +            if not ether_ip['udp'].get('dst_port'):
> +                udp['dst_port'] = 53
> +            else:
> +                udp['dst_port'] = ether_ip['udp']['dst_port']
> +            if not ether_ip['udp'].get('src_port'):
> +                udp['src_port'] = 53
> +            else:
> +                udp['src_port'] = ether_ip['udp']['src_port']
> +
> +        ret_ether_ip['ether'] = ether
> +        ret_ether_ip['dot1q'] = dot1q
> +        ret_ether_ip['ip'] = ip
> +        ret_ether_ip['udp'] = udp
> +
> +        return ret_ether_ip
> +
> +    def config_tester_port(self, port_name, status):
> +        """
> +        Do some operations to the network interface port,
> +        such as "up" or "down".
> +        """
> +        if self.tester.get_os_type() == 'freebsd':
> +            self.tester.admin_ports(port_name, status)
> +        else:
> +            eth = self.tester.get_interface(port_name)
> +            self.tester.admin_ports_linux(eth, status)
> +        time.sleep(5)
> +
> +    def config_tester_port_by_number(self, number, status):
> +        # stop slave link by force
> +        cmds = [["port stop %d"%number, '']]
> +        self.execute_testpmd_cmd(cmds)
> +        # stop peer port on tester
> +        port_name = self.tester.get_local_port(self.dut_ports[number])
> +        self.config_tester_port( port_name, status)
> +        time.sleep(5)
> +        cur_status = self.get_port_info(number, 'link_status')
> +        self.logger.info("port {0} is [{1}]".format(number, cur_status))
> +        if cur_status != status:
> +            self.logger.warning("expected status is [{0}]".format(status))
> +
> +    def send_packet(self, dst_port, src_port=False, frame_size=FRAME_SIZE_64,
> +                    count=1, invert_verify=False, **ether_ip):
> +        """
> +        Send count packet to portid
> +        count: 1 or 2 or 3 or ... or 'MANY'
> +               if count is 'MANY', then set count=1000,
> +               send packets during 5 seconds.
> +        ether_ip:
> +            'ether': 'dst_mac':False
> +                     'src_mac':"52:00:00:00:00:00"
> +            'dot1q': 'vlan':1
> +            'ip':   'dst_ip':"10.239.129.88"
> +                    'src_ip':"10.239.129.65"
> +            'udp':  'dst_port':53
> +                    'src_port':53
> +        """
> +        during = 0
> +        loop = 0
> +
> +        try:
> +            count = int(count)
> +        except ValueError as e:
> +            if count == 'MANY':
> +                during = 5
> +                count = 1000
> +            else:
> +                raise e
> +
> +        if not src_port:
> +            gp0rx_pkts, gp0rx_err, gp0rx_bytes = \
> +                [int(_) for _ in self.get_stats(self.dut_ports[dst_port],
> +                                                "rx")]
> +            itf = self.tester.get_interface(
> +                        self.tester.get_local_port(self.dut_ports[dst_port]))
> +            os.system("ifconfig {0} up".format(itf))
> +            # temp = os.system("ifconfig {0} up".format(itf))
> +        else:
> +            gp0rx_pkts, gp0rx_err, gp0rx_bytes = \
> +                [int(_) for _ in self.get_stats(dst_port, "rx")]
> +            itf = src_port
> +
> +        time.sleep(2)
> +        ret_ether_ip = self.parse_ether_ip(dst_port, **ether_ip)
> +
> +        pktlen = frame_size - 18
> +        padding = pktlen - 20
> +
> +        start = time.time()
> +        while True:
> +            self.tester.scapy_foreground()
> +            append = self.tester.scapy_append
> +            append('nutmac="%s"' % ret_ether_ip['ether']['dst_mac'])
> +            append('srcmac="%s"' % ret_ether_ip['ether']['src_mac'])
> +
> +            if ether_ip.get('dot1q'):
> +                append('vlanvalue=%d' % ret_ether_ip['dot1q']['vlan'])
> +            append('destip="%s"' % ret_ether_ip['ip']['dst_ip'])
> +            append('srcip="%s"' % ret_ether_ip['ip']['src_ip'])
> +            append('destport=%d' % ret_ether_ip['udp']['dst_port'])
> +            append('srcport=%d' % ret_ether_ip['udp']['src_port'])
> +            if not ret_ether_ip.get('dot1q'):
> +                packet = "/".join(["Ether(dst=nutmac, src=srcmac)",
> +                                  "IP(dst=destip, src=srcip, len=%s)",
> +                                  "UDP(sport=srcport, dport=destport)",
> +                                  "Raw(load='\x50'*%s)"])
> +                cmd = 'sendp([{0}], iface="%s", count=%d)'.format(packet)
> +                append(cmd % (pktlen, padding, itf, count))
> +            else:
> +                packet = "/".join(["Ether(dst=nutmac, src=srcmac)",
> +                                   "Dot1Q(vlan=vlanvalue)",
> +                                  "IP(dst=destip, src=srcip, len=%s)",
> +                                  "UDP(sport=srcport, dport=destport)",
> +                                  "Raw(load='\x50'*%s)"])
> +                cmd = 'sendp([{0}], iface="%s", count=%d)'.format(packet)
> +                append(cmd % (pktlen, padding, itf, count))
> +
> +            self.tester.scapy_execute()
> +            loop += 1
> +
> +            now = time.time()
> +            if (now - start) >= during:
> +                break
> +        time.sleep(.5)
> +
> +        if not src_port:
> +            p0rx_pkts, p0rx_err, p0rx_bytes = \
> +               [int(_) for _ in self.get_stats(self.dut_ports[dst_port], "rx")]
> +        else:
> +            p0rx_pkts, p0rx_err, p0rx_bytes = \
> +                [int(_) for _ in self.get_stats(dst_port, "rx")]
> +
> +        p0rx_pkts -= gp0rx_pkts
> +        p0rx_bytes -= gp0rx_bytes
> +
> +        if invert_verify:
> +            LACP_MESSAGE_SIZE = 128
> +            msg = ("port <{0}> Data received by port <{1}>, "
> +                   "but should not.").format(itf, dst_port)
> +            self.verify(p0rx_pkts == 0 or
> +                        p0rx_bytes / p0rx_pkts == LACP_MESSAGE_SIZE,
> +                        msg)
> +            msg = "port {0}  <-|  |-> port {1} is ok".format(itf,
> +                                                            dst_port)
> +            self.logger.info(msg)
> +        else:
> +            msg = "port <{0}> Data not received by port <{1}>".format(itf,
> +                                                                   dst_port)
> +            self.verify(p0rx_pkts >= count * loop,
> +                        msg)
> +            msg = "port {0}  <----> port {1} is ok".format( itf, dst_port)
> +            self.logger.info(msg)
> +        return count * loop
> +
> +    def send_default_packet_to_slave(self, unbound_port, bond_port,
> +                                     pkt_count=100, **slaves):
> +        """
> +        Send packets to the slaves and calculate the slaves' RX packets
> +        and the unbound port TX packets.
> +        Parameters:
> +        : unbound_port: the unbonded port id
> +        : bond_port: the bonded device port id
> +        """
> +        pkt_orig = {}
> +        pkt_now = {}
> +        temp_count = 0
> +        summary = 0
> +        results = []
> +        #---------------------------
> +        # send to slave ports
> +        pkt_orig = self.get_all_stats(unbound_port, 'tx', bond_port, **slaves)
> +        self.logger.info("send packet to active slave ports")
> +        for slave in slaves['active']:
> +            try:
> +                temp_count = self.send_packet(self.dut_ports[int(slave)],
> +                                              False,
> +                                              FRAME_SIZE_64, pkt_count)
> +                summary += temp_count
> +            except Exception as e:
> +                results.append(e)
> +            finally:
> +                pass
> +        #---------------------------
> +        if slaves['inactive'] and False:
> +            self.logger.info("send packet to inactive slave ports")
> +            for slave in slaves['inactive']:
> +                try:
> +                    self.send_packet(self.dut_ports[int(slave)], False,
> +                                     FRAME_SIZE_64, pkt_count, True)
> +                except Exception as e:
> +                    results.append(e)
> +                finally:
> +                    pass
> +
> +        if results:
> +            for item in results:
> +                self.logger.error(item)
> +            raise VerifyFailure("send_default_packet_to_slave failed")
> +
> +        pkt_now = self.get_all_stats(unbound_port, 'tx', bond_port, **slaves)
> +
> +        for key in pkt_now:
> +            for num in [0, 1, 2]:
> +                pkt_now[key][num] -= pkt_orig[key][num]
> +
> +        return pkt_now, summary
> +
> +    def send_default_packet_to_unbound_port(self, unbound_port, bond_port,
> +                                            pkt_count=300, **slaves):
> +        """
> +        Send packets to the unbound port and calculate the unbound port RX
> +        packets and the slaves' TX packets.
> +        Parameters:
> +        : unbound_port: the unbonded port id
> +        : bond_port: the bonded device port id
> +        """
> +        pkt_orig = {}
> +        pkt_now = {}
> +        summary = 0
> +
> +        # send to unbonded device
> +        pkt_orig = self.get_all_stats(unbound_port, 'rx', bond_port, **slaves)
> +        summary = self.send_packet(unbound_port, False,
> +                                   FRAME_SIZE_64, pkt_count)
> +        pkt_now = self.get_all_stats(unbound_port, 'rx', bond_port, **slaves)
> +
> +        for key in pkt_now:
> +            for num in [0, 1, 2]:
> +                pkt_now[key][num] -= pkt_orig[key][num]
> +
> +        return pkt_now, summary
> +
> +    def send_customized_packet_to_unbound_port(self, unbound_port, bond_port,
> +                                               policy, vlan_tag=False,
> +                                               pkt_count=100, **slaves):
> +        """
> +        Send customized packets to the unbound port and collect port statistics.
> +        Parameters:
> +        : unbound_port: the unbonded port id
> +        : bond_port: the bonded device port id
> +        : policy: 'L2' , 'L23' or 'L34'
> +        : vlan_tag: False or True
> +        """
> +        pkt_orig = {}
> +        pkt_now = {}
> +        summary = 0
> +        temp_count = 0
> +
> +        # send to unbound_port
> +        pkt_orig = self.get_all_stats(unbound_port, 'rx', bond_port, **slaves)
> +        dest_mac = self.dut.get_mac_address(self.dut_ports[unbound_port])
> +        dest_ip = "10.239.129.88"
> +        dest_port = 53
> +
> +        self.dst_pkt_configs = [dest_mac, dest_ip, dest_port]
> +
> +        ether_ip = {}
> +        ether = {}
> +        ip = {}
> +        udp = {}
> +
> +        ether['dst_mac'] = False
> +        ip['dst_ip'] = dest_ip
> +        udp['dst_port'] = dest_port
> +        if vlan_tag:
> +            dot1q = {}
> +            dot1q['vlan'] = random.randint(1, 50)
> +            ether_ip['dot1q'] = dot1q
> +
> +        ether_ip['ether'] = ether
> +        ether_ip['ip'] = ip
> +        ether_ip['udp'] = udp
> +
> +        source = self.get_static_ip_configs()
> +
> +        for src_mac, src_ip, src_port in source:
> +            ether_ip['ether']['src_mac'] = src_mac
> +            ether_ip['ip']['src_ip'] = src_ip
> +            ether_ip['udp']['src_port'] = src_port
> +            temp_count = self.send_packet(unbound_port, False, FRAME_SIZE_64,
> +                                          pkt_count, False, **ether_ip)
> +            summary += temp_count
> +        pkt_now = self.get_all_stats(unbound_port, 'rx', bond_port, **slaves)
> +
> +        for key in pkt_now:
> +            for num in [0, 1, 2]:
> +                pkt_now[key][num] -= pkt_orig[key][num]
> +
> +        return pkt_now, summary
> +
> +    #
> +    # On dut, dpdk testpmd
> +    #
> +    def preset_testpmd(self, core_mask, options=''):
> +        self.testpmd.start_testpmd( core_mask, param=' '.join(options))
> +        self.execute_testpmd_cmd(self.preset_testpmd_cmds)
> +        self.preset_testpmd_cmds = list()
> +        time.sleep(1)
> +
> +    def execute_testpmd_cmd(self, cmds):
> +        if len(cmds) == 0:
> +            return
> +        for item in cmds:
> +            expected_str = item[1] or 'testpmd> '
> +            if len(item) == 3:
> +                self.testpmd.execute_cmd(item[0], expected_str, int(item[2]))
> +            else:
> +                self.testpmd.execute_cmd(item[0], expected_str)
> +        time.sleep(2)
> +
> +    def start_testpmd(self, eal_option=None):
> +        if self.testpmd_status == 'running':
> +            return
> +        # link eal option and testpmd options
> +        offload = '0x1fbf' if self.driver == 'i40e' else '0x2203f'
> +        options = ["--tx-offloads={0}".format(offload)]
> +#         options = " ".join([eal_option, options]) if eal_option else ''
> +        # boot up testpmd
> +        hw_mask = '1S/2C/1T'
> +        self.preset_testpmd_cmds = [[' ', ''], # used to resolve lsc event
> +                                    ['port stop all', '']]
> +        self.preset_testpmd(hw_mask, options)
> +        self.testpmd_status = 'running'
> +
> +    def stop_testpmd(self):
> +        time.sleep(1)
> +        testpmd_cmds =[['port stop all', ''],
> +                       ['stop', ''],]
> +        self.execute_testpmd_cmd(testpmd_cmds)
> +        time.sleep(1)
> +
> +    def close_testpmd(self):
> +        if self.testpmd_status == 'close':
> +            return
> +        self.stop_testpmd()
> +        time.sleep(1)
> +        self.testpmd.quit()
> +        self.testpmd_status = 'close'
> +    #
> +    # On dut, dpdk bonding
> +    #
> +    def get_value_from_str(self, key_str, regx_str, string):
> +        """
> +        Get some values from the given string by the regular expression.
> +        """
> +        if isinstance(key_str, (unicode, str)):
> +            pattern = r"(?<=%s)%s" % (key_str, regx_str)
> +            s = re.compile(pattern)
> +            res = s.search(string)
> +            if res is None:
> +                msg = "{0} hasn't matched anything".format(key_str)
> +                self.logger.warning(msg)
> +                return ' '
> +            else:
> +                return res.group(0)
> +        elif isinstance(key_str, (list, tuple)):
> +            for key in key_str:
> +                pattern = r"(?<=%s)%s" % (key, regx_str)
> +                s = re.compile(pattern)
> +                res = s.search(string)
> +                if res is not None:
> +                    return res.group(0)
> +            else:
> +                self.logger.warning("all key_str hasn't match anything")
> +                return ' '
> +
> +    def _get_detail_from_port_info(self, port_id, args):
> +        """
> +        Get the detail info from the output of pmd cmd
> +            'show port info <port num>'.
> +        """
> +        port = port_id
> +        key_str, regx_str = args
> +        out = self.dut.send_expect("show port info %d" % port, "testpmd> ")
> +        find_value = self.get_value_from_str(key_str, regx_str, out)
> +        return find_value
> +
> +    def get_detail_from_port_info(self, port_id, args):
> +        if isinstance(args[0], (list, tuple)):
> +            return [self._get_detail_from_port_info(port_id, sub_args)
> +                        for sub_args in args]
> +        else:
> +            return self._get_detail_from_port_info(port_id, args)
> +
> +    def get_port_info(self, port_id, info_type):
> +        '''
> +        Get the specified port information by its output message format
> +        '''
> +        info_set = {
> +        'mac':              ["MAC address: ", "([0-9A-F]{2}:){5}[0-9A-F]{2}"],
> +        'connect_socket':   ["Connect to socket: ", "\d+"],
> +        'memory_socket':    ["memory allocation on the socket: ", "\d+"],
> +        'link_status':      ["Link status: ", "\S+"],
> +        'link_speed':       ["Link speed: ", "\d+"],
> +        'link_duplex':      ["Link duplex: ", "\S+"],
> +        'promiscuous_mode': ["Promiscuous mode: ", "\S+"],
> +        'allmulticast_mode':["Allmulticast mode: ", "\S+"],
> +        'vlan_offload':     [
> +                        ["strip ", "\S+"],
> +                        ['filter', "\S+"],
> +                        ['qinq\(extend\) ', "\S+"]],
> +        'queue_config':     [
> +                        ["Max possible RX queues: ", "\d+"],
> +                        ['Max possible number of RXDs per queue: ', "\d+"],
> +                        ['Min possible number of RXDs per queue: ', "\d+"],
> +                        ["Max possible TX queues: ", "\d+"],
> +                        ['Max possible number of TXDs per queue: ', "\d+"],
> +                        ['Min possible number of TXDs per queue: ', "\d+"],]
> +        }
> +
> +        if info_type in info_set.keys():
> +            return self.get_detail_from_port_info(port_id, info_set[info_type])
> +        else:
> +            return None
> +
> +    def get_bonding_config(self, config_content, args):
> +        """
> +        Get info by executing the command "show bonding config".
> +        """
> +        key_str, regx_str = args
> +        find_value = self.get_value_from_str(key_str, regx_str, config_content)
> +        return find_value
> +
> +    def get_info_from_bond_config(self, config_content, args):
> +        """
> +        Get the requested field from the given 'show bonding config' output.
> +        """
> +        args_list = args if isinstance(args[0], (list, tuple)) else [args]
> +        for search_args in args_list:
> +            try:
> +                info = self.get_bonding_config(config_content, search_args)
> +                break
> +            except Exception as e:
> +                self.logger.info(e)
> +            finally:
> +                pass
> +        else:
> +            info = None
> +
> +        return info
> +
> +    def get_bonding_info(self, bond_port, info_types):
> +        '''
> +        Get the specified bonding information from 'show bonding config' output
> +        '''
> +        info_set = {
> +            'mode':          ["Bonding mode: ", "\d*"],
> +            'balance_policy':["Balance Xmit Policy: ", "\S+"],
> +            'slaves':        [["Slaves \(\d\): \[", "\d*( \d*)*"],
> +                              ["Slaves: \[", "\d*( \d*)*"]],
> +            'active_slaves': [["Active Slaves \(\d\): \[", "\d*( \d*)*"],
> +                              ["Acitve Slaves: \[", "\d*( \d*)*"]],
> +            'primary':       ["Primary: \[", "\d*"]}
> +        # get all config information
> +        config_content = self.dut.send_expect(
> +                                    "show bonding config %d" % bond_port,
> +                                    "testpmd> ")
> +        if isinstance(info_types, (list, tuple)):
> +            query_values = []
> +            for info_type in info_types:
> +                if info_type in info_set.keys():
> +                    find_value = self.get_info_from_bond_config(
> +                                                        config_content,
> +                                                        info_set[info_type])
> +                    if info_type in ['active_slaves', 'slaves']:
> +                        find_value = [value for value in find_value.split(' ')
> +                                            if value]
> +                else:
> +                    find_value = None
> +                query_values.append(find_value)
> +            return query_values
> +        else:
> +            info_type = info_types
> +            if info_type in info_set.keys():
> +                find_value = self.get_info_from_bond_config(
> +                                                        config_content,
> +                                                        info_set[info_type])
> +                if info_type in ['active_slaves', 'slaves']:
> +                    find_value = [value for value in find_value.split(' ')
> +                                        if value]
> +                return find_value
> +            else:
> +                return None
> +
> +    def get_active_slaves(self, primary_slave, bond_port):
> +        self.config_tester_port_by_number(primary_slave, "down")
> +        primary_port = self.get_bonding_info(bond_port, 'primary')
> +        active_slaves = self.get_bonding_info(bond_port, 'active_slaves')
> +        if active_slaves and primary_port in active_slaves:
> +            active_slaves.remove(primary_port)
> +        else:
> +            msg = "primary port <{0}> isn't in active slaves list".format(
> +                                                                primary_port)
> +            self.logger.error(msg)
> +            raise VerifyFailure(msg)
> +
> +        return primary_port, active_slaves
> +
> +    def create_bonded_device(self, mode=0, socket=0, verify_detail=False):
> +        """
> +        Create a bonding device with the parameters you specified.
> +        """
> +        cmd = "create bonded device %d %d" % (mode, socket)
> +        out = self.dut.send_expect(cmd, "testpmd> ")
> +        self.verify("Created new bonded device" in out,
> +                    "Create bonded device on mode [%d] socket [%d] failed" % (
> +                                                                    mode,
> +                                                                    socket))
> +        bond_port = self.get_value_from_str([
> +            "Created new bonded device net_bond_testpmd_[\d] on \(port ",
> +            "Created new bonded device net_bonding_testpmd_[\d] on \(port "],
> +            "\d+", out)
> +        bond_port = int(bond_port)
> +
> +        if verify_detail:
> +            out = self.dut.send_expect("show bonding config %d" % bond_port,
> +                                       "testpmd> ")
> +            self.verify("Bonding mode: %d" % mode in out,
> +                        "Bonding mode display error when create bonded device")
> +            self.verify("Slaves: []" in out,
> +                        "Slaves display error when create bonded device")
> +            self.verify("Active Slaves: []" in out,
> +                    "Active Slaves display error when create bonded device")
> +            self.verify("Primary: []" not in out,
> +                        "Primary display error when create bonded device")
> +
> +            out = self.dut.send_expect("show port info %d" % bond_port,
> +                                       "testpmd> ")
> +            self.verify("Connect to socket: %d" % socket in out,
> +                        "Bonding port connect socket error")
> +            self.verify("Link status: down" in out,
> +                        "Bonding port default link status error")
> +            self.verify("Link speed: 0 Mbps" in out,
> +                        "Bonding port default link speed error")
> +
> +        return bond_port
> +
> +    def start_ports(self, port='all'):
> +        """
> +        Start a port which the testpmd can see.
> +        """
> +        timeout = 12 if port=='all' else 5
> +        # to avoid lsc event message interfere normal status
> +        #self.dut.send_expect("port start %s" % str(port), " ", timeout)
> +        self.dut.send_expect("port start %s" % str(port), " ", timeout)
> +        self.dut.send_expect(" ", "testpmd> ", timeout)
> +
> +    def add_slave(self, bond_port, invert_verify=False,
> +                  expected_str='', *slave_ports):
> +        """
> +        Add the ports into the bonding device as slaves.
> +        """
> +        if len(slave_ports) <= 0:
> +            utils.RED("No port exist when add slave to bonded device")
> +        for slave_id in slave_ports:
> +            out = self.dut.send_expect(
> +                        "add bonding slave %d %d" % (slave_id, bond_port),
> +                        "testpmd> ")
> +            if expected_str:
> +                self.verify(expected_str in out,
> +                            "message <{0}> is missiong".format(expected_str))
> +            slaves = self.get_bonding_info(bond_port, 'slaves')
> +            if not invert_verify:
> +                self.verify(str(slave_id) in slaves,
> +                            "Add port as bonding slave failed")
> +            else:
> +                self.verify(str(slave_id) not in slaves,
> +                        "Add port as bonding slave successfully,should fail")
> +
> +    def remove_slaves(self, bond_port, invert_verify=False, *slave_port):
> +        """
> +        Remove the specified slave port from the bonding device.
> +        """
> +        if len(slave_port) <= 0:
> +            utils.RED("No port exist when remove slave from bonded device")
> +        for slave_id in slave_port:
> +            cmd = "remove bonding slave %d %d" % (int(slave_id), bond_port)
> +            self.dut.send_expect(cmd, "testpmd> ")
> +            slaves = self.get_bonding_info(bond_port, 'slaves')
> +            if not invert_verify:
> +                self.verify(str(slave_id) not in slaves,
> +                            "Remove slave to fail from bonding device")
> +            else:
> +                self.verify(str(slave_id) in slaves,
> +                            ("Remove slave successfully from "
> +                             "bonding device,should be failed"))
> +
> +    def remove_all_slaves(self, bond_port):
> +        """
> +        Remove all slaves of specified bound device.
> +        """
> +        all_slaves = self.get_bonding_info(bond_port, 'slaves')
> +        all_slaves = all_slaves.split()
> +        if len(all_slaves) == 0:
> +            pass
> +        else:
> +            self.remove_slaves(bond_port, False, *all_slaves)
> +
> +    def set_primary_slave(self, bond_port, slave_port, invert_verify=False):
> +        """
> +        Set the primary slave for the bonding device.
> +        """
> +        cmd = "set bonding primary %d %d" % (slave_port, bond_port)
> +        self.dut.send_expect(cmd, "testpmd> ")
> +        out = self.get_bonding_info(bond_port, 'primary')
> +        if not invert_verify:
> +            self.verify(str(slave_port) in out,
> +                        "Set bonding primary port failed")
> +        else:
> +            self.verify(str(slave_port) not in out,
> +                        ("Set bonding primary port succeeded, "
> +                         "should have failed"))
> +
> +    def set_bonding_mode(self, bond_port, mode):
> +        """
> +        Set the mode for the bonding device.
> +        """
> +        cmd = "set bonding mode %d %d" % (mode, bond_port)
> +        self.dut.send_expect(cmd, "testpmd> ")
> +        mode_value = self.get_bonding_info(bond_port, 'mode')
> +        self.verify(str(mode) in mode_value, "Set bonding mode failed")
> +
> +    def set_bonding_mac(self, bond_port, mac):
> +        """
> +        Set the MAC for the bonding device.
> +        """
> +        cmd = "set bonding mac_addr %s %s" % (bond_port, mac)
> +        self.dut.send_expect(cmd, "testpmd> ")
> +        new_mac = self.get_port_mac(bond_port)
> +        self.verify(new_mac == mac, "Set bonding mac failed")
> +
> +    def set_bonding_balance_policy(self, bond_port, policy):
> +        """
> +        Set the balance transmit policy for the bonding device.
> +        """
> +        cmd = "set bonding balance_xmit_policy %d %s" % (bond_port, policy)
> +        self.dut.send_expect(cmd, "testpmd> ")
> +        new_policy = self.get_bonding_info(bond_port, 'balance_policy')
> +        policy = "BALANCE_XMIT_POLICY_LAYER" + policy.lstrip('l')
> +        self.verify(new_policy == policy, "Set bonding balance policy failed")
> +
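Note: the policy argument here is the lowercase testpmd token; the lstrip('l')
above rewrites it into the name this method expects `show bonding config` to
report. A minimal usage sketch inside a test case (the mapping is taken from
the string building in this method, not independently verified against every
testpmd version):

    # 'l2'  -> 'BALANCE_XMIT_POLICY_LAYER2'
    # 'l23' -> 'BALANCE_XMIT_POLICY_LAYER23'
    # 'l34' -> 'BALANCE_XMIT_POLICY_LAYER34'
    self.set_bonding_balance_policy(bond_port, 'l34')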
> +    def check_stacked_device_queue_config(self, *devices):
> +        '''
> +        check whether the top bonded device and each slave have the same
> +        queue configuration
> +        '''
> +        master = self.get_port_info(devices[0], 'queue_config')
> +        for port_id in devices[1:]:
> +            config = self.get_port_info(port_id, 'queue_config')
> +            if cmp(config, master) == 0:
> +                continue
> +            msg = ("slave bonded port [{0}] is "
> +                   "different to top bonded port [{1}]").format(port_id,
> +                                                                devices[0])
> +            raise VerifyFailure('check_stacked_device_queue_config ' + msg)
> +
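Note: cmp() above exists only on Python 2, which matches the rest of this
suite. If the suite is ever run under Python 3, a plain inequality gives the
same behaviour; a minimal sketch, assuming get_port_info() returns directly
comparable values:

            # Python 3 friendly form of the comparison above
            if config != master:
                msg = ("queue config of slave bonded port [{0}] differs "
                       "from top bonded port [{1}]").format(port_id, devices[0])
                raise VerifyFailure('check_stacked_device_queue_config ' + msg)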
> +    def set_stacked_bonded(self, slaveGrpOne, slaveGrpTwo, bond_mode):
> +        '''
> +        set stacked bonded mode for the specified bonding mode
> +        '''
> +        specified_socket = SOCKET_0
> +        # create bonded device 1, add slaves in it
> +        bond_port_1 = self.create_bonded_device(bond_mode, specified_socket)
> +        self.add_slave(bond_port_1, False, '', *slaveGrpOne)
> +        # create bonded device 2, add slaves in it
> +        bond_port_2 = self.create_bonded_device(bond_mode, specified_socket)
> +        self.add_slave(bond_port_2, False, '', *slaveGrpTwo)
> +        # create bonded device 3, which is the top bonded device
> +        bond_port_master = self.create_bonded_device(bond_mode,
> +                                                     specified_socket)
> +        # add bonded device 1 as a slave of the top bonded device
> +        # and check its config status
> +        self.add_slave(bond_port_master, False, '', *[bond_port_1])
> +        # add bonded device 2 as a slave of the top bonded device
> +        # and check its config status
> +        self.add_slave(bond_port_master, False, '', *[bond_port_2])
> +        # check if master bonding/each slaves queue configuration is the same.
> +        self.check_stacked_device_queue_config(*[bond_port_master,
> +                                                 bond_port_1, bond_port_2])
> +        # start all ports
> +        #self.start_ports()
> +
> +        return [bond_port_1, bond_port_2, bond_port_master]
> +
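For reference, the topology built by set_stacked_bonded() and a typical call
site are sketched below; the physical port numbers are illustrative only and
depend on how many ports the DUT exposes:

    #   bond_port_master (top bonded device)
    #     +-- bond_port_1  <- slaves: slaveGrpOne (e.g. ports 0, 1)
    #     +-- bond_port_2  <- slaves: slaveGrpTwo (e.g. ports 2, 3)
    bond_1, bond_2, bond_top = self.set_stacked_bonded(self.slaveGrpOne,
                                                       self.slaveGrpTwo,
                                                       MODE_ACTIVE_BACKUP)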
> +    def set_third_stacked_bonded(self, bond_port, bond_mode):
> +        '''
> +        try to stack a third bonding level to check the nesting limitation
> +        '''
> +        specified_socket = SOCKET_0
> +        third_bond_port = self.create_bonded_device(bond_mode,
> +                                                    specified_socket)
> +        expected_str = 'Too many levels of bonding'
> +        try:
> +            self.add_slave(third_bond_port, True, expected_str, *[bond_port])
> +        except Exception as e:
> +            self.logger.warning(e)
> +        finally:
> +            pass
> +
> +    def duplicate_add_stacked_bonded(self, bond_port_1, bond_port_2,
> +                                     bond_port_master):
> +        '''
> +        check if adding stacked bonded duplicately is forbidden
> +        '''
> +        # check exception process
> +        expected_str = 'Slave device is already a slave of a bonded device'
> +        # try to add bonded device 1 to the top bonded device again
> +        # and check the reported error
> +        self.add_slave(bond_port_master, False, expected_str, *[bond_port_1])
> +        # try to add bonded device 2 to the top bonded device again
> +        # and check the reported error
> +        self.add_slave(bond_port_master, False, expected_str, *[bond_port_2])
> +
> +    def preset_stacked_bonded(self, slaveGrpOne, slaveGrpTwo, bond_mode):
> +        bond_port_1, bond_port_2, bond_port_master = self.set_stacked_bonded(
> +                                                                slaveGrpOne,
> +                                                                slaveGrpTwo,
> +                                                                bond_mode)
> +        # set test pmd
> +        cmds = []
> +        cmds += [["port stop all", '']]
> +        portList = [slaveGrpOne[0],
> +                    slaveGrpTwo[0],
> +                    bond_port_1,
> +                    bond_port_2,
> +                    bond_port_master]
> +        cmds += [["set portlist " + ",".join([str(port)
> +                                              for port in portList]), '']]
> +        cmds += [["port start all", ' ', 15],
> +                 ["start", '']]
> +        self.execute_testpmd_cmd(cmds)
> +        time.sleep(5)
> +
> +        return bond_port_1, bond_port_2, bond_port_master
> +
> +    def preset_normal_bonded(self, bond_mode):
> +        '''
> +        set up stacked bonded devices with the default slave groups and
> +        start all ports
> +        '''
> +        slaveGrpOne = self.slaveGrpOne
> +        slaveGrpTwo = self.slaveGrpTwo
> +
> +        bond_port_1, bond_port_2, bond_port_master = self.set_stacked_bonded(
> +                                                                 slaveGrpOne,
> +                                                                 slaveGrpTwo,
> +                                                                 bond_mode)
> +        # set test pmd
> +        cmds = []
> +        cmds += [["port stop all", '']]
> +        portList = [slaveGrpOne[0],
> +                    slaveGrpTwo[0],
> +                    bond_port_1,
> +                    bond_port_2,
> +                    bond_port_master]
> +        cmds += [["set portlist " + ",".join([str(port) for port in portList]),
> +                  '']]
> +        cmds += [["port start all", ' ', 15],
> +                 ["start", '']]
> +        self.execute_testpmd_cmd(cmds)
> +        time.sleep(5)
> +
> +    def check_packet_transmission(self, unbound_port, bond_port_master,
> +                                  **slaves):
> +        pkt_count = 100
> +        # send 100 packets to the top bonded device and check statistics
> +        pkt_now, summary = self.send_default_packet_to_slave(
> +                                                unbound_port, bond_port_master,
> +                                                pkt_count=pkt_count, **slaves)
> +
> +        return pkt_now, summary
> +
> +    def convert_mac_str_into_int(self, mac_str):
> +        """
> +        Translate the MAC type from the string into the int.
> +        """
> +        mac_hex = '0x'
> +        for mac_part in mac_str.split(':'):
> +            mac_hex += mac_part
> +        return int(mac_hex, 16)
> +
> +    def mac_hash(self, dst_mac, src_mac):
> +        """
> +        Generate the hash value with the source and destination MAC.
> +        """
> +        dst_port_mac = self.convert_mac_str_into_int(dst_mac)
> +        src_port_mac = self.convert_mac_str_into_int(src_mac)
> +        src_xor_dest = dst_port_mac ^ src_port_mac
> +        xor_value_1 = src_xor_dest >> 32
> +        xor_value_2 = (src_xor_dest >> 16) ^ (xor_value_1 << 16)
> +        xor_value_3 = src_xor_dest ^ (xor_value_1 << 32) ^ (xor_value_2 << 16)
> +        return htons(xor_value_1 ^ xor_value_2 ^ xor_value_3)
> +
> +    def translate_ip_str_into_int(self, ip_str):
> +        """
> +        Translate the IP type from the string into the int.
> +        """
> +        ip_part_list = ip_str.split('.')
> +        ip_part_list.reverse()
> +        num = 0
> +        ip_int = 0
> +        for ip_part in ip_part_list:
> +            ip_part_int = int(ip_part) << (num * 8)
> +            ip_int += ip_part_int
> +            num += 1
> +        return ip_int
> +
> +    def ipv4_hash(self, dst_ip, src_ip):
> +        """
> +        Generate the hash value with the source and destination IP.
> +        """
> +        dst_ip_int = self.translate_ip_str_into_int(dst_ip)
> +        src_ip_int = self.translate_ip_str_into_int(src_ip)
> +        return htonl(dst_ip_int ^ src_ip_int)
> +
> +    def udp_hash(self, dst_port, src_port):
> +        """
> +        Generate the hash value with the source and destination port.
> +        """
> +        return htons(dst_port ^ src_port)
> +
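The three *_hash() helpers above rebuild, in pure Python, the per-field
arithmetic that the balance transmit policy is expected to apply, so the suite
can predict which slave should carry each flow. A small standalone sketch of
the string-to-integer conversions (the addresses are arbitrary examples, not
suite defaults):

    # same arithmetic as convert_mac_str_into_int()/translate_ip_str_into_int()
    mac_int = int('0x' + ''.join('00:00:00:00:00:01'.split(':')), 16)  # == 0x1
    ip_int = 0
    for shift, part in enumerate(reversed('192.168.0.1'.split('.'))):
        ip_int += int(part) << (shift * 8)                     # == 0xC0A80001
    # mac_hash()/ipv4_hash()/udp_hash() then XOR the source and destination
    # values and wrap the result in htons()/htonl()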
> +    def policy_and_slave_hash(self, policy, **slaves):
> +        """
> +        Generate the hash value by the policy and active slave number.
> +        : policy: 'L2', 'L23' or 'L34'
> +        """
> +        source = self.get_static_ip_configs()
> +
> +        dst_mac, dst_ip, dst_port = self.dst_pkt_configs
> +
> +        hash_values = []
> +        if len(slaves['active']) == 0:
> +            return hash_values
> +
> +        for src_mac, src_ip, src_port in source:
> +            if policy == "L2":
> +                hash_value = self.mac_hash(dst_mac, src_mac)
> +            elif policy == "L23":
> +                hash_value = self.mac_hash(dst_mac, src_mac) ^ \
> +                             self.ipv4_hash(dst_ip, src_ip)
> +            else:
> +                hash_value = self.ipv4_hash(dst_ip, src_ip) ^ \
> +                             self.udp_hash(dst_port, src_port)
> +
> +            if policy in ("L23", "L34"):
> +                hash_value ^= hash_value >> 16
> +            hash_value ^= hash_value >> 8
> +            hash_value = hash_value % len(slaves['active'])
> +            hash_values.append(hash_value)
> +
> +        return hash_values
> +
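policy_and_slave_hash() returns one slave index per generated source flow;
verify_xor_tx() below then counts how many flows land on each index to derive
the expected TX counter of every active slave. A minimal usage sketch,
assuming pkt_count packets are sent per source flow as verify_xor_tx() does:

    hash_values = self.policy_and_slave_hash('L34', **slaves)
    order_ports = self.get_bonding_info(bond_port, 'active_slaves').split()
    for slave in slaves['active']:
        slave_index = order_ports.index(str(slave))
        expected = pkt_count * hash_values.count(slave_index)
        # each active slave is expected to transmit exactly `expected` packets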
> +    def slave_map_hash(self, port, order_ports):
> +        """
> +        Find the hash value by the given slave port id.
> +        """
> +        if len(order_ports) == 0:
> +            return None
> +        else:
> +            order_ports = order_ports.split()
> +            return order_ports.index(str(port))
> +
> +    def verify_active_backup_rx(self, unbound_port, bond_port, **slaves):
> +        """
> +        Verify the RX packets are all correct in the active-backup mode.
> +        : unbound_port: the unbonded port id
> +        : bond_port: the bonded device port id
> +        """
> +        pkt_count = 100
> +        pkt_now = {}
> +
> +        self.logger.info('verify_active_backup_rx')
> +        slave_num = len(slaves['active'])
> +        active_flag = 1 if slave_num != 0 else 0
> +        pkt_now, summary = self.send_default_packet_to_slave(
> +                                                         unbound_port,
> +                                                         bond_port,
> +                                                         pkt_count=pkt_count,
> +                                                         **slaves)
> +        results = []
> +        try:
> +            self.verify(pkt_now[bond_port][0] == pkt_count * slave_num,
> +                        "Not correct RX pkt on bond port in mode 1")
> +        except Exception as e:
> +            results.append(e)
> +        finally:
> +            pass
> +
> +        try:
> +            if unbound_port:
> +                self.verify(
> +                        pkt_now[unbound_port][0] == pkt_count * active_flag,
> +                        "Not correct TX pkt on unbound port in mode 1")
> +        except Exception as e:
> +            results.append(e)
> +        finally:
> +            pass
> +
> +        try:
> +            # check inactive slaves statistics
> +            for slave in slaves['inactive']:
> +                self.verify(pkt_now[slave][0] == 0,
> +                            "Not correct RX pkt on inactive port in mode 1")
> +        except Exception as e:
> +            results.append(e)
> +        finally:
> +            pass
> +
> +        try:
> +            # check active slaves statistics
> +            for slave in slaves['active']:
> +                self.verify(pkt_now[slave][0] == pkt_count,
> +                            "Not correct RX pkt on active port in mode 1")
> +        except Exception as e:
> +            results.append(e)
> +        finally:
> +            pass
> +
> +        if results:
> +            for item in results:
> +                self.logger.error(item)
> +            raise VerifyFailure("verify_active_backup_rx exception")
> +
> +        return pkt_now, summary
> +
> +    def verify_stacked_active_backup_rx(self, master_bonded, slave_bonded,
> +                                        **slaves):
> +        """
> +        Verify the RX packets are all correct in the active-backup mode with
> +        a stacked bonded device. The second level bonded device's statistics
> +        should be the sum of its slave bonded devices' statistics.
> +        """
> +        pkt_count = 100
> +        pkt_now = {}
> +
> +        self.logger.info('verify_stacked_active_backup_rx')
> +        slave_num = len(slaves['active'])
> +        active_flag = 1 if slave_num != 0 else 0
> +        pkt_now, summary = self.send_default_packet_to_slave(
> +                                                         None,
> +                                                         slave_bonded,
> +                                                         pkt_count=pkt_count,
> +                                                         **slaves)
> +        results = []
> +        try:
> +            self.verify(pkt_now[slave_bonded][0] == pkt_count * slave_num,
> +                    "Not correct RX pkt on bond port in mode 1")
> +        except Exception as e:
> +            results.append(e)
> +        finally:
> +            pass
> +
> +        try:
> +            if master_bonded:
> +                self.verify(
> +                    pkt_now[master_bonded][0] == pkt_count * active_flag,
> +                    "Not correct pkt count on master bonded port in mode 1")
> +        except Exception as e:
> +            results.append(e)
> +        finally:
> +            pass
> +
> +        try:
> +            # check inactive slaves statistics
> +            for slave in slaves['inactive']:
> +                self.verify(pkt_now[slave][0] == 0,
> +                            "Not correct RX pkt on inactive port in mode 1")
> +        except Exception as e:
> +            results.append(e)
> +        finally:
> +            pass
> +
> +        try:
> +            # check active slaves statistics
> +            for slave in slaves['active']:
> +                self.verify(pkt_now[slave][0] == pkt_count,
> +                            "Not correct RX pkt on active port in mode 1")
> +        except Exception as e:
> +            results.append(e)
> +        finally:
> +            pass
> +
> +        if results:
> +            for item in results:
> +                self.logger.error(item)
> +            raise VerifyFailure("verify_stacked_active_backup_rx exception")
> +
> +        return pkt_now, summary
> +
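Put together, the checks in verify_stacked_active_backup_rx() encode the
following expectations when pkt_count packets are injected towards each active
physical slave (active_flag is 1 whenever at least one slave is active):

    # expected counters, as verified above:
    #   each active physical slave          : pkt_count
    #   each inactive slave                 : 0
    #   slave_bonded (first level bond)     : pkt_count * len(slaves['active'])
    #   master_bonded (top bond), if given  : pkt_count * active_flag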
> +    def verify_xor_rx(self, unbound_port, bond_port, **slaves):
> +        """
> +        Verify receiving the packets correctly in the XOR mode.
> +        Parameters:
> +        : unbound_port: the unbonded port id
> +        : bond_port: the bonded device port id
> +        """
> +        pkt_count = 100
> +        pkt_now = {}
> +
> +        pkt_now, summary = self.send_default_packet_to_slave(
> +                                                         unbound_port,
> +                                                         bond_port,
> +                                                         pkt_count=pkt_count,
> +                                                         **slaves)
> +
> +        for slave in slaves['active']:
> +            self.verify(pkt_now[slave][0] == pkt_count,
> +                        "Active slave has incorrect RX packet count in XOR mode")
> +        for slave in slaves['inactive']:
> +            self.verify(pkt_now[slave][0] == 0,
> +                        "Inactive slave has incorrect RX packet count in XOR mode")
> +
> +        if unbound_port:
> +            self.verify(
> +                pkt_now[unbound_port][0] == pkt_count * len(slaves['active']),
> +                "Unbonded port has incorrect TX packet count in XOR mode")
> +
> +    def verify_xor_tx(self, unbound_port, bond_port, policy,
> +                      vlan_tag=False, **slaves):
> +        """
> +        Verify that transmitting the packets correctly in the XOR mode.
> +        Parameters:
> +        : unbound_port: the unbonded port id
> +        : bond_port: the bonded device port id
> +        : policy:'L2' , 'L23' or 'L34'
> +        : vlan_tag: False or True
> +        """
> +        pkt_count = 100
> +        pkt_now = {}
> +        try:
> +            pkt_now, summary = self.send_customized_packet_to_unbound_port(
> +                                                        unbound_port,
> +                                                        bond_port,
> +                                                        policy,
> +                                                        vlan_tag=vlan_tag,
> +                                                        pkt_count=pkt_count,
> +                                                        **slaves)
> +        except Exception as e:
> +            self.logger.error(traceback.format_exc())
> +        finally:
> +            pass
> +
> +        hash_values = self.policy_and_slave_hash(policy, **slaves)
> +        order_ports = self.get_bonding_info(bond_port, 'active_slaves')
> +        for slave in slaves['active']:
> +            slave_map_hash = self.slave_map_hash(slave, order_ports)
> +            self.verify(
> +                pkt_now[slave][0] ==
> +                pkt_count * hash_values.count(slave_map_hash),
> +                "XOR load balance transmit error on the link up port")
> +        for slave in slaves['inactive']:
> +            self.verify(
> +                pkt_now[slave][0] == 0,
> +                "XOR load balance transmit error on the link down port")
> +
> +    @property
> +    def driver(self):
> +        return self.kdriver
> +
> +    #
> +    # Test cases.
> +    #
> +    def set_up_all(self):
> +        """
> +        Run before each test suite
> +        """
> +        self.verify('bsdapp' not in self.target,
> +                    "Bonding is not supported on FreeBSD")
> +        self.dut_ports = self.dut.get_ports()
> +
> +        self.port_mask = utils.create_mask(self.dut_ports)
> +
> +        #self.verify(len(self.dut_ports) >= 4, "Insufficient ports")
> +        self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
> +
> +        self.all_cores_mask = utils.create_mask(self.dut.get_core_list("all"))
> +
> +        # separate ports into two group
> +        sep_index = len(self.dut_ports)/2
> +        self.slaveGrpOne = self.dut_ports[:sep_index]
> +        self.slaveGrpTwo = self.dut_ports[sep_index:]
> +        # testpmd initialization
> +        self.testpmd = PmdOutput(self.dut)
> +        self.testpmd_status = 'close'
> +        self.tester_bond = "bond0"
> +        self.dst_pkt_configs = []
> +
> +    def tear_down_all(self):
> +        """
> +        Run after each test suite.
> +        """
> +        pass
> +
> +    def set_up(self):
> +        """
> +        Run before each test case.
> +        """
> +        pass
> +
> +    def tear_down(self):
> +        """
> +        Run after each test case.
> +        """
> +
> +    def backup_check_stacked_bonded_traffic(self):
> +        '''
> +        check stacked bonded packet reception in active-backup mode
> +        '''
> +        self.logger.info("begin checking stacked bonded transmission")
> +        self.start_testpmd()
> +        slaveGrpOne = self.slaveGrpOne
> +        slaveGrpTwo = self.slaveGrpTwo
> +        bond_port_1, bond_port_2, bond_port_master = \
> +                                                self.preset_stacked_bonded(
> +                                                            slaveGrpOne,
> +                                                            slaveGrpTwo,
> +                                                            MODE_ACTIVE_BACKUP)
> +        results = []
> +        # check first bonded device
> +        try:
> +            self.logger.info('check first bonded device')
> +            slaves = {}
> +            slaves['active'] = slaveGrpOne
> +            slaves['inactive'] = []
> +            self.verify_stacked_active_backup_rx(None, bond_port_1, **slaves)
> +        except Exception as e:
> +            results.append(e)
> +        finally:
> +            pass
> +
> +        try:
> +            # check second bonded device
> +            self.logger.info('check second bonded device')
> +            slaves = {}
> +            slaves['active'] = slaveGrpTwo
> +            slaves['inactive'] = []
> +            self.verify_stacked_active_backup_rx(None, bond_port_2, **slaves)
> +        except Exception as e:
> +            results.append(e)
> +        finally:
> +            pass
> +
> +        pkt_now, summary = 0, 0
> +        try:
> +            self.logger.info('check master bonded device')
> +            # check top bonded device
> +            slaves = {}
> +            slaves['active'] = slaveGrpOne + slaveGrpTwo
> +            slaves['inactive'] = []
> +            pkt_now, summary = self.verify_stacked_active_backup_rx(None,
> +                                                    bond_port_master, **slaves)
> +        except Exception as e:
> +            results.append(e)
> +        finally:
> +            pass
> +
> +        self.close_testpmd()
> +        if results:
> +            for item in results:
> +                self.logger.error(item)
> +            msg = "backup_check_stacked_bonded_traffic exception"
> +            raise VerifyFailure(msg)
> +        else:
> +            return pkt_now, summary
> +
> +    def backup_check_stacked_one_slave_down(self):
> +        self.logger.info("begin checking backup stacked one slave down")
> +        self.start_testpmd()
> +        #-------------------------------
> +        # boot up testpmd
> +        mode = MODE_ACTIVE_BACKUP
> +        slaveGrpOne = self.slaveGrpOne
> +        slaveGrpTwo = self.slaveGrpTwo
> +        bond_port_1, bond_port_2, bond_port_master = \
> +                                        self.preset_stacked_bonded(slaveGrpOne,
> +                                                                   slaveGrpTwo,
> +                                                                   mode)
> +        results = []
> +        # check packet transmission when one slave of the first level
> +        # bonded device is link down
> +        #-------------------------------
> +        pkt_now, summary = 0, 0
> +        try:
> +            # bring down one slave port of the first level bonded device
> +            primary_slave = slaveGrpOne[0]
> +            self.logger.info("down one slave port of bond_port_1")
> +            self.config_tester_port_by_number(primary_slave, "down")
> +            primary_port = self.get_bonding_info(bond_port_1, 'primary')
> +            msg = "down status slave hasn't been remove from active slaves"
> +            self.verify(primary_port != primary_slave, msg)
> +            #------------------------------------------------------
> +            slaves = {}
> +            primary_port, active_slaves = self.get_active_slaves(primary_slave,
> +                                                                 bond_port_1)
> +            slaves['active'] = [primary_port]
> +            slaves['active'].extend(active_slaves)
> +            slaves['inactive'] = [self.dut_ports[primary_slave]]
> +            pkt_now, summary = self.verify_stacked_active_backup_rx(
> +                                                            None,
> +                                                            bond_port_master,
> +                                                            **slaves)
> +        except Exception as e:
> +            results.append(e)
> +        finally:
> +            self.config_tester_port_by_number(primary_slave, "up")
> +
> +        self.close_testpmd()
> +        if results:
> +            for item in results:
> +                self.logger.error(item)
> +            raise VerifyFailure("backup_check_stacked_one_slave_down exception")
> +
> +    def check_stacked_xor_rx(self):
> +        results = []
> +        self.logger.info("begin checking xor stacked transmission")
> +        self.start_testpmd()
> +        #-------------------------------
> +        # boot up testpmd
> +        try:
> +            mode = MODE_XOR_BALANCE
> +            slaveGrpOne = self.slaveGrpOne
> +            slaveGrpTwo = self.slaveGrpTwo
> +            bond_port_1, bond_port_2, bond_port_master = \
> +                self.preset_stacked_bonded( slaveGrpOne, slaveGrpTwo, mode)
> +            slaves = {}
> +            slaves['active'] = slaveGrpOne + slaveGrpTwo
> +            slaves['inactive'] = []
> +
> +            self.verify_xor_rx(None, bond_port_master, **slaves)
> +        except Exception as e:
> +            results.append(e)
> +            self.logger.error(traceback.format_exc())
> +        finally:
> +            self.close_testpmd()
> +        #-------------------------------------
> +        if results:
> +            for item in results:
> +                self.logger.error(item)
> +            raise VerifyFailure("check_stacked_xor_rx exception")
> +
> +    def xor_check_stacked_rx_one_slave_down(self):
> +        """
> +        Verify that transmitting packets correctly in the XOR mode,
> +        when bringing any one slave of the bonding device link down.
> +        """
> +        results = []
> +        self.logger.info("begin xor_check_stacked_rx_one_slave_down")
> +        #-------------------------------
> +        # boot up testpmd
> +        self.start_testpmd()
> +        try:
> +            #-------------------------------
> +            mode = MODE_XOR_BALANCE
> +            slaveGrpOne = self.slaveGrpOne
> +            #slaveGrpTwo = self.slaveGrpTwo[:-1]
> +            slaveGrpTwo = self.slaveGrpTwo
> +            bond_port_1, bond_port_2, bond_port_master = \
> +                    self.preset_stacked_bonded( slaveGrpOne,
> +                                                slaveGrpTwo,
> +                                                mode)
> +            primary_slave = slaveGrpOne[0]
> +            self.logger.info("down one slave port of bond_port_1")
> +            self.config_tester_port_by_number(primary_slave, "down")
> +            primary_port = self.get_bonding_info(bond_port_1, 'primary')
> +            msg = "down status slave hasn't been remove from active slaves"
> +            self.verify(primary_port != primary_slave, msg)
> +            #-------------------------
> +            # add first bonded
> +            primary_port, active_slaves = self.get_active_slaves(primary_slave,
> +                                                                 bond_port_1)
> +            slaves = {}
> +            slaves['active'] = [primary_port]
> +            slaves['active'].extend(active_slaves)
> +            # add second bonded
> +            primary_slave = slaveGrpOne[0]
> +            primary_port_2, active_slaves_2 = self.get_active_slaves(
> +                                                            primary_slave,
> +                                                            bond_port_2)
> +            slaves['active'] += [primary_port_2]
> +            slaves['active'] += active_slaves_2
> +            if primary_slave in slaves['active']:
> +                slaves['active'].remove(primary_slave)
> +                msg = "{0} should not be in active slaves list".format(
> +                                                                primary_slave)
> +                self.logger.warning(msg)
> +
> +            slaves['inactive'] = []#[primary_slave]
> +#             self.verify_xor_tx(primary_slave,
> +#                                bond_port_master,
> +#                                "L2", False,
> +#                                **slaves)
> +            self.verify_xor_rx(None, bond_port_master, **slaves)
> +        except Exception as e:
> +            results.append(e)
> +            self.logger.error(traceback.format_exc())
> +        finally:
> +            self.config_tester_port_by_number(primary_slave, "up")
> +            self.close_testpmd()
> +
> +        if results:
> +            for item in results:
> +                self.logger.error(item)
> +            raise VerifyFailure("xor_check_stacked_rx_one_slave_down failed")
> +        else:
> +            #return pkt_now, summary
> +            pass
> +
> +    def test_basic_behav(self):
> +        '''
> +        allow a bonded device to be added to another bonded device.
> +        There are two limitations on creating a stacked bonded device:
> +
> +         - total depth of nesting is limited to two levels,
> +         - 802.3ad mode is not supported if one or more slaves is a bond device
> +
> +        note: 802.3ad mode cannot be used on a stacked bonded device.
> +
> +        This case is aimed at testing basic stacked bonded device commands.
> +
> +        '''
> +        #------------------------------------------------
> +        # check stacked bonded status, except mode 4 (802.3ad)
> +        mode_list =[MODE_ROUND_ROBIN,
> +                    MODE_ACTIVE_BACKUP,
> +                    MODE_XOR_BALANCE,
> +                    MODE_BROADCAST,
> +                    MODE_TLB_BALANCE,
> +                    MODE_ALB_BALANCE]
> +        slaveGrpOne = self.slaveGrpOne
> +        slaveGrpTwo = self.slaveGrpTwo
> +        check_result = []
> +        for bond_mode in mode_list:
> +            self.logger.info("begin mode <{0}> checking".format(bond_mode))
> +            # boot up testpmd
> +            self.start_testpmd()
> +            try:
> +                self.logger.info("check bonding mode <{0}>".format(bond_mode))
> +                # set up stacked bonded status
> +                bond_port_1, bond_port_2, bond_port_master = \
> +                    self.set_stacked_bonded(slaveGrpOne, slaveGrpTwo,
> +                                            bond_mode)
> +                # check duplicate add slave
> +                self.duplicate_add_stacked_bonded(bond_port_1, bond_port_2,
> +                                                  bond_port_master)
> +                # check stacked limitation
> +                self.set_third_stacked_bonded(bond_port_master, bond_mode)
> +                # quit testpmd, it is not supported to reset testpmd
> +                self.logger.info("mode <{0}> done !".format(bond_mode))
> +                check_result.append([bond_mode, None])
> +            except Exception as e:
> +                check_result.append([bond_mode, e])
> +                self.logger.error(e)
> +            finally:
> +                self.close_testpmd()
> +                time.sleep(5)
> +        #------------
> +        # 802.3ad mode is not supported
> +        # if one or more slaves is a bond device,
> +        # so it should raise an exception
> +        msg = ''
> +        try:
> +            # boot up testpmd
> +            self.start_testpmd()
> +            # set up stacked bonded status
> +            self.set_stacked_bonded(slaveGrpOne, slaveGrpTwo, MODE_LACP)
> +            # quit testpmd, it is not supported to reset testpmd
> +            msg = ("802.3ad mode hasn't been forbidden to "
> +                   "use stacked bonded setting")
> +            check_result.append([MODE_LACP, msg])
> +        except Exception as e:
> +            check_result.append([MODE_LACP, None])
> +        finally:
> +            self.close_testpmd()
> +
> +        exception_count = 0
> +        for bond_mode, e in check_result:
> +            msg = "mode <{0}>".format(bond_mode)
> +            if e:
> +                self.logger.info(msg)
> +                self.logger.error(e)
> +                exception_count += 1
> +            else:
> +                self.logger.info(msg + ' done !')
> +        # if some checking item is failed, raise exception
> +        if exception_count:
> +            raise VerifyFailure('some test items failed')
> +        else:
> +            self.logger.info('all test items passed')
> +        self.close_testpmd()
> +
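For readers skimming the per-mode loop above, its body boils down to the
sketch below; the expected error strings come from the helpers used in this
test, not from independently verified testpmd output:

    bond_1, bond_2, bond_top = self.set_stacked_bonded(self.slaveGrpOne,
                                                       self.slaveGrpTwo,
                                                       bond_mode)
    # re-adding an already bonded slave must report
    # 'Slave device is already a slave of a bonded device'
    self.duplicate_add_stacked_bonded(bond_1, bond_2, bond_top)
    # a third nesting level must report 'Too many levels of bonding'
    self.set_third_stacked_bonded(bond_top, bond_mode)
    # MODE_LACP (802.3ad) is expected to fail inside set_stacked_bonded()
    # itself, because 802.3ad does not accept a bond device as a slave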
> +    def test_mode_backup_rx(self):
> +        """
> +        Verify receiving and transmitting the packets correctly
> +        in the active-backup mode.
> +        """
> +        stacked_stats = self.backup_check_stacked_bonded_traffic()
> +
> +    def test_mode_backup_one_slave_down(self):
> +        """
> +        Verify that receiving and transmitting the packets correctly
> +        in the active-backup mode, when bringing any one slave of
> +        the bonding device link down.
> +        """
> +        if len(self.dut_ports) >= 4:
> +            self.backup_check_stacked_one_slave_down()
> +        else:
> +            msg = "ports less than 2, ignore stacked one slave down check"
> +            self.logger.warning(msg)
> +
> +    def test_mode_xor_rx(self):
> +        """
> +        Verify that receiving packets correctly in the XOR mode.
> +        """
> +        self.check_stacked_xor_rx()
> +
> +    def test_mode_xor_rx_one_slave_down(self):
> +        """
> +        Verify that transmitting packets correctly in the XOR mode,
> +        when bringing any one slave of the bonding device link down.
> +        """
> +        if len(self.dut_ports) >= 4:
> +            self.xor_check_stacked_rx_one_slave_down()
> +        else:
> +            msg = "ports less than 2, ignore stacked one slave down check"
> +            self.logger.warning(msg)
> +
> --
> 1.9.3

^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2019-01-21  7:17 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-06  5:37 [dts] [PATCH V1 0/2] pmd_stacked_bonded: upload test plan and automation script yufengx.mo
2018-06-06  5:37 ` [dts] [PATCH V1 1/2] pmd_stacked_bonded: upload test plan yufengx.mo
2018-06-06  5:37 ` [dts] [PATCH V1 2/2] pmd_stacked_bonded: upload automation script yufengx.mo
2019-01-21  7:17   ` Chen, Zhaoyan
