DPDK patches and discussions
* [RFC PATCH v1 00/18] merge DTS component files to DPDK
@ 2022-04-06 15:04 Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 01/18] dts: merge DTS framework/crb.py " Juraj Linkeš
                   ` (17 more replies)
  0 siblings, 18 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

This series merges the files related to the various DTS components (DUT, tester, etc.) into DPDK.

Outstanding items:
possibly move nics/net_device.py to framework.

Juraj Linkeš (18):
  dts: merge DTS framework/crb.py to DPDK
  dts: merge DTS framework/dut.py to DPDK
  dts: merge DTS framework/ixia_buffer_parser.py to DPDK
  dts: merge DTS framework/pktgen.py to DPDK
  dts: merge DTS framework/pktgen_base.py to DPDK
  dts: merge DTS framework/pktgen_ixia.py to DPDK
  dts: merge DTS framework/pktgen_ixia_network.py to DPDK
  dts: merge DTS framework/pktgen_trex.py to DPDK
  dts: merge DTS framework/ssh_connection.py to DPDK
  dts: merge DTS framework/ssh_pexpect.py to DPDK
  dts: merge DTS framework/tester.py to DPDK
  dts: merge DTS framework/ixia_network/__init__.py to DPDK
  dts: merge DTS framework/ixia_network/ixnet.py to DPDK
  dts: merge DTS framework/ixia_network/ixnet_config.py to DPDK
  dts: merge DTS framework/ixia_network/ixnet_stream.py to DPDK
  dts: merge DTS framework/ixia_network/packet_parser.py to DPDK
  dts: merge DTS nics/__init__.py to DPDK
  dts: merge DTS nics/net_device.py to DPDK

 dts/framework/crb.py                        | 1061 +++++++++++
 dts/framework/dut.py                        | 1727 +++++++++++++++++
 dts/framework/ixia_buffer_parser.py         |  138 ++
 dts/framework/ixia_network/__init__.py      |  183 ++
 dts/framework/ixia_network/ixnet.py         |  901 +++++++++
 dts/framework/ixia_network/ixnet_config.py  |   42 +
 dts/framework/ixia_network/ixnet_stream.py  |  366 ++++
 dts/framework/ixia_network/packet_parser.py |   96 +
 dts/framework/pktgen.py                     |  234 +++
 dts/framework/pktgen_base.py                |  740 ++++++++
 dts/framework/pktgen_ixia.py                | 1869 +++++++++++++++++++
 dts/framework/pktgen_ixia_network.py        |  225 +++
 dts/framework/pktgen_trex.py                |  908 +++++++++
 dts/framework/ssh_connection.py             |  117 ++
 dts/framework/ssh_pexpect.py                |  263 +++
 dts/framework/tester.py                     |  910 +++++++++
 dts/nics/__init__.py                        |   30 +
 dts/nics/net_device.py                      | 1013 ++++++++++
 18 files changed, 10823 insertions(+)
 create mode 100644 dts/framework/crb.py
 create mode 100644 dts/framework/dut.py
 create mode 100644 dts/framework/ixia_buffer_parser.py
 create mode 100644 dts/framework/ixia_network/__init__.py
 create mode 100644 dts/framework/ixia_network/ixnet.py
 create mode 100644 dts/framework/ixia_network/ixnet_config.py
 create mode 100644 dts/framework/ixia_network/ixnet_stream.py
 create mode 100644 dts/framework/ixia_network/packet_parser.py
 create mode 100644 dts/framework/pktgen.py
 create mode 100644 dts/framework/pktgen_base.py
 create mode 100644 dts/framework/pktgen_ixia.py
 create mode 100644 dts/framework/pktgen_ixia_network.py
 create mode 100644 dts/framework/pktgen_trex.py
 create mode 100644 dts/framework/ssh_connection.py
 create mode 100644 dts/framework/ssh_pexpect.py
 create mode 100644 dts/framework/tester.py
 create mode 100644 dts/nics/__init__.py
 create mode 100644 dts/nics/net_device.py

-- 
2.20.1



* [RFC PATCH v1 01/18] dts: merge DTS framework/crb.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 02/18] dts: merge DTS framework/dut.py " Juraj Linkeš
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/framework/crb.py | 1061 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 1061 insertions(+)
 create mode 100644 dts/framework/crb.py

diff --git a/dts/framework/crb.py b/dts/framework/crb.py
new file mode 100644
index 0000000000..a15d15e9a2
--- /dev/null
+++ b/dts/framework/crb.py
@@ -0,0 +1,1061 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+import os
+import re
+import time
+
+from .config import PORTCONF, PktgenConf, PortConf
+from .logger import getLogger
+from .settings import TIMEOUT
+from .ssh_connection import SSHConnection
+
+"""
+CRB (customer reference board) basic functions and handlers
+"""
+
+
+class Crb(object):
+
+    """
+    Basic module for a customer reference board. This module implements
+    functions that interact with the CRB. With these functions, we can get
+    information about the CPU/PCI/NICs on the board and set up the running
+    environment for DPDK.
+    """
+
+    PCI_DEV_CACHE_KEY = None
+    NUMBER_CORES_CACHE_KEY = None
+    CORE_LIST_CACHE_KEY = None
+
+    def __init__(self, crb, serializer, dut_id=0, name=None, alt_session=True):
+        self.dut_id = dut_id
+        self.crb = crb
+        self.read_cache = False
+        self.skip_setup = False
+        self.serializer = serializer
+        self.ports_info = []
+        self.sessions = []
+        self.stage = "pre-init"
+        self.name = name
+        self.trex_prefix = None
+        self.default_hugepages_cleared = False
+        self.prefix_list = []
+
+        self.logger = getLogger(name)
+        self.session = SSHConnection(
+            self.get_ip_address(),
+            name,
+            self.get_username(),
+            self.get_password(),
+            dut_id,
+        )
+        self.session.init_log(self.logger)
+        if alt_session:
+            self.alt_session = SSHConnection(
+                self.get_ip_address(),
+                name + "_alt",
+                self.get_username(),
+                self.get_password(),
+                dut_id,
+            )
+            self.alt_session.init_log(self.logger)
+        else:
+            self.alt_session = None
+
+    def get_ip_address(self):
+        """
+        Get CRB's ip address.
+        """
+        raise NotImplementedError
+
+    def get_password(self):
+        """
+        Get CRB's login password.
+        """
+        raise NotImplementedError
+
+    def get_username(self):
+        """
+        Get CRB's login username.
+        """
+        raise NotImplementedError
+
+    def send_expect(
+        self,
+        cmds,
+        expected,
+        timeout=TIMEOUT,
+        alt_session=False,
+        verify=False,
+        trim_whitespace=True,
+    ):
+        """
+        Send commands to the CRB and return the output preceding the expected
+        string. If the expected string is not found before the timeout, a
+        TimeoutException is raised.
+
+        By default, whitespace is trimmed from the expected string. This
+        behavior can be turned off via the trim_whitespace argument.
+        """
+
+        if trim_whitespace:
+            expected = expected.strip()
+
+        # some CRBs, such as a VM DUT, have no alt_session
+        if alt_session and self.alt_session:
+            return self.alt_session.session.send_expect(cmds, expected, timeout, verify)
+
+        return self.session.send_expect(cmds, expected, timeout, verify)
+
+    def create_session(self, name=""):
+        """
+        Create a new session for additional use. This session will not enable
+        logging.
+        """
+        logger = getLogger(name)
+        session = SSHConnection(
+            self.get_ip_address(),
+            name,
+            self.get_username(),
+            self.get_password(),
+            dut_id=self.dut_id,
+        )
+        session.init_log(logger)
+        self.sessions.append(session)
+        return session
+
+    def destroy_session(self, session=None):
+        """
+        Destroy additional session.
+        """
+        for save_session in self.sessions:
+            if save_session == session:
+                save_session.close(force=True)
+                logger = getLogger(save_session.name)
+                logger.logger_exit()
+                self.sessions.remove(save_session)
+                break
+
+    def reconnect_session(self, alt_session=False):
+        """
+        When a session can no longer be used, recreate a new one to replace it.
+        """
+        try:
+            if alt_session:
+                self.alt_session.close(force=True)
+            else:
+                self.session.close(force=True)
+        except Exception as e:
+            self.logger.error("Session close failed for [%s]" % e)
+
+        if alt_session:
+            session = SSHConnection(
+                self.get_ip_address(),
+                self.name + "_alt",
+                self.get_username(),
+                self.get_password(),
+            )
+            self.alt_session = session
+        else:
+            session = SSHConnection(
+                self.get_ip_address(),
+                self.name,
+                self.get_username(),
+                self.get_password(),
+            )
+            self.session = session
+
+        session.init_log(self.logger)
+
+    def send_command(self, cmds, timeout=TIMEOUT, alt_session=False):
+        """
+        Send commands to the CRB and return the output received before the timeout.
+        """
+
+        if alt_session and self.alt_session:
+            return self.alt_session.session.send_command(cmds, timeout)
+
+        return self.session.send_command(cmds, timeout)
+
+    def get_session_output(self, timeout=TIMEOUT):
+        """
+        Get the session output received before the timeout.
+        """
+        return self.session.get_session_before(timeout)
+
+    def set_test_types(self, func_tests, perf_tests):
+        """
+        Enable or disable functional/performance tests.
+        """
+        self.want_func_tests = func_tests
+        self.want_perf_tests = perf_tests
+
+    def get_total_huge_pages(self):
+        """
+        Get the total number of hugepages on the CRB.
+        """
+        huge_pages = self.send_expect(
+            "awk '/HugePages_Total/ { print $2 }' /proc/meminfo", "# ", alt_session=True
+        )
+        if huge_pages != "":
+            return int(huge_pages.split()[0])
+        return 0
+
+    def mount_huge_pages(self):
+        """
+        Mount hugepage file system on CRB.
+        """
+        self.send_expect("umount `awk '/hugetlbfs/ { print $2 }' /proc/mounts`", "# ")
+        out = self.send_expect("awk '/hugetlbfs/ { print $2 }' /proc/mounts", "# ")
+        # only mount hugepage when no hugetlbfs mounted
+        if not len(out):
+            self.send_expect("mkdir -p /mnt/huge", "# ")
+            self.send_expect("mount -t hugetlbfs nodev /mnt/huge", "# ")
+
+    def strip_hugepage_path(self):
+        mounts = self.send_expect("cat /proc/mounts |grep hugetlbfs", "# ")
+        infos = mounts.split()
+        if len(infos) >= 2:
+            return infos[1]
+        else:
+            return ""
+
+    def set_huge_pages(self, huge_pages, numa=""):
+        """
+        Set numbers of huge pages
+        """
+        page_size = self.send_expect(
+            "awk '/Hugepagesize/ {print $2}' /proc/meminfo", "# "
+        )
+
+        if not numa:
+            self.send_expect(
+                "echo %d > /sys/kernel/mm/hugepages/hugepages-%skB/nr_hugepages"
+                % (huge_pages, page_size),
+                "# ",
+                5,
+            )
+        else:
+            # hugepages may be set on the kernel cmdline, so clear the default first
+            if not self.default_hugepages_cleared:
+                self.send_expect(
+                    "echo 0 > /sys/kernel/mm/hugepages/hugepages-%skB/nr_hugepages"
+                    % (page_size),
+                    "# ",
+                    5,
+                )
+                self.default_hugepages_cleared = True
+
+            # some platforms, e.g. a VM DUT, do not support NUMA
+            try:
+                self.send_expect(
+                    "echo %d > /sys/devices/system/node/%s/hugepages/hugepages-%skB/nr_hugepages"
+                    % (huge_pages, numa, page_size),
+                    "# ",
+                    5,
+                )
+            except Exception:
+                self.logger.warning("set %d hugepages on %s error" % (huge_pages, numa))
+                self.send_expect(
+                    "echo %d > /sys/kernel/mm/hugepages/hugepages-%skB/nr_hugepages"
+                    % (huge_pages, page_size),
+                    "# ",
+                    5,
+                )
+
+    def set_speedup_options(self, read_cache, skip_setup):
+        """
+        Configure whether to skip the network topology scan and/or the DPDK package setup.
+        """
+        self.read_cache = read_cache
+        self.skip_setup = skip_setup
+
+    def set_directory(self, base_dir):
+        """
+        Set DPDK package folder name.
+        """
+        self.base_dir = base_dir
+
+    def admin_ports(self, port, status):
+        """
+        Force set the port's interface status.
+        """
+        admin_ports = getattr(self, "admin_ports_%s" % self.get_os_type())
+        return admin_ports(port, status)
+
+    def admin_ports_freebsd(self, port, status):
+        """
+        Force set remote interface link status in FreeBSD.
+        """
+        eth = self.ports_info[port]["intf"]
+        self.send_expect("ifconfig %s %s" % (eth, status), "# ", alt_session=True)
+
+    def admin_ports_linux(self, eth, status):
+        """
+        Force set remote interface link status in Linux.
+        """
+        self.send_expect("ip link set %s %s" % (eth, status), "# ", alt_session=True)
+
+    def pci_devices_information(self):
+        """
+        Scan the CRB's PCI device information and save it into the cache file.
+        """
+        if self.read_cache:
+            self.pci_devices_info = self.serializer.load(self.PCI_DEV_CACHE_KEY)
+
+        if not self.read_cache or self.pci_devices_info is None:
+            self.pci_devices_information_uncached()
+            self.serializer.save(self.PCI_DEV_CACHE_KEY, self.pci_devices_info)
+
+    def pci_devices_information_uncached(self):
+        """
+        Scan the CRB's NIC information, dispatching on the OS type.
+        """
+        pci_devices_information_uncached = getattr(
+            self, "pci_devices_information_uncached_%s" % self.get_os_type()
+        )
+        return pci_devices_information_uncached()
+
+    def pci_devices_information_uncached_linux(self):
+        """
+        Look for the NIC's information (PCI Id and card type).
+        """
+        out = self.send_expect("lspci -Dnn | grep -i eth", "# ", alt_session=True)
+        rexp = r"([\da-f]{4}:[\da-f]{2}:[\da-f]{2}.\d{1}) .*Eth.*?ernet .*?([\da-f]{4}:[\da-f]{4})"
+        pattern = re.compile(rexp)
+        match = pattern.findall(out)
+        self.pci_devices_info = []
+
+        obj_str = str(self)
+        if "VirtDut" in obj_str:
+            # there is no port.cfg in a VM, so scan all PCI devices in the VM
+            pass
+        else:
+            # only scan configured pcis
+            portconf = PortConf(PORTCONF)
+            portconf.load_ports_config(self.crb["IP"])
+            configed_pcis = portconf.get_ports_config()
+            if configed_pcis:
+                if "tester" in str(self):
+                    tester_pci_in_cfg = []
+                    for item in list(configed_pcis.values()):
+                        for pci_info in match:
+                            if item["peer"] == pci_info[0]:
+                                tester_pci_in_cfg.append(pci_info)
+                    match = tester_pci_in_cfg[:]
+                else:
+                    dut_pci_in_cfg = []
+                    for key in list(configed_pcis.keys()):
+                        for pci_info in match:
+                            if key == pci_info[0]:
+                                dut_pci_in_cfg.append(pci_info)
+                    match = dut_pci_in_cfg[:]
+                # keep the original pci sequence
+                match = sorted(match)
+            else:
+                # invalid config with no PCI address, e.g. port.cfg for FreeBSD
+                pass
+
+        for i in range(len(match)):
+            # if the device is a Cavium NIC, check its link speed and append it only if it is 10G
+            if "177d:" in match[i][1]:
+                linkspeed = "10000"
+                nic_linkspeed = self.send_expect(
+                    "cat /sys/bus/pci/devices/%s/net/*/speed" % match[i][0],
+                    "# ",
+                    alt_session=True,
+                )
+                if nic_linkspeed.split()[0] == linkspeed:
+                    self.pci_devices_info.append((match[i][0], match[i][1]))
+            else:
+                self.pci_devices_info.append((match[i][0], match[i][1]))
+
+    def pci_devices_information_uncached_freebsd(self):
+        """
+        Look for the NIC's information (PCI Id and card type).
+        """
+        out = self.send_expect("pciconf -l", "# ", alt_session=True)
+        rexp = r"pci0:([\da-f]{1,3}:[\da-f]{1,2}:\d{1}):\s*class=0x020000.*0x([\da-f]{4}).*8086"
+        pattern = re.compile(rexp)
+        match = pattern.findall(out)
+
+        self.pci_devices_info = []
+        for i in range(len(match)):
+            card_type = "8086:%s" % match[i][1]
+            self.pci_devices_info.append((match[i][0], card_type))
+
+    def get_pci_dev_driver(self, domain_id, bus_id, devfun_id):
+        """
+        Get the driver of specified pci device.
+        """
+        get_pci_dev_driver = getattr(self, "get_pci_dev_driver_%s" % self.get_os_type())
+        return get_pci_dev_driver(domain_id, bus_id, devfun_id)
+
+    def get_pci_dev_driver_linux(self, domain_id, bus_id, devfun_id):
+        """
+        Get the driver of specified pci device on linux.
+        """
+        out = self.send_expect(
+            "cat /sys/bus/pci/devices/%s\:%s\:%s/uevent"
+            % (domain_id, bus_id, devfun_id),
+            "# ",
+            alt_session=True,
+        )
+        rexp = r"DRIVER=(.+?)\r"
+        pattern = re.compile(rexp)
+        match = pattern.search(out)
+        if not match:
+            return None
+        return match.group(1)
+
+    def get_pci_dev_driver_freebsd(self, domain_id, bus_id, devfun_id):
+        """
+        Get the driver of specified pci device.
+        """
+        return True
+
+    def get_pci_dev_id(self, domain_id, bus_id, devfun_id):
+        """
+        Get the pci id of specified pci device.
+        """
+        get_pci_dev_id = getattr(self, "get_pci_dev_id_%s" % self.get_os_type())
+        return get_pci_dev_id(domain_id, bus_id, devfun_id)
+
+    def get_pci_dev_id_linux(self, domain_id, bus_id, devfun_id):
+        """
+        Get the pci id of specified pci device on linux.
+        """
+        out = self.send_expect(
+            "cat /sys/bus/pci/devices/%s\:%s\:%s/uevent"
+            % (domain_id, bus_id, devfun_id),
+            "# ",
+            alt_session=True,
+        )
+        rexp = r"PCI_ID=(.+)"
+        pattern = re.compile(rexp)
+        match = re.search(pattern, out)
+        if not match:
+            return None
+        return match.group(1)
+
+    def get_device_numa(self, domain_id, bus_id, devfun_id):
+        """
+        Get numa number of specified pci device.
+        """
+        get_device_numa = getattr(self, "get_device_numa_%s" % self.get_os_type())
+        return get_device_numa(domain_id, bus_id, devfun_id)
+
+    def get_device_numa_linux(self, domain_id, bus_id, devfun_id):
+        """
+        Get numa number of specified pci device on Linux.
+        """
+        numa = self.send_expect(
+            "cat /sys/bus/pci/devices/%s\:%s\:%s/numa_node"
+            % (domain_id, bus_id, devfun_id),
+            "# ",
+            alt_session=True,
+        )
+
+        try:
+            numa = int(numa)
+        except ValueError:
+            numa = -1
+            self.logger.warning("NUMA not available")
+        return numa
+
+    def get_ipv6_addr(self, intf):
+        """
+        Get the IPv6 address of the specified interface.
+        """
+        get_ipv6_addr = getattr(self, "get_ipv6_addr_%s" % self.get_os_type())
+        return get_ipv6_addr(intf)
+
+    def get_ipv6_addr_linux(self, intf):
+        """
+        Get the IPv6 address of the specified interface on Linux.
+        """
+        out = self.send_expect(
+            "ip -family inet6 address show dev %s | awk '/inet6/ { print $2 }'" % intf,
+            "# ",
+            alt_session=True,
+        )
+        return out.split("/")[0]
+
+    def get_ipv6_addr_freebsd(self, intf):
+        """
+        Get the IPv6 address of the specified interface on FreeBSD.
+        """
+        out = self.send_expect("ifconfig %s" % intf, "# ", alt_session=True)
+        rexp = r"inet6 ([\da-f:]*)%"
+        pattern = re.compile(rexp)
+        match = pattern.findall(out)
+        if len(match) == 0:
+            return None
+
+        return match[0]
+
+    def disable_ipv6(self, intf):
+        """
+        Disable IPv6 on the specified interface.
+        """
+        if intf != "N/A":
+            self.send_expect(
+                "sysctl net.ipv6.conf.%s.disable_ipv6=1" % intf, "# ", alt_session=True
+            )
+
+    def enable_ipv6(self, intf):
+        """
+        Enable IPv6 on the specified interface.
+        """
+        if intf != "N/A":
+            self.send_expect(
+                "sysctl net.ipv6.conf.%s.disable_ipv6=0" % intf, "# ", alt_session=True
+            )
+
+            out = self.send_expect("ifconfig %s" % intf, "# ", alt_session=True)
+            if "inet6" not in out:
+                self.send_expect("ifconfig %s down" % intf, "# ", alt_session=True)
+                self.send_expect("ifconfig %s up" % intf, "# ", alt_session=True)
+
+    def create_file(self, contents, fileName):
+        """
+        Create file with contents and copy it to CRB.
+        """
+        with open(fileName, "w") as f:
+            f.write(contents)
+        self.session.copy_file_to(fileName, password=self.get_password())
+
+    def check_trex_process_existed(self):
+        """
+        If the tester and DUT are on the same server and the pktgen is TRex,
+        do not kill the TRex process.
+        """
+        if (
+            "pktgen" in self.crb
+            and (self.crb["pktgen"] is not None)
+            and (self.crb["pktgen"].lower() == "trex")
+        ):
+            if self.crb["IP"] == self.crb["tester IP"] and self.trex_prefix is None:
+                conf_inst = PktgenConf("trex")
+                conf_info = conf_inst.load_pktgen_config()
+                if "config_file" in conf_info:
+                    config_file = conf_info["config_file"]
+                else:
+                    config_file = "/etc/trex_cfg.yaml"
+                with open(config_file, "r") as fd:
+                    output = fd.read()
+                prefix = re.search("prefix\s*:\s*(\S*)", output)
+                if prefix is not None:
+                    self.trex_prefix = prefix.group(1)
+        return self.trex_prefix
+
+    def get_dpdk_pids(self, prefix_list, alt_session):
+        """
+        Get and kill all DPDK application processes on the CRB.
+        """
+        trex_prefix = self.check_trex_process_existed()
+        if trex_prefix is not None and trex_prefix in prefix_list:
+            prefix_list.remove(trex_prefix)
+        config_files = [
+            "/var/run/dpdk/%s/config" % file_prefix for file_prefix in prefix_list
+        ]
+        pids = []
+        pid_reg = r"p(\d+)"
+        for config_file in config_files:
+            # Covers the case where the process is run as an unprivileged user and does not generate the file
+            isfile = self.send_expect(
+                "ls -l {}".format(config_file), "# ", 20, alt_session
+            )
+            if isfile:
+                cmd = "lsof -Fp %s" % config_file
+                out = self.send_expect(cmd, "# ", 20, alt_session)
+                if len(out):
+                    lines = out.split("\r\n")
+                    for line in lines:
+                        m = re.match(pid_reg, line)
+                        if m:
+                            pids.append(m.group(1))
+                for pid in pids:
+                    self.send_expect("kill -9 %s" % pid, "# ", 20, alt_session)
+                    self.get_session_output(timeout=2)
+
+        hugepage_info = [
+            "/var/run/dpdk/%s/hugepage_info" % file_prefix
+            for file_prefix in prefix_list
+        ]
+        for hugepage in hugepage_info:
+            # Covers the case where the process is run as an unprivileged user and does not generate the file
+            isfile = self.send_expect(
+                "ls -l {}".format(hugepage), "# ", 20, alt_session
+            )
+            if isfile:
+                cmd = "lsof -Fp %s" % hugepage
+                out = self.send_expect(cmd, "# ", 20, alt_session)
+                if len(out) and "No such file or directory" not in out:
+                    self.logger.warning("Some DPDK processes did not free their hugepages")
+                    self.logger.warning("**************************************")
+                    self.logger.warning(out)
+                    self.logger.warning("**************************************")
+
+        # remove the runtime directories
+        directories = ["/var/run/dpdk/%s" % file_prefix for file_prefix in prefix_list]
+        for directory in directories:
+            cmd = "rm -rf %s" % directory
+            self.send_expect(cmd, "# ", 20, alt_session)
+
+        # delete hugepage on mnt path
+        if getattr(self, "hugepage_path", None):
+            for file_prefix in prefix_list:
+                cmd = "rm -rf %s/%s*" % (self.hugepage_path, file_prefix)
+                self.send_expect(cmd, "# ", 20, alt_session)
+
+    def kill_all(self, alt_session=True):
+        """
+        Kill all dpdk applications on CRB.
+        """
+        if "tester" in str(self):
+            self.logger.info("kill_all: called by tester")
+            pass
+        else:
+            if self.prefix_list:
+                self.logger.info("kill_all: called by dut and prefix list has value.")
+                self.get_dpdk_pids(self.prefix_list, alt_session)
+                # init prefix_list
+                self.prefix_list = []
+            else:
+                self.logger.info("kill_all: called by dut and has no prefix list.")
+                out = self.send_command(
+                    "ls -l /var/run/dpdk |awk '/^d/ {print $NF}'",
+                    timeout=0.5,
+                    alt_session=True,
+                )
+                # the last entry is the expect prompt string, e.g. [PEXPECT]#
+                if out != "":
+                    dir_list = out.split("\r\n")
+                    self.get_dpdk_pids(dir_list[:-1], alt_session)
+
+    def close(self):
+        """
+        Close the SSH sessions of the CRB.
+        """
+        self.session.close()
+        if self.alt_session:
+            self.alt_session.close()
+
+    def get_os_type(self):
+        """
+        Get OS type from execution configuration file.
+        """
+        from .dut import Dut
+
+        if isinstance(self, Dut) and "OS" in self.crb:
+            return str(self.crb["OS"]).lower()
+
+        return "linux"
+
+    def check_os_type(self):
+        """
+        Check whether the real OS type matches the configured type.
+        """
+        from .dut import Dut
+
+        expected = "Linux.*#"
+        if isinstance(self, Dut) and self.get_os_type() == "freebsd":
+            expected = "FreeBSD.*#"
+
+        self.send_expect("uname", expected, 2, alt_session=True)
+
+    def init_core_list(self):
+        """
+        Load or create core information of CRB.
+        """
+        if self.read_cache:
+            self.number_of_cores = self.serializer.load(self.NUMBER_CORES_CACHE_KEY)
+            self.cores = self.serializer.load(self.CORE_LIST_CACHE_KEY)
+
+        if not self.read_cache or self.cores is None or self.number_of_cores is None:
+            self.init_core_list_uncached()
+            self.serializer.save(self.NUMBER_CORES_CACHE_KEY, self.number_of_cores)
+            self.serializer.save(self.CORE_LIST_CACHE_KEY, self.cores)
+
+    def init_core_list_uncached(self):
+        """
+        Scan cores on CRB and create core information list.
+        """
+        init_core_list_uncached = getattr(
+            self, "init_core_list_uncached_%s" % self.get_os_type()
+        )
+        init_core_list_uncached()
+
+    def init_core_list_uncached_freebsd(self):
+        """
+        Scan cores in Freebsd and create core information list.
+        """
+        self.cores = []
+
+        import xml.etree.ElementTree as ET
+
+        out = self.send_expect("sysctl -n kern.sched.topology_spec", "# ")
+
+        cpu_xml = ET.fromstring(out)
+
+        # WARNING: HARDCODED VALUES FOR CROWN PASS IVB
+        thread = 0
+        socket_id = 0
+
+        sockets = cpu_xml.findall(".//group[@level='2']")
+        for socket in sockets:
+            core_id = 0
+            core_elements = socket.findall(".//children/group/cpu")
+            for core in core_elements:
+                threads = [int(x) for x in core.text.split(",")]
+                for thread in threads:
+                    if thread != 0:
+                        self.cores.append(
+                            {"socket": socket_id, "core": core_id, "thread": thread}
+                        )
+                core_id += 1
+            socket_id += 1
+        self.number_of_cores = len(self.cores)
+
+    def init_core_list_uncached_linux(self):
+        """
+        Scan cores in linux and create core information list.
+        """
+        self.cores = []
+
+        cpuinfo = self.send_expect(
+            "lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \#", "#", alt_session=True
+        )
+
+        cpuinfo = [i for i in cpuinfo.split() if re.match(r"^\d.+", i)]
+        # core IDs are not contiguous on Haswell CPUs (Cottonwood),
+        # so an additional core ID map is needed
+        core_id = 0
+        coremap = {}
+        for line in cpuinfo:
+            (thread, core, socket, node) = line.split(",")[0:4]
+
+            if core not in list(coremap.keys()):
+                coremap[core] = core_id
+                core_id += 1
+
+            if self.crb["bypass core0"] and core == "0" and socket == "0":
+                self.logger.info("Core0 bypassed")
+                continue
+            if (
+                self.crb.get("dut arch") == "arm64"
+                or self.crb.get("dut arch") == "ppc64"
+            ):
+                self.cores.append(
+                    {"thread": thread, "socket": node, "core": coremap[core]}
+                )
+            else:
+                self.cores.append(
+                    {"thread": thread, "socket": socket, "core": coremap[core]}
+                )
+
+        self.number_of_cores = len(self.cores)
+
+    def get_all_cores(self):
+        """
+        Return core information list.
+        """
+        return self.cores
+
+    def remove_hyper_core(self, core_list, key=None):
+        """
+        Remove hyperthreaded lcores from the core list.
+        """
+        found = set()
+        for core in core_list:
+            val = core if key is None else key(core)
+            if val not in found:
+                yield core
+                found.add(val)
+
+    def init_reserved_core(self):
+        """
+        Remove hyperthread cores from reserved list.
+        """
+        partial_cores = self.cores
+        # remove hyper-threading core
+        self.reserved_cores = list(
+            self.remove_hyper_core(
+                partial_cores, key=lambda d: (d["core"], d["socket"])
+            )
+        )
+
+    def remove_reserved_cores(self, core_list, args):
+        """
+        Remove cores from reserved cores.
+        """
+        indexes = sorted(args, reverse=True)
+        for index in indexes:
+            del core_list[index]
+        return core_list
+
+    def get_reserved_core(self, config, socket):
+        """
+        Get reserved cores by core config and socket id.
+        """
+        m = re.match("([1-9]+)C", config)
+        if m is None:
+            return []
+        nr_cores = int(m.group(1))
+
+        partial_cores = [n for n in self.reserved_cores if int(n["socket"]) == socket]
+        if len(partial_cores) < nr_cores:
+            return []
+
+        thread_list = [partial_cores[n]["thread"] for n in range(nr_cores)]
+
+        # remove used core from reserved_cores
+        rsv_list = [n for n in range(nr_cores)]
+        self.reserved_cores = self.remove_reserved_cores(partial_cores, rsv_list)
+
+        # return thread list
+        return list(map(str, thread_list))
+
+    def get_core_list(self, config, socket=-1, from_last=False):
+        """
+        Get lcore array according to the core config like "all", "1S/1C/1T".
+        We can specify the physical CPU socket by the "socket" parameter.
+        """
+        if config == "all":
+            cores = []
+            if socket != -1:
+                for core in self.cores:
+                    if int(core["socket"]) == socket:
+                        cores.append(core["thread"])
+            else:
+                cores = [core["thread"] for core in self.cores]
+            return cores
+
+        m = re.match("([1234])S/([0-9]+)C/([12])T", config)
+
+        if m:
+            nr_sockets = int(m.group(1))
+            nr_cores = int(m.group(2))
+            nr_threads = int(m.group(3))
+
+            partial_cores = self.cores
+
+            # If no socket is specified, sockList holds every socket present
+            # (e.g. [0, 1] on a NUMA system); otherwise only the requested one
+            if socket < 0:
+                sockList = set([int(core["socket"]) for core in partial_cores])
+            else:
+                sockList = set(
+                    [int(n["socket"]) for n in partial_cores if int(n["socket"]) == socket]
+                )
+
+            if from_last:
+                sockList = list(sockList)[-nr_sockets:]
+            else:
+                sockList = list(sockList)[:nr_sockets]
+            partial_cores = [n for n in partial_cores if int(n["socket"]) in sockList]
+            core_list = set([int(n["core"]) for n in partial_cores])
+            core_list = list(core_list)
+            thread_list = set([int(n["thread"]) for n in partial_cores])
+            thread_list = list(thread_list)
+
+            # filter usable core to core_list
+            temp = []
+            for sock in sockList:
+                core_list = set(
+                    [int(n["core"]) for n in partial_cores if int(n["socket"]) == sock]
+                )
+                if from_last:
+                    core_list = list(core_list)[-nr_cores:]
+                else:
+                    core_list = list(core_list)[:nr_cores]
+                temp.extend(core_list)
+
+            core_list = temp
+
+            # if the system has fewer cores than requested, just use all cores in the socket
+            if len(core_list) < (nr_cores * nr_sockets):
+                partial_cores = self.cores
+                sockList = set([int(n["socket"]) for n in partial_cores])
+
+                if from_last:
+                    sockList = list(sockList)[-nr_sockets:]
+                else:
+                    sockList = list(sockList)[:nr_sockets]
+                partial_cores = [
+                    n for n in partial_cores if int(n["socket"]) in sockList
+                ]
+
+                temp = []
+                for sock in sockList:
+                    core_list = list(
+                        [
+                            int(n["thread"])
+                            for n in partial_cores
+                            if int(n["socket"]) == sock
+                        ]
+                    )
+                    if from_last:
+                        core_list = core_list[-nr_cores:]
+                    else:
+                        core_list = core_list[:nr_cores]
+                    temp.extend(core_list)
+
+                core_list = temp
+
+            partial_cores = [n for n in partial_cores if int(n["core"]) in core_list]
+            temp = []
+            if len(core_list) < nr_cores or len(sockList) < nr_sockets:
+                raise ValueError(
+                    "Cannot get the requested core configuration: "
+                    "requested {}, have {}".format(config, self.cores)
+                )
+            # recheck the core_list and create the thread_list
+            i = 0
+            for sock in sockList:
+                coreList_aux = [
+                    int(core_list[n])
+                    for n in range((nr_cores * i), (nr_cores * i + nr_cores))
+                ]
+                for core in coreList_aux:
+                    thread_list = list(
+                        [
+                            int(n["thread"])
+                            for n in partial_cores
+                            if ((int(n["core"]) == core) and (int(n["socket"]) == sock))
+                        ]
+                    )
+                    if from_last:
+                        thread_list = thread_list[-nr_threads:]
+                    else:
+                        thread_list = thread_list[:nr_threads]
+                    temp.extend(thread_list)
+                    thread_list = temp
+                i += 1
+            return list(map(str, thread_list))
+
+    def get_lcore_id(self, config, inverse=False):
+        """
+        Get lcore id of specified core by config "C{socket.core.thread}"
+        """
+
+        m = re.match("C{([01]).(\d+).([01])}", config)
+
+        if m:
+            sockid = m.group(1)
+            coreid = int(m.group(2))
+            if inverse:
+                coreid += 1
+                coreid = -coreid
+            threadid = int(m.group(3))
+            if inverse:
+                threadid += 1
+                threadid = -threadid
+
+            perSocklCs = [_ for _ in self.cores if _["socket"] == sockid]
+            coreNum = perSocklCs[coreid]["core"]
+
+            perCorelCs = [_ for _ in perSocklCs if _["core"] == coreNum]
+
+            return perCorelCs[threadid]["thread"]
+
+    def get_port_info(self, pci):
+        """
+        return port info by pci id
+        """
+        for port_info in self.ports_info:
+            if port_info["pci"] == pci:
+                return port_info
+
+    def get_port_pci(self, port_id):
+        """
+        return port pci address by port index
+        """
+        return self.ports_info[port_id]["pci"]
+
+    def enable_promisc(self, intf):
+        if intf != "N/A":
+            self.send_expect("ifconfig %s promisc" % intf, "# ", alt_session=True)
+
+    def get_priv_flags_state(self, intf, flag, timeout=TIMEOUT):
+        """
+
+        :param intf: nic name
+        :param flag: priv-flags flag
+        :return: flag state
+        """
+        check_flag = "ethtool --show-priv-flags %s" % intf
+        out = self.send_expect(check_flag, "# ", timeout)
+        p = re.compile("%s\s*:\s+(\w+)" % flag)
+        state = re.search(p, out)
+        if state:
+            return state.group(1)
+        else:
+            self.logger.info("NIC %s may be not find %s" % (intf, flag))
+            return False
+
+    def is_interface_up(self, intf, timeout=15):
+        """
+        Check the port link status and wait until it is up or the timeout expires.
+        """
+        for i in range(timeout):
+            link_status = self.get_interface_link_status(intf)
+            if link_status == "Up":
+                return True
+            time.sleep(1)
+        self.logger.error(f"check and wait {intf} link up timeout")
+        return False
+
+    def is_interface_down(self, intf, timeout=15):
+        """
+        Check the port link status and wait until it is down or the timeout expires.
+        """
+        for i in range(timeout):
+            link_status = self.get_interface_link_status(intf)
+            if link_status == "Down":
+                return True
+            time.sleep(1)
+        self.logger.error(f"check and wait {intf} link down timeout")
+        return False
+
+    def get_interface_link_status(self, intf):
+        out = self.send_expect(f"ethtool {intf}", "#")
+        link_status_matcher = r"Link detected: (\w+)"
+        link_status = re.search(link_status_matcher, out).groups()[0]
+        return "Up" if link_status == "yes" else "Down"
-- 
2.20.1



^ permalink raw reply	[flat|nested] 19+ messages in thread
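The `"<n>S/<n>C/<n>T"` core-config handling in `get_core_list()` above is intertwined with fallback and caching logic. As a simplified standalone sketch (not the DTS implementation; `CORES` and `pick_cores` are illustrative names), the selection of sockets, cores, and threads works like this:

```python
import re

# Core entries mirror the dicts built by init_core_list_uncached_linux():
# one entry per hardware thread, with string values as parsed from lscpu.
CORES = [
    {"thread": "0", "core": "0", "socket": "0"},
    {"thread": "1", "core": "1", "socket": "0"},
    {"thread": "2", "core": "0", "socket": "1"},
    {"thread": "3", "core": "1", "socket": "1"},
]

def pick_cores(config, cores):
    """Return thread ids matching a config such as '1S/2C/1T'."""
    m = re.match(r"([1234])S/([0-9]+)C/([12])T", config)
    if m is None:
        raise ValueError("unsupported core config: %s" % config)
    nr_sockets, nr_cores, nr_threads = (int(g) for g in m.groups())

    # take the first nr_sockets sockets, then per socket the first
    # nr_cores cores, then per core the first nr_threads threads
    sockets = sorted({int(c["socket"]) for c in cores})[:nr_sockets]
    picked = []
    for sock in sockets:
        socket_cores = sorted(
            {int(c["core"]) for c in cores if int(c["socket"]) == sock}
        )[:nr_cores]
        for core in socket_cores:
            threads = [
                c["thread"]
                for c in cores
                if int(c["socket"]) == sock and int(c["core"]) == core
            ][:nr_threads]
            picked.extend(threads)
    return picked

print(pick_cores("1S/2C/1T", CORES))  # ['0', '1']
print(pick_cores("2S/1C/1T", CORES))  # ['0', '2']
```

The real method additionally supports `from_last` selection, an `"all"` config, and falls back to counting threads as cores when a socket has fewer physical cores than requested.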

* [RFC PATCH v1 02/18] dts: merge DTS framework/dut.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 01/18] dts: merge DTS framework/crb.py " Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 03/18] dts: merge DTS framework/ixia_buffer_parser.py " Juraj Linkeš
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/framework/dut.py | 1727 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 1727 insertions(+)
 create mode 100644 dts/framework/dut.py

diff --git a/dts/framework/dut.py b/dts/framework/dut.py
new file mode 100644
index 0000000000..a2a9373448
--- /dev/null
+++ b/dts/framework/dut.py
@@ -0,0 +1,1727 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+import os
+import re
+import threading
+import time
+from typing import Dict, List, Optional, Union
+from uuid import uuid4
+
+import framework.settings as settings
+from nics.net_device import GetNicObj
+
+from .config import AppNameConf, PortConf
+from .crb import Crb
+from .exception import ParameterInvalidException
+from .settings import LOG_NAME_SEP, NICS
+from .ssh_connection import SSHConnection
+from .test_result import ResultTable
+from .utils import RED, remove_old_rsa_key
+from .virt_resource import VirtResource
+
+
+class Dut(Crb):
+
+    """
+    A connection to the CRB under test.
+    This class sends commands to the CRB and validates the responses. It is
+    implemented using either ssh for linuxapp or the terminal server for
+    baremetal.
+    All operations are in fact delegated to an instance of either CRBLinuxApp
+    or CRBBareMetal.
+    """
+
+    PORT_MAP_CACHE_KEY = "dut_port_map"
+    PORT_INFO_CACHE_KEY = "dut_port_info"
+    NUMBER_CORES_CACHE_KEY = "dut_number_cores"
+    CORE_LIST_CACHE_KEY = "dut_core_list"
+    PCI_DEV_CACHE_KEY = "dut_pci_dev_info"
+
+    def __init__(self, crb, serializer, dut_id=0, name=None, alt_session=True):
+        if not name:
+            name = "dut" + LOG_NAME_SEP + "%s" % crb["My IP"]
+            self.NAME = name
+        super(Dut, self).__init__(crb, serializer, dut_id, name, alt_session)
+        self.host_init_flag = False
+        self.number_of_cores = 0
+        self.tester = None
+        self.cores = []
+        self.architecture = None
+        self.conf = PortConf()
+        self.ports_map = []
+        self.virt_pool = None
+        # hypervisor pid list, used for cleanup
+        self.virt_pids = []
+        self.prefix_subfix = (
+            str(os.getpid()) + "_" + time.strftime("%Y%m%d%H%M%S", time.localtime())
+        )
+        self.hugepage_path = None
+        self.apps_name_conf = {}
+        self.apps_name = {}
+        self.dpdk_version = ""
+        self.nic = None
+
+    def filter_cores_from_crb_cfg(self):
+        # get core list from crbs.cfg
+        core_list = []
+        all_core_list = [str(core["core"]) for core in self.cores]
+        core_list_str = self.crb["dut_cores"]
+        if core_list_str == "":
+            core_list = all_core_list[:]
+        else:
+            range_cores = []
+            for item in core_list_str.split(","):
+                if "-" in item:
+                    tmp = item.split("-")
+                    range_cores.extend(
+                        [str(i) for i in range(int(tmp[0]), int(tmp[1]) + 1)]
+                    )
+                else:
+                    core_list.append(item)
+            core_list.extend(range_cores)
+
+        abnormal_core_list = []
+        for core in core_list:
+            if core not in all_core_list:
+                abnormal_core_list.append(core)
+
+        if abnormal_core_list:
+            self.logger.info(
+                "cores %s are not in the system core list %s"
+                % (abnormal_core_list, all_core_list)
+            )
+            raise Exception("configured cores are out of the system's range")
+
+        core_list = [core for core in self.cores if str(core["core"]) in core_list]
+        self.cores = core_list
+        self.number_of_cores = len(self.cores)
+
+    def create_eal_parameters(
+        self,
+        fixed_prefix: bool = False,
+        socket: Optional[int] = None,
+        cores: Union[str, List[int], List[str]] = "default",
+        ports: Union[List[str], List[int]] = None,
+        port_options: Dict[Union[str, int], str] = None,
+        prefix: str = "",
+        no_pci: bool = False,
+        b_ports: Union[List[str], List[int]] = None,
+        vdevs: List[str] = None,
+        other_eal_param: str = "",
+    ) -> str:
+        """
+        Generate an EAL parameter string.
+        :param fixed_prefix: whether to use a fixed file-prefix; when true,
+                             no timestamp is appended to the file-prefix
+        :param socket: the physical CPU socket index; -1 means any socket;
+        :param cores: set the core info, eg:
+                        cores=[0,1,2,3],
+                        cores=['0', '1', '2', '3'],
+                        cores='default',
+                        cores='1S/4C/1T',
+                        cores='all';
+        :param ports: set PCI allow list, eg:
+                        ports=['0000:1a:00.0', '0000:1a:00.1'],
+                        ports=[0, 1];
+        :param port_options: set options of port, eg:
+                        port_options={'0000:1a:00.0': "proto_xtr=vlan"},
+                        port_options={0: "cap=dcf"};
+        :param prefix: set file prefix string, eg:
+                        prefix='vf';
+        :param no_pci: switch of disable PCI bus eg:
+                        no_pci=True;
+        :param b_ports: skip probing a PCI device to prevent EAL from using it, eg:
+                        b_ports=['0000:1a:00.0'],
+                        b_ports=[0];
+        :param vdevs: virtual device list, eg:
+                        vdevs=['net_ring0', 'net_ring1'];
+        :param other_eal_param: user defined DPDK eal parameters, eg:
+                        other_eal_param='--single-file-segments';
+        :return: eal param string, eg:
+                '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420';
+        if DPDK version < 20.11-rc4, eal_str eg:
+                '-c 0xf -w 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420';
+        """
+        if ports is None:
+            ports = []
+
+        if port_options is None:
+            port_options = {}
+
+        if b_ports is None:
+            b_ports = []
+
+        if vdevs is None:
+            vdevs = []
+
+        if socket is None:
+            socket = -1
+
+        config = {
+            "cores": cores,
+            "ports": ports,
+            "port_options": port_options,
+            "prefix": prefix,
+            "no_pci": no_pci,
+            "b_ports": b_ports,
+            "vdevs": vdevs,
+            "other_eal_param": other_eal_param,
+        }
+
+        eal_parameter_creator = _EalParameter(
+            dut=self, fixed_prefix=fixed_prefix, socket=socket, **config
+        )
+        eal_str = eal_parameter_creator.make_eal_param()
+
+        return eal_str
+
+    def get_eal_of_prefix(self, prefix=None):
+
+        if prefix:
+            file_prefix = [
+                prefix_name for prefix_name in self.prefix_list if prefix in prefix_name
+            ]
+        else:
+            file_prefix = "dpdk" + "_" + self.prefix_subfix
+
+        return file_prefix
+
+    def init_host_session(self, vm_name):
+        """
+        Create session for each VM, session will be handled by VM instance
+        """
+        self.host_session = SSHConnection(
+            self.get_ip_address(),
+            vm_name + "_host",
+            self.get_username(),
+            self.get_password(),
+        )
+        self.host_session.init_log(self.logger)
+        self.logger.info(
+            "[%s] create new session for VM" % (threading.current_thread().name)
+        )
+
+    def new_session(self, suite=""):
+        """
+        Create new session for dut instance. Session name will be unique.
+        """
+        if len(suite):
+            session_name = self.NAME + "_" + suite
+        else:
+            session_name = self.NAME + "_" + str(uuid4())
+        session = self.create_session(name=session_name)
+        if suite != "":
+            session.logger.config_suite(suite, self.NAME)
+        else:
+            session.logger.config_execution(self.NAME)
+
+        if getattr(self, "base_dir", None):
+            session.send_expect("cd %s" % self.base_dir, "# ")
+
+        return session
+
+    def close_session(self, session):
+        """
+        close new session in dut instance
+        """
+        self.destroy_session(session)
+
+    def change_config_option(self, target, parameter, value):
+        """
+        This function change option in the config file
+        """
+        self.send_expect(
+            "sed -i 's/%s=.*$/%s=%s/'  config/defconfig_%s"
+            % (parameter, parameter, value, target),
+            "# ",
+        )
+
+    def set_nic_type(self, nic_type):
+        """
+        Set CRB NICs ready to be validated.
+        """
+        self.nic_type = nic_type
+        if "cfg" in nic_type:
+            self.conf.load_ports_config(self.get_ip_address())
+
+    def set_toolchain(self, target):
+        """
+        This looks at the current target and instantiates an attribute to
+        be either a CRBLinuxApp or CRBBareMetal object. These latter two
+        classes are private and should not be used directly by client code.
+        """
+        self.kill_all()
+        self.target = target
+        [arch, _, _, toolchain] = target.split("-")
+
+        if toolchain == "icc":
+            icc_vars = os.getenv("ICC_VARS", "/opt/intel/composer_xe_2013/bin/")
+            icc_vars += "compilervars.sh"
+
+            if arch == "x86_64":
+                icc_arch = "intel64"
+            elif arch == "i686":
+                icc_arch = "ia32"
+            self.send_expect("source " + icc_vars + " " + icc_arch, "# ")
+
+        self.architecture = arch
+
+    def mount_procfs(self):
+        """
+        Mount proc file system.
+        """
+        mount_procfs = getattr(self, "mount_procfs_%s" % self.get_os_type())
+        mount_procfs()
+
+    def mount_procfs_linux(self):
+        pass
+
+    def mount_procfs_freebsd(self):
+        """
+        Mount proc file system in Freebsd.
+        """
+        self.send_expect("mount -t procfs proc /proc", "# ")
+
+    def get_ip_address(self):
+        """
+        Get DUT's ip address.
+        """
+        return self.crb["IP"]
+
+    def get_password(self):
+        """
+        Get DUT's login password.
+        """
+        return self.crb["pass"]
+
+    def get_username(self):
+        """
+        Get DUT's login username.
+        """
+        return self.crb["user"]
+
+    def dut_prerequisites(self):
+        """
+        Prerequisite function, to be called before executing any test case.
+        Scans all lcore information on the DUT, then collects NIC device
+        information via a PCI scan, and finally sets up the DUT environment
+        for validation.
+        """
+        out = self.send_expect("cd %s" % self.base_dir, "# ")
+        assert "No such file or directory" not in out, "Can't switch to dpdk folder!!!"
+        out = self.send_expect("cat VERSION", "# ")
+        if "No such file or directory" in out:
+            self.logger.error("Can't get DPDK version due to VERSION not exist!!!")
+        else:
+            self.dpdk_version = out
+        self.send_expect("alias ls='ls --color=none'", "#")
+
+        if self.get_os_type() == "freebsd":
+            self.send_expect("alias make=gmake", "# ")
+            self.send_expect("alias sed=gsed", "# ")
+
+        self.init_core_list()
+        self.filter_cores_from_crb_cfg()
+        self.pci_devices_information()
+        # make sure ipv6 enable before scan
+        self.enable_tester_ipv6()
+        # scan ports before restore interface
+        self.scan_ports()
+        # restore dut ports to kernel
+        self.restore_interfaces()
+        # rescan ports after interface up
+        self.rescan_ports()
+        # load port information from config file
+        self.load_portconf()
+        self.mount_procfs()
+        # auto detect network topology
+        self.map_available_ports()
+        # disable tester port ipv6
+        self.disable_tester_ipv6()
+        self.get_nic_configurations()
+
+        # print latest ports_info
+        for port_info in self.ports_info:
+            self.logger.info(port_info)
+
+        if self.ports_map is None or len(self.ports_map) == 0:
+            self.logger.warning("ports_map should not be empty, please check all links")
+
+        # initialize virtualization resource pool
+        self.virt_pool = VirtResource(self)
+
+        # load app name conf
+        name_cfg = AppNameConf()
+        self.apps_name_conf = name_cfg.load_app_name_conf()
+
+    def get_nic_configurations(self):
+        retry_times = 3
+        if self.ports_info:
+            self.nic = self.ports_info[0]["port"]
+            self.nic.get_driver_firmware()
+            if self.nic.default_driver == "ice":
+                self.get_nic_pkg(retry_times)
+
+    def get_nic_pkg(self, retry_times=3):
+        self.nic.pkg = self.nic.get_nic_pkg()
+        while not self.nic.pkg.get("type") and retry_times > 0:
+            self.restore_interfaces()
+            self.nic.pkg = self.nic.get_nic_pkg()
+            retry_times = retry_times - 1
+        self.logger.info("pkg: {}".format(self.nic.pkg))
+        if not self.nic.pkg:
+            raise Exception("Get nic pkg failed")
+
+    def restore_interfaces(self):
+        """
+        Restore all ports' interfaces.
+        """
+        # no need to restore; all info has been recorded
+        if self.read_cache:
+            return
+
+        restore_interfaces = getattr(self, "restore_interfaces_%s" % self.get_os_type())
+        return restore_interfaces()
+
+    def restore_interfaces_freebsd(self):
+        """
+        Restore FreeBSD interfaces.
+        """
+        self.send_expect("kldunload nic_uio.ko", "#")
+        self.send_expect("kldunload contigmem.ko", "#")
+        self.send_expect("kldload if_ixgbe.ko", "#", 20)
+
+    def stop_ports(self):
+        """
+        After all execution is done, the NICs should be stopped.
+        """
+        for (pci_bus, pci_id) in self.pci_devices_info:
+            driver = settings.get_nic_driver(pci_id)
+            if driver is not None:
+                # unbind device driver
+                addr_array = pci_bus.split(":")
+                domain_id = addr_array[0]
+                bus_id = addr_array[1]
+                devfun_id = addr_array[2]
+                port = GetNicObj(self, domain_id, bus_id, devfun_id)
+                port.stop()
+
+    def restore_interfaces_linux(self):
+        """
+        Restore Linux interfaces.
+        """
+        for port in self.ports_info:
+            pci_bus = port["pci"]
+            pci_id = port["type"]
+            # get device driver
+            driver = settings.get_nic_driver(pci_id)
+            if driver is not None:
+                # unbind device driver
+                addr_array = pci_bus.split(":")
+                domain_id = addr_array[0]
+                bus_id = addr_array[1]
+                devfun_id = addr_array[2]
+
+                port = GetNicObj(self, domain_id, bus_id, devfun_id)
+
+                self.send_expect(
+                    "echo %s > /sys/bus/pci/devices/%s\:%s\:%s/driver/unbind"
+                    % (pci_bus, domain_id, bus_id, devfun_id),
+                    "# ",
+                    timeout=30,
+                )
+                # bind to linux kernel driver
+                self.send_expect("modprobe %s" % driver, "# ")
+                self.send_expect(
+                    "echo %s > /sys/bus/pci/drivers/%s/bind" % (pci_bus, driver), "# "
+                )
+                pull_retries = 5
+                itf = "N/A"
+                while pull_retries > 0:
+                    itf = port.get_interface_name()
+                    if not itf or itf == "N/A":
+                        time.sleep(1)
+                        pull_retries -= 1
+                    else:
+                        break
+                else:
+                    # try to bind nic with iavf
+                    if driver == "iavf":
+                        self.send_expect("modprobe %s" % driver, "# ")
+                        self.send_expect(
+                            "echo %s > /sys/bus/pci/drivers/%s/bind"
+                            % (pci_bus, driver),
+                            "# ",
+                        )
+                        pull_retries = 5
+                        itf = "N/A"
+                        while pull_retries > 0:
+                            itf = port.get_interface_name()
+                            if not itf or itf == "N/A":
+                                time.sleep(1)
+                                pull_retries -= 1
+                            else:
+                                break
+                if itf == "N/A":
+                    self.logger.warning("Fail to bind the device with the linux driver")
+                else:
+                    self.send_expect("ifconfig %s up" % itf, "# ")
+            else:
+                self.logger.info(
+                    "NOT FOUND DRIVER FOR PORT (%s|%s)!!!" % (pci_bus, pci_id)
+                )
+
+    def setup_memory(self, hugepages=-1):
+        """
+        Setup hugepage on DUT.
+        """
+        try:
+            function_name = "setup_memory_%s" % self.get_os_type()
+            setup_memory = getattr(self, function_name)
+            setup_memory(hugepages)
+        except AttributeError:
+            self.logger.error("%s is not implemented" % function_name)
+
+    def get_def_rte_config(self, config):
+        """
+        Get RTE configuration from config/defconfig_*.
+        """
+        out = self.send_expect(
+            "cat config/defconfig_%s | sed '/^#/d' | sed '/^\s*$/d'" % self.target, "# "
+        )
+
+        def_rte_config = re.findall(config + "=(\S+)", out)
+        if def_rte_config:
+            return def_rte_config[0]
+        else:
+            return None
+
+    def setup_memory_linux(self, hugepages=-1):
+        """
+        Setup Linux hugepages.
+        """
+        if self.virttype == "XEN":
+            return
+        hugepages_size = self.send_expect(
+            "awk '/Hugepagesize/ {print $2}' /proc/meminfo", "# "
+        )
+        total_huge_pages = self.get_total_huge_pages()
+        numa_nodes = self.send_expect("ls /sys/devices/system/node | grep node*", "# ")
+        if not numa_nodes:
+            total_numa_nodes = -1
+        else:
+            numa_nodes = numa_nodes.splitlines()
+            total_numa_nodes = len(numa_nodes)
+            self.logger.info(numa_nodes)
+
+        force_socket = False
+
+        if int(hugepages_size) < (1024 * 1024):
+            if self.architecture == "x86_64":
+                arch_huge_pages = hugepages if hugepages > 0 else 4096
+            elif self.architecture == "i686":
+                arch_huge_pages = hugepages if hugepages > 0 else 512
+                force_socket = True
+            # set huge pagesize for x86_x32 abi target
+            elif self.architecture == "x86_x32":
+                arch_huge_pages = hugepages if hugepages > 0 else 256
+                force_socket = True
+            elif self.architecture == "ppc_64":
+                arch_huge_pages = hugepages if hugepages > 0 else 512
+            elif self.architecture == "arm64":
+                if int(hugepages_size) >= (512 * 1024):
+                    arch_huge_pages = hugepages if hugepages > 0 else 8
+                else:
+                    arch_huge_pages = hugepages if hugepages > 0 else 2048
+
+            if total_huge_pages != arch_huge_pages:
+                if total_numa_nodes == -1:
+                    self.set_huge_pages(arch_huge_pages)
+                else:
+                    # Hugepages used to be distributed evenly across all
+                    # sockets, but creating the mbuf pool on socket 0
+                    # sometimes failed during testpmd setup, so allocate
+                    # all hugepages on the first socket instead.
+                    if force_socket:
+                        self.set_huge_pages(arch_huge_pages, numa_nodes[0])
+                        self.logger.info("force_socket on %s" % numa_nodes[0])
+                    else:
+                        numa_service_num = self.get_def_rte_config(
+                            "CONFIG_RTE_MAX_NUMA_NODES"
+                        )
+                        if numa_service_num is not None:
+                            total_numa_nodes = min(
+                                total_numa_nodes, int(numa_service_num)
+                            )
+
+                        # set huge pages to configured total_numa_nodes
+                        for numa_node in numa_nodes[:total_numa_nodes]:
+                            self.set_huge_pages(arch_huge_pages, numa_node)
+
+        self.mount_huge_pages()
+        self.hugepage_path = self.strip_hugepage_path()
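The per-architecture defaults chosen above can be summarized in a standalone helper (a sketch; the function name and lookup table are illustrative, and unknown architectures are rejected here for clarity where the original simply falls through):

```python
def default_hugepages(architecture: str, hugepages_size_kb: int, hugepages: int = -1) -> int:
    """Per-architecture default hugepage counts, mirroring setup_memory_linux."""
    if hugepages > 0:
        return hugepages  # an explicit request always wins
    defaults = {"x86_64": 4096, "i686": 512, "x86_x32": 256, "ppc_64": 512}
    if architecture in defaults:
        return defaults[architecture]
    if architecture == "arm64":
        # arm64 needs few pages when the page size itself is large (>= 512 MB)
        return 8 if hugepages_size_kb >= 512 * 1024 else 2048
    raise ValueError("unsupported architecture: %s" % architecture)
```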
+
+    def setup_memory_freebsd(self, hugepages=-1):
+        """
+        Set up FreeBSD hugepages.
+        """
+        if hugepages == -1:
+            hugepages = 4096
+
+        num_buffers = hugepages // 1024
+        if num_buffers:
+            self.send_expect("kenv hw.contigmem.num_buffers=%d" % num_buffers, "#")
+
+        self.send_expect("kldunload contigmem.ko", "#")
+        self.send_expect("kldload ./%s/kmod/contigmem.ko" % self.target, "#")
+
+    def taskset(self, core):
+        if self.get_os_type() != "linux":
+            return ""
+
+        return "taskset %s " % core
+
+    def is_ssh_session_port(self, pci_bus):
+        """
+        Check whether the PCI device is the DUT's SSH session port.
+        """
+        port = None
+        for port_info in self.ports_info:
+            if pci_bus == port_info["pci"]:
+                port = port_info["port"]
+                break
+        if port and port.get_ipv4_addr() == self.get_ip_address().strip():
+            return True
+        else:
+            return False
+
+    def get_dpdk_bind_script(self):
+        op = self.send_expect("ls", "#")
+        if "usertools" in op:
+            res = "usertools/dpdk-devbind.py"
+        else:
+            op = self.send_expect("ls tools", "#")
+            if "dpdk_nic_bind.py" in op:
+                res = "tools/dpdk_nic_bind.py"
+            else:
+                res = "tools/dpdk-devbind.py"
+        return res
+
+    def bind_interfaces_linux(self, driver="igb_uio", nics_to_bind=None):
+        """
+        Bind the interfaces to the selected driver. nics_to_bind can be None
+        to bind all interfaces, or a list of port indexes.
+        """
+
+        binding_list = "--bind=%s " % driver
+
+        current_nic = 0
+        for (pci_bus, pci_id) in self.pci_devices_info:
+            if settings.accepted_nic(pci_id):
+                if self.is_ssh_session_port(pci_bus):
+                    continue
+
+                if nics_to_bind is None or current_nic in nics_to_bind:
+                    binding_list += "%s " % (pci_bus)
+
+                current_nic += 1
+        if current_nic == 0:
+            self.logger.info("No NIC needs binding to driver: %s" % driver)
+            return
+        bind_script_path = self.get_dpdk_bind_script()
+        self.send_expect("%s --force %s" % (bind_script_path, binding_list), "# ")
+
+    def unbind_interfaces_linux(self, nics_to_bind=None):
+        """
+        Unbind the interfaces.
+        """
+
+        binding_list = "-u "
+
+        current_nic = 0
+        for (pci_bus, pci_id) in self.pci_devices_info:
+            if settings.accepted_nic(pci_id):
+                if self.is_ssh_session_port(pci_bus):
+                    continue
+
+                if nics_to_bind is None or current_nic in nics_to_bind:
+                    binding_list += "%s " % (pci_bus)
+
+                current_nic += 1
+
+        if current_nic == 0:
+            self.logger.info("No NIC needs unbinding")
+            return
+
+        bind_script_path = self.get_dpdk_bind_script()
+        self.send_expect("%s --force %s" % (bind_script_path, binding_list), "# ")
+
+    def bind_eventdev_port(self, driver="vfio-pci", port_to_bind=None):
+        """
+        Bind the eventdev interfaces to the selected driver. port_to_bind
+        defaults to None and can be changed at run time.
+        """
+
+        binding_list = "--bind=%s %s" % (driver, port_to_bind)
+        bind_script_path = self.get_dpdk_bind_script()
+        self.send_expect("%s --force %s" % (bind_script_path, binding_list), "# ")
+
+    def set_eventdev_port_limits(self, device_id, port):
+        """
+        Set the eventdev port SSO and SSOW limits.
+        """
+
+        bind_script_path = self.get_dpdk_bind_script()
+        eventdev_ports = self.send_expect(
+            '%s -s |grep -e %s | cut -d " " -f1' % (bind_script_path, device_id), "#"
+        )
+        eventdev_ports = eventdev_ports.split("\r\n")
+        for eventdev_port in eventdev_ports:
+            self.send_expect(
+                "echo 0 >  /sys/bus/pci/devices/%s/limits/sso" % (eventdev_port), "#"
+            )
+            self.send_expect(
+                "echo 0 >  /sys/bus/pci/devices/%s/limits/ssow" % (eventdev_port), "#"
+            )
+        for eventdev_port in eventdev_ports:
+            if eventdev_port == port:
+                self.send_expect(
+                    "echo 1 >  /sys/bus/pci/devices/%s/limits/tim" % (eventdev_port),
+                    "#",
+                )
+                self.send_expect(
+                    "echo 1 >  /sys/bus/pci/devices/%s/limits/npa" % (eventdev_port),
+                    "#",
+                )
+                self.send_expect(
+                    "echo 10 >  /sys/bus/pci/devices/%s/limits/sso" % (eventdev_port),
+                    "#",
+                )
+                self.send_expect(
+                    "echo 32 >  /sys/bus/pci/devices/%s/limits/ssow" % (eventdev_port),
+                    "#",
+                )
+
+    def unbind_eventdev_port(self, port_to_unbind=None):
+        """
+        Unbind the eventdev interfaces from their current driver.
+        port_to_unbind defaults to None and can be changed at run time.
+        """
+
+        binding_list = "-u  %s" % (port_to_unbind)
+        bind_script_path = self.get_dpdk_bind_script()
+        self.send_expect("%s  %s" % (bind_script_path, binding_list), "# ")
+
+    def get_ports(self, nic_type="any", perf=None, socket=None):
+        """
+        Return the DUT port list, filtered by NIC type, by whether an IXIA
+        performance test is run, and by the requested socket.
+        """
+        ports = []
+
+        if nic_type == "any":
+            for portid in range(len(self.ports_info)):
+                ports.append(portid)
+            return ports
+        elif nic_type == "cfg":
+            for portid in range(len(self.ports_info)):
+                if self.ports_info[portid]["source"] == "cfg":
+                    if (
+                        socket is None
+                        or self.ports_info[portid]["numa"] == -1
+                        or socket == self.ports_info[portid]["numa"]
+                    ):
+                        ports.append(portid)
+            return ports
+        else:
+            for portid in range(len(self.ports_info)):
+                port_info = self.ports_info[portid]
+                # match nic type
+                if port_info["type"] == NICS[nic_type]:
+                    # match numa or none numa awareness
+                    if (
+                        socket is None
+                        or port_info["numa"] == -1
+                        or socket == port_info["numa"]
+                    ):
+                        # port has a link to a tester port
+                        if self.tester.get_local_port(portid) != -1:
+                            ports.append(portid)
+            return ports
+
+    def get_ports_performance(
+        self,
+        nic_type="any",
+        perf=None,
+        socket=None,
+        force_same_socket=True,
+        force_different_nic=True,
+    ):
+        """
+        Return the maximum available number of ports meeting the parameters.
+        Focuses on getting ports with same/different NUMA node and/or
+        same/different NIC.
+        """
+
+        available_ports = self.get_ports(nic_type, perf, socket)
+        accepted_sets = []
+
+        while len(available_ports) > 0:
+            accepted_ports = []
+            # first available port is the reference port
+            accepted_ports.append(available_ports[0])
+
+            # check from second port according to parameter
+            for port in available_ports[1:]:
+
+                if force_same_socket and socket is None:
+                    if (
+                        self.ports_info[port]["numa"]
+                        != self.ports_info[accepted_ports[0]]["numa"]
+                    ):
+                        continue
+                if force_different_nic:
+                    if (
+                        self.ports_info[port]["pci"][:-1]
+                        == self.ports_info[accepted_ports[0]]["pci"][:-1]
+                    ):
+                        continue
+
+                accepted_ports.append(port)
+
+            for port in accepted_ports:
+                available_ports.remove(port)
+
+            accepted_sets.append(accepted_ports)
+
+        biggest_set = max(accepted_sets, key=len)
+
+        return biggest_set
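The greedy grouping in `get_ports_performance` can be illustrated with plain dictionaries (a sketch; `numa_of` and `pci_of` stand in for the `self.ports_info` lookups, and two ports are considered to share a NIC when their PCI addresses differ only in the last character, i.e. the function number):

```python
def biggest_port_set(ports, numa_of, pci_of, force_same_socket=True, force_different_nic=True):
    """Greedy port grouping: each pass seeds a set with the first free port."""
    available = list(ports)
    sets = []
    while available:
        ref = available[0]
        accepted = [ref]
        for port in available[1:]:
            if force_same_socket and numa_of[port] != numa_of[ref]:
                continue  # reject ports on a different NUMA node
            if force_different_nic and pci_of[port][:-1] == pci_of[ref][:-1]:
                continue  # reject a second function of the same NIC
            accepted.append(port)
        for port in accepted:
            available.remove(port)
        sets.append(accepted)
    return max(sets, key=len)
```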
+
+    def get_peer_pci(self, port_num):
+        """
+        Return the peer PCI address of the DUT port.
+        """
+        if "peer" not in self.ports_info[port_num]:
+            return None
+        else:
+            return self.ports_info[port_num]["peer"]
+
+    def get_mac_address(self, port_num):
+        """
+        Return the MAC address of the DUT port.
+        """
+        return self.ports_info[port_num]["mac"]
+
+    def get_ipv6_address(self, port_num):
+        """
+        Return the IPv6 address of the DUT port.
+        """
+        return self.ports_info[port_num]["ipv6"]
+
+    def get_numa_id(self, port_num):
+        """
+        Return the NUMA node ID of the port.
+        """
+        if self.ports_info[port_num]["numa"] == -1:
+            self.logger.warning("NUMA not supported")
+
+        return self.ports_info[port_num]["numa"]
+
+    def lcore_table_print(self, horizontal=False):
+        if not horizontal:
+            result_table = ResultTable(["Socket", "Core", "Thread"])
+
+            for lcore in self.cores:
+                result_table.add_row([lcore["socket"], lcore["core"], lcore["thread"]])
+            result_table.table_print()
+        else:
+            result_table = ResultTable(["X"] + [""] * len(self.cores))
+            result_table.add_row(["Thread"] + [n["thread"] for n in self.cores])
+            result_table.add_row(["Core"] + [n["core"] for n in self.cores])
+            result_table.add_row(["Socket"] + [n["socket"] for n in self.cores])
+            result_table.table_print()
+
+    def get_memory_channels(self):
+        n = self.crb["memory channels"]
+        if n is not None and n > 0:
+            return n
+        else:
+            return 1
+
+    def check_ports_available(self, pci_bus, pci_id):
+        """
+        Check whether auto-scanned ports are ready to use.
+        """
+        if self.nic_type == "any":
+            return True
+        elif self.nic_type == "cfg":
+            if self.conf.check_port_available(pci_bus) is True:
+                return True
+        elif self.nic_type not in list(NICS.keys()):
+            self.logger.warning("NOT SUPPORTED NIC TYPE: %s" % self.nic_type)
+        else:
+            codename = NICS[self.nic_type]
+            if pci_id == codename:
+                return True
+
+        return False
+
+    def rescan_ports(self):
+        """
+        Rescan port information.
+        """
+        if self.read_cache:
+            return
+
+        if self.ports_info:
+            self.rescan_ports_uncached()
+            self.save_serializer_ports()
+
+    def rescan_ports_uncached(self):
+        """
+        Rescan ports and update each port's MAC address, interface name, and
+        IPv6 address.
+        """
+        rescan_ports_uncached = getattr(
+            self, "rescan_ports_uncached_%s" % self.get_os_type()
+        )
+        return rescan_ports_uncached()
+
+    def rescan_ports_uncached_linux(self):
+
+        for port_info in self.ports_info:
+            port = port_info["port"]
+            intf = port.get_interface_name()
+            port_info["intf"] = intf
+            out = self.send_expect("ip link show %s" % intf, "# ")
+            if "DOWN" in out:
+                self.send_expect("ip link set %s up" % intf, "# ")
+                time.sleep(5)
+            port_info["mac"] = port.get_mac_addr()
+            out = self.send_expect(
+                "ip -family inet6 address show dev %s | awk '/inet6/ { print $2 }'"
+                % intf,
+                "# ",
+            )
+            ipv6 = out.split("/")[0]
+            # Unconnected ports don't have IPv6
+            if ":" not in ipv6:
+                ipv6 = "Not connected"
+
+            out = self.send_expect(
+                "ip -family inet address show dev %s | awk '/inet/ { print $2 }'"
+                % intf,
+                "# ",
+            )
+            ipv4 = out.split("/")[0]
+
+            port_info["ipv6"] = ipv6
+            port_info["ipv4"] = ipv4
+
+    def rescan_ports_uncached_freebsd(self):
+        unknown_interface = RED("Skipped: unknown interface")
+
+        for port_info in self.ports_info:
+            port = port_info["port"]
+            intf = port.get_interface_name()
+            if "No such file" in intf:
+                self.logger.info("DUT: [%s] %s" % (port_info["pci"], unknown_interface))
+                continue
+            self.send_expect("ifconfig %s up" % intf, "# ")
+            time.sleep(5)
+            macaddr = port.get_mac_addr()
+            ipv6 = port.get_ipv6_addr()
+            # Unconnected ports don't have IPv6
+            if ipv6 is None:
+                ipv6 = "Not connected"
+
+            port_info["mac"] = macaddr
+            port_info["intf"] = intf
+            port_info["ipv6"] = ipv6
+
+    def load_serializer_ports(self):
+        cached_ports_info = self.serializer.load(self.PORT_INFO_CACHE_KEY)
+        if cached_ports_info is None:
+            return None
+
+        self.ports_info = cached_ports_info
+
+    def save_serializer_ports(self):
+        cached_ports_info = []
+        for port in self.ports_info:
+            port_info = {}
+            for key in list(port.keys()):
+                if type(port[key]) is str:
+                    port_info[key] = port[key]
+            cached_ports_info.append(port_info)
+        self.serializer.save(self.PORT_INFO_CACHE_KEY, cached_ports_info)
+
+    def scan_ports(self):
+        """
+        Scan ports information or just read it from cache file.
+        """
+        if self.read_cache:
+            self.load_serializer_ports()
+            self.scan_ports_cached()
+
+        if not self.read_cache or self.ports_info is None:
+            self.scan_ports_uncached()
+
+    def scan_ports_cached(self):
+        """
+        Scan cached ports and instantiate port objects.
+        """
+        scan_ports_cached = getattr(self, "scan_ports_cached_%s" % self.get_os_type())
+        return scan_ports_cached()
+
+    def scan_ports_cached_linux(self):
+        """
+        Scan cached Linux ports and instantiate port objects.
+        """
+        if self.ports_info is None:
+            return
+
+        for port_info in self.ports_info:
+            addr_array = port_info["pci"].split(":")
+            domain_id = addr_array[0]
+            bus_id = addr_array[1]
+            devfun_id = addr_array[2]
+
+            port = GetNicObj(self, domain_id, bus_id, devfun_id)
+            port_info["port"] = port
+
+            self.logger.info(
+                "DUT cached: [%s %s] %s"
+                % (port_info["pci"], port_info["type"], port_info["intf"])
+            )
+
+    def scan_ports_uncached(self):
+        """
+        Scan ports and collect port's pci id, mac address, ipv6 address.
+        """
+        scan_ports_uncached = getattr(
+            self, "scan_ports_uncached_%s" % self.get_os_type()
+        )
+        return scan_ports_uncached()
+
+    def scan_ports_uncached_linux(self):
+        """
+        Scan Linux ports and collect port's pci id, mac address, ipv6 address.
+        """
+        self.ports_info = []
+
+        skipped = RED("Skipped: Unknown/not selected")
+        unknown_interface = RED("Skipped: unknown interface")
+
+        for (pci_bus, pci_id) in self.pci_devices_info:
+            if self.check_ports_available(pci_bus, pci_id) is False:
+                self.logger.info("DUT: [%s %s] %s" % (pci_bus, pci_id, skipped))
+                continue
+
+            addr_array = pci_bus.split(":")
+            domain_id = addr_array[0]
+            bus_id = addr_array[1]
+            devfun_id = addr_array[2]
+
+            port = GetNicObj(self, domain_id, bus_id, devfun_id)
+            intf = port.get_interface_name()
+            if "No such file" in intf:
+                self.logger.info("DUT: [%s] %s" % (pci_bus, unknown_interface))
+                continue
+
+            macaddr = port.get_mac_addr()
+            if "No such file" in macaddr:
+                self.logger.info("DUT: [%s] %s" % (pci_bus, unknown_interface))
+                continue
+
+            numa = port.socket
+            # store the port info to port mapping
+            self.ports_info.append(
+                {
+                    "port": port,
+                    "pci": pci_bus,
+                    "type": pci_id,
+                    "numa": numa,
+                    "intf": intf,
+                    "mac": macaddr,
+                }
+            )
+
+            if not port.get_interface2_name():
+                continue
+
+            intf = port.get_interface2_name()
+            macaddr = port.get_intf2_mac_addr()
+            numa = port.socket
+            # store the port info to port mapping
+            self.ports_info.append(
+                {
+                    "port": port,
+                    "pci": pci_bus,
+                    "type": pci_id,
+                    "numa": numa,
+                    "intf": intf,
+                    "mac": macaddr,
+                }
+            )
+
+    def scan_ports_uncached_freebsd(self):
+        """
+        Scan FreeBSD ports and collect port's pci id, mac address, ipv6 address.
+        """
+        self.ports_info = []
+
+        skipped = RED("Skipped: Unknown/not selected")
+
+        for (pci_bus, pci_id) in self.pci_devices_info:
+
+            if not settings.accepted_nic(pci_id):
+                self.logger.info("DUT: [%s %s] %s" % (pci_bus, pci_id, skipped))
+                continue
+            addr_array = pci_bus.split(":")
+            domain_id = addr_array[0]
+            bus_id = addr_array[1]
+            devfun_id = addr_array[2]
+            port = GetNicObj(self, domain_id, bus_id, devfun_id)
+            port.pci_id = pci_id
+            port.name = settings.get_nic_name(pci_id)
+            port.default_driver = settings.get_nic_driver(pci_id)
+            intf = port.get_interface_name()
+
+            macaddr = port.get_mac_addr()
+            ipv6 = port.get_ipv6_addr()
+
+            if ipv6 is None:
+                ipv6 = "Not available"
+
+            self.logger.warning("NUMA not available on FreeBSD")
+
+            self.logger.info("DUT: [%s %s] %s %s" % (pci_bus, pci_id, intf, ipv6))
+
+            # convert bsd format to linux format
+            pci_split = pci_bus.split(":")
+            pci_bus_id = hex(int(pci_split[0]))[2:]
+            if len(pci_split[1]) == 1:
+                pci_dev_str = "0" + pci_split[1]
+            else:
+                pci_dev_str = pci_split[1]
+
+            pci_str = "%s:%s.%s" % (pci_bus_id, pci_dev_str, pci_split[2])
+
+            # store the port info to port mapping
+            self.ports_info.append(
+                {
+                    "port": port,
+                    "pci": pci_str,
+                    "type": pci_id,
+                    "intf": intf,
+                    "mac": macaddr,
+                    "ipv6": ipv6,
+                    "numa": -1,
+                }
+            )
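The BSD-to-Linux PCI address conversion above is a small pure function; a standalone sketch (helper name is illustrative):

```python
def bsd_pci_to_linux(pci_bus: str) -> str:
    """Convert a FreeBSD 'bus:dev:func' triple (decimal) to the Linux
    'bb:dd.f' form, as scan_ports_uncached_freebsd does: the bus number
    is rendered in hex, the device number is zero-padded to two digits."""
    bus, dev, func = pci_bus.split(":")
    return "%s:%s.%s" % (hex(int(bus))[2:], dev.zfill(2), func)
```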
+
+    def setup_virtenv(self, virttype):
+        """
+        Set the current virtualization hypervisor type and remove old VM SSH
+        keys.
+        """
+        self.virttype = virttype
+        # remove VM rsa keys from tester
+        remove_old_rsa_key(self.tester, self.crb["My IP"])
+
+    def generate_sriov_vfs_by_port(self, port_id, vf_num, driver="default"):
+        """
+        Generate SR-IOV VFs, bound to the driver currently in use or to a
+        specified driver.
+        """
+        port = self.ports_info[port_id]["port"]
+        port_driver = port.get_nic_driver()
+
+        if driver == "default":
+            if not port_driver:
+                self.logger.info(
+                    "No driver on specified port, cannot generate SRIOV VF."
+                )
+                return None
+        else:
+            if port_driver != driver:
+                port.bind_driver(driver)
+        port.generate_sriov_vfs(vf_num)
+
+        # append the VF PCIs into the ports_info
+        sriov_vfs_pci = port.get_sriov_vfs_pci()
+        self.ports_info[port_id]["sriov_vfs_pci"] = sriov_vfs_pci
+
+        # instantiate the VF
+        vfs_port = []
+        for vf_pci in sriov_vfs_pci:
+            addr_array = vf_pci.split(":")
+            domain_id = addr_array[0]
+            bus_id = addr_array[1]
+            devfun_id = addr_array[2]
+            vf_port = GetNicObj(self, domain_id, bus_id, devfun_id)
+            vfs_port.append(vf_port)
+        self.ports_info[port_id]["vfs_port"] = vfs_port
+
+        pci = self.ports_info[port_id]["pci"]
+        self.virt_pool.add_vf_on_pf(pf_pci=pci, vflist=sriov_vfs_pci)
+
+    def destroy_sriov_vfs_by_port(self, port_id):
+        port = self.ports_info[port_id]["port"]
+        vflist = []
+        port_driver = port.get_nic_driver()
+        if (
+            "sriov_vfs_pci" in self.ports_info[port_id]
+            and self.ports_info[port_id]["sriov_vfs_pci"]
+        ):
+            vflist = self.ports_info[port_id]["sriov_vfs_pci"]
+        else:
+            if not port.get_sriov_vfs_pci():
+                return
+
+        if not port_driver:
+            self.logger.info("No driver on specified port, skip destroy SRIOV VF.")
+        else:
+            sriov_vfs_pci = port.destroy_sriov_vfs()
+        self.ports_info[port_id]["sriov_vfs_pci"] = []
+        self.ports_info[port_id]["vfs_port"] = []
+
+        pci = self.ports_info[port_id]["pci"]
+        self.virt_pool.del_vf_on_pf(pf_pci=pci, vflist=vflist)
+
+    def destroy_all_sriov_vfs(self):
+
+        if self.ports_info is None:
+            return
+        for port_id in range(len(self.ports_info)):
+            self.destroy_sriov_vfs_by_port(port_id)
+
+    def load_portconf(self):
+        """
+        Load port configurations into ports_info. If manually configured info
+        differs from the auto-scanned info, the configuration file takes
+        precedence.
+        """
+        for port in self.ports_info:
+            pci_bus = port["pci"]
+            ports_cfg = self.conf.get_ports_config()
+            if pci_bus in ports_cfg:
+                port_cfg = ports_cfg[pci_bus]
+                port_cfg["source"] = "cfg"
+            else:
+                port_cfg = {}
+
+            for key in ["intf", "mac", "peer", "source"]:
+                if key in port_cfg:
+                    if key in port and port_cfg[key].lower() != port[key].lower():
+                        self.logger.warning(
+                            "CONFIGURED %s NOT SAME AS SCANNED!!!" % (key.upper())
+                        )
+                    port[key] = port_cfg[key].lower()
+            if "numa" in port_cfg:
+                if port_cfg["numa"] != port["numa"]:
+                    self.logger.warning("CONFIGURED NUMA NOT SAME AS SCANNED!!!")
+                port["numa"] = port_cfg["numa"]
+
+    def map_available_ports(self):
+        """
+        Load or generate network connection mapping list.
+        """
+        if self.read_cache:
+            self.ports_map = self.serializer.load(self.PORT_MAP_CACHE_KEY)
+
+        if not self.read_cache or self.ports_map is None:
+            self.map_available_ports_uncached()
+            self.serializer.save(self.PORT_MAP_CACHE_KEY, self.ports_map)
+
+        self.logger.warning("DUT PORT MAP: " + str(self.ports_map))
+
+    def map_available_ports_uncached(self):
+        """
+        Generate network connection mapping list.
+        """
+        nrPorts = len(self.ports_info)
+        if nrPorts == 0:
+            return
+
+        remove = []
+        self.ports_map = [-1] * nrPorts
+
+        hits = [False] * len(self.tester.ports_info)
+
+        for dutPort in range(nrPorts):
+            peer = self.get_peer_pci(dutPort)
+            dutpci = self.ports_info[dutPort]["pci"]
+            if peer is not None:
+                for remotePort in range(len(self.tester.ports_info)):
+                    if self.tester.ports_info[remotePort]["type"].lower() == "trex":
+                        if (
+                            self.tester.ports_info[remotePort]["intf"].lower()
+                            == peer.lower()
+                            or self.tester.ports_info[remotePort]["pci"].lower()
+                            == peer.lower()
+                        ):
+                            hits[remotePort] = True
+                            self.ports_map[dutPort] = remotePort
+                            break
+                    elif (
+                        self.tester.ports_info[remotePort]["pci"].lower()
+                        == peer.lower()
+                    ):
+                        hits[remotePort] = True
+                        self.ports_map[dutPort] = remotePort
+                        break
+                if self.ports_map[dutPort] == -1:
+                    self.logger.error("CONFIGURED TESTER PORT CANNOT BE FOUND!!!")
+                else:
+                    continue  # skip ping6 map
+
+            for remotePort in range(len(self.tester.ports_info)):
+                if hits[remotePort]:
+                    continue
+
+                # skip pinging our own port
+                remotepci = self.tester.ports_info[remotePort]["pci"]
+                if (self.crb["IP"] == self.crb["tester IP"]) and (dutpci == remotepci):
+                    continue
+
+                # skip pinging unconnected ports
+                ipv6 = self.get_ipv6_address(dutPort)
+                if ipv6 == "Not connected":
+                    if "ipv4" in self.tester.ports_info[remotePort]:
+                        out = self.tester.send_ping(
+                            dutPort,
+                            self.tester.ports_info[remotePort]["ipv4"],
+                            self.get_mac_address(dutPort),
+                        )
+                    else:
+                        continue
+                else:
+                    if getattr(self, "send_ping6", None):
+                        out = self.send_ping6(
+                            dutPort,
+                            self.tester.ports_info[remotePort]["ipv6"],
+                            self.get_mac_address(dutPort),
+                        )
+                    else:
+                        out = self.tester.send_ping6(
+                            remotePort, ipv6, self.get_mac_address(dutPort)
+                        )
+
+                if out and "64 bytes from" in out:
+                    self.logger.info(
+                        "PORT MAP: [dut %d: tester %d]" % (dutPort, remotePort)
+                    )
+                    self.ports_map[dutPort] = remotePort
+                    hits[remotePort] = True
+                    if self.crb["IP"] == self.crb["tester IP"]:
+                        # remove the dut port acting as a tester port
+                        remove_port = self.get_port_info(remotepci)
+                        if remove_port is not None:
+                            remove.append(remove_port)
+                        # skip pinging ports already acting as dut ports
+                        testerPort = self.tester.get_local_index(dutpci)
+                        if testerPort != -1:
+                            hits[testerPort] = True
+                    break
+
+        for port in remove:
+            self.ports_info.remove(port)
+
+    def disable_tester_ipv6(self):
+        for tester_port in self.ports_map:
+            if tester_port == -1:
+                # unmapped DUT ports have no tester port to disable
+                continue
+            if self.tester.ports_info[tester_port]["type"].lower() not in (
+                "ixia",
+                "trex",
+            ):
+                port = self.tester.ports_info[tester_port]["port"]
+                port.disable_ipv6()
+
+    def enable_tester_ipv6(self):
+        for tester_port in range(len(self.tester.ports_info)):
+            if self.tester.ports_info[tester_port]["type"].lower() not in (
+                "ixia",
+                "trex",
+            ):
+                port = self.tester.ports_info[tester_port]["port"]
+                port.enable_ipv6()
+
+    def check_port_occupied(self, port):
+        out = self.alt_session.send_expect("lsof -i:%d" % port, "# ")
+        if out == "":
+            return False
+        else:
+            return True
+
+    def get_maximal_vnc_num(self):
+        out = self.send_expect(r"ps aux | grep '\-vnc' | grep -v grep", "# ")
+        if out:
+            ports = re.findall(r"-vnc .*?:(\d+)", out)
+            ports = sorted(int(num) for num in ports)
+        else:
+            ports = [
+                0,
+            ]
+        return ports[-1]
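The VNC display parsing above can be written as a standalone helper for clarity (a sketch, not part of the patch):

```python
import re


def max_vnc_display(ps_output: str) -> int:
    """Return the largest -vnc display number found in `ps` output, 0 if none."""
    displays = sorted(int(n) for n in re.findall(r"-vnc .*?:(\d+)", ps_output))
    return displays[-1] if displays else 0
```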
+
+    def close(self):
+        """
+        Close ssh session of DUT.
+        """
+        if self.session:
+            self.session.close()
+            self.session = None
+        if self.alt_session:
+            self.alt_session.close()
+            self.alt_session = None
+        if self.host_init_flag:
+            self.host_session.close()
+
+    def virt_exit(self):
+        """
+        Stop all running hypervisor processes.
+        """
+        # try to kill all hypervisor processes
+        for pid in self.virt_pids:
+            self.send_expect("kill -s SIGTERM %d" % pid, "# ", alt_session=True)
+            time.sleep(3)
+        self.virt_pids = []
+
+    def crb_exit(self):
+        """
+        Recover all resources before the CRB exits.
+        """
+        self.enable_tester_ipv6()
+        self.close()
+        self.logger.logger_exit()
+
+
+class _EalParameter(object):
+    def __init__(
+        self,
+        dut: Dut,
+        fixed_prefix: bool,
+        socket: int,
+        cores: Union[str, List[int], List[str]],
+        ports: Union[List[str], List[int]],
+        port_options: Dict[Union[str, int], str],
+        prefix: str,
+        no_pci: bool,
+        b_ports: Union[List[str], List[int]],
+        vdevs: List[str],
+        other_eal_param: str,
+    ):
+        """
+        generate eal parameters character string;
+        :param dut: dut device;
+        :param fixed_prefix: use fixed file-prefix or not, when it is true,
+                             the file-prefix will not be added a timestamp
+        :param socket: the physical CPU socket index, -1 means no care cpu socket;
+        :param cores: set the core info, eg:
+                        cores=[0,1,2,3],
+                        cores=['0','1','2','3'],
+                        cores='default',
+                        cores='1S/4C/1T',
+                        cores='all';
+        param ports: set PCI allow list, eg:
+                        ports=['0000:1a:00.0', '0000:1a:00.1'],
+                        ports=[0, 1];
+        param port_options: set options of port, eg:
+                        port_options={'0000:1a:00.0': "proto_xtr=vlan"},
+                        port_options={0: "cap=dcf"};
+        param prefix: set file prefix string, eg:
+                        prefix='vf';
+        param no_pci: switch of disable PCI bus eg:
+                        no_pci=True;
+        param b_ports: skip probing a PCI device to prevent EAL from using it, eg:
+                        b_ports=['0000:1a:00.0'],
+                        b_ports=[0];
+        param vdevs: virtual device list, eg:
+                        vdevs=['net_ring0', 'net_ring1'];
+        param other_eal_param: user defined DPDK eal parameters, eg:
+                        other_eal_param='--single-file-segments';
+        """
+        self.os_type = dut.get_os_type()
+        self.fixed_prefix = fixed_prefix
+        self.socket = socket
+        self.dut = dut
+        self.cores = self._validate_cores(cores)
+        self.ports = self._validate_ports(ports)
+        self.port_options: Dict = self._validate_port_options(port_options)
+        self.prefix = prefix
+        self.no_pci = no_pci
+        self.b_ports = self._validate_ports(b_ports)
+        self.vdevs = vdevs
+        self.other_eal_param = other_eal_param
+
+    _param_validate_exception_info_template = (
+        "Invalid parameter '%s' with value '%s'; please refer to the API documentation."
+    )
+
+    @staticmethod
+    def _validate_cores(cores: Union[str, List[int], List[str]]):
+        core_string_match = r"default|all|\d+S/\d+C/\d+T|$"
+        if isinstance(cores, list) and (
+            all(map(lambda _core: type(_core) == int, cores))
+            or all(map(lambda _core: type(_core) == str, cores))
+        ):
+            return cores
+        elif type(cores) == str and re.match(core_string_match, cores, re.I):
+            return cores
+        else:
+            raise ParameterInvalidException(
+                _EalParameter._param_validate_exception_info_template % ("cores", cores)
+            )
+
+    @staticmethod
+    def _validate_ports(ports: Union[List[str], List[int]]):
+        if not isinstance(ports, list):
+            raise ParameterInvalidException(
+                _EalParameter._param_validate_exception_info_template % ("ports", ports)
+            )
+        if not (
+            all(map(lambda _port: type(_port) == int, ports))
+            or all(map(lambda _port: type(_port) == str, ports))
+            and all(
+                map(
+                    lambda _port: re.match(r"^([\d\w]+:){1,2}[\d\w]+\.[\d\w]+$", _port),
+                    ports,
+                )
+            )
+        ):
+            raise ParameterInvalidException(
+                _EalParameter._param_validate_exception_info_template % ("ports", ports)
+            )
+        return ports
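The PCI-address regex above accepts both the full "domain:bus:device.function" form and the short "bus:device.function" form. A minimal standalone sketch of the same pattern (the `PCI_RE` name is ours):

```python
import re

# Same pattern as used in _validate_ports above: one or two colon-separated
# leading groups, then "device.function".
PCI_RE = re.compile(r"^([\d\w]+:){1,2}[\d\w]+\.[\d\w]+$")

matches = [bool(PCI_RE.match(s)) for s in ("0000:1a:00.0", "1a:00.0", "not-a-pci")]
```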
+
+    @staticmethod
+    def _validate_port_options(port_options: Dict[Union[str, int], str]):
+        if not isinstance(port_options, Dict):
+            raise ParameterInvalidException(
+                _EalParameter._param_validate_exception_info_template
+                % ("port_options", port_options)
+            )
+        port_list = port_options.keys()
+        _EalParameter._validate_ports(list(port_list))
+        return port_options
+
+    @staticmethod
+    def _validate_vdev(vdev: List[str]):
+        if not isinstance(vdev, list):
+            raise ParameterInvalidException(
+                _EalParameter._param_validate_exception_info_template % ("vdev", vdev)
+            )
+
+    def _make_cores_param(self) -> str:
+        is_use_default_cores = (
+            self.cores == ""
+            or isinstance(self.cores, str)
+            and self.cores.lower() == "default"
+        )
+        if is_use_default_cores:
+            default_cores = "1S/2C/1T"
+            core_list = self.dut.get_core_list(default_cores)
+        else:
+            core_list = self._get_cores()
+
+        def _get_consecutive_cores_range(_cores: List[int]):
+            _formated_core_list = []
+            _tmp_cores_list = list(sorted(map(int, _cores)))
+            _segment = _tmp_cores_list[:1]
+            for _core_num in _tmp_cores_list[1:]:
+                if _core_num - _segment[-1] == 1:
+                    _segment.append(_core_num)
+                else:
+                    _formated_core_list.append(
+                        f"{_segment[0]}-{_segment[-1]}"
+                        if len(_segment) > 1
+                        else f"{_segment[0]}"
+                    )
+                    _index = _tmp_cores_list.index(_core_num)
+                    _formated_core_list.extend(
+                        _get_consecutive_cores_range(_tmp_cores_list[_index:])
+                    )
+                    _segment.clear()
+                    break
+            if len(_segment) > 0:
+                _formated_core_list.append(
+                    f"{_segment[0]}-{_segment[-1]}"
+                    if len(_segment) > 1
+                    else f"{_segment[0]}"
+                )
+            return _formated_core_list
+
+        return f'-l {",".join(_get_consecutive_cores_range(core_list))}'
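`_get_consecutive_cores_range` collapses the core list into the range syntax accepted by EAL's `-l` option. An iterative sketch of the same compression (the function name is ours, not part of DTS):

```python
def compress_core_list(cores) -> str:
    # Collapse sorted core ids into "start-end" ranges for the EAL -l option,
    # e.g. [0, 1, 2, 3, 8, 9, 11] -> "-l 0-3,8-9,11".
    cores = sorted(set(int(c) for c in cores))
    segments = []
    start = prev = cores[0]
    for core in cores[1:]:
        if core - prev == 1:
            prev = core
            continue
        segments.append(f"{start}-{prev}" if start != prev else f"{start}")
        start = prev = core
    segments.append(f"{start}-{prev}" if start != prev else f"{start}")
    return "-l " + ",".join(segments)
```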
+
+    def _make_memory_channels(self) -> str:
+        param_template = "-n {}"
+        return param_template.format(self.dut.get_memory_channels())
+
+    def _make_ports_param(self) -> str:
+        no_port_config = (
+            len(self.ports) == 0 and len(self.b_ports) == 0 and not self.no_pci
+        )
+        port_config_not_in_eal_param = not (
+            "-a" in self.other_eal_param
+            or "-b" in self.other_eal_param
+            or "--no-pci" in self.other_eal_param
+        )
+        if no_port_config and port_config_not_in_eal_param:
+            return self._make_default_ports_param()
+        else:
+            return self._get_ports_and_wraped_port_with_port_options()
+
+    def _make_default_ports_param(self) -> str:
+        pci_list = []
+        allow_option = self._make_allow_option()
+        if len(self.dut.ports_info) != 0:
+            for port_info in self.dut.ports_info:
+                pci_list.append("%s %s" % (allow_option, port_info["pci"]))
+        self.dut.logger.info(pci_list)
+        return " ".join(pci_list)
+
+    def _make_b_ports_param(self) -> str:
+        b_pci_list = []
+        for port in self.b_ports:
+            if type(port) == int:
+                b_pci_list.append("-b %s" % self.dut.ports_info[port]["pci"])
+            else:
+                b_pci_list.append("-b %s" % port)
+        return " ".join(b_pci_list)
+
+    def _make_no_pci_param(self) -> str:
+        if self.no_pci is True:
+            return "--no-pci"
+        else:
+            return ""
+
+    def _make_prefix_param(self) -> str:
+        if self.prefix == "":
+            fixed_file_prefix = "dpdk" + "_" + self.dut.prefix_subfix
+        else:
+            fixed_file_prefix = self.prefix
+            if not self.fixed_prefix:
+                fixed_file_prefix = fixed_file_prefix + "_" + self.dut.prefix_subfix
+        fixed_file_prefix = self._do_os_handle_with_prefix_param(fixed_file_prefix)
+        return fixed_file_prefix
+
+    def _make_vdevs_param(self) -> str:
+        if len(self.vdevs) == 0:
+            return ""
+        else:
+            _vdevs = ["--vdev " + vdev for vdev in self.vdevs]
+            return " ".join(_vdevs)
+
+    def _make_share_library_path_param(self) -> str:
+        use_shared_lib = settings.load_global_setting(settings.HOST_SHARED_LIB_SETTING)
+        shared_lib_path = settings.load_global_setting(settings.HOST_SHARED_LIB_PATH)
+        if use_shared_lib == "true" and shared_lib_path and "Virt" not in str(self.dut):
+            return " -d {} ".format(shared_lib_path)
+        return ""
+
+    def _make_default_force_max_simd_bitwidth_param(self) -> str:
+        rx_mode = settings.load_global_setting(settings.DPDK_RXMODE_SETTING)
+        param_template = " --force-max-simd-bitwidth=%s "
+        bitwidth_dict = {
+            "novector": "64",
+            "sse": "128",
+            "avx2": "256",
+            "avx512": "512",
+            "nolimit": "0",
+        }
+        if (
+            rx_mode in bitwidth_dict
+            and "force-max-simd-bitwidth" not in self.other_eal_param
+        ):
+            return param_template % bitwidth_dict.get(rx_mode)
+        else:
+            return ""
+
+    def _get_cores(self) -> List[int]:
+        if type(self.cores) == list:
+            return self.cores
+        elif isinstance(self.cores, str):
+            return self.dut.get_core_list(self.cores, socket=self.socket)
+
+    def _get_ports_and_wraped_port_with_port_options(self) -> str:
+        w_pci_list = []
+        for port in self.ports:
+            w_pci_list.append(self._add_port_options_to(port))
+        return " ".join(w_pci_list)
+
+    def _add_port_options_to(self, port: Union[str, int]) -> str:
+        allow_option = self._make_allow_option()
+        port_pci_addr = self.dut.ports_info[port]["pci"] if type(port) == int else port
+        port_param = f"{allow_option} {port_pci_addr}"
+        port_option = self._get_port_options_from_config(port)
+        if port_option:
+            port_param = f"{port_param},{port_option}"
+        return port_param
+
+    def _get_port_options_from_config(self, port: Union[str, int]) -> str:
+        return self.port_options.get(port, "")
+
+    def _make_allow_option(self) -> str:
+        is_new_dpdk_version = (
+            self.dut.dpdk_version > "20.11.0-rc3" or self.dut.dpdk_version == "20.11.0"
+        )
+        return "-a" if is_new_dpdk_version else "-w"
+
+    def _do_os_handle_with_prefix_param(self, file_prefix: str) -> str:
+        if self.dut.get_os_type() == "freebsd":
+            self.dut.prefix_list = []
+            file_prefix = ""
+        else:
+            self.dut.prefix_list.append(file_prefix)
+            file_prefix = "--file-prefix=" + file_prefix
+        return file_prefix
+
+    def make_eal_param(self) -> str:
+        _eal_str = " ".join(
+            [
+                self._make_cores_param(),
+                self._make_memory_channels(),
+                self._make_ports_param(),
+                self._make_b_ports_param(),
+                self._make_prefix_param(),
+                self._make_no_pci_param(),
+                self._make_vdevs_param(),
+                self._make_share_library_path_param(),
+                self._make_default_force_max_simd_bitwidth_param(),
+                # append user defined eal parameters
+                self.other_eal_param,
+            ]
+        )
+        return _eal_str
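`make_eal_param` joins its sub-parameters with single spaces, so optional fragments that return `""` (e.g. an absent `--no-pci`) leave double spaces in the result, which EAL tolerates. A sketch of a whitespace-normalizing join (the helper name is ours):

```python
def join_eal_params(*parts: str) -> str:
    # Drop empty fragments and collapse repeated whitespace so that optional
    # sub-parameters don't leave double spaces in the final EAL string.
    return " ".join(" ".join(parts).split())

eal = join_eal_params("-l 0-3", "-n 4", "", "--no-pci", "")
```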
-- 
2.20.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [RFC PATCH v1 03/18] dts: merge DTS framework/ixia_buffer_parser.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 01/18] dts: merge DTS framework/crb.py " Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 02/18] dts: merge DTS framework/dut.py " Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 04/18] dts: merge DTS framework/pktgen.py " Juraj Linkeš
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/framework/ixia_buffer_parser.py | 138 ++++++++++++++++++++++++++++
 1 file changed, 138 insertions(+)
 create mode 100644 dts/framework/ixia_buffer_parser.py

diff --git a/dts/framework/ixia_buffer_parser.py b/dts/framework/ixia_buffer_parser.py
new file mode 100644
index 0000000000..c48fbe694a
--- /dev/null
+++ b/dts/framework/ixia_buffer_parser.py
@@ -0,0 +1,138 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""
+Helper class that parses a list of files containing IXIA captured frames,
+extracting a sequential number from each of them.
+
+The captured files look like this. They all contain a two-line header which
+needs to be removed.
+
+
+    Frames 1 to 10 of 20
+    Frame,Time Stamp,DA,SA,Type/Length,Data,Frame Length,Status
+    1    1203:07:01.397859720    00 00 00 00 00 00    00 00 00 00 00 00    00 01    00 00 00 00 ...
+    2    1203:07:01.397860040    00 00 00 00 00 00    00 00 00 00 00 00    00 01    00 00 00 01 ...
+    3    ...
+
+Every line after the header shows the information of a single frame. The class
+will extract the sequential number at the beginning of the packet payload.
+
+              time-stamp                                                             sequence
+         V                  V                                                       V         V
+    2    1203:07:01.397860040    00 00 00 00 00 00    00 00 00 00 00 00    00 01    00 00 00 01 ...
+
+
+Check the unit tests for more information about how the class works.
+"""
+
+
+class IXIABufferFileParser(object):
+    def __init__(self, filenames):
+        self.frames_files = []
+        self.counter = 0
+        self.__read_files(filenames)
+        self._next_file()
+
+    def __read_files(self, filenames):
+        """
+        Reads files from a list of file names and store the file objects in a
+        internal list to be used later on. It leaves the files ready to be
+        processed by reading and discarding the first two lines on each file.
+        """
+        for filename in filenames:
+            a_file = open(filename, "r")
+            self.__discard_headers(a_file)
+            self.frames_files.append(a_file)
+
+    def __discard_headers(self, frame_file):
+        """
+        Discards the first two lines (header) leaving only the frames
+        information ready to be read.
+        """
+        if frame_file.tell() == 0:
+            frame_file.readline()
+            frame_file.readline()
+
+    def __get_frame_number(self, frame):
+        """
+        Given a line from the file, extract the sequential number from its
+        known position: the counter is part of the frame's payload, which is
+        the 3rd element from the back when the line is split by tabs.
+        The counter only takes chars inside the payload.
+        """
+        counter = frame.rsplit("\t", 3)[1]
+        counter = counter[:11]
+        return int(counter.replace(" ", ""), 16)
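The extraction above can be exercised on a synthetic line in the captured-buffer layout described in the module docstring (the sample line and function name below are ours):

```python
def frame_sequence_number(line: str) -> int:
    # The payload is the 3rd tab-separated column from the back; its first
    # 11 characters ("00 00 00 01") hold the hex sequence counter.
    payload = line.rsplit("\t", 3)[1]
    return int(payload[:11].replace(" ", ""), 16)

# synthetic frame line: frame no, timestamp, DA, payload, length, status
sample = "2\t1203:07:01.397860040\t00 00 00 00 00 00\t00 00 00 01 aa bb\t1518\tOK"
```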
+
+    def __change_current_file(self):
+        """
+        Points the current open file to the next available from the internal
+        list. Before making the change, it closes the 'old' current file since
+        it won't be used anymore.
+        """
+        if self.counter > 0:
+            self.current_file.close()
+        self.current_file = self.frames_files[self.counter]
+        self.counter += 1
+
+    def _next_file(self):
+        """
+        Makes the current file point to the next available file if any.
+        Otherwise the current file will be None.
+        """
+        if self.counter < len(self.frames_files):
+            self.__change_current_file()
+            return True
+        else:
+            self.current_file = None
+            return False
+
+    def read_all_frames(self):
+        """
+        Goes through all the open files on its internal list, reads one
+        line at the time and returns the sequential number on that frame.
+        When one file is completed (EOF) it will automatically switch to the
+        next one (if any) and continue reading.
+
+        This function allows calls like:
+
+            for frame in frame_parser.read_all_frames():
+                do_something(frame)
+        """
+        while True:
+            frameinfo = self.current_file.readline().strip()
+            if not frameinfo:
+                if not self._next_file():
+                    break
+                continue
+            yield self.__get_frame_number(frameinfo)
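The `read_all_frames` generator transparently chains multiple files into one stream of values. A minimal sketch of the same pattern using in-memory files (names are ours):

```python
import io

def read_values_across(files):
    # Same pattern as read_all_frames: consume each file line by line,
    # moving on to the next file at EOF, yielding one value per line.
    for f in files:
        for line in f:
            line = line.strip()
            if line:
                yield int(line)

values = list(read_values_across([io.StringIO("1\n2\n"), io.StringIO("3\n")]))
```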
-- 
2.20.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [RFC PATCH v1 04/18] dts: merge DTS framework/pktgen.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
                   ` (2 preceding siblings ...)
  2022-04-06 15:04 ` [RFC PATCH v1 03/18] dts: merge DTS framework/ixia_buffer_parser.py " Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 05/18] dts: merge DTS framework/pktgen_base.py " Juraj Linkeš
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/framework/pktgen.py | 234 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 234 insertions(+)
 create mode 100644 dts/framework/pktgen.py

diff --git a/dts/framework/pktgen.py b/dts/framework/pktgen.py
new file mode 100644
index 0000000000..a1a7b2f0bb
--- /dev/null
+++ b/dts/framework/pktgen.py
@@ -0,0 +1,234 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2021 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+import os
+from copy import deepcopy
+
+from scapy.all import conf
+from scapy.fields import ConditionalField
+from scapy.packet import NoPayload
+from scapy.packet import Packet as scapyPacket
+from scapy.utils import rdpcap
+
+from .pktgen_base import (
+    PKTGEN_DPDK,
+    PKTGEN_IXIA,
+    PKTGEN_IXIA_NETWORK,
+    PKTGEN_TREX,
+    STAT_TYPE,
+    TRANSMIT_CONT,
+    TRANSMIT_M_BURST,
+    TRANSMIT_S_BURST,
+    DpdkPacketGenerator,
+)
+from .pktgen_ixia import IxiaPacketGenerator
+from .pktgen_ixia_network import IxNetworkPacketGenerator
+from .pktgen_trex import TrexPacketGenerator
+
+# dts libs
+from .utils import convert_int2ip, convert_ip2int, convert_mac2long, convert_mac2str
+
+
+class PacketGeneratorHelper(object):
+    """default packet generator stream option for all streams"""
+
+    default_opt = {
+        "stream_config": {
+            "txmode": {},
+            "transmit_mode": TRANSMIT_CONT,
+            # temporary workaround: the current pktgen design doesn't support
+            # port-level configuration, so the rate percent is passed via the
+            # stream configuration
+            "rate": 100,
+        }
+    }
+
+    def __init__(self):
+        self.packetLayers = dict()
+
+    def _parse_packet_layer(self, pkt_object):
+        """parse one packet every layers' fields and value"""
+        if pkt_object == None:
+            return
+
+        self.packetLayers[pkt_object.name] = dict()
+        for curfield in pkt_object.fields_desc:
+            if isinstance(curfield, ConditionalField) and not curfield._evalcond(
+                pkt_object
+            ):
+                continue
+            field_value = pkt_object.getfieldval(curfield.name)
+            if isinstance(field_value, scapyPacket) or (
+                curfield.islist and curfield.holds_packets and type(field_value) is list
+            ):
+                continue
+            repr_value = curfield.i2repr(pkt_object, field_value)
+            if isinstance(repr_value, str):
+                repr_value = repr_value.replace(
+                    os.linesep, os.linesep + " " * (len(curfield.name) + 4)
+                )
+            self.packetLayers[pkt_object.name][curfield.name] = repr_value
+
+        if isinstance(pkt_object.payload, NoPayload):
+            return
+        else:
+            self._parse_packet_layer(pkt_object.payload)
+
+    def _parse_pcap(self, pcapFile, number=0):
+        """parse one packet content"""
+        pcap_pkts = []
+        if os.path.exists(pcapFile) == False:
+            warning = "{0} is not exist !".format(pcapFile)
+            raise Exception(warning)
+
+        pcap_pkts = rdpcap(pcapFile)
+        # parse every layer and field of the packets
+        if len(pcap_pkts) == 0:
+            warning = "{0} is empty".format(pcapFile)
+            raise Exception(warning)
+        elif number >= len(pcap_pkts):
+            warning = "{0} is missing No.{1} packet".format(pcapFile, number)
+            raise Exception(warning)
+        else:
+            self._parse_packet_layer(pcap_pkts[number])
+
+    def _set_pktgen_fields_config(self, pcap, suite_config):
+        """
+        get default fields value from a pcap file and unify layer fields
+        variables for trex/ixia
+        """
+        self._parse_pcap(pcap)
+        if not self.packetLayers:
+            msg = "pcap content is empty"
+            raise Exception(msg)
+        # suite fields config convert to pktgen fields config
+        fields_config = {}
+        # set ethernet protocol layer fields
+        layer_name = "mac"
+        if layer_name in list(suite_config.keys()) and "Ethernet" in self.packetLayers:
+            fields_config[layer_name] = {}
+            suite_fields = suite_config.get(layer_name)
+            pcap_fields = self.packetLayers.get("Ethernet")
+            for name, config in suite_fields.items():
+                action = config.get("action") or "default"
+                field_range = config.get("range") or 64  # avoid shadowing the builtin range
+                step = config.get("step") or 1
+                start_mac = pcap_fields.get(name)
+                end_mac = convert_mac2str(convert_mac2long(start_mac) + field_range - 1)
+                fields_config[layer_name][name] = {}
+                fields_config[layer_name][name]["start"] = start_mac
+                fields_config[layer_name][name]["end"] = end_mac
+                fields_config[layer_name][name]["step"] = step
+                fields_config[layer_name][name]["action"] = action
+        # set ip protocol layer fields
+        layer_name = "ip"
+        if layer_name in list(suite_config.keys()) and "IP" in self.packetLayers:
+            fields_config[layer_name] = {}
+            suite_fields = suite_config.get(layer_name)
+            pcap_fields = self.packetLayers.get("IP")
+            for name, config in suite_fields.items():
+                action = config.get("action") or "default"
+                field_range = config.get("range") or 64  # avoid shadowing the builtin range
+                step = config.get("step") or 1
+                start_ip = pcap_fields.get(name)
+                end_ip = convert_int2ip(convert_ip2int(start_ip) + field_range - 1)
+                fields_config[layer_name][name] = {}
+                fields_config[layer_name][name]["start"] = start_ip
+                fields_config[layer_name][name]["end"] = end_ip
+                fields_config[layer_name][name]["step"] = step
+                fields_config[layer_name][name]["action"] = action
+        # set vlan protocol layer fields, only support one layer vlan here
+        layer_name = "vlan"
+        if layer_name in list(suite_config.keys()) and "802.1Q" in self.packetLayers:
+            fields_config[layer_name] = {}
+            suite_fields = suite_config.get(layer_name)
+            pcap_fields = self.packetLayers.get("802.1Q")
+            # only support one layer vlan here, so set name to `0`
+            name = 0
+            if name in suite_fields:
+                config = suite_fields[name]
+                action = config.get("action") or "default"
+                field_range = config.get("range") or 64  # avoid shadowing the builtin range
+                # ignore 'L' suffix
+                if "L" in pcap_fields.get(layer_name):
+                    start_vlan = int(pcap_fields.get(layer_name)[:-1])
+                else:
+                    start_vlan = int(pcap_fields.get(layer_name))
+                end_vlan = start_vlan + field_range - 1
+                fields_config[layer_name][name] = {}
+                fields_config[layer_name][name]["start"] = start_vlan
+                fields_config[layer_name][name]["end"] = end_vlan
+                fields_config[layer_name][name]["step"] = 1
+                fields_config[layer_name][name]["action"] = action
+
+        return fields_config
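The MAC range endpoints above come from the `convert_mac2long`/`convert_mac2str` helpers in the DTS utils module. A self-contained sketch of the same arithmetic (the helper name is ours):

```python
def mac_range_end(start_mac: str, count: int) -> str:
    # Treat the MAC as a 48-bit integer, add the range size, and format
    # the result back into colon-separated notation.
    value = int(start_mac.replace(":", ""), 16) + count - 1
    raw = f"{value:012x}"
    return ":".join(raw[i:i + 2] for i in range(0, 12, 2))
```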
+
+    def prepare_stream_from_tginput(
+        self, tgen_input, ratePercent, vm_config, pktgen_inst
+    ):
+        """create streams for ports, one port one stream"""
+        # set stream in pktgen
+        stream_ids = []
+        for config in tgen_input:
+            stream_id = pktgen_inst.add_stream(*config)
+            pcap = config[2]
+            _options = deepcopy(self.default_opt)
+            _options["pcap"] = pcap
+            _options["stream_config"]["rate"] = ratePercent
+            # if vm is set
+            if vm_config:
+                _options["fields_config"] = self._set_pktgen_fields_config(
+                    pcap, vm_config
+                )
+            pktgen_inst.config_stream(stream_id, _options)
+            stream_ids.append(stream_id)
+        return stream_ids
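`prepare_stream_from_tginput` deep-copies `self.default_opt` for each stream so per-stream tweaks never mutate the shared template. A minimal sketch of that design choice (names are ours):

```python
from copy import deepcopy

DEFAULT_OPT = {"stream_config": {"transmit_mode": "continuous", "rate": 100}}

def stream_options(pcap: str, rate_percent: int) -> dict:
    # deepcopy so per-stream changes never leak into the shared template,
    # mirroring how prepare_stream_from_tginput copies self.default_opt
    opts = deepcopy(DEFAULT_OPT)
    opts["pcap"] = pcap
    opts["stream_config"]["rate"] = rate_percent
    return opts

a = stream_options("flow_a.pcap", 50)
```

A shallow copy would share the nested `stream_config` dict, so setting one stream's rate would silently change every stream's rate.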
+
+
+def getPacketGenerator(tester, pktgen_type=PKTGEN_IXIA):
+    """
+    Get packet generator object
+    """
+    pktgen_type = pktgen_type.lower()
+
+    pktgen_cls = {
+        PKTGEN_DPDK: DpdkPacketGenerator,
+        PKTGEN_IXIA: IxiaPacketGenerator,
+        PKTGEN_IXIA_NETWORK: IxNetworkPacketGenerator,
+        PKTGEN_TREX: TrexPacketGenerator,
+    }
+
+    if pktgen_type in pktgen_cls:
+        CLS = pktgen_cls.get(pktgen_type)
+        return CLS(tester)
+    else:
+        msg = "unsupported packet generator type <{0}>".format(pktgen_type)
+        raise Exception(msg)
-- 
2.20.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [RFC PATCH v1 05/18] dts: merge DTS framework/pktgen_base.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
                   ` (3 preceding siblings ...)
  2022-04-06 15:04 ` [RFC PATCH v1 04/18] dts: merge DTS framework/pktgen.py " Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 06/18] dts: merge DTS framework/pktgen_ixia.py " Juraj Linkeš
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/framework/pktgen_base.py | 740 +++++++++++++++++++++++++++++++++++
 1 file changed, 740 insertions(+)
 create mode 100644 dts/framework/pktgen_base.py

diff --git a/dts/framework/pktgen_base.py b/dts/framework/pktgen_base.py
new file mode 100644
index 0000000000..aa9a6ff874
--- /dev/null
+++ b/dts/framework/pktgen_base.py
@@ -0,0 +1,740 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2021 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+import logging
+import time
+from abc import abstractmethod
+from copy import deepcopy
+from enum import Enum, unique
+from pprint import pformat
+
+from .config import PktgenConf
+from .logger import getLogger
+
+# packet generator name
+from .settings import PKTGEN, PKTGEN_DPDK, PKTGEN_IXIA, PKTGEN_IXIA_NETWORK, PKTGEN_TREX
+
+# macro definition
+TRANSMIT_CONT = "continuous"
+TRANSMIT_M_BURST = "multi_burst"
+TRANSMIT_S_BURST = "single_burst"
+
+
+@unique
+class STAT_TYPE(Enum):
+    RX = "rx"
+    TXRX = "txrx"
+
+
+class PacketGenerator(object):
+    """
+    Basic class for packet generator, define basic function for each kinds of
+    generators
+    """
+
+    def __init__(self, tester):
+        self.logger = getLogger(PKTGEN)
+        self.tester = tester
+        self.__streams = []
+        self._ports_map = []
+        self.pktgen_type = None
+
+    def _prepare_generator(self):
+        raise NotImplementedError
+
+    def prepare_generator(self):
+        self._prepare_generator()
+
+    def _get_port_pci(self, port_id):
+        raise NotImplementedError
+
+    def _convert_pktgen_port(self, port_id):
+        """
+        :param port_id:
+            index of a port in packet generator tool
+        """
+        try:
+            gen_pci = self._get_port_pci(port_id)
+            if not gen_pci:
+                msg = "can't get port {0} pci address".format(port_id)
+                raise Exception(msg)
+            for port_idx, info in enumerate(self.tester.ports_info):
+                if "pci" not in info or info["pci"] == "N/A":
+                    return -1
+                tester_pci = info["pci"]
+                if tester_pci == gen_pci:
+                    msg = "gen port {0} map test port {1}".format(port_id, port_idx)
+                    self.logger.debug(msg)
+                    return port_idx
+            else:
+                port = -1
+        except Exception:
+            port = -1
+
+        return port
+
+    def _get_gen_port(self, tester_pci):
+        raise NotImplementedError
+
+    def _convert_tester_port(self, port_id):
+        """
+        :param port_id:
+            index of a port in dts tester ports info
+        """
+        try:
+            info = self.tester.ports_info[port_id]
+            # limit to nic port, not including ixia port
+            if "pci" not in info or info["pci"] == "N/A":
+                return -1
+            tester_pci = info["pci"]
+            port = self._get_gen_port(tester_pci)
+            msg = "test port {0} map gen port {1}".format(port_id, port)
+            self.logger.debug(msg)
+        except Exception:
+            port = -1
+
+        return port
+
+    def add_stream(self, tx_port, rx_port, pcap_file):
+        pktgen_tx_port = self._convert_tester_port(tx_port)
+        pktgen_rx_port = self._convert_tester_port(rx_port)
+
+        stream_id = len(self.__streams)
+        stream = {
+            "tx_port": pktgen_tx_port,
+            "rx_port": pktgen_rx_port,
+            "pcap_file": pcap_file,
+        }
+        self.__streams.append(stream)
+
+        return stream_id
+
+    def add_streams(self, streams):
+        """' a group of streams"""
+        raise NotImplementedError
+
+    def config_stream(self, stream_id=0, opts={}):
+        if self._check_options(opts) is not True:
+            self.logger.error("Failed to configure stream[%d]" % stream_id)
+            return
+        stream = self.__streams[stream_id]
+        stream["options"] = opts
+
+    def config_streams(self, stream_ids, nic, frame_size, port_num):
+        """all streams using the default option"""
+        raise NotImplementedError
+
+    def get_streams(self):
+        return self.__streams
+
+    def _clear_streams(self):
+        raise NotImplementedError
+
+    def clear_streams(self):
+        """clear streams"""
+        self._clear_streams()
+        self.__streams = []
+
+    def _set_stream_rate_percent(self, rate_percent):
+        """set all streams' rate percent"""
+        if not self.__streams:
+            return
+        for stream in self.__streams:
+            stream["options"]["stream_config"]["rate"] = rate_percent
+
+    def _set_stream_pps(self, pps):
+        """set all streams' pps"""
+        if not self.__streams:
+            return
+        for stream in self.__streams:
+            stream["options"]["stream_config"]["pps"] = pps
+
+    def reset_streams(self):
+        self.__streams = []
+
+    def __warm_up_pktgen(self, stream_ids, options, delay):
+        """run warm up traffic before start main traffic"""
+        if not delay:
+            return
+        msg = "{1} packet generator: run traffic {0}s to warm up ... ".format(
+            delay, self.pktgen_type
+        )
+        self.logger.info(msg)
+        self._start_transmission(stream_ids, options)
+        time.sleep(delay)
+        self._stop_transmission(stream_ids)
+
+    def __get_single_throughput_statistic(self, stream_ids, stat_type=None):
+        bps_rx = []
+        pps_rx = []
+        bps_tx = []
+        pps_tx = []
+        used_rx_port = []
+        msg = "begin get port statistic ..."
+        self.logger.info(msg)
+        for stream_id in stream_ids:
+            if self.__streams[stream_id]["rx_port"] not in used_rx_port:
+                bps_rate, pps_rate = self._retrieve_port_statistic(
+                    stream_id, "throughput"
+                )
+                used_rx_port.append(self.__streams[stream_id]["rx_port"])
+                if stat_type and stat_type is STAT_TYPE.TXRX:
+                    bps_tx.append(bps_rate[0])
+                    pps_tx.append(pps_rate[0])
+
+                if isinstance(bps_rate, tuple) and isinstance(pps_rate, tuple):
+                    bps_rx.append(bps_rate[1])
+                    pps_rx.append(pps_rate[1])
+                else:
+                    bps_rx.append(bps_rate)
+                    pps_rx.append(pps_rate)
+        if stat_type and stat_type is STAT_TYPE.TXRX:
+            bps_tx_total = self._summary_statistic(bps_tx)
+            pps_tx_total = self._summary_statistic(pps_tx)
+            bps_rx_total = self._summary_statistic(bps_rx)
+            pps_rx_total = self._summary_statistic(pps_rx)
+            self.logger.info(
+                "throughput: pps_tx %f, bps_tx %f" % (pps_tx_total, bps_tx_total)
+            )
+            self.logger.info(
+                "throughput: pps_rx %f, bps_rx %f" % (pps_rx_total, bps_rx_total)
+            )
+
+            return (bps_tx_total, bps_rx_total), (pps_tx_total, pps_rx_total)
+        else:
+            bps_rx_total = self._summary_statistic(bps_rx)
+            pps_rx_total = self._summary_statistic(pps_rx)
+            self.logger.info(
+                "throughput: pps_rx %f, bps_rx %f" % (pps_rx_total, bps_rx_total)
+            )
+
+            return bps_rx_total, pps_rx_total
+
+    def __get_multi_throughput_statistic(
+        self, stream_ids, duration, interval, callback=None, stat_type=None
+    ):
+        """
+        duration: traffic duration (seconds)
+        interval: interval (seconds) between throughput statistics samples
+        callback: a suite-level callback method invoked after each sample
+            while traffic is running.
+
+        Return: a list of throughput samples instead of a single tuple of
+            pps/bps rates
+        """
+        time_elapsed = 0
+        stats = []
+        while time_elapsed < duration:
+            time.sleep(interval)
+            stats.append(self.__get_single_throughput_statistic(stream_ids, stat_type))
+            if callback and callable(callback):
+                callback()
+            time_elapsed += interval
+        return stats
+
+    def measure_throughput(self, stream_ids=[], options={}):
+        """
+        Measure throughput on each tx port
+
+        options usage:
+            rate:
+                port rate percent, float(0--100). Default value is 100.
+
+            delay:
+                warm-up time (seconds) before starting the main traffic. If
+                set, warm-up traffic is run for this long to make sure the
+                packet generator is in a good state. No warm-up is run by
+                default.
+
+            interval:
+                interval (seconds) between throughput statistics samples.
+                If this key is set, pktgen returns several throughput samples
+                collected over the traffic duration; otherwise only one
+                sample is returned. It is ignored by default.
+
+            callback:
+                works together with the ``interval`` key. If set, this
+                suite-level callback is executed after each throughput sample
+                is taken. The callback should be defined as below; do not
+                sleep inside it.
+
+                def callback(self):
+                    xxxx()
+
+            duration:
+                traffic duration (seconds). Default value is 10 seconds.
+
+            stat_type(for trex only):
+                STAT_TYPE.RX   returns (rx_bps, rx_pps)
+                STAT_TYPE.TXRX returns ((tx_bps, rx_bps), (tx_pps, rx_pps))
+        """
+        interval = options.get("interval")
+        callback = options.get("callback")
+        duration = options.get("duration") or 10
+        delay = options.get("delay")
+        if self.pktgen_type == PKTGEN_TREX:
+            stat_type = options.get("stat_type") or STAT_TYPE.RX
+        else:
+            if options.get("stat_type") is not None:
+                msg = (
+                    "the 'stat_type' option is only for trex and "
+                    "should not be set with other pktgen tools"
+                )
+                raise Exception(msg)
+            stat_type = STAT_TYPE.RX
+        self._prepare_transmission(stream_ids=stream_ids)
+        # start warm up traffic
+        self.__warm_up_pktgen(stream_ids, options, delay)
+        # main traffic
+        self._start_transmission(stream_ids, options)
+        # keep traffic within a duration time and get throughput statistic
+        if interval and duration:
+            stats = self.__get_multi_throughput_statistic(
+                stream_ids, duration, interval, callback, stat_type
+            )
+        else:
+            time.sleep(duration)
+            stats = self.__get_single_throughput_statistic(stream_ids, stat_type)
+        self._stop_transmission(stream_ids)
+        return stats
+
+    def _measure_loss(self, stream_ids=[], options={}):
+        """
+        Measure loss rate on each tx/rx port
+        """
+        delay = options.get("delay")
+        duration = options.get("duration") or 10
+        throughput_stat_flag = options.get("throughput_stat_flag") or False
+        self._prepare_transmission(stream_ids=stream_ids)
+        # start warm up traffic
+        self.__warm_up_pktgen(stream_ids, options, delay)
+        # main traffic
+        self._start_transmission(stream_ids, options)
+        # keep traffic within a duration time
+        time.sleep(duration)
+        if throughput_stat_flag:
+            _throughput_stats = self.__get_single_throughput_statistic(stream_ids)
+        self._stop_transmission(None)
+        result = {}
+        used_rx_port = []
+        for stream_id in stream_ids:
+            port_id = self.__streams[stream_id]["rx_port"]
+            if port_id in used_rx_port:
+                continue
+            stats = self._retrieve_port_statistic(stream_id, "loss")
+            tx_pkts, rx_pkts = stats
+            lost_p = tx_pkts - rx_pkts
+            if tx_pkts <= 0:
+                loss_rate = 0
+            else:
+                loss_rate = float(lost_p) / float(tx_pkts)
+                if loss_rate < 0:
+                    loss_rate = 0
+            result[port_id] = (loss_rate, tx_pkts, rx_pkts)
+        if throughput_stat_flag:
+            return result, _throughput_stats
+        else:
+            return result
+
+    def measure_loss(self, stream_ids=[], options={}):
+        """
+        options usage:
+            rate:
+                port rate percent, float(0--100). Default value is 100.
+
+            delay:
+                warm-up time (seconds) before starting the main traffic. If
+                set, warm-up traffic is run for this long to make sure the
+                packet generator is in a good state. No warm-up is run by
+                default.
+
+            duration:
+                traffic duration (seconds). Default value is 10 seconds.
+        """
+        result = self._measure_loss(stream_ids, options)
+        # keep the return value in the same format as dts/etgen
+        # In a real testing scenario, this method can offer more data than that
+        return list(result.values())[0]
+
+    def _measure_rfc2544_ixnet(self, stream_ids=[], options={}):
+        """
+        used for ixNetwork
+        """
+        # main traffic
+        self._prepare_transmission(stream_ids=stream_ids)
+        self._start_transmission(stream_ids, options)
+        self._stop_transmission(None)
+        # parsing test result
+        stats = self._retrieve_port_statistic(stream_ids[0], "rfc2544")
+        tx_pkts, rx_pkts, pps = stats
+        lost_p = tx_pkts - rx_pkts
+        if tx_pkts <= 0:
+            loss_rate = 0
+        else:
+            loss_rate = float(lost_p) / float(tx_pkts)
+            if loss_rate < 0:
+                loss_rate = 0
+        result = (loss_rate, tx_pkts, rx_pkts, pps)
+        return result
+
+    def measure_latency(self, stream_ids=[], options={}):
+        """
+        Measure latency on each tx/rx port
+
+        options usage:
+            rate:
+                port rate percent, float(0--100). Default value is 100.
+
+            delay:
+                warm-up time (seconds) before starting the main traffic. If
+                set, warm-up traffic is run for this long to make sure the
+                packet generator is in a correct state. No warm-up is run by
+                default.
+
+            duration:
+                traffic duration (seconds). Default value is 10 seconds.
+        """
+        delay = options.get("delay")
+        duration = options.get("duration") or 10
+        self._prepare_transmission(stream_ids=stream_ids, latency=True)
+        # start warm up traffic
+        self.__warm_up_pktgen(stream_ids, options, delay)
+        # main traffic
+        self._start_transmission(stream_ids, options)
+        # keep traffic within a duration time
+        time.sleep(duration)
+        self._stop_transmission(None)
+
+        result = {}
+        used_rx_port = []
+        for stream_id in stream_ids:
+            port_id = self.__streams[stream_id]["rx_port"]
+            if port_id in used_rx_port:
+                continue
+            stats = self._retrieve_port_statistic(stream_id, "latency")
+            result[port_id] = stats
+        self.logger.info(result)
+
+        return result
+
+    def _check_loss_rate(self, result, permit_loss_rate):
+        """
+        support multiple link peers; if any link peer's loss rate exceeds
+        the permitted rate, return False
+        """
+        for port_id, _result in result.items():
+            loss_rate, _, _ = _result
+            if loss_rate > permit_loss_rate:
+                return False
+        return True
+
+    def measure_rfc2544(self, stream_ids=[], options={}):
+        """check loss rate with rate percent dropping
+
+        options usage:
+            rate:
+                port rate percent at first round testing(0 ~ 100), default is 100.
+
+            pdr:
+                permit packet drop rate, , default is 0.
+
+            drop_step:
+                port rate percent drop step(0 ~ 100), default is 1.
+
+            delay:
+                warm up time before start main traffic. If it is set, it will
+                start a delay time traffic to make sure packet generator
+                under good status. Warm up flow is ignored by default.
+
+            duration:
+                traffic lasting time(second). Default value is 10 second.
+        """
+        loss_rate_table = []
+        rate_percent = options.get("rate") or float(100)
+        permit_loss_rate = options.get("pdr") or 0
+        self.logger.info("allow loss rate: %f " % permit_loss_rate)
+        rate_step = options.get("drop_step") or 1
+        result = self._measure_loss(stream_ids, options)
+        status = self._check_loss_rate(result, permit_loss_rate)
+        loss_rate_table.append([rate_percent, result])
+        # if the loss rate is acceptable on the first round, skip the rest of the flow
+        if status:
+            # the return data keeps the same format as dts/etgen
+            # In fact, multiple link peers have multiple loss rate values;
+            # here only one is picked
+            tx_num, rx_num = list(result.values())[0][1:]
+            return rate_percent, tx_num, rx_num
+        _options = deepcopy(options)
+        # if the warm-up option 'delay' is set, ignore it in the rest of the flow
+        if "delay" in _options:
+            _options.pop("delay")
+        if "rate" in _options:
+            _options.pop("rate")
+        while not status and rate_percent > 0:
+            rate_percent = rate_percent - rate_step
+            if rate_percent <= 0:
+                msg = "rfc2544 run under zero rate"
+                self.logger.warning(msg)
+                break
+            self._clear_streams()
+            # set stream rate percent to custom value
+            self._set_stream_rate_percent(rate_percent)
+            # run loss rate testing
+            result = self._measure_loss(stream_ids, _options)
+            loss_rate_table.append([rate_percent, result])
+            status = self._check_loss_rate(result, permit_loss_rate)
+        self.logger.info(pformat(loss_rate_table))
+        self.logger.info("zero loss rate percent is %f" % rate_percent)
+        # use the last result as return data to keep the same format as dts/etgen
+        # In fact, multiple link peers have multiple loss rate values;
+        # here only one is picked
+        last_result = loss_rate_table[-1]
+        rate_percent = last_result[0]
+        tx_num, rx_num = list(last_result[1].values())[0][1:]
+        return rate_percent, tx_num, rx_num
+
+    def measure_rfc2544_with_pps(self, stream_ids=[], options={}):
+        """
+        check loss rate with pps bisecting (not implemented)
+
+        Currently, ixia/trex use rate percent to control the port flow rate;
+        pps is not supported.
+        """
+        max_pps = options.get("max_pps")
+        min_pps = options.get("min_pps")
+        step = options.get("step") or 10000
+        permit_loss_rate = options.get("permit_loss_rate") or 0.0001
+        # traffic parameters
+        loss_pps_table = []
+        pps = traffic_pps_max = max_pps
+        traffic_pps_min = min_pps
+
+        while True:
+            # set stream rate percent to custom value
+            self._set_stream_pps(pps)
+            # run loss rate testing
+            _options = deepcopy(options)
+            result = self._measure_loss(stream_ids, _options)
+            loss_pps_table.append([pps, result])
+            status = self._check_loss_rate(result, permit_loss_rate)
+            if status:
+                traffic_pps_max = pps
+            else:
+                traffic_pps_min = pps
+            if traffic_pps_max - traffic_pps_min < step:
+                break
+            pps = (traffic_pps_max - traffic_pps_min) / 2 + traffic_pps_min
+
+        self.logger.info("zero loss pps is %f" % pps)
+        # use the last result as return data to keep the same format as dts/etgen
+        # In fact, multiple link peers have multiple loss rate values;
+        # here only one is picked
+        return list(loss_pps_table[-1][1].values())[0]
+
+    def measure_rfc2544_dichotomy(self, stream_ids=[], options={}):
+        """check loss rate using dichotomy algorithm
+
+        options usage:
+            delay:
+                warm up time before start main traffic. If it is set, it will
+                start a delay time traffic to make sure packet generator
+                under good status. Warm up flow is ignored by default.
+
+            duration:
+                traffic lasting time(second). Default value is 10 second.
+
+            min_rate:
+                lower bound rate percent , default is 0.
+
+            max_rate:
+                upper bound rate percent , default is 100.
+
+            pdr:
+                permit packet drop rate(<1.0), default is 0.
+
+            accuracy :
+                dichotomy algorithm accuracy, default 0.001.
+        """
+        if self.pktgen_type == PKTGEN_IXIA_NETWORK:
+            return self._measure_rfc2544_ixnet(stream_ids, options)
+
+        max_rate = options.get("max_rate") or 100.0
+        min_rate = options.get("min_rate") or 0.0
+        accuracy = options.get("accuracy") or 0.001
+        permit_loss_rate = options.get("pdr") or 0.0
+        duration = options.get("duration") or 10.0
+        throughput_stat_flag = options.get("throughput_stat_flag") or False
+        # start warm up traffic
+        delay = options.get("delay")
+        _options = {"duration": duration}
+        if delay:
+            self._prepare_transmission(stream_ids=stream_ids)
+            self.__warm_up_pktgen(stream_ids, _options, delay)
+            self._clear_streams()
+        # traffic parameters for dichotomy algorithm
+        loss_rate_table = []
+        hit_result = None
+        hit_rate = 0
+        rate = traffic_rate_max = max_rate
+        traffic_rate_min = min_rate
+        while True:
+            # run loss rate testing
+            _options = {
+                "throughput_stat_flag": throughput_stat_flag,
+                "duration": duration,
+            }
+            result = self._measure_loss(stream_ids, _options)
+            loss_rate_table.append([rate, result])
+            status = self._check_loss_rate(
+                result[0] if throughput_stat_flag else result, permit_loss_rate
+            )
+            # if the upper bound rate percent passes, skip the rest of the flow
+            if rate == max_rate and status:
+                hit_result = result
+                hit_rate = rate
+                break
+            # if lower bound rate percent not hit, quit the left flow
+            if rate == min_rate and not status:
+                break
+            if status:
+                traffic_rate_min = rate
+                hit_result = result
+                hit_rate = rate
+            else:
+                traffic_rate_max = rate
+            if traffic_rate_max - traffic_rate_min < accuracy:
+                break
+            rate = (traffic_rate_max - traffic_rate_min) / 2 + traffic_rate_min
+            self._clear_streams()
+            # set stream rate percent to custom value
+            self._set_stream_rate_percent(rate)
+
+        if throughput_stat_flag:
+            if not hit_result or not hit_result[0]:
+                msg = (
+                    "expected permit loss rate <{0}> "
+                    "not between rate {1} and rate {2}"
+                ).format(permit_loss_rate, max_rate, min_rate)
+                self.logger.error(msg)
+                self.logger.info(pformat(loss_rate_table))
+                ret_value = 0, result[0][0][1], result[0][0][2], 0
+            else:
+                self.logger.debug(pformat(loss_rate_table))
+                ret_value = (
+                    hit_rate,
+                    hit_result[0][0][1],
+                    hit_result[0][0][2],
+                    hit_result[1][1],
+                )
+        else:
+            if not hit_result:
+                msg = (
+                    "expected permit loss rate <{0}> "
+                    "not between rate {1} and rate {2}"
+                ).format(permit_loss_rate, max_rate, min_rate)
+                self.logger.error(msg)
+                self.logger.info(pformat(loss_rate_table))
+                ret_value = 0, result[0][1], result[0][2]
+            else:
+                self.logger.debug(pformat(loss_rate_table))
+                ret_value = hit_rate, hit_result[0][1], hit_result[0][2]
+        self.logger.info("zero loss rate is %f" % hit_rate)
+
+        return ret_value
+
+    def measure(self, stream_ids, traffic_opt):
+        """
+        a unified interface method for all packet generators
+        """
+        method = traffic_opt.get("method")
+        if method == "throughput":
+            result = self.measure_throughput(stream_ids, traffic_opt)
+        elif method == "latency":
+            result = self.measure_latency(stream_ids, traffic_opt)
+        elif method == "loss":
+            result = self.measure_loss(stream_ids, traffic_opt)
+        elif method == "rfc2544":
+            result = self.measure_rfc2544(stream_ids, traffic_opt)
+        elif method == "rfc2544_with_pps":
+            result = self.measure_rfc2544_with_pps(stream_ids, traffic_opt)
+        elif method == "rfc2544_dichotomy":
+            result = self.measure_rfc2544_dichotomy(stream_ids, traffic_opt)
+        else:
+            result = None
+
+        return result
+
+    def _summary_statistic(self, array=[]):
+        """
+        Sum all values in the statistics array
+        """
+        return sum(array)
+
+    def _get_stream(self, stream_id):
+        return self.__streams[stream_id]
+
+    def _get_generator_conf_instance(self):
+        conf_inst = PktgenConf(self.pktgen_type)
+        pktgen_inst_type = conf_inst.pktgen_conf.get_sections()
+        if len(pktgen_inst_type) < 1:
+            msg = (
+                "packet generator <{0}> has no configuration in pktgen.cfg"
+            ).format(self.pktgen_type)
+            raise Exception(msg)
+        return conf_inst
+
+    @abstractmethod
+    def _prepare_transmission(self, stream_ids=[], latency=False):
+        pass
+
+    @abstractmethod
+    def _start_transmission(self, stream_ids, options={}):
+        pass
+
+    @abstractmethod
+    def _stop_transmission(self, stream_id):
+        pass
+
+    @abstractmethod
+    def _retrieve_port_statistic(self, stream_id, mode):
+        pass
+
+    @abstractmethod
+    def _check_options(self, opts={}):
+        pass
+
+    @abstractmethod
+    def quit_generator(self):
+        pass
+
+
+class DpdkPacketGenerator(PacketGenerator):
+    pass  # not implemented
-- 
2.20.1



* [RFC PATCH v1 06/18] dts: merge DTS framework/pktgen_ixia.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
                   ` (4 preceding siblings ...)
  2022-04-06 15:04 ` [RFC PATCH v1 05/18] dts: merge DTS framework/pktgen_base.py " Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 07/18] dts: merge DTS framework/pktgen_ixia_network.py " Juraj Linkeš
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/framework/pktgen_ixia.py | 1869 ++++++++++++++++++++++++++++++++++
 1 file changed, 1869 insertions(+)
 create mode 100644 dts/framework/pktgen_ixia.py

diff --git a/dts/framework/pktgen_ixia.py b/dts/framework/pktgen_ixia.py
new file mode 100644
index 0000000000..9851e567a4
--- /dev/null
+++ b/dts/framework/pktgen_ixia.py
@@ -0,0 +1,1869 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2019 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+import os
+import re
+import string
+import time
+from pprint import pformat
+
+from scapy.packet import Packet
+from scapy.utils import wrpcap
+
+from .pktgen_base import (
+    PKTGEN_IXIA,
+    TRANSMIT_CONT,
+    TRANSMIT_M_BURST,
+    TRANSMIT_S_BURST,
+    PacketGenerator,
+)
+from .settings import SCAPY2IXIA
+from .ssh_connection import SSHConnection
+from .utils import convert_int2ip, convert_ip2int, convert_mac2long, convert_mac2str
+
+
+class Ixia(SSHConnection):
+    """
+    IXIA performance measurement class.
+    """
+
+    def __init__(self, tester, ixiaPorts, logger):
+        self.tester = tester
+        self.NAME = PKTGEN_IXIA
+        super(Ixia, self).__init__(
+            self.get_ip_address(),
+            self.NAME,
+            self.tester.get_username(),
+            self.get_password(),
+        )
+        self.logger = logger
+        super(Ixia, self).init_log(self.logger)
+
+        self.tcl_cmds = []
+        self.chasId = None
+        self.conRelation = {}
+
+        ixiaRef = self.NAME
+        if ixiaRef is None or ixiaRef not in ixiaPorts:
+            return
+
+        self.ixiaVersion = ixiaPorts[ixiaRef]["Version"]
+        self.ports = ixiaPorts[ixiaRef]["Ports"]
+
+        if "force100g" in ixiaPorts[ixiaRef]:
+            self.enable100g = ixiaPorts[ixiaRef]["force100g"]
+        else:
+            self.enable100g = "disable"
+
+        self.logger.debug(self.ixiaVersion)
+        self.logger.debug(self.ports)
+
+        self.tclServerIP = ixiaPorts[ixiaRef]["IP"]
+
+        # prepare tcl shell and ixia library
+        self.send_expect("tclsh", "% ")
+        self.send_expect("source ./IxiaWish.tcl", "% ")
+        self.send_expect("set ::env(IXIA_VERSION) %s" % self.ixiaVersion, "% ")
+        out = self.send_expect("package req IxTclHal", "% ")
+        self.logger.debug("package req IxTclHal return:" + out)
+        if self.ixiaVersion in out:
+            if not self.tcl_server_login():
+                self.close()
+                self.session = None
+            for port in self.ports:
+                port["speed"] = self.get_line_rate(self.chasId, port)
+        # ixia port stream management table
+        self.stream_index = {}
+        self.stream_total = {}
+
+    def get_line_rate(self, chasid, port):
+        ixia_port = "%d %d %d" % (chasid, port["card"], port["port"])
+        return self.send_expect("stat getLineSpeed %s" % ixia_port, "%")
+
+    def get_ip_address(self):
+        return self.tester.get_ip_address()
+
+    def get_password(self):
+        return self.tester.get_password()
+
+    def add_tcl_cmd(self, cmd):
+        """
+        Add one tcl command into command list.
+        """
+        self.tcl_cmds.append(cmd)
+
+    def add_tcl_cmds(self, cmds):
+        """
+        Add one tcl command list into command list.
+        """
+        self.tcl_cmds += cmds
+
+    def clean(self):
+        """
+        Clean ownership of IXIA devices and logout tcl session.
+        """
+        self.send_expect("clearOwnershipAndLogout", "% ")
+        self.close()
+
+    def parse_pcap(self, fpcap):
+        # save Packet instance to pcap file
+        if isinstance(fpcap, Packet):
+            pcap_path = "/root/temp.pcap"
+            if os.path.exists(pcap_path):
+                os.remove(pcap_path)
+            wrpcap(pcap_path, fpcap)
+        else:
+            pcap_path = fpcap
+
+        dump_str1 = "cmds = []\n"
+        dump_str2 = "for i in rdpcap('%s', -1):\n" % pcap_path
+        dump_str3 = (
+            "    if 'VXLAN' in i.command():\n"
+            + "        vxlan_str = ''\n"
+            + "        l = len(i[VXLAN])\n"
+            + "        vxlan = str(i[VXLAN])\n"
+            + "        first = True\n"
+            + "        for j in range(l):\n"
+            + "            if first:\n"
+            + '                vxlan_str += "VXLAN(hexval=\'%02X" %ord(vxlan[j])\n'
+            + "                first = False\n"
+            + "            else:\n"
+            + '                vxlan_str += " %02X" %ord(vxlan[j])\n'
+            + '        vxlan_str += "\')"\n'
+            + '        command = re.sub(r"VXLAN(.*)", vxlan_str, i.command())\n'
+            + "    else:\n"
+            + "        command = i.command()\n"
+            + "    cmds.append(command)\n"
+            + "print(cmds)\n"
+            + "exit()"
+        )
+
+        f = open("dumppcap.py", "w")
+        f.write(dump_str1)
+        f.write(dump_str2)
+        f.write(dump_str3)
+        f.close()
+
+        self.session.copy_file_to("dumppcap.py")
+        out = self.send_expect("scapy -c dumppcap.py 2>/dev/null", "% ", 120)
+        flows = eval(out)
+        return flows
+
+    def macToTclFormat(self, macAddr):
+        """
+        Convert normal mac address format into IXIA's format.
+        """
+        macAddr = macAddr.upper()
+        return "%s %s %s %s %s %s" % (
+            macAddr[:2],
+            macAddr[3:5],
+            macAddr[6:8],
+            macAddr[9:11],
+            macAddr[12:14],
+            macAddr[15:17],
+        )
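The slicing above just re-delimits a colon-separated MAC address. A minimal standalone sketch of the same conversion (hypothetical helper name, not part of this patch), assuming a well-formed `aa:bb:cc:dd:ee:ff` input:

```python
def mac_to_tcl_format(mac_addr: str) -> str:
    # Equivalent to Ixia.macToTclFormat for well-formed MACs:
    # uppercase the bytes and separate them with spaces.
    return " ".join(mac_addr.upper().split(":"))
```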
+
+    def set_ether_fields(self, fields, default_fields):
+        """
+        Configure Ether protocol field value.
+        """
+        addr_mode = {
+            # decrement the MAC address for as many numSA/numDA specified
+            "dec": "decrement",
+            # increment the MAC address for as many numSA/numDA specified
+            "inc": "increment",
+            # Generate random destination MAC address for each frame
+            "random": "ctrRandom",
+            # set RepeatCounter mode to be idle as default
+            "default": "idle",
+        }
+
+        cmds = []
+        for name, config in fields.items():
+            default_config = default_fields.get(name)
+            mac_start = config.get("start") or default_config.get("start")
+            mac_end = config.get("end")
+            step = config.get("step") or 1
+            action = config.get("action") or default_config.get("action")
+            prefix = "sa" if name == "src" else "da"
+            if action == "dec" and mac_end:
+                cmds.append('stream config -{0} "{1}"'.format(prefix, mac_end))
+            else:
+                cmds.append('stream config -{0} "{1}"'.format(prefix, mac_start))
+            if step:
+                cmds.append("stream config -{0}Step {1}".format(prefix, step))
+                # if ContinueFromLastValue is not enabled, the mac will always stay at start_mac
+                if prefix == "sa":
+                    cmds.append("stream config -enableSaContinueFromLastValue true")
+                elif prefix == "da":
+                    cmds.append("stream config -enableDaContinueFromLastValue true")
+            if action:
+                cmds.append(
+                    "stream config -{0}RepeatCounter {1}".format(
+                        prefix, addr_mode.get(action)
+                    )
+                )
+            if mac_end:
+                mac_start_int = convert_mac2long(mac_start)
+                mac_end_int = convert_mac2long(mac_end)
+                flow_num = mac_end_int - mac_start_int + 1
+                if flow_num <= 0:
+                    msg = "end mac should not be smaller than start mac"
+                    raise Exception(msg)
+            else:
+                flow_num = None
+
+            if flow_num:
+                cmds.append(
+                    "stream config -num{0} {1}".format(prefix.upper(), flow_num)
+                )
+            # clear default field after it has been set
+            default_fields.pop(name)
+        # if some fields are not set, set them here
+        if default_fields:
+            for name, config in default_fields.items():
+                ip_start = config.get("start")
+                prefix = "sa" if name == "src" else "da"
+                cmds.append('stream config -{0} "{1}"'.format(prefix, ip_start))
+
+        return cmds
+
+    def ether(self, port, vm, src, dst, type):
+        """
+        Configure Ether protocol.
+        """
+        fields = vm.get("mac")
+        srcMac = self.macToTclFormat(src)
+        dstMac = self.macToTclFormat(dst)
+        # common command setting
+        self.add_tcl_cmd("protocol config -ethernetType ethernetII")
+        cmds = []
+        # if vm has been set, use the pcap packet's fields as default values
+        if fields:
+            default_fields = {
+                "src": {
+                    "action": "default",
+                    "start": src,
+                },
+                "dst": {
+                    "action": "default",
+                    "start": dst,
+                },
+            }
+            # set custom setting for field actions
+            cmds = self.set_ether_fields(fields, default_fields)
+            # set them in tcl commands group
+            self.add_tcl_cmds(cmds)
+        else:
+            self.add_tcl_cmd('stream config -sa "%s"' % srcMac)
+            self.add_tcl_cmd('stream config -da "%s"' % dstMac)
+
+    def set_ip_fields(self, fields, default_fields):
+        addr_mode = {
+            # decrement the host portion of the IP address for as many
+            # IpAddrRepeatCount specified
+            "dec": "ipDecrHost",
+            # increment the host portion of the IP address for as many
+            # IpAddrRepeatCount specified
+            "inc": "ipIncrHost",
+            # Generate random IP addresses
+            "random": "ipRandom",
+            # no change to IP address regardless of IpAddrRepeatCount
+            "idle": "ipIdle",
+            # set default
+            "default": "ipIdle",
+        }
+        cmds = []
+        for name, config in fields.items():
+            default_config = default_fields.get(name)
+            fv_name = "IP.{0}".format(name)
+            ip_start = config.get("start") or default_config.get("start")
+            ip_end = config.get("end")
+            if ip_end:
+                ip_start_int = convert_ip2int(ip_start)
+                ip_end_int = convert_ip2int(ip_end)
+                flow_num = ip_end_int - ip_start_int + 1
+                if flow_num <= 0:
+                    msg = "end ip address should not be smaller than start ip address"
+                    raise Exception(msg)
+            else:
+                flow_num = None
+
+            mask = config.get("mask")
+            _step = config.get("step")
+            step = int(_step) if _step and isinstance(_step, str) else _step or 1
+            action = config.get("action")
+            # get ixia command prefix
+            prefix = "source" if name == "src" else "dest"
+            # set command
+            if action == "dec" and ip_end:
+                cmds.append('ip config -{0}IpAddr "{1}"'.format(prefix, ip_end))
+            else:
+                cmds.append('ip config -{0}IpAddr "{1}"'.format(prefix, ip_start))
+            if flow_num:
+                cmds.append(
+                    "ip config -{0}IpAddrRepeatCount {1}".format(prefix, flow_num)
+                )
+
+            cmds.append(
+                "ip config -{0}IpAddrMode {1}".format(
+                    prefix, addr_mode.get(action or "default")
+                )
+            )
+
+            if mask:
+                cmds.append("ip config -{0}IpMask '{1}'".format(prefix, mask))
+            # clear default field after it has been set
+            default_fields.pop(name)
+        # if all fields are set
+        if not default_fields:
+            return cmds
+        # if some fields are not set, set them here
+        for name, config in default_fields.items():
+            ip_start = config.get("start")
+            prefix = "source" if name == "src" else "dest"
+            cmds.append('ip config -{0}IpAddr "{1}"'.format(prefix, ip_start))
+            cmds.append(
+                "ip config -{0}IpAddrMode {1}".format(prefix, addr_mode.get("default"))
+            )
+
+        return cmds
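The flow count computed above becomes the `-IpAddrRepeatCount` value. A self-contained sketch of the same arithmetic, assuming `convert_ip2int` from `.utils` (not shown in this patch) behaves like `int(ipaddress.IPv4Address(...))`:

```python
import ipaddress

def ip_range_flow_count(ip_start: str, ip_end: str) -> int:
    # Inclusive range size, as computed in set_ip_fields.
    flow_num = (
        int(ipaddress.IPv4Address(ip_end))
        - int(ipaddress.IPv4Address(ip_start))
        + 1
    )
    if flow_num <= 0:
        raise ValueError("end ip address should not be smaller than start ip address")
    return flow_num
```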
+
+    def ip(
+        self,
+        port,
+        vm,
+        frag,
+        src,
+        proto,
+        tos,
+        dst,
+        chksum,
+        len,
+        version,
+        flags,
+        ihl,
+        ttl,
+        id,
+        options=None,
+    ):
+        """
+        Configure IP protocol.
+        """
+        fields = vm.get("ip")
+        # common command setting
+        self.add_tcl_cmd("protocol config -name ip")
+        # if fields has been set
+        if fields:
+            # use the pcap packet's fields as default values
+            default_fields = {
+                "src": {
+                    "action": "default",
+                    "start": src,
+                },
+                "dst": {
+                    "action": "default",
+                    "start": dst,
+                },
+            }
+            # set custom setting for field actions
+            cmds = self.set_ip_fields(fields, default_fields)
+            # append custom setting
+            self.add_tcl_cmds(cmds)
+        else:
+            self.add_tcl_cmd('ip config -sourceIpAddr "%s"' % src)
+            self.add_tcl_cmd('ip config -destIpAddr "%s"' % dst)
+        # common command setting
+        self.add_tcl_cmd("ip config -ttl %d" % ttl)
+        self.add_tcl_cmd("ip config -totalLength %d" % len)
+        self.add_tcl_cmd("ip config -fragment %d" % frag)
+        self.add_tcl_cmd("ip config -ipProtocol {0}".format(proto))
+        self.add_tcl_cmd("ip config -identifier %d" % id)
+        self.add_tcl_cmd("stream config -framesize %d" % (len + 18))
+        # set stream setting in port
+        self.add_tcl_cmd("ip set %s" % port)
+
+    def ipv6(self, port, vm, version, tc, fl, plen, nh, hlim, src, dst):
+        """
+        Configure IPv6 protocol.
+        """
+        self.add_tcl_cmd("protocol config -name ipV6")
+        self.add_tcl_cmd("ipV6 setDefault")
+        self.add_tcl_cmd('ipV6 config -destAddr "%s"' % self.ipv6_to_tcl_format(dst))
+        self.add_tcl_cmd('ipV6 config -sourceAddr "%s"' % self.ipv6_to_tcl_format(src))
+        self.add_tcl_cmd("ipV6 config -flowLabel %d" % fl)
+        self.add_tcl_cmd("ipV6 config -nextHeader %d" % nh)
+        self.add_tcl_cmd("ipV6 config -hopLimit %d" % hlim)
+        self.add_tcl_cmd("ipV6 config -trafficClass %d" % tc)
+        self.add_tcl_cmd("ipV6 clearAllExtensionHeaders")
+        self.add_tcl_cmd("ipV6 addExtensionHeader %d" % nh)
+
+        self.add_tcl_cmd("stream config -framesize %d" % (plen + 40 + 18))
+        self.add_tcl_cmd("ipV6 set %s" % port)
+
+    def udp(self, port, vm, dport, sport, len, chksum):
+        """
+        Configure UDP protocol.
+        """
+        self.add_tcl_cmd("udp setDefault")
+        self.add_tcl_cmd("udp config -sourcePort %d" % sport)
+        self.add_tcl_cmd("udp config -destPort %d" % dport)
+        self.add_tcl_cmd("udp config -length %d" % len)
+        self.add_tcl_cmd("udp set %s" % port)
+
+    def vxlan(self, port, vm, hexval):
+        self.add_tcl_cmd("protocolPad setDefault")
+        self.add_tcl_cmd("protocol config -enableProtocolPad true")
+        self.add_tcl_cmd('protocolPad config -dataBytes "%s"' % hexval)
+        self.add_tcl_cmd("protocolPad set %s" % port)
+
+    def tcp(
+        self,
+        port,
+        vm,
+        sport,
+        dport,
+        seq,
+        ack,
+        dataofs,
+        reserved,
+        flags,
+        window,
+        chksum,
+        urgptr,
+        options=None,
+    ):
+        """
+        Configure TCP protocol.
+        """
+        self.add_tcl_cmd("tcp setDefault")
+        self.add_tcl_cmd("tcp config -sourcePort %d" % sport)
+        self.add_tcl_cmd("tcp config -destPort %d" % dport)
+        self.add_tcl_cmd("tcp set %s" % port)
+
+    def sctp(self, port, vm, sport, dport, tag, chksum):
+        """
+        Configure SCTP protocol (reuses the tcp TCL commands for port setup).
+        """
+        self.add_tcl_cmd("tcp config -sourcePort %d" % sport)
+        self.add_tcl_cmd("tcp config -destPort %d" % dport)
+        self.add_tcl_cmd("tcp set %s" % port)
+
+    def set_dot1q_fields(self, fields):
+        """
+        Configure 8021Q protocol field name.
+        """
+        addr_mode = {
+            # The VlanID tag is decremented by step for repeat number of times
+            "dec": "vDecrement",
+            # The VlanID tag is incremented by step for repeat number of times
+            "inc": "vIncrement",
+            # Generate random VlanID tag for each frame
+            "random": "vCtrRandom",
+            # No change to VlanID tag regardless of repeat
+            "idle": "vIdle",
+        }
+        cmds = []
+        for name, config in fields.items():
+            fv_name = "8021Q.{0}".format(name)
+            vlan_start = config.get("start") or 0
+            vlan_end = config.get("end") or 256
+            if vlan_end:
+                flow_num = vlan_end - vlan_start + 1
+                if flow_num <= 0:
+                    msg = "end vlan id should not be smaller than start vlan id"
+                    raise Exception(msg)
+            else:
+                flow_num = None
+            step = config.get("step") or 1
+            action = config.get("action")
+            # ------------------------------------------------
+            # set command
+            if step:
+                cmds.append("vlan config -step {0}".format(step))
+            if flow_num:
+                cmds.append("vlan config -repeat {0}".format(flow_num))
+            if action:
+                cmds.append("vlan config -mode {0}".format(addr_mode.get(action)))
+        return cmds
+
+    def dot1q(self, port, vm, prio, id, vlan, type):
+        """
+        Configure 8021Q protocol.
+        """
+        fields = vm.get("vlan")
+        # common command setting
+        self.add_tcl_cmd("protocol config -enable802dot1qTag true")
+        # if fields has been set
+        if fields:
+            # set custom setting for field actions
+            cmds = self.set_dot1q_fields(fields)
+            self.add_tcl_cmds(cmds)
+        self.add_tcl_cmd("vlan config -vlanID %d" % vlan)
+        self.add_tcl_cmd("vlan config -userPriority %d" % prio)
+        # set stream in port
+        self.add_tcl_cmd("vlan set %s" % port)
+
+    def config_stream(
+        self, fpcap, vm, port_index, rate_percent, stream_id=1, latency=False
+    ):
+        """
+        Configure IXIA stream and enable multiple flows.
+        """
+        ixia_port = self.get_ixia_port(port_index)
+        flows = self.parse_pcap(fpcap)
+        if not flows:
+            msg = "pcap file contains no flows; exactly one is required."
+            raise Exception(msg)
+        if len(flows) >= 2:
+            msg = "pcap file contains more than one flow; exactly one is required."
+            raise Exception(msg)
+
+        # set commands at first stream
+        if stream_id == 1:
+            self.add_tcl_cmd("ixGlobalSetDefault")
+        # set burst stream if burst stream is required
+        stream_config = vm.get("stream_config")
+        transmit_mode = stream_config.get("transmit_mode") or TRANSMIT_CONT
+        if transmit_mode == TRANSMIT_S_BURST:
+            cmds = self.config_single_burst_stream(
+                stream_config.get("txmode"), rate_percent
+            )
+            self.add_tcl_cmds(cmds)
+        else:
+            self.config_ixia_stream(
+                rate_percent, self.stream_total.get(port_index), latency
+            )
+
+        pat = re.compile(r"(\w+)\((.*)\)")
+        for flow in flows:
+            for header in flow.split("/"):
+                match = pat.match(header)
+                params = eval("dict(%s)" % match.group(2))
+                method_name = match.group(1)
+                if method_name == "VXLAN":
+                    method = getattr(self, method_name.lower())
+                    method(ixia_port, vm.get("fields_config", {}), **params)
+                    break
+                if method_name in SCAPY2IXIA:
+                    method = getattr(self, method_name.lower())
+                    method(ixia_port, vm.get("fields_config", {}), **params)
+            self.add_tcl_cmd("stream set %s %d" % (ixia_port, stream_id))
+            # only use one packet format in pktgen
+            break
+
+        # set commands at last stream
+        if stream_id >= self.stream_total[port_index]:
+            self.add_tcl_cmd("stream config -dma gotoFirst")
+            self.add_tcl_cmd("stream set %s %d" % (ixia_port, stream_id))
+
+    def config_single_burst_stream(self, txmode, rate_percent):
+        """configure burst stream."""
+        gapUnits = {
+            # (default) Sets units of time for gap to nanoseconds
+            "ns": "gapNanoSeconds",
+            # Sets units of time for gap to microseconds
+            "us": "gapMicroSeconds",
+            # Sets units of time for gap to milliseconds
+            "m": "gapMilliSeconds",
+            # Sets units of time for gap to seconds
+            "s": "gapSeconds",
+        }
+        pkt_count = 1
+        burst_count = txmode.get("total_pkts", 32)
+        frameType = txmode.get("frameType") or {}
+        time_unit = frameType.get("type", "ns")
+        gapUnit = gapUnits.get(time_unit, gapUnits["ns"])
+        # The inter-stream gap is the delay in clock ticks between stream.
+        # This delay comes after the receive trigger is enabled. Setting this
+        # option to 0 means no delay. (default = 960.0)
+        isg = frameType.get("isg", 100)
+        # The inter-frame gap specified in clock ticks (default = 960.0).
+        ifg = frameType.get("ifg", 100)
+        # Inter-Burst Gap is the delay between bursts of frames in clock ticks
+        # (see ifg option for definition of clock ticks). If the IBG is set to
+        # 0 then the IBG is equal to the ISG and the IBG becomes disabled.
+        # (default = 960.0)
+        ibg = frameType.get("ibg", 100)
+        frame_cmds = [
+            "stream config -rateMode usePercentRate",
+            "stream config -percentPacketRate %s" % rate_percent,
+            "stream config -dma stopStream",
+            "stream config -rateMode useGap",
+            "stream config -gapUnit {0}".format(gapUnit),
+            "stream config -numFrames {0}".format(pkt_count),
+            "stream config -numBursts {0}".format(burst_count),
+            "stream config -ifg {0}".format(ifg),
+            "stream config -ifgType gapFixed",
+            #             "stream config -enableIbg true",   # reserve
+            #             "stream config -ibg {0}".format(ibg), # reserve
+            #             "stream config -enableIsg true", # reserve
+            #             "stream config -isg {0}".format(isg), # reserve
+            "stream config -frameSizeType sizeFixed",
+        ]
+
+        return frame_cmds
+
+    def config_ixia_stream(self, rate_percent, total_flows, latency):
+        """
+        Configure IXIA stream with rate and latency.
+        Override this method if you want to add custom stream configuration.
+        """
+        self.add_tcl_cmd("stream config -rateMode usePercentRate")
+        self.add_tcl_cmd("stream config -percentPacketRate %s" % rate_percent)
+        self.add_tcl_cmd("stream config -numFrames 1")
+        if total_flows == 1:
+            self.add_tcl_cmd("stream config -dma contPacket")
+        else:
+            self.add_tcl_cmd("stream config -dma advance")
+        # request by packet Group
+        if latency is not False:
+            self.add_tcl_cmd("stream config -fir true")
+
+    def tcl_server_login(self):
+        """
+        Connect to tcl server and take ownership of all the ports needed.
+        """
+        out = self.send_expect("ixConnectToTclServer %s" % self.tclServerIP, "% ", 30)
+        self.logger.debug("ixConnectToTclServer return:" + out)
+        if out.strip()[-1] != "0":
+            return False
+
+        self.send_expect("ixLogin IxiaTclUser", "% ")
+
+        out = self.send_expect("ixConnectToChassis %s" % self.tclServerIP, "% ", 30)
+        if out.strip()[-1] != "0":
+            return False
+
+        out = self.send_expect(
+            "set chasId [ixGetChassisID %s]" % self.tclServerIP, "% "
+        )
+        self.chasId = int(out.strip())
+
+        out = self.send_expect(
+            "ixClearOwnership [list %s]"
+            % " ".join(
+                [
+                    "[list %d %d %d]" % (self.chasId, item["card"], item["port"])
+                    for item in self.ports
+                ]
+            ),
+            "% ",
+            10,
+        )
+        if out.strip()[-1] != "0":
+            self.logger.info("Force to take ownership:")
+            out = self.send_expect(
+                "ixTakeOwnership [list %s] force"
+                % " ".join(
+                    [
+                        "[list %d %d %d]" % (self.chasId, item["card"], item["port"])
+                        for item in self.ports
+                    ]
+                ),
+                "% ",
+                10,
+            )
+            if out.strip()[-1] != "0":
+                return False
+
+        return True
+
+    def tcl_server_logout(self):
+        """
+        Disconnect from the tcl server and make sure the session has been logged out.
+        """
+        self.send_expect("ixDisconnectFromChassis %s" % self.tclServerIP, "%")
+        self.send_expect("ixLogout", "%")
+        self.send_expect("ixDisconnectTclServer %s" % self.tclServerIP, "%")
+
+    def config_port(self, pList):
+        """
+        Configure ports and make them ready for performance validation.
+        """
+        pl = list()
+        for item in pList:
+            ixia_port = "%d %d %d" % (self.chasId, item["card"], item["port"])
+            self.add_tcl_cmd("port setFactoryDefaults %s" % ixia_port)
+            # if the line rate is 100G and we need this port work in 100G mode,
+            # we need to add some configure to make it so.
+            if (
+                int(self.get_line_rate(self.chasId, item).strip()) == 100000
+                and self.enable100g == "enable"
+            ):
+                self.add_tcl_cmd("port config -ieeeL1Defaults 0")
+                self.add_tcl_cmd("port config -autonegotiate false")
+                self.add_tcl_cmd("port config -enableRsFec true")
+                self.add_tcl_cmd(
+                    "port set %d %d %d" % (self.chasId, item["card"], item["port"])
+                )
+
+            pl.append("[list %d %d %d]" % (self.chasId, item["card"], item["port"]))
+
+        self.add_tcl_cmd("set portList [list %s]" % " ".join(pl))
+
+        self.add_tcl_cmd("ixClearTimeStamp portList")
+        self.add_tcl_cmd("ixWritePortsToHardware portList")
+        self.add_tcl_cmd("ixCheckLinkState portList")
+
+    def set_ixia_port_list(self, pList):
+        """
+        Implement ports/streams configuration on specified ports.
+        """
+        self.add_tcl_cmd(
+            "set portList [list %s]"
+            % " ".join(["[list %s]" % ixia_port for ixia_port in pList])
+        )
+
+    def send_ping6(self, pci, mac, ipv6):
+        """
+        Send ping6 packet from IXIA ports.
+        """
+        port = self.pci_to_port(pci)
+        ixia_port = "%d %d %d" % (self.chasId, port["card"], port["port"])
+        self.send_expect("source ./ixTcl1.0/ixiaPing6.tcl", "% ")
+        cmd = 'ping6 "%s" "%s" %s' % (
+            self.ipv6_to_tcl_format(ipv6),
+            self.macToTclFormat(mac),
+            ixia_port,
+        )
+        out = self.send_expect(cmd, "% ", 90)
+        return out
+
+    def ipv6_to_tcl_format(self, ipv6):
+        """
+        Convert normal IPv6 address to IXIA format.
+        """
+        ipv6 = ipv6.upper()
+        singleAddr = ipv6.split(":")
+        if "" == singleAddr[0]:
+            singleAddr = singleAddr[1:]
+        if "" in singleAddr:
+            tclFormatAddr = ""
+            addStr = "0:" * (8 - len(singleAddr)) + "0"
+            for i in range(len(singleAddr)):
+                if singleAddr[i] == "":
+                    tclFormatAddr += addStr + ":"
+                else:
+                    tclFormatAddr += singleAddr[i] + ":"
+            tclFormatAddr = tclFormatAddr[0 : len(tclFormatAddr) - 1]
+            return tclFormatAddr
+        else:
+            return ipv6
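The method above expands a `::` abbreviation into explicit zero groups without padding leading zeros. A standalone mirror of the same logic (hypothetical helper name, for illustration only):

```python
def ipv6_to_ixia_format(ipv6: str) -> str:
    # Mirror of Ixia.ipv6_to_tcl_format: replace a "::" abbreviation
    # with explicit "0" groups so that eight groups result.
    groups = ipv6.upper().split(":")
    if groups and groups[0] == "":
        groups = groups[1:]
    if "" not in groups:
        return ipv6.upper()
    # one empty marker stands in for all the elided zero groups
    zeros = ["0"] * (8 - len(groups) + 1)
    out = []
    for g in groups:
        out.extend(zeros if g == "" else [g])
    return ":".join(out)
```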
+
+    def get_ports(self):
+        """
+        API to get ixia ports for dts `ports_info`
+        """
+        plist = list()
+        if self.session is None:
+            return plist
+
+        for p in self.ports:
+            plist.append({"type": "ixia", "pci": "IXIA:%d.%d" % (p["card"], p["port"])})
+        return plist
+
+    def get_ixia_port_pci(self, port_id):
+        ports_info = self.get_ports()
+        pci = ports_info[port_id]["pci"]
+        return pci
+
+    def pci_to_port(self, pci):
+        """
+        Convert IXIA fake pci to IXIA port.
+        """
+        ixia_pci_regex = r"IXIA:(\d+)\.(\d+)"
+        m = re.match(ixia_pci_regex, pci)
+        if m is None:
+            msg = "ixia port not found"
+            self.logger.warning(msg)
+            return {"card": -1, "port": -1}
+
+        return {"card": int(m.group(1)), "port": int(m.group(2))}
+
+    def get_ixia_port_info(self, port):
+        if port is None or port >= len(self.ports):
+            msg = "<{0}> exceed maximum ixia ports".format(port)
+            raise Exception(msg)
+        pci_addr = self.get_ixia_port_pci(port)
+        port_info = self.pci_to_port(pci_addr)
+        return port_info
+
+    def get_ixia_port(self, port):
+        port_info = self.get_ixia_port_info(port)
+        ixia_port = "%d %d %d" % (self.chasId, port_info["card"], port_info["port"])
+        return ixia_port
+
+    def loss(self, portList, ratePercent, delay=5):
+        """
+        Run loss performance test and return loss rate, packets sent and packets received.
+        """
+        rxPortlist, txPortlist = self._configure_everything(portList, ratePercent)
+        return self.get_loss_packet_rate(rxPortlist, txPortlist, delay)
+
+    def get_loss_packet_rate(self, rxPortlist, txPortlist, delay=5):
+        """
+        Get RX/TX packet statistics and calculate loss rate.
+        """
+        time.sleep(delay)
+
+        self.send_expect("ixStopTransmit portList", "%", 10)
+        time.sleep(2)
+        sendNumber = 0
+        for port in txPortlist:
+            self.stat_get_stat_all_stats(port)
+            sendNumber += self.get_frames_sent()
+            time.sleep(0.5)
+
+        self.logger.debug("send :%f" % sendNumber)
+
+        assert sendNumber != 0
+
+        revNumber = 0
+        for port in rxPortlist:
+            self.stat_get_stat_all_stats(port)
+            revNumber += self.get_frames_received()
+        self.logger.debug("rev  :%f" % revNumber)
+
+        return float(sendNumber - revNumber) / sendNumber, sendNumber, revNumber
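The return value above packs the loss rate with the raw counters. The rate itself is just:

```python
def loss_rate(sent: int, received: int) -> float:
    # Same formula as the first element returned by get_loss_packet_rate.
    assert sent != 0
    return float(sent - received) / sent
```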
+
+    def latency(self, portList, ratePercent, delay=5):
+        """
+        Run latency performance test and return latency statistics.
+        """
+        rxPortlist, txPortlist = self._configure_everything(portList, ratePercent, True)
+        return self.get_packet_latency(rxPortlist)
+
+    def get_packet_latency(self, rxPortlist):
+        """
+        Stop IXIA transmit and return latency statistics.
+        """
+        latencyList = []
+        time.sleep(10)
+        self.send_expect("ixStopTransmit portList", "%", 10)
+        for rx_port in rxPortlist:
+            self.pktGroup_get_stat_all_stats(rx_port)
+            latency = {
+                "port": rx_port,
+                "min": self.get_min_latency(),
+                "max": self.get_max_latency(),
+                "average": self.get_average_latency(),
+            }
+            latencyList.append(latency)
+        return latencyList
+
+    def throughput(self, port_list, rate_percent=100, delay=5):
+        """
+        Run throughput performance test and return throughput statistics.
+        """
+        rxPortlist, txPortlist = self._configure_everything(port_list, rate_percent)
+        return self.get_transmission_results(rxPortlist, txPortlist, delay)
+
+    def is_packet_ordered(self, port_list, delay):
+        """
+        Check whether packets are received in the same order as they were
+        transmitted.
+
+        Please note that this function only supports single-stream mode.
+        """
+        port = self.ports[0]
+        ixia_port = "%d %d %d" % (self.chasId, port["card"], port["port"])
+        rxPortlist, txPortlist = self.prepare_port_list(port_list)
+        self.prepare_ixia_for_transmission(txPortlist, rxPortlist)
+        self.send_expect(
+            "port config -receiveMode [expr $::portCapture|$::portRxSequenceChecking|$::portRxModeWidePacketGroup]",
+            "%",
+        )
+        self.send_expect("port config -autonegotiate true", "%")
+        self.send_expect("ixWritePortsToHardware portList", "%")
+        self.send_expect("set streamId 1", "%")
+        self.send_expect("stream setDefault", "%")
+        self.send_expect("ixStartPortPacketGroups %s" % ixia_port, "%")
+        self.send_expect("ixStartTransmit portList", "%")
+        # wait `delay` seconds to make sure link is up
+        self.send_expect("after [expr {1000 * %d}]" % delay, "%")
+        self.send_expect("ixStopTransmit portList", "%")
+        self.send_expect("ixStopPortPacketGroups %s" % ixia_port, "%")
+        self.send_expect("packetGroupStats get %s 1 1" % ixia_port, "%")
+        self.send_expect("packetGroupStats getGroup 1", "%")
+        self.send_expect(
+            "set reverseSequenceError [packetGroupStats cget -reverseSequenceError]",
+            "%",
+        )
+        output = self.send_expect("puts $reverseSequenceError", "%")
+        return int(output.strip())
+
+    def _configure_everything(self, port_list, rate_percent, latency=False):
+        """
+        Prepare and configure IXIA ports for a performance test.
+        """
+        rxPortlist, txPortlist = self.prepare_port_list(
+            port_list, rate_percent, latency
+        )
+        self.prepare_ixia_for_transmission(txPortlist, rxPortlist)
+        self.configure_transmission()
+        self.start_transmission()
+        self.clear_tcl_commands()
+        return rxPortlist, txPortlist
+
+    def clear_tcl_commands(self):
+        """
+        Clear all commands in command list.
+        """
+        del self.tcl_cmds[:]
+
+    def start_transmission(self):
+        """
+        Run commands in command list.
+        """
+        fileContent = "\n".join(self.tcl_cmds) + "\n"
+        self.tester.create_file(fileContent, "ixiaConfig.tcl")
+        self.send_expect("source ixiaConfig.tcl", "% ", 75)
+
+    def configure_transmission(self, option=None):
+        """
+        Queue the command that starts transmission on IXIA ports.
+        """
+        self.add_tcl_cmd("ixStartTransmit portList")
+
+    def prepare_port_list(self, portList, rate_percent=100, latency=False):
+        """
+        Configure stream and flow on every IXIA port.
+        """
+        txPortlist = set()
+        rxPortlist = set()
+
+        for subPortList in portList:
+            txPort, rxPort = subPortList[:2]
+            txPortlist.add(txPort)
+            rxPortlist.add(rxPort)
+
+        # port init
+        self.config_port(
+            [self.get_ixia_port_info(port) for port in txPortlist.union(rxPortlist)]
+        )
+
+        # calculate total streams of ports
+        for (txPort, rxPort, pcapFile, option) in portList:
+            if txPort not in list(self.stream_total.keys()):
+                self.stream_total[txPort] = 1
+            else:
+                self.stream_total[txPort] += 1
+
+        # stream/flow setting
+        for (txPort, rxPort, pcapFile, option) in portList:
+            if txPort not in list(self.stream_index.keys()):
+                self.stream_index[txPort] = 1
+            frame_index = self.stream_index[txPort]
+            self.config_stream(
+                pcapFile, option, txPort, rate_percent, frame_index, latency
+            )
+            self.stream_index[txPort] += 1
+        # clear stream ids table
+        self.stream_index.clear()
+        self.stream_total.clear()
+
+        # config stream before packetGroup
+        if latency:
+            for subPortList in portList:
+                txPort, rxPort = subPortList[:2]
+                self.config_pktGroup_rx(self.get_ixia_port(rxPort))
+                self.config_pktGroup_tx(self.get_ixia_port(txPort))
+        return rxPortlist, txPortlist
+
+    def prepare_ixia_for_transmission(self, txPortlist, rxPortlist):
+        """
+        Clear all statistics and write the configuration to IXIA hardware.
+        """
+        self.add_tcl_cmd("ixClearStats portList")
+        self.set_ixia_port_list([self.get_ixia_port(port) for port in txPortlist])
+        self.add_tcl_cmd("ixWriteConfigToHardware portList")
+        # Wait for changes to take effect and make sure links are up
+        self.add_tcl_cmd("after 1000")
+        for port in txPortlist:
+            self.start_pktGroup(self.get_ixia_port(port))
+        for port in rxPortlist:
+            self.start_pktGroup(self.get_ixia_port(port))
+
+    def hook_transmission_func(self):
+        pass
+
+    def get_transmission_results(self, rx_port_list, tx_port_list, delay=5):
+        """
+        Override this method if you want to change the way of getting results
+        back from IXIA.
+        """
+        time.sleep(delay)
+        bpsRate = 0
+        rate = 0
+        oversize = 0
+        for port in rx_port_list:
+            self.stat_get_rate_stat_all_stats(port)
+            out = self.send_expect("stat cget -framesReceived", "%", 10)
+            rate += int(out.strip())
+            out = self.send_expect("stat cget -bitsReceived", "% ", 10)
+            self.logger.debug("port %d bits rate:" % (port) + out)
+            bpsRate += int(out.strip())
+            out = self.send_expect("stat cget -oversize", "%", 10)
+            oversize += int(out.strip())
+
+        self.logger.debug("Rate: %f Mpps" % (rate * 1.0 / 1000000))
+        self.logger.debug("Mbps rate: %f Mbps" % (bpsRate * 1.0 / 1000000))
+
+        self.hook_transmission_func()
+
+        self.send_expect("ixStopTransmit portList", "%", 30)
+
+        if rate == 0 and oversize > 0:
+            return (bpsRate, oversize)
+        else:
+            return (bpsRate, rate)
+
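The debug logging in `get_transmission_results()` converts raw per-second frame and bit counters to Mpps and Mbps by dividing by 1e6. A minimal restatement of that conversion; the helper name and sample values here are illustrative, not part of the DTS API:

```python
def to_megaunits(frames_per_sec, bits_per_sec):
    """Convert raw per-second counters to (Mpps, Mbps), as the debug logs do."""
    return frames_per_sec / 1e6, bits_per_sec / 1e6

# e.g. 64-byte line rate on 10GbE is roughly 14.88 Mpps
mpps, mbps = to_megaunits(14880952, 10000000000)
```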
+    def config_ixia_dcb_init(self, rxPort, txPort):
+        """
+        Configure Ixia for DCB.
+        """
+        self.send_expect("source ./ixTcl1.0/ixiaDCB.tcl", "% ")
+        self.send_expect(
+            "configIxia %d %s"
+            % (
+                self.chasId,
+                " ".join(
+                    [
+                        "%s" % (repr(self.conRelation[port][n]))
+                        for port in [rxPort, txPort]
+                        for n in range(3)
+                    ]
+                ),
+            ),
+            "% ",
+            100,
+        )
+
+    def config_port_dcb(self, direction, tc):
+        """
+        Configure Port for DCB.
+        """
+        self.send_expect("configPort %s %s" % (direction, tc), "% ", 100)
+
+    def config_port_flow_control(self, ports, option):
+        """configure the type of flow control on a port"""
+        if not ports:
+            return
+        # destination mac address of pause frames, default "01 80 C2 00 00 01"
+        dst_mac = option.get("dst_mac") or "01 80 C2 00 00 01"
+        pause_time = option.get("pause_time") or 255
+        flow_ctrl_cmds = [
+            "protocol setDefault",
+            "port config -flowControl true",
+            "port config -flowControlType ieee8023x",
+        ]
+        for port in ports:
+            ixia_port = self.get_ixia_port(port)
+            flow_ctrl_cmds += [
+                # configure a pause control packet.
+                "port set {0}".format(ixia_port),
+                "protocol config -name pauseControl",
+                "pauseControl setDefault",
+                "pauseControl config -pauseControlType ieee8023x",
+                'pauseControl config -da "{0}"'.format(dst_mac),
+                "pauseControl config -pauseTime {0}".format(pause_time),
+                "pauseControl set {0}".format(ixia_port),
+            ]
+        self.add_tcl_cmds(flow_ctrl_cmds)
+
+    def cfgStreamDcb(self, stream, rate, prio, types):
+        """
+        Configure Stream for DCB.
+        """
+        self.send_expect(
+            "configStream %s %s %s %s" % (stream, rate, prio, types), "% ", 100
+        )
+
+    def get_connection_relation(self, dutPorts):
+        """
+        Get the connection relations between DUT and IXIA.
+        """
+        for port in dutPorts:
+            info = self.tester.get_pci(self.tester.get_local_port(port)).split(".")
+            self.conRelation[port] = [
+                int(info[0]),
+                int(info[1]),
+                repr(self.tester.dut.get_mac_address(port).replace(":", " ").upper()),
+            ]
+        return self.conRelation
+
+    def config_pktGroup_rx(self, ixia_port):
+        """
+        Configure rx port packet group for latency measurement.
+        Default streamID is 1.
+        """
+        self.add_tcl_cmd("port config -receiveMode $::portRxModeWidePacketGroup")
+        self.add_tcl_cmd("port set %s" % ixia_port)
+        self.add_tcl_cmd("packetGroup setDefault")
+        self.add_tcl_cmd("packetGroup config -latencyControl cutThrough")
+        self.add_tcl_cmd("packetGroup setRx %s" % ixia_port)
+        self.add_tcl_cmd("packetGroup setTx %s 1" % ixia_port)
+
+    def config_pktGroup_tx(self, ixia_port):
+        """
+        Configure tx port pktGroup for latency.
+        """
+        self.add_tcl_cmd("packetGroup setDefault")
+        self.add_tcl_cmd("packetGroup config -insertSignature true")
+        self.add_tcl_cmd("packetGroup setTx %s 1" % ixia_port)
+
+    def start_pktGroup(self, ixia_port):
+        """
+        Start tx port pktGroup for latency.
+        """
+        self.add_tcl_cmd("ixStartPortPacketGroups %s" % ixia_port)
+
+    def pktGroup_get_stat_all_stats(self, port_number):
+        """
+        Stop Packet Group operation on port and get current Packet Group
+        statistics on port.
+        """
+        ixia_port = self.get_ixia_port(port_number)
+        self.send_expect("ixStopPortPacketGroups %s" % ixia_port, "%", 100)
+        self.send_expect("packetGroupStats get %s 0 16384" % ixia_port, "%", 100)
+        self.send_expect("packetGroupStats getGroup 0", "%", 100)
+
+    def close(self):
+        """
+        We first close the tclsh session opened at the beginning,
+        then the SSH session.
+        """
+        if self.isalive():
+            self.send_expect("exit", "# ")
+            super(Ixia, self).close()
+
+    def stat_get_stat_all_stats(self, port_number):
+        """
+        Sends an IXIA TCL command to obtain all the stat values on a given port.
+        """
+        ixia_port = self.get_ixia_port(port_number)
+        command = "stat get statAllStats {0}".format(ixia_port)
+        self.send_expect(command, "% ", 10)
+
+    def prepare_ixia_internal_buffers(self, port_number):
+        """
+        Tells IXIA to prepare the internal buffers where the frames were captured.
+        """
+        ixia_port = self.get_ixia_port(port_number)
+        command = "capture get {0}".format(ixia_port)
+        self.send_expect(command, "% ", 30)
+
+    def stat_get_rate_stat_all_stats(self, port_number):
+        """
+        Obtain all rate statistics of the specified IXIA port.
+        """
+        ixia_port = self.get_ixia_port(port_number)
+        command = "stat getRate statAllStats {0}".format(ixia_port)
+        out = self.send_expect(command, "% ", 30)
+        return out
+
+    def ixia_capture_buffer(self, port_number, first_frame, last_frame):
+        """
+        Tells IXIA to load the captured frames into the internal buffers.
+        """
+        ixia_port = self.get_ixia_port(port_number)
+        command = "captureBuffer get {0} {1} {2}".format(
+            ixia_port, first_frame, last_frame
+        )
+        self.send_expect(command, "%", 60)
+
+    def ixia_export_buffer_to_file(self, frames_filename):
+        """
+        Tells IXIA to dump the frames it has loaded in its internal buffer to a
+        text file.
+        """
+        command = "captureBuffer export %s" % frames_filename
+        self.send_expect(command, "%", 30)
+
+    def _stat_cget_value(self, requested_value):
+        """
+        Sends an IXIA TCL command to obtain a given stat value.
+        """
+        command = "stat cget -" + requested_value
+        result = self.send_expect(command, "%", 10)
+        return int(result.strip())
+
+    def _capture_cget_value(self, requested_value):
+        """
+        Sends an IXIA TCL command to obtain a given capture stat value.
+        """
+        command = "capture cget -" + requested_value
+        result = self.send_expect(command, "%", 10)
+        return int(result.strip())
+
+    def _packetgroup_cget_value(self, requested_value):
+        """
+        Sends an IXIA TCL command to obtain a given packet group stat value.
+        """
+        command = "packetGroupStats cget -" + requested_value
+        result = self.send_expect(command, "%", 10)
+        return int(result.strip())
+
+    def number_of_captured_packets(self):
+        """
+        Returns the number of packets captured by IXIA on a previously set
+        port. Call self.prepare_ixia_internal_buffers(port) before.
+        """
+        return self._capture_cget_value("nPackets")
+
+    def get_frames_received(self):
+        """
+        Returns the number of packets received by IXIA on a previously set
+        port. Call self.stat_get_stat_all_stats(port) before.
+        """
+        received = self._stat_cget_value("framesReceived")
+        if received != 0:
+            return received
+        else:
+            # if the packet size is larger than 1518, the frames are counted
+            # as oversize instead of framesReceived
+            return self._stat_cget_value("oversize")
+
+    def get_flow_control_frames(self):
+        """
+        Returns the number of control frames captured by IXIA on a
+        previously set port. Call self.stat_get_stat_all_stats(port) before.
+        """
+        return self._stat_cget_value("flowControlFrames")
+
+    def get_frames_sent(self):
+        """
+        Returns the number of packets sent by IXIA on a previously set
+        port. Call self.stat_get_stat_all_stats(port) before.
+        """
+        return self._stat_cget_value("framesSent")
+
+    def get_transmit_duration(self):
+        """
+        Returns the duration in nanosecs of the last transmission on a
+        previously set port. Call self.stat_get_stat_all_stats(port) before.
+        """
+        return self._stat_cget_value("transmitDuration")
+
+    def get_min_latency(self):
+        """
+        Returns the minimum latency in nanoseconds of the frames in the
+        retrieved capture buffer. Call packetGroupStats get before.
+        """
+        return self._packetgroup_cget_value("minLatency")
+
+    def get_max_latency(self):
+        """
+        Returns the maximum latency in nanoseconds of the frames in the
+        retrieved capture buffer. Call packetGroupStats get before.
+        """
+        return self._packetgroup_cget_value("maxLatency")
+
+    def get_average_latency(self):
+        """
+        Returns the average latency in nanoseconds of the frames in the
+        retrieved capture buffer. Call packetGroupStats get before.
+        """
+        return self._packetgroup_cget_value("averageLatency")
+
+    def _transmission_pre_config(self, port_list, rate_percent, latency=False):
+        """
+        Prepare and configure IXIA ports for a performance test, without the
+        transmission step of the usual config sequence.
+
+        This function exists only for send_number_packets, which is used by
+        the nic_single_core_perf test suite.
+        """
+        rxPortlist, txPortlist = self.prepare_port_list(
+            port_list, rate_percent, latency
+        )
+        self.prepare_ixia_for_transmission(txPortlist, rxPortlist)
+        self.start_transmission()
+        self.clear_tcl_commands()
+        return rxPortlist, txPortlist
+
+    def send_number_packets(self, portList, ratePercent, packetNum):
+        """
+        Configure IXIA to send a fixed number of packets.
+        Note that this function is intended only for the nic_single_core_perf
+        test suite, not for common use.
+        """
+        rxPortlist, txPortlist = self._transmission_pre_config(portList, ratePercent)
+
+        self.send_expect("stream config -numFrames %s" % packetNum, "%", 5)
+        self.send_expect("stream config -dma stopStream", "%", 5)
+        for txPort in txPortlist:
+            ixia_port = self.get_ixia_port(txPort)
+            self.send_expect("stream set %s 1" % ixia_port, "%", 5)
+
+        self.send_expect("ixWritePortsToHardware portList", "%", 5)
+        self.send_expect("ixClearStats portList", "%", 5)
+        self.send_expect("ixStartTransmit portList", "%", 5)
+        time.sleep(10)
+
+        rxPackets = 0
+        for port in txPortlist:
+            self.stat_get_stat_all_stats(port)
+            txPackets = self.get_frames_sent()
+            while txPackets != packetNum:
+                time.sleep(10)
+                self.stat_get_stat_all_stats(port)
+                txPackets = self.get_frames_sent()
+            rxPackets += self.get_frames_received()
+        self.logger.debug("Received packets :%s" % rxPackets)
+
+        return rxPackets
+
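The busy-wait in `send_number_packets()` (poll `get_frames_sent()` every 10 seconds until it equals `packetNum`) is an instance of a generic poll-until pattern. A hedged, self-contained sketch of that pattern, with illustrative names and no IXIA dependency:

```python
import time

def wait_until(read_counter, target, interval=0.01, max_polls=1000):
    """Poll read_counter() until it reaches target, sleeping between reads."""
    for _ in range(max_polls):
        value = read_counter()
        if value >= target:
            return value
        time.sleep(interval)
    raise TimeoutError("counter did not reach %d" % target)
```

Unlike the loop above, this version bounds the number of polls, so a stalled transmit cannot hang the test run forever.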
+    # ---------------------------------------------------------
+    # extend methods for pktgen subclass `IxiaPacketGenerator`
+    # ---------------------------------------------------------
+    def disconnect(self):
+        """quit from ixia server"""
+        pass
+
+    def start(self, **run_opt):
+        """start ixia ports"""
+        self.configure_transmission(run_opt)
+        self.start_transmission()
+
+    def remove_all_streams(self):
+        """delete all streams on all ixia ports"""
+        if not self.ports:
+            return
+        for item in self.ports:
+            cmd = "port reset {0} {1} {2}".format(
+                self.chasId, item["card"], item["port"]
+            )
+            self.send_expect(cmd, "%", 10)
+
+    def reset(self, ports=None):
+        """reset ixia configuration for ports"""
+        pass
+
+    def clear_tcl_buffer(self):
+        """clear tcl commands buffer"""
+        self.tcl_cmds = []
+
+    def clear_stats(self):
+        pass
+
+    def stop_transmit(self):
+        """
+        Stop IXIA transmit
+        """
+        time.sleep(2)
+        self.send_expect("ixStopTransmit portList", "%", 40)
+
+    def get_latency_stat(self, port_list):
+        """
+        Get latency statistics.
+        """
+        stats = {}
+        for port in port_list:
+            self.pktGroup_get_stat_all_stats(port)
+            stats[port] = {
+                "average": self.get_average_latency(),
+                "total_max": self.get_max_latency(),
+                "total_min": self.get_min_latency(),
+            }
+        return stats
+
+    def get_loss_stat(self, port_list):
+        """
+        Get RX/TX packet statistics.
+        """
+        stats = {}
+        for port in port_list:
+            self.stat_get_stat_all_stats(port)
+            stats[port] = {
+                "ibytes": 0,
+                "ierrors": 0,
+                "ipackets": self.get_frames_received(),
+                "obytes": 0,
+                "oerrors": 0,
+                "opackets": self.get_frames_sent(),
+                "rx_bps": 0,
+                "rx_pps": 0,
+                "tx_bps": 0,
+                "tx_pps": 0,
+            }
+            time.sleep(0.5)
+        return stats
+
+    def get_throughput_stat(self, port_list):
+        """
+        Get RX transmit rate.
+        """
+        stats = {}
+        for port in port_list:
+            self.stat_get_rate_stat_all_stats(port)
+            out = self.send_expect("stat cget -framesReceived", "%", 10)
+            rate = int(out.strip())
+            out = self.send_expect("stat cget -bitsReceived", "% ", 10)
+            bpsRate = int(out.strip())
+            out = self.send_expect("stat cget -oversize", "%", 10)
+            oversize = int(out.strip())
+            rate = oversize if rate == 0 and oversize > 0 else rate
+
+            stats[port] = {
+                "ibytes": 0,
+                "ierrors": 0,
+                "ipackets": 0,
+                "obytes": 0,
+                "oerrors": 0,
+                "opackets": 0,
+                "rx_bps": bpsRate,
+                "rx_pps": rate,
+                "tx_bps": 0,
+                "tx_pps": 0,
+            }
+
+        return stats
+
+    def get_stats(self, ports, mode):
+        """
+        get statistics of custom mode
+        """
+        methods = {
+            "throughput": self.get_throughput_stat,
+            "loss": self.get_loss_stat,
+            "latency": self.get_latency_stat,
+        }
+        if mode not in list(methods.keys()):
+            msg = "not support mode <{0}>".format(mode)
+            raise Exception(msg)
+        # get custom mode stat
+        func = methods.get(mode)
+        stats = func(ports)
+
+        return stats
+
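`get_stats()` uses a plain dict as a dispatch table from mode name to handler, raising early on unknown modes. A self-contained sketch of the same pattern; the `collect_*` handlers are placeholders, not DTS APIs:

```python
def collect_throughput(ports):
    # placeholder handler: real code queries IXIA rate counters per port
    return {p: {"rx_bps": 0, "rx_pps": 0} for p in ports}

def collect_loss(ports):
    # placeholder handler: real code queries frame counters per port
    return {p: {"ipackets": 0, "opackets": 0} for p in ports}

def get_stats(ports, mode):
    methods = {
        "throughput": collect_throughput,
        "loss": collect_loss,
    }
    if mode not in methods:
        raise Exception("not support mode <{0}>".format(mode))
    return methods[mode](ports)
```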
+
+class IxiaPacketGenerator(PacketGenerator):
+    """
+    Ixia packet generator
+    """
+
+    def __init__(self, tester):
+        super(IxiaPacketGenerator, self).__init__(tester)
+        # ixia management
+        self.pktgen_type = PKTGEN_IXIA
+        self._conn = None
+        # ixia configuration information of dts
+        conf_inst = self._get_generator_conf_instance()
+        self.conf = conf_inst.load_pktgen_config()
+        # ixia port configuration
+        self._traffic_opt = {}
+        self._traffic_ports = []
+        self._ports = []
+        self._rx_ports = []
+        # statistics management
+        self.runtime_stats = {}
+        # check configuration options
+        self.options_keys = ["txmode", "ip", "vlan", "transmit_mode", "rate"]
+        self.ip_keys = [
+            "start",
+            "end",
+            "action",
+            "step",
+            "mask",
+        ]
+        self.vlan_keys = [
+            "start",
+            "end",
+            "action",
+            "step",
+            "count",
+        ]
+
+        self.tester = tester
+
+    def get_ports(self):
+        """only used for ixia packet generator"""
+        return self._conn.get_ports()
+
+    def _prepare_generator(self):
+        """start ixia server"""
+        try:
+            self._connect(self.tester, self.conf)
+        except Exception as e:
+            msg = "failed to connect to ixia server: %s" % e
+            raise Exception(msg)
+
+    def _connect(self, tester, conf):
+        # initialize ixia class
+        self._conn = Ixia(tester, conf, self.logger)
+        for p in self._conn.get_ports():
+            self._ports.append(p)
+
+        self.logger.debug(self._ports)
+
+    def _disconnect(self):
+        """
+        disconnect with ixia server
+        """
+        try:
+            self._remove_all_streams()
+            self._conn.disconnect()
+        except Exception as e:
+            msg = "Error disconnecting: %s" % e
+            self.logger.error(msg)
+        self._conn = None
+
+    def _get_port_pci(self, port_id):
+        """
+        get ixia port pci address
+        """
+        for pktgen_port_id, info in enumerate(self._ports):
+            if pktgen_port_id == port_id:
+                _pci = info.get("pci")
+                return _pci
+        else:
+            return None
+
+    def _get_gen_port(self, pci):
+        """
+        get port management id of the packet generator
+        """
+        for pktgen_port_id, info in enumerate(self._ports):
+            _pci = info.get("pci")
+            if _pci == pci:
+                return pktgen_port_id
+        else:
+            return -1
+
+    def _is_gen_port(self, pci):
+        """
+        check if a pci address is managed by the packet generator
+        """
+        for name, _port_obj in self._conn.ports.items():
+            _pci = _port_obj.info["pci_addr"]
+            self.logger.debug((_pci, pci))
+            if _pci == pci:
+                return True
+        else:
+            return False
+
+    def _get_ports(self):
+        """
+        Return the port name list of this packet generator.
+        """
+        ports = []
+        for idx in range(len(self._ports)):
+            ports.append("IXIA:%d" % idx)
+        return ports
+
+    @property
+    def _vm_conf(self):
+        # close it and wait for more discussion about pktgen framework
+        return None
+        conf = {}
+        # get the subnet range of src and dst ip
+        if "ip_src" in self.conf:
+            conf["src"] = {}
+            ip_src = self.conf["ip_src"]
+            ip_src_range = ip_src.split("-")
+            conf["src"]["start"] = ip_src_range[0]
+            conf["src"]["end"] = ip_src_range[1]
+
+        if "ip_dst" in self.conf:
+            conf["dst"] = {}
+            ip_dst = self.conf["ip_dst"]
+            ip_dst_range = ip_dst.split("-")
+            conf["dst"]["start"] = ip_dst_range[0]
+            conf["dst"]["end"] = ip_dst_range[1]
+
+        return conf if conf else None
+
+    def _clear_streams(self):
+        """clear streams in `PacketGenerator`"""
+        # if streams have been attached, remove them from the ixia server.
+        self._remove_all_streams()
+
+    def _remove_all_streams(self):
+        """
+        remove all stream deployed on the packet generator
+        """
+        if not self.get_streams():
+            return
+        self._conn.remove_all_streams()
+
+    def _get_port_features(self, port_id):
+        """get ports features"""
+        ports = self._conn.ports
+        if port_id not in ports:
+            return None
+        features = self._conn.ports[port_id].get_formatted_info()
+
+        return features
+
+    def _is_support_flow_control(self, port_id):
+        """check if a port support flow control"""
+        features = self._get_port_features(port_id)
+        if not features or features.get("fc_supported") == "no":
+            return False
+        else:
+            return True
+
+    def _preset_ixia_port(self):
+        """set ports flow_ctrl attribute"""
+        rx_ports = self._rx_ports
+        flow_ctrl_opt = self._traffic_opt.get("flow_control")
+        if not flow_ctrl_opt:
+            return
+        # flow control of ports running ixia traffic
+        self._conn.config_port_flow_control(rx_ports, flow_ctrl_opt)
+
+    def _throughput_stats(self, stream, stats):
+        """convert ixia throughput statistics format to dts PacketGenerator format"""
+        # tx packet
+        tx_port_id = stream["tx_port"]
+        port_stats = stats.get(tx_port_id)
+        if not port_stats:
+            msg = "failed to get tx_port {0} statistics".format(tx_port_id)
+            raise Exception(msg)
+        tx_bps = port_stats.get("tx_bps")
+        tx_pps = port_stats.get("tx_pps")
+        msg = [
+            "Tx Port %d stats: " % (tx_port_id),
+            "tx_port: %d,  tx_bps: %f, tx_pps: %f " % (tx_port_id, tx_bps, tx_pps),
+        ]
+        self.logger.debug(pformat(port_stats))
+        self.logger.debug(os.linesep.join(msg))
+        # rx bps/pps
+        rx_port_id = stream["rx_port"]
+        port_stats = stats.get(rx_port_id)
+        if not port_stats:
+            msg = "failed to get rx_port {0} statistics".format(rx_port_id)
+            raise Exception(msg)
+        rx_bps = port_stats.get("rx_bps")
+        rx_pps = port_stats.get("rx_pps")
+        msg = [
+            "Rx Port %d stats: " % (rx_port_id),
+            "rx_port: %d,  rx_bps: %f, rx_pps: %f" % (rx_port_id, rx_bps, rx_pps),
+        ]
+
+        self.logger.debug(pformat(port_stats))
+        self.logger.debug(os.linesep.join(msg))
+
+        return rx_bps, rx_pps
+
+    def _loss_rate_stats(self, stream, stats):
+        """convert ixia loss rate statistics format to dts PacketGenerator format"""
+        # tx packet
+        port_id = stream.get("tx_port")
+        if port_id in list(stats.keys()):
+            port_stats = stats[port_id]
+        else:
+            msg = "port {0} statistics is not found".format(port_id)
+            self.logger.error(msg)
+            return None
+        msg = "Tx Port %d stats: " % (port_id)
+        self.logger.debug(msg)
+        opackets = port_stats["opackets"]
+        # rx packet
+        port_id = stream.get("rx_port")
+        port_stats = stats[port_id]
+        msg = "Rx Port %d stats: " % (port_id)
+        self.logger.debug(msg)
+        ipackets = port_stats["ipackets"]
+
+        return opackets, ipackets
+
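A caller typically reduces the `(opackets, ipackets)` pair returned by `_loss_rate_stats()` to a loss ratio. A minimal sketch of that reduction; the helper name is illustrative, not part of the DTS framework:

```python
def loss_ratio(opackets, ipackets):
    """Fraction of transmitted frames that were never received."""
    if opackets == 0:
        return 0.0
    return (opackets - ipackets) / float(opackets)
```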
+    def _latency_stats(self, stream, stats):
+        """convert ixia latency statistics format to dts PacketGenerator format"""
+        port_id = stream.get("tx_port")
+        if port_id in list(stats.keys()):
+            port_stats = stats[port_id]
+        else:
+            msg = "port {0} latency stats is not found".format(port_id)
+            self.logger.error(msg)
+            return None
+
+        latency_stats = {
+            "min": port_stats.get("total_min"),
+            "max": port_stats.get("total_max"),
+            "average": port_stats.get("average"),
+        }
+
+        return latency_stats
+
+    def send_ping6(self, pci, mac, ipv6):
+        """Send ping6 packet from IXIA ports."""
+        return self._conn.send_ping6(pci, mac, ipv6)
+
+    ##########################################################################
+    #
+    #  class ``PacketGenerator`` abstract methods should be implemented here
+    #
+    ##########################################################################
+    def _prepare_transmission(self, stream_ids=[], latency=False):
+        """add one/multiple streams in one/multiple ports"""
+        port_config = {}
+
+        for stream_id in stream_ids:
+            stream = self._get_stream(stream_id)
+            tx_port = stream.get("tx_port")
+            rx_port = stream.get("rx_port")
+            pcap_file = stream.get("pcap_file")
+            # save port id list
+            if tx_port not in self._traffic_ports:
+                self._traffic_ports.append(tx_port)
+            if rx_port not in self._traffic_ports:
+                self._traffic_ports.append(rx_port)
+            if rx_port not in self._rx_ports:
+                self._rx_ports.append(rx_port)
+            # set all streams in one port to do batch configuration
+            options = stream["options"]
+            if tx_port not in list(port_config.keys()):
+                port_config[tx_port] = []
+            config = {}
+            config.update(options)
+            # In pktgen, all streams flow control option are the same by design.
+            self._traffic_opt["flow_control"] = options.get("flow_control") or {}
+            # if vm config by pktgen config file, set it here to take the place
+            # of setting on suite
+            if self._vm_conf:  # TBD, remove this process later
+                config["fields_config"] = self._vm_conf
+            # get stream rate percent
+            stream_config = options.get("stream_config")
+            rate_percent = stream_config.get("rate")
+            # set port list input parameter of ixia class
+            ixia_option = [tx_port, rx_port, pcap_file, options]
+            port_config[tx_port].append(ixia_option)
+
+        if not port_config:
+            msg = "no stream options for ixia packet generator"
+            raise Exception(msg)
+        # -------------------------------------------------------------------
+        port_lists = []
+        for port_id, option in port_config.items():
+            port_lists += option
+        self._conn.clear_tcl_buffer()
+        rxPortlist, txPortlist = self._conn.prepare_port_list(
+            port_lists, rate_percent or 100, latency
+        )
+        self._conn.prepare_ixia_for_transmission(txPortlist, rxPortlist)
+        # preset port status before running traffic
+        self._preset_ixia_port()
+
+    def _start_transmission(self, stream_ids, options={}):
+        # get rate percentage
+        rate_percent = options.get("rate")
+        if rate_percent:
+            msg = (
+                "{0} only supports rate percent set per stream; "
+                "running traffic with each stream's own rate percent"
+            ).format(self.pktgen_type)
+            self.logger.warning(msg)
+        # run ixia server
+        try:
+            ###########################################
+            # Start traffic on port(s)
+            self.logger.info("begin traffic ......")
+            run_opt = {
+                "ports": self._traffic_ports,
+                "mult": rate_percent,
+                "force": True,
+            }
+            self._conn.start(**run_opt)
+        except Exception as e:
+            self.logger.error(e)
+
+    def _stop_transmission(self, stream_id):
+        # using ixia server command
+        if self._traffic_ports:
+            self._conn.stop_transmit()
+            self.logger.info("traffic completed. ")
+
+    def _retrieve_port_statistic(self, stream_id, mode):
+        """ixia traffic statistics"""
+        stats = self._conn.get_stats(self._traffic_ports, mode)
+        stream = self._get_stream(stream_id)
+        self.logger.debug(pformat(stream))
+        self.logger.debug(pformat(stats))
+        if mode == "throughput":
+            return self._throughput_stats(stream, stats)
+        elif mode == "loss":
+            return self._loss_rate_stats(stream, stats)
+        elif mode == "latency":
+            return self._latency_stats(stream, stats)
+        else:
+            msg = "unsupported mode <{0}>".format(mode)
+            raise Exception(msg)
+
+    def _check_options(self, opts={}):
+        # TODO: move this validation up to a parent class once the pktgen
+        # framework design is settled; the checks below are disabled for now.
+        return True
+        for key in opts:
+            if key in self.options_keys:
+                if key == "ip":
+                    ip = opts["ip"]
+                    for ip_key in ip:
+                        if ip_key not in self.ip_keys:
+                            msg = " %s is invalid ip option" % ip_key
+                            self.logger.info(msg)
+                            return False
+                        if ip_key == "action":
+                            if ip[ip_key] not in ("inc", "dec"):
+                                msg = " %s is invalid ip action" % ip[ip_key]
+                                self.logger.info(msg)
+                                return False
+                elif key == "vlan":
+                    vlan = opts["vlan"]
+                    for vlan_key in vlan:
+                        if vlan_key not in self.vlan_keys:
+                            msg = " %s is invalid vlan option" % vlan_key
+                            self.logger.info(msg)
+                            return False
+                        if vlan_key == "action":
+                            if vlan[vlan_key] not in ("inc", "dec"):
+                                msg = " %s is invalid vlan action" % vlan[vlan_key]
+                                self.logger.info(msg)
+                                return False
+            else:
+                msg = " %s is invalid option" % key
+                self.logger.info(msg)
+                return False
+        return True
+
+    def quit_generator(self):
+        """close ixia session"""
+        if self._conn is not None:
+            self._disconnect()
+        return
-- 
2.20.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [RFC PATCH v1 07/18] dts: merge DTS framework/pktgen_ixia_network.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
                   ` (5 preceding siblings ...)
  2022-04-06 15:04 ` [RFC PATCH v1 06/18] dts: merge DTS framework/pktgen_ixia.py " Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 08/18] dts: merge DTS framework/pktgen_trex.py " Juraj Linkeš
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/framework/pktgen_ixia_network.py | 225 +++++++++++++++++++++++++++
 1 file changed, 225 insertions(+)
 create mode 100644 dts/framework/pktgen_ixia_network.py
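
Review note: the `_prepare_transmission` method below groups the requested
streams by their tx port before handing the batch to the ixNetwork api server.
A minimal, self-contained sketch of that grouping logic (the stream dict keys
`tx_port`, `rx_port`, `pcap_file` and `options` come from this patch; the
helper name is ours, not part of the framework):

```python
def group_streams_by_tx_port(streams):
    """Group stream option lists by tx port, mirroring _prepare_transmission."""
    port_config = {}
    traffic_ports = []
    for stream in streams:
        tx_port, rx_port = stream["tx_port"], stream["rx_port"]
        # record every port involved in traffic exactly once
        for port in (tx_port, rx_port):
            if port not in traffic_ports:
                traffic_ports.append(port)
        # one option list per stream, batched under its tx port
        port_config.setdefault(tx_port, []).append(
            [tx_port, rx_port, stream["pcap_file"], stream["options"]]
        )
    if not port_config:
        raise ValueError("no stream options for the packet generator")
    return port_config, traffic_ports


streams = [
    {"tx_port": 0, "rx_port": 1, "pcap_file": "a.pcap", "options": {}},
    {"tx_port": 0, "rx_port": 1, "pcap_file": "b.pcap", "options": {}},
]
config, ports = group_streams_by_tx_port(streams)
print(len(config[0]), ports)  # 2 [0, 1]
```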

diff --git a/dts/framework/pktgen_ixia_network.py b/dts/framework/pktgen_ixia_network.py
new file mode 100644
index 0000000000..270fab0113
--- /dev/null
+++ b/dts/framework/pktgen_ixia_network.py
@@ -0,0 +1,225 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2021 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+import os
+import time
+import traceback
+from pprint import pformat
+
+from .pktgen_base import PKTGEN_IXIA_NETWORK, PacketGenerator
+
+
+class IxNetworkPacketGenerator(PacketGenerator):
+    """
+    ixNetwork packet generator
+    """
+
+    def __init__(self, tester):
+        super(IxNetworkPacketGenerator, self).__init__(tester)
+        self.pktgen_type = PKTGEN_IXIA_NETWORK
+        self._conn = None
+        # ixNetwork configuration information of dts
+        conf_inst = self._get_generator_conf_instance()
+        self.conf = conf_inst.load_pktgen_config()
+        # ixNetwork port configuration
+        self._traffic_ports = []
+        self._ports = []
+        self._rx_ports = []
+
+    def get_ports(self):
+        """used for ixNetwork packet generator"""
+        return self._conn.get_ports()
+
+    def _prepare_generator(self):
+        """connect with ixNetwork api server"""
+        try:
+            self._connect(self.conf)
+        except Exception as e:
+            msg = "failed to connect to ixNetwork api server: %s" % e
+            raise Exception(msg)
+
+    def _connect(self, conf):
+        # initialize ixNetwork class
+        from framework.ixia_network import IxNetwork
+
+        self._conn = IxNetwork(self.pktgen_type, conf, self.logger)
+        for p in self._conn.get_ports():
+            self._ports.append(p)
+
+        self.logger.debug(self._ports)
+
+    def _disconnect(self):
+        """
+        disconnect with ixNetwork api server
+        """
+        try:
+            self._remove_all_streams()
+            self._conn.disconnect()
+        except Exception as e:
+            msg = "Error disconnecting: %s" % e
+            self.logger.error(msg)
+        self._conn = None
+
+    def quit_generator(self):
+        """close ixNetwork session"""
+        if self._conn is not None:
+            self._disconnect()
+
+    def _get_port_pci(self, port_id):
+        """
+        get ixNetwork port pci address
+        """
+        for pktgen_port_id, info in enumerate(self._ports):
+            if pktgen_port_id == port_id:
+                return info.get("pci")
+        return None
+
+    def _get_gen_port(self, pci):
+        """
+        get port management id of the packet generator
+        """
+        for pktgen_port_id, info in enumerate(self._ports):
+            if info.get("pci") == pci:
+                return pktgen_port_id
+        return -1
+
+    def _is_gen_port(self, pci):
+        """
+        check if a pci address is managed by the packet generator
+        """
+        for name, _port_obj in self._conn.ports.items():
+            _pci = _port_obj.info["pci_addr"]
+            self.logger.debug((_pci, pci))
+            if _pci == pci:
+                return True
+        else:
+            return False
+
+    def _get_ports(self):
+        """
+        Return self ports information
+        """
+        ports = []
+        for idx in range(len(self._ports)):
+            ports.append("IXIA:%d" % idx)
+        return ports
+
+    def send_ping6(self, pci, mac, ipv6):
+        """Send ping6 packet from IXIA ports."""
+        return self._conn.send_ping6(pci, mac, ipv6)
+
+    def _clear_streams(self):
+        """clear streams in `PacketGenerator`"""
+        # if streams have been attached, remove them from the ixNetwork api server.
+        self._remove_all_streams()
+
+    def _remove_all_streams(self):
+        """
+        remove all stream deployed on the packet generator
+        """
+        if not self.get_streams():
+            return
+
+    def _check_options(self, opts={}):
+        return True
+
+    def _retrieve_port_statistic(self, stream_id, mode):
+        """ixNetwork traffic statistics"""
+        stats = self._conn.get_stats(self._traffic_ports, mode)
+        stream = self._get_stream(stream_id)
+        self.logger.debug(pformat(stream))
+        self.logger.debug(pformat(stats))
+        if mode == "rfc2544":
+            return stats
+        else:
+            msg = "unsupported mode <{0}>".format(mode)
+            raise Exception(msg)
+
+    ##########################################################################
+    #
+    #  class ``PacketGenerator`` abstract methods should be implemented here
+    #
+    ##########################################################################
+    def _prepare_transmission(self, stream_ids=[], latency=False):
+        """add one/multiple streams in one/multiple ports"""
+        port_config = {}
+
+        for stream_id in stream_ids:
+            stream = self._get_stream(stream_id)
+            tx_port = stream.get("tx_port")
+            rx_port = stream.get("rx_port")
+            pcap_file = stream.get("pcap_file")
+            # save port id list
+            if tx_port not in self._traffic_ports:
+                self._traffic_ports.append(tx_port)
+            if rx_port not in self._traffic_ports:
+                self._traffic_ports.append(rx_port)
+            if rx_port not in self._rx_ports:
+                self._rx_ports.append(rx_port)
+            # set all streams in one port to do batch configuration
+            options = stream["options"]
+            if tx_port not in list(port_config.keys()):
+                port_config[tx_port] = []
+            config = {}
+            config.update(options)
+            # get stream rate percent
+            stream_config = options.get("stream_config")
+            rate_percent = stream_config.get("rate")
+            # set port list input parameter of ixNetwork class
+            ixia_option = [tx_port, rx_port, pcap_file, options]
+            port_config[tx_port].append(ixia_option)
+
+        if not port_config:
+            msg = "no stream options for ixNetwork packet generator"
+            raise Exception(msg)
+        self.rate_percent = rate_percent
+
+        port_lists = []
+        for port_id, option in port_config.items():
+            port_lists += option
+        self._conn.prepare_ixia_network_stream(port_lists)
+
+    def _start_transmission(self, stream_ids, options={}):
+        # run ixNetwork api server
+        try:
+            # Start traffic on port(s)
+            self.logger.info("begin traffic ......")
+            self._conn.start(options)
+        except Exception as e:
+            self.logger.error(traceback.format_exc())
+            self.logger.error(e)
+
+    def _stop_transmission(self, stream_id):
+        if self._traffic_ports:
+            self.logger.info("traffic completed.")
-- 
2.20.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [RFC PATCH v1 08/18] dts: merge DTS framework/pktgen_trex.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
                   ` (6 preceding siblings ...)
  2022-04-06 15:04 ` [RFC PATCH v1 07/18] dts: merge DTS framework/pktgen_ixia_network.py " Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 09/18] dts: merge DTS framework/ssh_connection.py " Juraj Linkeš
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/framework/pktgen_trex.py | 908 +++++++++++++++++++++++++++++++++++
 1 file changed, 908 insertions(+)
 create mode 100644 dts/framework/pktgen_trex.py
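
Review note on `_get_traffic_option` below: the `core_mask` value from the
pktgen config file is either a comma-separated list of hex masks or the
literal `CORE_MASK_PIN`. A standalone sketch of that parsing (the
`CORE_MASK_PIN` sentinel is represented by a plain string here, since trex's
`STLClient` is not importable outside a tester host):

```python
def parse_core_mask(core_mask):
    """Parse the pktgen config core_mask value as _get_traffic_option does."""
    if not core_mask:
        return None
    if "0x" in core_mask:
        # e.g. "0x3,0xc" -> [3, 12]
        return [int(item[2:], 16) for item in core_mask.split(",")]
    # stand-in for trex's STLClient.CORE_MASK_PIN constant
    return "CORE_MASK_PIN" if core_mask.upper() == "CORE_MASK_PIN" else None


print(parse_core_mask("0x3,0xc"))  # [3, 12]
```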

diff --git a/dts/framework/pktgen_trex.py b/dts/framework/pktgen_trex.py
new file mode 100644
index 0000000000..ebc16f088e
--- /dev/null
+++ b/dts/framework/pktgen_trex.py
@@ -0,0 +1,908 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2021 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+import logging
+import os
+import sys
+import time
+from pprint import pformat
+
+from scapy.layers.inet import IP
+from scapy.layers.l2 import Ether
+
+from .pktgen_base import (
+    PKTGEN,
+    PKTGEN_TREX,
+    TRANSMIT_CONT,
+    TRANSMIT_M_BURST,
+    TRANSMIT_S_BURST,
+    PacketGenerator,
+)
+
+
+class TrexConfigVm(object):
+    """
+    config one stream vm format of trex
+    """
+
+    def __init__(self):
+        from trex_stl_lib.api import ipv4_str_to_num, is_valid_ipv4_ret, mac2str
+
+        self.ipv4_str_to_num = ipv4_str_to_num
+        self.is_valid_ipv4_ret = is_valid_ipv4_ret
+        self.mac2str = mac2str
+
+    def _mac_var(self, fv_name, mac_start, mac_end, step, mode):
+        """
+        create mac address vm format of trex
+        """
+        _mac_start = self.ipv4_str_to_num(self.mac2str(mac_start)[2:])
+        _mac_end = self.ipv4_str_to_num(self.mac2str(mac_end)[2:])
+        if mode in ("inc", "dec"):
+            min_value = _mac_start
+            max_value = _mac_end
+        elif mode == "random":
+            max_value = 0xFFFFFFFF
+            min_value = 0
+        add_val = 0
+
+        var = [
+            {
+                "name": fv_name,
+                "min_value": min_value,
+                "max_value": max_value,
+                "size": 4,
+                "step": step,
+                "op": mode,
+            },
+            {"write": {"add_val": add_val, "offset_fixup": 2}},
+        ]
+
+        return var
+
+    def _ip_vm_var(self, fv_name, ip_start, ip_end, step, mode):
+        """
+        create ip address vm format of trex
+        """
+        _ip_start = self.ipv4_str_to_num(self.is_valid_ipv4_ret(ip_start))
+        _ip_end = self.ipv4_str_to_num(self.is_valid_ipv4_ret(ip_end))
+        _step = (
+            self.ipv4_str_to_num(self.is_valid_ipv4_ret(step))
+            if isinstance(step, str)
+            else step
+        )
+        if mode in ("inc", "dec"):
+            min_value = _ip_start
+            max_value = _ip_end
+        elif mode == "random":
+            max_value = 0xFFFFFFFF
+            min_value = 0
+        add_val = 0
+
+        var = [
+            {
+                "name": fv_name,
+                "min_value": min_value,
+                "max_value": max_value,
+                "size": 4,
+                "step": _step,
+                "op": mode,
+            },
+            {"write": {"add_val": add_val}, "fix_chksum": {}},
+        ]
+
+        return var
+
+    def config_trex_vm(self, option):
+        """
+        config one stream vm
+        """
+        vm_var = {}
+        ###################################################################
+        # mac inc/dec/random
+        if "mac" in option:
+            for name, config in option["mac"].items():
+                mac_start = config.get("start") or "00:00:00:00:00:00"
+                mac_end = config.get("end") or "FF:FF:FF:FF:FF:FF"
+                step = config.get("step") or 1
+                mode = config.get("action") or "inc"
+                # -----------------
+                fv_name = "Ethernet.{0}".format(name)
+                # layer/field name
+                vm_var[fv_name] = self._mac_var(fv_name, mac_start, mac_end, step, mode)
+        ###################################################################
+        # src ip mask inc/dec/random
+        if "ip" in option:
+            for name, config in option["ip"].items():
+                ip_start = config.get("start") or "0.0.0.1"
+                ip_end = config.get("end") or "0.0.0.255"
+                step = config.get("step") or 1
+                mode = config.get("action") or "inc"
+                # -----------------
+                fv_name = "IP.{0}".format(name)
+                # layer/field name
+                vm_var[fv_name] = self._ip_vm_var(fv_name, ip_start, ip_end, step, mode)
+        ###################################################################
+        #  merge var1/var2/random/cache into one method
+        ###################################################################
+        # src ip mask inc/dec/random
+        if "port" in option:
+            for name, config in option["port"].items():
+                protocol = config.get("protocol") or "UDP"
+                port_start = config.get("start") or 1
+                port_end = config.get("end") or 255
+                step = config.get("step") or 1
+                mode = config.get("action") or "inc"
+                # -----------------
+                fv_name = "{0}.{1}".format(protocol.upper(), name)
+                # layer/field name
+                vm_var[fv_name] = {
+                    "name": fv_name,
+                    "min_value": port_start,
+                    "max_value": port_end,
+                    "size": 2,
+                    "step": step,
+                    "op": mode,
+                }
+        ###################################################################
+        # vlan field inc/dec/random
+        if "vlan" in option:
+            for name, config in option["vlan"].items():
+                vlan_start = config.get("start") if config.get("start") is not None else 0
+                vlan_end = config.get("end") or 256
+                step = config.get("step") or 1
+                mode = config.get("action") or "inc"
+                # -----------------
+                fv_name = "802|1Q:{0}.vlan".format(name)
+                # vlan layer/field name
+                vm_var[fv_name] = {
+                    "name": fv_name,
+                    "min_value": vlan_start,
+                    "max_value": vlan_end,
+                    "size": 2,
+                    "step": step,
+                    "op": mode,
+                }
+        ###################################################################
+        # payload change with custom sizes
+        if "pkt_size" in option:
+            # note:
+            # when using mixed stream, which have different sizes
+            # this will be forbidden
+            step = 1
+            mode = "random"
+            min_pkt_size = option["pkt_size"]["start"]
+            max_pkt_size = option["pkt_size"]["end"]
+            # -----------------
+            l3_len_fix = -len(Ether())
+            l4_len_fix = l3_len_fix - len(IP())
+
+            var = {
+                "name": "fv_rand",
+                # src ip increase with a range
+                "min_value": min_pkt_size - 4,
+                "max_value": max_pkt_size - 4,
+                "size": 2,
+                "step": step,
+                "op": mode,
+            }
+
+            vm_var = {
+                "IP.len": [
+                    var,
+                    {"write": {"add_val": l3_len_fix}, "trim": {}, "fix_chksum": {}},
+                ],
+                "UDP.len": [
+                    var,
+                    {"write": {"add_val": l4_len_fix}, "trim": {}, "fix_chksum": {}},
+                ],
+            }
+
+        return vm_var
+
+
+class TrexConfigStream(object):
+    def __init__(self):
+        from trex_stl_lib.api import (
+            STLVM,
+            STLFlowLatencyStats,
+            STLPktBuilder,
+            STLProfile,
+            STLStream,
+            STLStreamDstMAC_PKT,
+            STLTXCont,
+            STLTXMultiBurst,
+            STLTXSingleBurst,
+        )
+
+        # set trex class
+        self.STLStream = STLStream
+        self.STLPktBuilder = STLPktBuilder
+        self.STLProfile = STLProfile
+        self.STLVM = STLVM
+        self.STLTXCont = STLTXCont
+        self.STLTXSingleBurst = STLTXSingleBurst
+        self.STLTXMultiBurst = STLTXMultiBurst
+        self.STLFlowLatencyStats = STLFlowLatencyStats
+        self.STLStreamDstMAC_PKT = STLStreamDstMAC_PKT
+
+    def _set_var_default_value(self, config):
+        default = {
+            "init_value": None,
+            "min_value": 0,
+            "max_value": 255,
+            "size": 4,
+            "step": 1,
+        }
+        for name, value in default.items():
+            if name not in config:
+                config[name] = value
+
+    def _preset_layers(self, vm_var, configs):
+        """
+        configure stream behavior on pcap format
+        """
+        msg = "layer <{0}> field name <{1}> is not defined".format
+        fv_names = []
+        fix_chksum = False
+        for layer, _config in configs.items():
+            # set default value
+            if isinstance(_config, (tuple, list)):
+                config = _config[0]
+                op_config = _config[1]
+            else:
+                config = _config
+                op_config = None
+
+            name = config.get("name")
+            if not name:
+                error = msg(layer, name)
+                raise Exception(error)
+
+            self._set_var_default_value(config)
+            # different fields with a range (relevance variables)
+            if isinstance(layer, (tuple, list)):
+                vm_var.tuple_var(**config)
+                for offset in layer:
+                    fv_name = (
+                        name + ".ip" if offset.startswith("IP") else name + ".port"
+                    )
+                    _vars = {"fv_name": fv_name, "pkt_offset": offset}
+                    if op_config and "write" in op_config:
+                        _vars.update(op_config["write"])
+
+                    if fv_name not in fv_names:
+                        fv_names.append(fv_name)
+                        vm_var.write(**_vars)
+            # different fields with a range (independent variable)
+            else:
+                if name not in fv_names:
+                    fv_names.append(name)
+                    vm_var.var(**config)
+                # write behavior in field
+                _vars = {"fv_name": name, "pkt_offset": layer}
+                if op_config and "write" in op_config:
+                    _vars.update(op_config["write"])
+                vm_var.write(**_vars)
+
+            # Trim the packet size by the stream variable size
+            if op_config and "trim" in op_config:
+                vm_var.trim(name)
+            # set VM as cached with a cache size
+            if op_config and "set_cached" in op_config:
+                vm_var.set_cached(op_config["set_cached"])
+            # Fix IPv4 header checksum
+            if op_config and "fix_chksum" in op_config:
+                fix_chksum = True
+
+        # protocol type
+        if fix_chksum:
+            vm_var.fix_chksum()
+
+    def _create_stream(self, _pkt, stream_opt, vm=None, flow_stats=None):
+        """
+        create trex stream
+        """
+        isg = stream_opt.get("isg") or 0.5
+        mode = stream_opt.get("transmit_mode") or TRANSMIT_CONT
+        txmode_opt = stream_opt.get("txmode") or {}
+        pps = txmode_opt.get("pps")
+        # Continuous mode
+        if mode == TRANSMIT_CONT:
+            mode_inst = self.STLTXCont(pps=pps)
+        # Single burst mode
+        elif mode == TRANSMIT_S_BURST:
+            total_pkts = txmode_opt.get("total_pkts") or 32
+            mode_inst = self.STLTXSingleBurst(pps=pps, total_pkts=total_pkts)
+        # Multi-burst mode
+        elif mode == TRANSMIT_M_BURST:
+            burst_pkts = txmode_opt.get("burst_pkts") or 32
+            bursts_count = txmode_opt.get("bursts_count") or 2
+            ibg = txmode_opt.get("ibg") or 10
+            mode_inst = self.STLTXMultiBurst(
+                pkts_per_burst=burst_pkts, count=bursts_count, ibg=ibg
+            )
+        else:
+            msg = "unsupported transmit mode {0}".format(mode)
+            raise Exception(msg)
+
+        pkt = self.STLPktBuilder(pkt=_pkt, vm=vm)
+        _stream = self.STLStream(
+            packet=pkt,
+            mode=mode_inst,
+            isg=isg,
+            flow_stats=flow_stats,
+            mac_dst_override_mode=self.STLStreamDstMAC_PKT,
+        )
+
+        return _stream
+
+    def _generate_vm(self, vm_conf):
+        """
+        create packet fields trex vm instance
+        """
+        if not vm_conf:
+            return None
+        # config packet vm format for trex
+        hVmConfig = TrexConfigVm()
+        _vm_var = hVmConfig.config_trex_vm(vm_conf)
+        if not isinstance(_vm_var, self.STLVM):
+            vm_var = self.STLVM()
+            self._preset_layers(vm_var, _vm_var)
+        else:
+            vm_var = _vm_var
+
+        return vm_var
+
+    def _get_streams(self, streams_config):
+        """
+        create a group of streams
+        """
+        # vm_var is the instance to config pcap fields
+        # create a group of streams, which are using different size payload
+        streams = []
+
+        for config in streams_config:
+            _pkt = config.get("pcap")
+            vm_conf = config.get("fields_config")
+            _stream_op = config.get("stream_config")
+            # configure trex vm
+            vm_var = self._generate_vm(vm_conf)
+            # create
+            streams.append(self._create_stream(_pkt, _stream_op, vm_var))
+        _streams = self.STLProfile(streams).get_streams()
+
+        return _streams
+
+    def add_streams(self, conn, streams_config, ports=None, latency=False):
+        """
+        create one/multiple of streams on one port of trex server
+        """
+        # normal streams configuration
+        _streams = self._get_streams(streams_config)
+        # create latency statistics stream
+        # use first one of main stream config as latency statistics stream
+        if latency:
+            streams = list(_streams)
+            flow_stats = self.STLFlowLatencyStats(pg_id=ports[0])
+            latency_opt = streams_config[0]
+            _pkt = latency_opt.get("pcap")
+            _stream_op = latency_opt.get("stream_config")
+            _stream = self._create_stream(_pkt, _stream_op, flow_stats=flow_stats)
+            streams.append(_stream)
+        else:
+            streams = _streams
+
+        conn.add_streams(streams, ports=ports)
+
+
+class TrexPacketGenerator(PacketGenerator):
+    """
+    Trex packet generator, detail usage can be seen at
+    https://trex-tgn.cisco.com/trex/doc/trex_manual.html
+    """
+
+    def __init__(self, tester):
+        super(TrexPacketGenerator, self).__init__(tester)
+        self.pktgen_type = PKTGEN_TREX
+        self.trex_app = "t-rex-64"
+        self._conn = None
+        self.control_session = None
+        # trex management
+        self._traffic_opt = {}
+        self._ports = []
+        self._traffic_ports = []
+        self._rx_ports = []
+
+        conf_inst = self._get_generator_conf_instance()
+        self.conf = conf_inst.load_pktgen_config()
+
+        self.options_keys = ["txmode", "ip", "vlan", "transmit_mode", "rate"]
+        self.ip_keys = ["start", "end", "action", "mask", "step"]
+        self.vlan_keys = ["start", "end", "action", "step", "count"]
+
+        # check trex binary file
+        trex_bin = os.sep.join([self.conf.get("trex_root_path"), self.trex_app])
+        if not os.path.exists(trex_bin):
+            msg = "{0} does not exist, please check {1}".format(
+                trex_bin, conf_inst.config_file
+            )
+            raise Exception(msg)
+        # if `trex_lib_path` is not set, use a default relative directory.
+        trex_lib_dir = self.conf.get("trex_lib_path") or (
+            "{0}/automation/trex_control_plane/stl".format(
+                self.conf.get("trex_root_path")
+            )
+        )
+        # check trex lib root directory
+        if not os.path.exists(trex_lib_dir):
+            msg = (
+                "{0} does not exist, please check {1} and "
+                "set `trex_lib_path`"
+            ).format(trex_lib_dir, conf_inst.config_file)
+            raise Exception(msg)
+        # check that the trex lib package exists
+        trex_lib = os.sep.join([trex_lib_dir, "trex_stl_lib"])
+        if not os.path.exists(trex_lib):
+            msg = "no 'trex_stl_lib' package under {0}".format(trex_lib_dir)
+            raise Exception(msg)
+        # import t-rex libs
+        sys.path.insert(0, trex_lib_dir)
+        from trex_stl_lib.api import STLClient
+
+        # set trex class
+        self.STLClient = STLClient
+        # get configuration from pktgen config file
+        self._get_traffic_option()
+
+    def _get_traffic_option(self):
+        """get configuration from pktgen config file"""
+        # set trex coremask
+        _core_mask = self.conf.get("core_mask")
+        if _core_mask:
+            if "0x" in _core_mask:
+                self.core_mask = [int(item[2:], 16) for item in _core_mask.split(",")]
+            else:
+                self.core_mask = (
+                    self.STLClient.CORE_MASK_PIN
+                    if _core_mask.upper() == "CORE_MASK_PIN"
+                    else None
+                )
+        else:
+            self.core_mask = None
+
+    def _connect(self):
+        self._conn = self.STLClient(server=self.conf["server"])
+        self._conn.connect()
+        for p in self._conn.get_all_ports():
+            self._ports.append(p)
+
+        self.logger.debug(self._ports)
+
+    def _get_port_pci(self, port_id):
+        """
+        get port pci address
+        """
+        for name, _port_obj in self._conn.ports.items():
+            if name == port_id:
+                return _port_obj.info["pci_addr"]
+        return None
+
+    def _get_gen_port(self, pci):
+        """
+        get port management id of the packet generator
+        """
+        for name, _port_obj in self._conn.ports.items():
+            if _port_obj.info["pci_addr"] == pci:
+                return name
+        return -1
+
+    def _is_gen_port(self, pci):
+        """
+        check if a pci address is managed by the packet generator
+        """
+        for name, _port_obj in self._conn.ports.items():
+            _pci = _port_obj.info["pci_addr"]
+            self.logger.debug((_pci, pci))
+            if _pci == pci:
+                return True
+        return False
+
+    def get_ports(self):
+        """
+        Return self ports information
+        """
+        ports = []
+        for idx in range(len(self._ports)):
+            port_info = self._conn.ports[idx]
+            pci = port_info.info["pci_addr"]
+            mac = port_info.info["hw_mac"]
+            ports.append(
+                {
+                    "intf": "TREX:%d" % idx,
+                    "mac": mac,
+                    "pci": pci,
+                    "type": "trex",
+                }
+            )
+        return ports
+
+    def _clear_streams(self):
+        """clear streams in trex and `PacketGenerator`"""
+        # if streams have been attached, remove them from the trex server.
+        self._remove_all_streams()
+
+    def _remove_all_streams(self):
+        """remove all stream deployed on trex port(s)"""
+        if not self.get_streams():
+            return
+        if not self._conn.get_acquired_ports():
+            return
+        self._conn.remove_all_streams()
+
+    def _disconnect(self):
+        """disconnect with trex server"""
+        try:
+            self._remove_all_streams()
+            self._conn.disconnect()
+        except Exception as e:
+            msg = "Error disconnecting: %s" % e
+            self.logger.error(msg)
+        self._conn = None
+
+    def _check_options(self, opts={}):
+        return True  # disabled; awaiting further discussion about the pktgen framework
+        for key in opts:
+            if key in self.options_keys:
+                if key == "ip":
+                    ip = opts["ip"]
+                    for ip_key in ip:
+                        if ip_key not in self.ip_keys:
+                            msg = " %s is an invalid ip option" % ip_key
+                            self.logger.info(msg)
+                            return False
+                        if ip_key == "action":
+                            if ip[ip_key] not in ("inc", "dec"):
+                                msg = " %s is an invalid ip action" % ip[ip_key]
+                                self.logger.info(msg)
+                                return False
+                elif key == "vlan":
+                    vlan = opts["vlan"]
+                    for vlan_key in vlan:
+                        if vlan_key not in self.vlan_keys:
+                            msg = " %s is an invalid vlan option" % vlan_key
+                            self.logger.info(msg)
+                            return False
+                        if vlan_key == "action":
+                            if vlan[vlan_key] not in ("inc", "dec"):
+                                msg = " %s is an invalid vlan action" % vlan[vlan_key]
+                                self.logger.info(msg)
+                                return False
+            else:
+                msg = " %s is an invalid option" % key
+                self.logger.info(msg)
+                return False
+        return True
+
+    def _prepare_generator(self):
+        """start trex server"""
+        if "start_trex" in self.conf and self.conf["start_trex"]:
+            app_param_temp = "-i"
+            # flow control
+            flow_control = self.conf.get("flow_control")
+            flow_control_opt = "--no-flow-control-change" if flow_control else ""
+
+            for key in self.conf:
+                # key, value = pktgen_conf
+                if key == "config_file":
+                    app_param_temp = app_param_temp + " --cfg " + self.conf[key]
+                elif key == "core_num":
+                    app_param_temp = app_param_temp + " -c " + self.conf[key]
+            self.control_session = self.tester.create_session(PKTGEN)
+            self.control_session.send_expect(
+                ";".join(
+                    [
+                        "cd " + self.conf["trex_root_path"],
+                        "./" + self.trex_app + " " + app_param_temp,
+                    ]
+                ),
+                "-Per port stats table",
+                30,
+            )
+        try:
+            self._connect()
+        except Exception as e:
+            msg = "failed to connect to t-rex server: {0}".format(e)
+            raise Exception(msg)
+
+    @property
+    def _vm_conf(self):
+        return None  # disabled; awaiting further discussion about the pktgen framework
+        conf = {}
+        # get the subnet range of src and dst ip
+        if "ip_src" in self.conf:
+            conf["src"] = {}
+            ip_src = self.conf["ip_src"]
+            ip_src_range = ip_src.split("-")
+            conf["src"]["start"] = ip_src_range[0]
+            conf["src"]["end"] = ip_src_range[1]
+
+        if "ip_dst" in self.conf:
+            conf["dst"] = {}
+            ip_dst = self.conf["ip_dst"]
+            ip_dst_range = ip_dst.split("-")
+            conf["dst"]["start"] = ip_dst_range[0]
+            conf["dst"]["end"] = ip_dst_range[1]
+
+        if conf:
+            return conf
+        else:
+            return None
+
+    def _get_port_features(self, port_id):
+        """get ports' features"""
+        ports = self._conn.ports
+        if port_id not in ports:
+            return None
+        features = self._conn.ports[port_id].get_formatted_info()
+        self.logger.debug(pformat(features))
+
+        return features
+
+    def _is_support_flow_control(self, port_id):
+        """check if a port support flow control"""
+        features = self._get_port_features(port_id)
+        if not features or features.get("fc_supported") == "no":
+            msg = "trex port <{0}> does not support flow control".format(port_id)
+            self.logger.debug(msg)
+            return False
+        else:
+            return True
+
+    def _preset_trex_port(self):
+        """set ports promiscuous/flow_ctrl attribute"""
+        rx_ports = self._rx_ports
+        # by trex design, all trex ports should be the same NIC type;
+        # use the first port to check the flow control attribute
+        flow_ctrl = (
+            self._traffic_opt.get("flow_control")
+            if self._is_support_flow_control(rx_ports[0])
+            else None
+        )
+        flow_ctrl_flag = (flow_ctrl.get("flag") or 1) if flow_ctrl else None
+        # flow control of port running trex traffic
+        self._conn.set_port_attr(
+            rx_ports, promiscuous=True, link_up=True, flow_ctrl=flow_ctrl_flag
+        )
+
+    def _throughput_stats(self, stream, stats):
+        # tx packet
+        tx_port_id = stream["tx_port"]
+        port_stats = stats.get(tx_port_id)
+        if not port_stats:
+            msg = "failed to get tx_port {0} statistics".format(tx_port_id)
+            raise Exception(msg)
+        tx_bps = port_stats.get("tx_bps")
+        tx_pps = port_stats.get("tx_pps")
+        msg = [
+            "Tx Port %d stats: " % (tx_port_id),
+            "tx_port: %d,  tx_bps: %f, tx_pps: %f " % (tx_port_id, tx_bps, tx_pps),
+        ]
+        self.logger.debug(pformat(port_stats))
+        self.logger.debug(os.linesep.join(msg))
+        # rx bps/pps
+        rx_port_id = stream["rx_port"]
+        port_stats = stats.get(rx_port_id)
+        if not port_stats:
+            msg = "failed to get rx_port {0} statistics".format(rx_port_id)
+            raise Exception(msg)
+        rx_bps = port_stats.get("rx_bps")
+        rx_pps = port_stats.get("rx_pps")
+        msg = [
+            "Rx Port %d stats: " % (rx_port_id),
+            "rx_port: %d,  rx_bps: %f, rx_pps: %f" % (rx_port_id, rx_bps, rx_pps),
+        ]
+
+        self.logger.debug(pformat(port_stats))
+        self.logger.debug(os.linesep.join(msg))
+
+        return (tx_bps, rx_bps), (tx_pps, rx_pps)
+
+    def _loss_rate_stats(self, stream, stats):
+        # tx packet
+        port_id = stream.get("tx_port")
+        if port_id in list(stats.keys()):
+            port_stats = stats[port_id]
+        else:
+            msg = "port {0} statistics is not found".format(port_id)
+            self.logger.error(msg)
+            return None
+        msg = "Tx Port %d stats: " % (port_id)
+        self.logger.debug(msg)
+        opackets = port_stats["opackets"]
+        # rx packet
+        port_id = stream.get("rx_port")
+        if port_id in list(stats.keys()):
+            port_stats = stats[port_id]
+        else:
+            msg = "port {0} statistics is not found".format(port_id)
+            self.logger.error(msg)
+            return None
+        msg = "Rx Port %d stats: " % (port_id)
+        self.logger.debug(msg)
+        ipackets = port_stats["ipackets"]
+
+        return opackets, ipackets
+
+    def _latency_stats(self, stream, stats):
+        _stats = stats.get("latency")
+        port_id = stream.get("tx_port")
+        if port_id in list(_stats.keys()):
+            port_stats = _stats[port_id]["latency"]
+        else:
+            msg = "port {0} latency stats is not found".format(port_id)
+            self.logger.error(msg)
+            return None
+
+        latency_stats = {
+            "min": port_stats.get("total_min"),
+            "max": port_stats.get("total_max"),
+            "average": port_stats.get("average"),
+        }
+
+        return latency_stats
+
+    def _prepare_transmission(self, stream_ids=[], latency=False):
+        """add one/multiple streams in one/multiple ports"""
+        port_config = {}
+        self._traffic_ports = []
+        for stream_id in stream_ids:
+            stream = self._get_stream(stream_id)
+            tx_port = stream["tx_port"]
+            rx_port = stream["rx_port"]
+            # save port id list
+            if tx_port not in self._traffic_ports:
+                self._traffic_ports.append(tx_port)
+            if rx_port not in self._rx_ports:
+                self._rx_ports.append(rx_port)
+            # set all streams in one port to do batch configuration
+            options = stream["options"]
+            if tx_port not in list(port_config.keys()):
+                port_config[tx_port] = []
+            config = {}
+            config.update(options)
+            # since the trex per-stream rate percent has not taken effect, use
+            # one stream's rate percent as the port rate percent. In pktgen,
+            # all streams' rate percents are the same value by design; the
+            # flow control option is likewise shared.
+            stream_config = options.get("stream_config") or {}
+            self._traffic_opt["rate"] = stream_config.get("rate") or 100
+            if stream_config.get("pps"):  # reserve feature
+                self._traffic_opt["pps"] = stream_config.get("pps")
+            # flow control option is deployed on all ports by design
+            self._traffic_opt["flow_control"] = options.get("flow_control") or {}
+            # if the vm config comes from the pktgen config file, set it here
+            # to override the user setting
+            if self._vm_conf:
+                config["fields_config"] = self._vm_conf
+            port_config[tx_port].append(config)
+
+        if not port_config:
+            msg = "no stream options for trex packet generator"
+            raise Exception(msg)
+
+        self._conn.connect()
+        self._conn.reset(ports=self._ports)
+        config_inst = TrexConfigStream()
+        for port_id, config in port_config.items():
+            # add a group of streams in one port
+            config_inst.add_streams(
+                self._conn, config, ports=[port_id], latency=latency
+            )
+        # preset port status before running traffic
+        self._preset_trex_port()
+
+    def _start_transmission(self, stream_ids, options={}):
+        test_mode = options.get("method")
+        # get rate percentage
+        rate_percent = "{0}%".format(
+            options.get("rate") or self._traffic_opt.get("rate") or "100"
+        )
+        # check the link status before transmission
+        self.logger.info("check the trex port link status")
+        for port in self._traffic_ports:
+            try_times = 0
+            port_attr = self._conn.get_port_attr(port)
+            while try_times < 5:
+                self.logger.info(pformat(port_attr))
+                if "link" in port_attr.keys() and port_attr["link"].lower() == "down":
+                    time.sleep(2)
+                    try_times = try_times + 1
+                    port_attr = self._conn.get_port_attr(port)
+                else:
+                    break
+            if try_times == 5 and port_attr["link"].lower() == "down":
+                self.logger.error(
+                    "port %d link status is down, the transmission can not work correctly"
+                    % port
+                )
+        try:
+            # clear the stats before injecting
+            self._conn.clear_stats()
+            # 'core_mask' list must be the same length as 'ports' list
+            core_mask = self.core_mask
+            if isinstance(self.core_mask, list):
+                core_mask = self.core_mask[: len(self._traffic_ports)]
+            # Start traffic on port(s)
+            run_opt = {
+                "ports": self._traffic_ports,
+                "mult": rate_percent,
+                "core_mask": core_mask,
+                "force": True,
+            }
+            self.logger.info("begin traffic ......")
+            self.logger.debug(run_opt)
+            self._conn.start(**run_opt)
+        except Exception as e:
+            self.logger.error(e)
+
+    def _stop_transmission(self, stream_id):
+        if self._traffic_ports:
+            self._conn.stop(ports=self._traffic_ports, rx_delay_ms=5000)
+            self.logger.info("traffic completed. ")
+
+    def _retrieve_port_statistic(self, stream_id, mode):
+        """
+        trex traffic statistics
+        """
+        stats = self._conn.get_stats()
+        stream = self._get_stream(stream_id)
+        self.logger.debug(pformat(stream))
+        self.logger.debug(pformat(stats))
+        if mode == "throughput":
+            return self._throughput_stats(stream, stats)
+        elif mode == "loss":
+            return self._loss_rate_stats(stream, stats)
+        elif mode == "latency":
+            return self._latency_stats(stream, stats)
+        else:
+            return None
+
+    def quit_generator(self):
+        if self._conn is not None:
+            self._disconnect()
+        if self.control_session is not None:
+            self.tester.alt_session.send_expect("pkill -f _t-rex-64", "# ")
+            time.sleep(5)
+            self.tester.destroy_session(self.control_session)
+            self.control_session = None
-- 
2.20.1


^ permalink raw reply	[flat|nested] 19+ messages in thread
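As an aside on `_get_traffic_option` in the trex connector above: the `core_mask` config value is either a comma-separated list of hex masks or the literal `CORE_MASK_PIN` sentinel. A minimal standalone sketch of that parsing logic follows; `CORE_MASK_PIN` here is a hypothetical placeholder for the `STLClient.CORE_MASK_PIN` constant, not the real trex object.

```python
# Sketch of the core_mask parsing used by _get_traffic_option.
# CORE_MASK_PIN stands in for STLClient.CORE_MASK_PIN (assumption).
CORE_MASK_PIN = "pin"


def parse_core_mask(core_mask):
    """Parse a pktgen 'core_mask' config value.

    Returns a list of int masks for hex input ("0x3,0xc" -> [3, 12]),
    the CORE_MASK_PIN sentinel for "CORE_MASK_PIN", or None otherwise.
    """
    if not core_mask:
        return None
    if "0x" in core_mask:
        # one hex mask per traffic port, comma separated
        return [int(item, 16) for item in core_mask.split(",")]
    return CORE_MASK_PIN if core_mask.upper() == "CORE_MASK_PIN" else None
```

The list form is later truncated to the number of traffic ports, since trex requires the `core_mask` list to match the `ports` list in length.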

* [RFC PATCH v1 09/18] dts: merge DTS framework/ssh_connection.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
                   ` (7 preceding siblings ...)
  2022-04-06 15:04 ` [RFC PATCH v1 08/18] dts: merge DTS framework/pktgen_trex.py " Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 10/18] dts: merge DTS framework/ssh_pexpect.py " Juraj Linkeš
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/framework/ssh_connection.py | 117 ++++++++++++++++++++++++++++++++
 1 file changed, 117 insertions(+)
 create mode 100644 dts/framework/ssh_connection.py

diff --git a/dts/framework/ssh_connection.py b/dts/framework/ssh_connection.py
new file mode 100644
index 0000000000..bfe6e6840b
--- /dev/null
+++ b/dts/framework/ssh_connection.py
@@ -0,0 +1,117 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+from .settings import TIMEOUT, USERNAME
+from .ssh_pexpect import SSHPexpect
+
+"""
+Global structure for saving connections
+"""
+CONNECTIONS = []
+
+
+class SSHConnection(object):
+
+    """
+    Module for creating a session to a host.
+    Implements send_expect/copy functions on top of the SSHPexpect module.
+    """
+
+    def __init__(self, host, session_name, username, password="", dut_id=0):
+        self.session = SSHPexpect(host, username, password, dut_id)
+        self.name = session_name
+        connection = {}
+        connection[self.name] = self.session
+        CONNECTIONS.append(connection)
+        self.history = None
+
+    def init_log(self, logger):
+        self.logger = logger
+        self.session.init_log(logger, self.name)
+
+    def set_history(self, history):
+        self.history = history
+
+    def send_expect(self, cmds, expected, timeout=15, verify=False):
+        self.logger.info(cmds)
+        out = self.session.send_expect(cmds, expected, timeout, verify)
+        if isinstance(out, str):
+            self.logger.debug(out.replace(cmds, ""))
+        if isinstance(self.history, list):
+            self.history.append({"command": cmds, "name": self.name, "output": out})
+        return out
+
+    def send_command(self, cmds, timeout=1):
+        self.logger.info(cmds)
+        out = self.session.send_command(cmds, timeout)
+        self.logger.debug(out.replace(cmds, ""))
+        if isinstance(self.history, list):
+            self.history.append({"command": cmds, "name": self.name, "output": out})
+        return out
+
+    def get_session_before(self, timeout=15):
+        out = self.session.get_session_before(timeout)
+        self.logger.debug(out)
+        return out
+
+    def close(self, force=False):
+        if getattr(self, "logger", None):
+            self.logger.logger_exit()
+
+        self.session.close(force)
+        connection = {}
+        connection[self.name] = self.session
+        try:
+            CONNECTIONS.remove(connection)
+        except ValueError:
+            pass
+
+    def isalive(self):
+        return self.session.isalive()
+
+    def check_available(self):
+        MAGIC_STR = "DTS_CHECK_SESSION"
+        out = self.session.send_command("echo %s" % MAGIC_STR, timeout=0.1)
+        # if not available, try to send ^C and check again
+        if MAGIC_STR not in out:
+            self.logger.info("Try to recover session...")
+            self.session.send_command("^C", timeout=TIMEOUT)
+            out = self.session.send_command("echo %s" % MAGIC_STR, timeout=0.1)
+            if MAGIC_STR not in out:
+                return False
+
+        return True
+
+    def copy_file_from(self, src, dst=".", password="", crb_session=None):
+        self.session.copy_file_from(src, dst, password, crb_session)
+
+    def copy_file_to(self, src, dst="~/", password="", crb_session=None):
+        self.session.copy_file_to(src, dst, password, crb_session)
-- 
2.20.1


^ permalink raw reply	[flat|nested] 19+ messages in thread
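The `send_expect` wrapper in `SSHConnection` above logs each command and, when a history list has been attached via `set_history`, records command/name/output triples. A self-contained sketch of that recording pattern, using a stub in place of the real `SSHPexpect` session (the class names `StubSession` and `Recorder` are illustrative, not part of the DTS API):

```python
class StubSession:
    """Stand-in for SSHPexpect: echoes a canned reply instead of using ssh."""

    def send_expect(self, cmds, expected, timeout, verify):
        return "output of %s" % cmds


class Recorder:
    """Minimal sketch of SSHConnection's history-recording behavior."""

    def __init__(self, name, session):
        self.name = name
        self.session = session
        self.history = None  # no recording until set_history is called

    def set_history(self, history):
        self.history = history

    def send_expect(self, cmds, expected, timeout=15, verify=False):
        out = self.session.send_expect(cmds, expected, timeout, verify)
        if isinstance(self.history, list):
            self.history.append(
                {"command": cmds, "name": self.name, "output": out}
            )
        return out
```

Because the history list is shared by reference, a caller can hand the same list to several sessions and get one interleaved command log across all of them.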

* [RFC PATCH v1 10/18] dts: merge DTS framework/ssh_pexpect.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
                   ` (8 preceding siblings ...)
  2022-04-06 15:04 ` [RFC PATCH v1 09/18] dts: merge DTS framework/ssh_connection.py " Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 11/18] dts: merge DTS framework/tester.py " Juraj Linkeš
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/framework/ssh_pexpect.py | 263 +++++++++++++++++++++++++++++++++++
 1 file changed, 263 insertions(+)
 create mode 100644 dts/framework/ssh_pexpect.py

diff --git a/dts/framework/ssh_pexpect.py b/dts/framework/ssh_pexpect.py
new file mode 100644
index 0000000000..97406896f0
--- /dev/null
+++ b/dts/framework/ssh_pexpect.py
@@ -0,0 +1,263 @@
+import time
+
+import pexpect
+from pexpect import pxssh
+
+from .debugger import aware_keyintr, ignore_keyintr
+from .exception import SSHConnectionException, SSHSessionDeadException, TimeoutException
+from .utils import GREEN, RED, parallel_lock
+
+"""
+Module handle ssh sessions between tester and DUT.
+Implements send_expect function to send command and get output data.
+Also supports transfer files to tester or DUT.
+"""
+
+
+class SSHPexpect:
+    def __init__(self, host, username, password, dut_id):
+        self.magic_prompt = "MAGIC PROMPT"
+        self.logger = None
+
+        self.host = host
+        self.username = username
+        self.password = password
+
+        self._connect_host(dut_id=dut_id)
+
+    @parallel_lock(num=8)
+    def _connect_host(self, dut_id=0):
+        """
+        Create a connection to the assigned crb; the dut_id parameter is used
+        in parallel_lock to ensure isolated locks for each crb.
+        Parallel ssh connections are limited by the MaxStartups option in the
+        SSHD configuration file. The default concurrency limit is 10, so the
+        default number of threads is limited to 8, which is less than 10. The
+        lock number can be adjusted along with the MaxStartups value.
+        """
+        retry_times = 10
+        try:
+            if ":" in self.host:
+                while retry_times:
+                    self.ip = self.host.split(":")[0]
+                    self.port = int(self.host.split(":")[1])
+                    self.session = pxssh.pxssh(encoding="utf-8")
+                    try:
+                        self.session.login(
+                            self.ip,
+                            self.username,
+                            self.password,
+                            original_prompt="[$#>]",
+                            port=self.port,
+                            login_timeout=20,
+                            password_regex=r"(?i)(?:password:)|(?:passphrase for key)|(?i)(password for .+:)",
+                        )
+                    except Exception as e:
+                        print(e)
+                        time.sleep(2)
+                        retry_times -= 1
+                        print("retry %d times connecting..." % (10 - retry_times))
+                    else:
+                        break
+                else:
+                    raise Exception("connect to %s:%s failed" % (self.ip, self.port))
+            else:
+                self.session = pxssh.pxssh(encoding="utf-8")
+                self.session.login(
+                    self.host,
+                    self.username,
+                    self.password,
+                    original_prompt="[$#>]",
+                    password_regex=r"(?i)(?:password:)|(?:passphrase for key)|(?i)(password for .+:)",
+                )
+            self.send_expect("stty -echo", "#")
+            self.send_expect("stty columns 1000", "#")
+        except Exception as e:
+            print(RED(e))
+            if getattr(self, "port", None):
+                suggestion = (
+                    "\nSuggestion: Check if the firewall on [ %s ] " % self.ip
+                    + "is stopped\n"
+                )
+                print(GREEN(suggestion))
+
+            raise SSHConnectionException(self.host)
+
+    def init_log(self, logger, name):
+        self.logger = logger
+        self.logger.info("ssh %s@%s" % (self.username, self.host))
+
+    def send_expect_base(self, command, expected, timeout):
+        ignore_keyintr()
+        self.clean_session()
+        self.session.PROMPT = expected
+        self.__sendline(command)
+        self.__prompt(command, timeout)
+        aware_keyintr()
+
+        before = self.get_output_before()
+        return before
+
+    def send_expect(self, command, expected, timeout=15, verify=False):
+
+        try:
+            ret = self.send_expect_base(command, expected, timeout)
+            if verify:
+                ret_status = self.send_expect_base("echo $?", expected, timeout)
+                if not int(ret_status):
+                    return ret
+                else:
+                    self.logger.error("Command: %s failure!" % command)
+                    self.logger.error(ret)
+                    return int(ret_status)
+            else:
+                return ret
+        except Exception as e:
+            print(
+                RED(
+                    "Exception happened in [%s] and output is [%s]"
+                    % (command, self.get_output_before())
+                )
+            )
+            raise (e)
+
+    def send_command(self, command, timeout=1):
+        try:
+            ignore_keyintr()
+            self.clean_session()
+            self.__sendline(command)
+            aware_keyintr()
+        except Exception as e:
+            raise (e)
+
+        output = self.get_session_before(timeout=timeout)
+        self.session.PROMPT = self.session.UNIQUE_PROMPT
+        self.session.prompt(0.1)
+
+        return output
+
+    def clean_session(self):
+        self.get_session_before(timeout=0.01)
+
+    def get_session_before(self, timeout=15):
+        """
+        Get all output before timeout
+        """
+        ignore_keyintr()
+        self.session.PROMPT = self.magic_prompt
+        try:
+            self.session.prompt(timeout)
+        except Exception as e:
+            pass
+
+        aware_keyintr()
+        before = self.get_output_all()
+        self.__flush()
+
+        return before
+
+    def __flush(self):
+        """
+        Clear all session buffer
+        """
+        self.session.buffer = ""
+        self.session.before = ""
+
+    def __prompt(self, command, timeout):
+        if not self.session.prompt(timeout):
+            raise TimeoutException(command, self.get_output_all()) from None
+
+    def __sendline(self, command):
+        if not self.isalive():
+            raise SSHSessionDeadException(self.host)
+        if len(command) == 2 and command.startswith("^"):
+            self.session.sendcontrol(command[1])
+        else:
+            self.session.sendline(command)
+
+    def get_output_before(self):
+        if not self.isalive():
+            raise SSHSessionDeadException(self.host)
+        before = self.session.before.rsplit("\r\n", 1)
+        if before[0] == "[PEXPECT]":
+            before[0] = ""
+
+        return before[0]
+
+    def get_output_all(self):
+        output = self.session.before
+        output = output.replace("[PEXPECT]", "")
+        return output
+
+    def close(self, force=False):
+        if force is True:
+            self.session.close()
+        else:
+            if self.isalive():
+                self.session.logout()
+
+    def isalive(self):
+        return self.session.isalive()
+
+    def copy_file_from(self, src, dst=".", password="", crb_session=None):
+        """
+        Copies a file from a remote place into local.
+        """
+        command = "scp -v {0}@{1}:{2} {3}".format(self.username, self.host, src, dst)
+        if ":" in self.host:
+            command = "scp -v -P {0} -o NoHostAuthenticationForLocalhost=yes {1}@{2}:{3} {4}".format(
+                str(self.port), self.username, self.ip, src, dst
+            )
+        if password == "":
+            self._spawn_scp(command, self.password, crb_session)
+        else:
+            self._spawn_scp(command, password, crb_session)
+
+    def copy_file_to(self, src, dst="~/", password="", crb_session=None):
+        """
+        Sends a local file to a remote place.
+        """
+        if ":" in self.host:
+            command = "scp -v -P {0} -o NoHostAuthenticationForLocalhost=yes {1} {2}@{3}:{4}".format(
+                str(self.port), src, self.username, self.ip, dst
+            )
+        else:
+            command = "scp -v {0} {1}@{2}:{3}".format(
+                src, self.username, self.host, dst
+            )
+        if password == "":
+            self._spawn_scp(command, self.password, crb_session)
+        else:
+            self._spawn_scp(command, password, crb_session)
+
+    def _spawn_scp(self, scp_cmd, password, crb_session):
+        """
+        Transfer a file with SCP
+        """
+        self.logger.info(scp_cmd)
+        # if crb_session is not None, copy file from/to crb env
+        # if crb_session is None, copy file from/to current dts env
+        if crb_session is not None:
+            crb_session.session.clean_session()
+            crb_session.session.__sendline(scp_cmd)
+            p = crb_session.session.session
+        else:
+            p = pexpect.spawn(scp_cmd)
+        time.sleep(0.5)
+        ssh_newkey = "Are you sure you want to continue connecting"
+        i = p.expect(
+            [ssh_newkey, "[pP]assword", "# ", pexpect.EOF, pexpect.TIMEOUT], 120
+        )
+        if i == 0:  # add once in trust list
+            p.sendline("yes")
+            i = p.expect([ssh_newkey, "[pP]assword", pexpect.EOF], 2)
+
+        if i == 1:
+            time.sleep(0.5)
+            p.sendline(password)
+            p.expect("Exit status 0", 60)
+        if i == 4:
+            self.logger.error("SCP TIMEOUT error %d" % i)
+        if crb_session is None:
+            p.close()
-- 
2.20.1


^ permalink raw reply	[flat|nested] 19+ messages in thread
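One detail worth noting in `__sendline` above: a two-character command starting with `^` (e.g. `^C`, as used by `check_available` in `ssh_connection.py`) is sent as a control character via `sendcontrol`, while everything else goes through `sendline`. A tiny pure-logic sketch of that dispatch rule, with the function name being illustrative only:

```python
def classify_command(command):
    """Mirror __sendline's dispatch rule without touching pexpect:

    control sequences like "^C" map to ("sendcontrol", "C");
    everything else maps to ("sendline", command) unchanged.
    """
    if len(command) == 2 and command.startswith("^"):
        return ("sendcontrol", command[1])
    return ("sendline", command)
```

This convention lets callers interrupt a hung remote command by passing the literal string `"^C"` through the same `send_command` path used for ordinary shell commands.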

* [RFC PATCH v1 11/18] dts: merge DTS framework/tester.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
                   ` (9 preceding siblings ...)
  2022-04-06 15:04 ` [RFC PATCH v1 10/18] dts: merge DTS framework/ssh_pexpect.py " Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 12/18] dts: merge DTS framework/ixia_network/__init__.py " Juraj Linkeš
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/framework/tester.py | 910 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 910 insertions(+)
 create mode 100644 dts/framework/tester.py

diff --git a/dts/framework/tester.py b/dts/framework/tester.py
new file mode 100644
index 0000000000..d387983fa8
--- /dev/null
+++ b/dts/framework/tester.py
@@ -0,0 +1,910 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2019 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""
+Interface for bulk traffic generators.
+"""
+
+import os
+import random
+import re
+import subprocess
+from multiprocessing import Process
+from time import sleep
+
+from nics.net_device import GetNicObj
+
+from .config import PktgenConf
+from .crb import Crb
+from .exception import ParameterInvalidException
+from .packet import (
+    Packet,
+    compare_pktload,
+    get_scapy_module_impcmd,
+    start_tcpdump,
+    stop_and_load_tcpdump_packets,
+    strip_pktload,
+)
+from .pktgen import getPacketGenerator
+from .settings import (
+    NICS,
+    PERF_SETTING,
+    PKTGEN,
+    PKTGEN_GRP,
+    USERNAME,
+    load_global_setting,
+)
+from .utils import GREEN, check_crb_python_version, convert_int2ip, convert_ip2int
+
+
+class Tester(Crb):
+
+    """
+    Representation of the tester node, which runs the traffic generator.
+    A config file and pcap file must have previously been copied
+    to this machine.
+    """
+
+    PORT_INFO_CACHE_KEY = "tester_port_info"
+    CORE_LIST_CACHE_KEY = "tester_core_list"
+    NUMBER_CORES_CACHE_KEY = "tester_number_cores"
+    PCI_DEV_CACHE_KEY = "tester_pci_dev_info"
+
+    def __init__(self, crb, serializer):
+        self.NAME = "tester"
+        self.scapy_session = None
+        super(Tester, self).__init__(crb, serializer, name=self.NAME)
+        # check the python version of tester
+        check_crb_python_version(self)
+
+        self.bgProcIsRunning = False
+        self.duts = []
+        self.inBg = 0
+        self.scapyCmds = []
+        self.bgCmds = []
+        self.bgItf = ""
+        self.re_run_time = 0
+        self.pktgen = None
+        # prepare for scapy env
+        self.scapy_sessions_li = list()
+        self.scapy_session = self.prepare_scapy_env()
+        self.check_scapy_version()
+        self.tmp_file = "/tmp/tester/"
+        out = self.send_expect("ls -d %s" % self.tmp_file, "# ", verify=True)
+        if out == 2:
+            self.send_expect("mkdir -p %s" % self.tmp_file, "# ")
+
+    def prepare_scapy_env(self):
+        session_name = (
+            "tester_scapy"
+            if not self.scapy_sessions_li
+            else f"tester_scapy_{random.random()}"
+        )
+        session = self.create_session(session_name)
+        self.scapy_sessions_li.append(session)
+        session.send_expect("scapy", ">>> ")
+
+        # import scapy modules into the scapy app
+        out = session.session.send_expect(get_scapy_module_impcmd(), ">>> ")
+        if "ImportError" in out:
+            session.logger.warning(f"entering import error: {out}")
+
+        return session
+
+    def check_scapy_version(self):
+        require_version = "2.4.4"
+        self.scapy_session.get_session_before(timeout=1)
+        self.scapy_session.send_expect("conf.version", "'")
+        out = self.scapy_session.get_session_before(timeout=1)
+        cur_version = out[: out.find("'")]
+        out = self.session.send_expect("grep scapy requirements.txt", "# ")
+        value = re.search(r"scapy\s*==\s*(\S*)", out)
+        if value is not None:
+            require_version = value.group(1)
+        if cur_version != require_version:
+            self.logger.warning(
+                "The scapy version on the tester does not meet the requirement, "
+                + "please update scapy, otherwise some test suites may fail"
+            )
+
+    def init_ext_gen(self):
+        """
+        Initialize tester packet generator object.
+        """
+        if self.it_uses_external_generator():
+            if self.is_pktgen:
+                self.pktgen_init()
+            return
+
+    def set_re_run(self, re_run_time):
+        """
+        Set the number of times failed cases are re-run.
+        """
+        self.re_run_time = int(re_run_time)
+
+    def get_ip_address(self):
+        """
+        Get ip address of tester CRB.
+        """
+        return self.crb["tester IP"]
+
+    def get_username(self):
+        """
+        Get login username of tester CRB.
+        """
+        return USERNAME
+
+    def get_password(self):
+        """
+        Get the login password of the tester CRB.
+        """
+        return self.crb["tester pass"]
+
+    @property
+    def is_pktgen(self):
+        """
+        Check whether packet generator is configured.
+        """
+        if PKTGEN not in self.crb or not self.crb[PKTGEN]:
+            return False
+
+        if self.crb[PKTGEN].lower() in PKTGEN_GRP:
+            return True
+        else:
+            msg = os.linesep.join(
+                [
+                    "Packet generator <{0}> is not supported".format(self.crb[PKTGEN]),
+                    "Current supports: {0}".format(" | ".join(PKTGEN_GRP)),
+                ]
+            )
+            self.logger.info(msg)
+            return False
+
+    def has_external_traffic_generator(self):
+        """
+        Check whether the performance test uses an external traffic generator.
+        """
+        try:
+            # if pktgen_group is set, take pktgen config file as first selection
+            if self.is_pktgen:
+                return True
+        except Exception as e:
+            return False
+
+        return False
+
+    def it_uses_external_generator(self):
+        """
+        Check whether the external generator is used for the performance test.
+        """
+        return (
+            load_global_setting(PERF_SETTING) == "yes"
+            and self.has_external_traffic_generator()
+        )
+
+    def tester_prerequisites(self):
+        """
+        Prerequisite function, to be called before executing any test case.
+        Scans all lcore information on the tester, then scans PCI devices
+        to collect NIC information, then discovers the network topology
+        and saves it into a cache file, and finally disables LLDP on the
+        tester ports.
+        """
+        self.init_core_list()
+        self.pci_devices_information()
+        self.restore_interfaces()
+        self.scan_ports()
+
+        self.disable_lldp()
+
+    def disable_lldp(self):
+        """
+        Disable tester ports LLDP.
+        """
+        result = self.send_expect("lldpad -d", "# ")
+        if result:
+            self.logger.error(result.strip())
+
+        for port in self.ports_info:
+            if not "intf" in list(port.keys()):
+                continue
+            eth = port["intf"]
+            out = self.send_expect(
+                "ethtool --show-priv-flags %s" % eth, "# ", alt_session=True
+            )
+            if "disable-fw-lldp" in out:
+                self.send_expect(
+                    "ethtool --set-priv-flags %s disable-fw-lldp on" % eth,
+                    "# ",
+                    alt_session=True,
+                )
+            self.send_expect(
+                "lldptool set-lldp -i %s adminStatus=disabled" % eth,
+                "# ",
+                alt_session=True,
+            )
+
+    def get_local_port(self, remotePort):
+        """
+        Return the tester local port connected to the specified DUT port.
+        """
+        return self.duts[0].ports_map[remotePort]
+
+    def get_local_port_type(self, remotePort):
+        """
+        Return the type of the tester local port connected to the DUT port.
+        """
+        return self.ports_info[self.get_local_port(remotePort)]["type"]
+
+    def get_local_port_bydut(self, remotePort, dutIp):
+        """
+        Return the tester local port connected to the given DUT's port.
+        """
+        for dut in self.duts:
+            if dut.crb["My IP"] == dutIp:
+                return dut.ports_map[remotePort]
+
+    def get_local_index(self, pci):
+        """
+        Return the tester local port index for the given PCI address.
+        """
+        index = -1
+        for port in self.ports_info:
+            index += 1
+            if pci == port["pci"]:
+                return index
+        return -1
+
+    def get_pci(self, localPort):
+        """
+        Return tester local port pci id.
+        """
+        if localPort == -1:
+            raise ParameterInvalidException("local port should not be -1")
+
+        return self.ports_info[localPort]["pci"]
+
+    def get_interface(self, localPort):
+        """
+        Return tester local port interface name.
+        """
+        if localPort == -1:
+            raise ParameterInvalidException("local port should not be -1")
+
+        if "intf" not in self.ports_info[localPort]:
+            return "N/A"
+
+        return self.ports_info[localPort]["intf"]
+
+    def get_mac(self, localPort):
+        """
+        Return tester local port mac address.
+        """
+        if localPort == -1:
+            raise ParameterInvalidException("local port should not be -1")
+
+        if self.ports_info[localPort]["type"] in ("ixia", "trex"):
+            return "00:00:00:00:00:01"
+        else:
+            return self.ports_info[localPort]["mac"]
+
+    def get_port_status(self, port):
+        """
+        Return the link status of the Ethernet port.
+        """
+        eth = self.ports_info[port]["intf"]
+        out = self.send_expect("ethtool %s" % eth, "# ")
+
+        status = re.search(r"Link detected:\s+(yes|no)", out)
+        if not status:
+            self.logger.error("ERROR: unexpected ethtool output")
+            return "down"
+        if status.group(1) == "yes":
+            return "up"
+        else:
+            return "down"
+
+    def restore_interfaces(self):
+        """
+        Restore Linux interfaces.
+        """
+        if self.skip_setup:
+            return
+
+        self.send_expect("modprobe igb", "# ", 20)
+        self.send_expect("modprobe ixgbe", "# ", 20)
+        self.send_expect("modprobe e1000e", "# ", 20)
+        self.send_expect("modprobe e1000", "# ", 20)
+
+        try:
+            for (pci_bus, pci_id) in self.pci_devices_info:
+                addr_array = pci_bus.split(":")
+                port = GetNicObj(self, addr_array[0], addr_array[1], addr_array[2])
+                itf = port.get_interface_name()
+                self.enable_ipv6(itf)
+                self.send_expect("ifconfig %s up" % itf, "# ")
+                if port.get_interface2_name():
+                    itf = port.get_interface2_name()
+                    self.enable_ipv6(itf)
+                    self.send_expect("ifconfig %s up" % itf, "# ")
+
+        except Exception as e:
+            self.logger.error(f"   !!! Restore ITF: {e}")
+
+        sleep(2)
+
+    def restore_trex_interfaces(self):
+        """
+        Restore Linux interfaces used by trex
+        """
+        try:
+            for port_info in self.ports_info:
+                nic_type = port_info.get("type")
+                if nic_type != "trex":
+                    continue
+                pci_bus = port_info.get("pci")
+                port_inst = port_info.get("port")
+                port_inst.bind_driver()
+                itf = port_inst.get_interface_name()
+                self.enable_ipv6(itf)
+                self.send_expect("ifconfig %s up" % itf, "# ")
+                if port_inst.get_interface2_name():
+                    itf = port_inst.get_interface2_name()
+                    self.enable_ipv6(itf)
+                    self.send_expect("ifconfig %s up" % itf, "# ")
+        except Exception as e:
+            self.logger.error(f"   !!! Restore ITF: {e}")
+
+        sleep(2)
+
+    def set_promisc(self):
+        try:
+            for (pci_bus, pci_id) in self.pci_devices_info:
+                addr_array = pci_bus.split(":")
+                port = GetNicObj(self, addr_array[0], addr_array[1], addr_array[2])
+                itf = port.get_interface_name()
+                self.enable_promisc(itf)
+                if port.get_interface2_name():
+                    itf = port.get_interface2_name()
+                    self.enable_promisc(itf)
+        except Exception as e:
+            pass
+
+    def load_serializer_ports(self):
+        cached_ports_info = self.serializer.load(self.PORT_INFO_CACHE_KEY)
+        if cached_ports_info is None:
+            return
+
+        # netdev objects are not saved for now; to be implemented later
+        self.ports_info = cached_ports_info
+
+    def save_serializer_ports(self):
+        cached_ports_info = []
+        for port in self.ports_info:
+            port_info = {}
+            for key in list(port.keys()):
+                if type(port[key]) is str:
+                    port_info[key] = port[key]
+                # netdev objects still need to be saved
+            cached_ports_info.append(port_info)
+        self.serializer.save(self.PORT_INFO_CACHE_KEY, cached_ports_info)
+
+    def _scan_pktgen_ports(self):
+        """Scan and set up packet generator ports.
+        Currently, TRex runs on the tester node.
+        """
+        new_ports_info = []
+        pktgen_ports_info = self.pktgen.get_ports()
+        for pktgen_port_info in pktgen_ports_info:
+            pktgen_port_type = pktgen_port_info["type"]
+            if pktgen_port_type.lower() == "ixia":
+                self.ports_info.extend(pktgen_ports_info)
+                break
+            pktgen_port_name = pktgen_port_info["intf"]
+            pktgen_pci = pktgen_port_info["pci"]
+            pktgen_mac = pktgen_port_info["mac"]
+            for port_info in self.ports_info:
+                dts_pci = port_info["pci"]
+                if dts_pci != pktgen_pci:
+                    continue
+                port_info["intf"] = pktgen_port_name
+                port_info["type"] = pktgen_port_type
+                port_info["mac"] = pktgen_mac
+                break
+            # Since the tester port scanning workflow changed, non-functional
+            # port mapping configs are ignored. Add a tester port mapping if
+            # the port is not present in ports_info.
+            else:
+                addr_array = pktgen_pci.split(":")
+                port = GetNicObj(self, addr_array[0], addr_array[1], addr_array[2])
+                new_ports_info.append(
+                    {
+                        "port": port,
+                        "intf": pktgen_port_name,
+                        "type": pktgen_port_type,
+                        "pci": pktgen_pci,
+                        "mac": pktgen_mac,
+                        "ipv4": None,
+                        "ipv6": None,
+                    }
+                )
+        if new_ports_info:
+            self.ports_info = self.ports_info + new_ports_info
+
+    def scan_ports(self):
+        """
+        Scan all ports on tester and save port's pci/mac/interface.
+        """
+        if self.read_cache:
+            self.load_serializer_ports()
+            self.scan_ports_cached()
+
+        if not self.read_cache or self.ports_info is None:
+            self.scan_ports_uncached()
+            if self.it_uses_external_generator():
+                if self.is_pktgen:
+                    self._scan_pktgen_ports()
+            self.save_serializer_ports()
+
+        for port_info in self.ports_info:
+            self.logger.info(port_info)
+
+    def scan_ports_cached(self):
+        if self.ports_info is None:
+            return
+
+        for port_info in self.ports_info:
+            if port_info["type"].lower() in ("ixia", "trex"):
+                continue
+
+            addr_array = port_info["pci"].split(":")
+            domain_id = addr_array[0]
+            bus_id = addr_array[1]
+            devfun_id = addr_array[2]
+
+            port = GetNicObj(self, domain_id, bus_id, devfun_id)
+            intf = port.get_interface_name()
+
+            self.logger.info(
+                "Tester cached: [%s %s] %s"
+                % (port_info["pci"], port_info["type"], intf)
+            )
+            port_info["port"] = port
+
+    def scan_ports_uncached(self):
+        """
+        Return tester port pci/mac/interface information.
+        """
+        self.ports_info = []
+
+        for (pci_bus, pci_id) in self.pci_devices_info:
+            # ignore unknown card types
+            if pci_id not in list(NICS.values()):
+                self.logger.info("Tester: [%s %s] %s" % (pci_bus, pci_id, "unknown_nic"))
+                continue
+
+            addr_array = pci_bus.split(":")
+            domain_id = addr_array[0]
+            bus_id = addr_array[1]
+            devfun_id = addr_array[2]
+
+            port = GetNicObj(self, domain_id, bus_id, devfun_id)
+            intf = port.get_interface_name()
+
+            if "No such file" in intf:
+                self.logger.info(
+                    "Tester: [%s %s] %s" % (pci_bus, pci_id, "unknown_interface")
+                )
+                continue
+
+            self.logger.info("Tester: [%s %s] %s" % (pci_bus, pci_id, intf))
+            macaddr = port.get_mac_addr()
+
+            ipv6 = port.get_ipv6_addr()
+            ipv4 = port.get_ipv4_addr()
+
+            # store the port info to port mapping
+            self.ports_info.append(
+                {
+                    "port": port,
+                    "pci": pci_bus,
+                    "type": pci_id,
+                    "intf": intf,
+                    "mac": macaddr,
+                    "ipv4": ipv4,
+                    "ipv6": ipv6,
+                }
+            )
+
+            # skip if the port does not have a second interface
+            if not port.get_interface2_name():
+                continue
+
+            intf = port.get_interface2_name()
+
+            self.logger.info("Tester: [%s %s] %s" % (pci_bus, pci_id, intf))
+            macaddr = port.get_intf2_mac_addr()
+
+            ipv6 = port.get_ipv6_addr()
+
+            # store the port info to port mapping
+            self.ports_info.append(
+                {
+                    "port": port,
+                    "pci": pci_bus,
+                    "type": pci_id,
+                    "intf": intf,
+                    "mac": macaddr,
+                    "ipv6": ipv6,
+                }
+            )
+
+    def pktgen_init(self):
+        """
+        Initialize the packet generator instance.
+        """
+        pktgen_type = self.crb[PKTGEN]
+        # init packet generator instance
+        self.pktgen = getPacketGenerator(self, pktgen_type)
+        # prepare running environment
+        self.pktgen.prepare_generator()
+
+    def send_ping(self, localPort, ipv4, mac):
+        """
+        Send ping4 packet from local port with destination ipv4 address.
+        """
+        if self.ports_info[localPort]["type"].lower() in ("ixia", "trex"):
+            return "Not implemented yet"
+        else:
+            return self.send_expect(
+                "ping -w 5 -c 5 -A -I %s %s"
+                % (self.ports_info[localPort]["intf"], ipv4),
+                "# ",
+                10,
+            )
+
+    def send_ping6(self, localPort, ipv6, mac):
+        """
+        Send ping6 packet from local port with destination ipv6 address.
+        """
+        if self.is_pktgen:
+            if self.ports_info[localPort]["type"].lower() == "ixia":
+                return self.pktgen.send_ping6(
+                    self.ports_info[localPort]["pci"], mac, ipv6
+                )
+            elif self.ports_info[localPort]["type"].lower() == "trex":
+                return "Not implemented yet"
+        else:
+            return self.send_expect(
+                "ping6 -w 5 -c 5 -A %s%%%s"
+                % (ipv6, self.ports_info[localPort]["intf"]),
+                "# ",
+                10,
+            )
+
+    def get_port_numa(self, port):
+        """
+        Return the NUMA node of the tester local port.
+        """
+        pci = self.ports_info[port]["pci"]
+        out = self.send_expect("cat /sys/bus/pci/devices/%s/numa_node" % pci, "#")
+        return int(out)
+
+    def check_port_list(self, portList, ftype="normal"):
+        """
+        Check whether the specified ports are IXIA ports or normal ports.
+        """
+        dtype = None
+        plist = set()
+        for txPort, rxPort, _ in portList:
+            plist.add(txPort)
+            plist.add(rxPort)
+
+        plist = list(plist)
+        if len(plist) > 0:
+            dtype = self.ports_info[plist[0]]["type"]
+
+        for port in plist[1:]:
+            if dtype != self.ports_info[port]["type"]:
+                return False
+
+        if ftype == "ixia" and dtype != ftype:
+            return False
+
+        return True
+
+    def scapy_append(self, cmd):
+        """
+        Append command into scapy command list.
+        """
+        self.scapyCmds.append(cmd)
+
+    def scapy_execute(self, timeout=60):
+        """
+        Execute scapy command list.
+        """
+        self.kill_all()
+
+        self.send_expect("scapy", ">>> ")
+        if self.bgProcIsRunning:
+            self.send_expect(
+                'subprocess.call("scapy -c sniff.py &", shell=True)', ">>> "
+            )
+            self.bgProcIsRunning = False
+        sleep(2)
+
+        for cmd in self.scapyCmds:
+            self.send_expect(cmd, ">>> ", timeout)
+
+        sleep(2)
+        self.scapyCmds = []
+        self.send_expect("exit()", "# ", timeout)
+
+    def scapy_background(self):
+        """
+        Configure scapy to run in background mode, whose main purpose is
+        to save RESULT into scapyResult.txt.
+        """
+        self.inBg = True
+
+    def scapy_foreground(self):
+        """
+        Run the background scapy script and switch back to foreground mode.
+        """
+        self.send_expect("echo -n '' >  scapyResult.txt", "# ")
+        if self.inBg:
+            self.scapyCmds.append("f = open('scapyResult.txt','w')")
+            self.scapyCmds.append("f.write(RESULT)")
+            self.scapyCmds.append("f.close()")
+            self.scapyCmds.append("exit()")
+
+            outContents = (
+                "import os\n"
+                + "conf.color_theme=NoTheme()\n"
+                + 'RESULT=""\n'
+                + "\n".join(self.scapyCmds)
+                + "\n"
+            )
+            self.create_file(outContents, "sniff.py")
+
+            self.logger.info("SCAPY Receive setup:\n" + outContents)
+
+            self.bgProcIsRunning = True
+            self.scapyCmds = []
+        self.inBg = False
+
+    def scapy_get_result(self):
+        """
+        Return the RESULT saved in scapyResult.txt.
+        """
+        out = self.send_expect("cat scapyResult.txt", "# ")
+        self.logger.info("SCAPY Result:\n" + out + "\n\n\n")
+
+        return out
+
+    def parallel_transmit_ptks(self, pkt=None, intf="", send_times=1, interval=0.01):
+        """
+        Callable function for parallel processes
+        """
+        print(GREEN("Transmitting and sniffing packets, please wait a few minutes..."))
+        return pkt.send_pkt_bg_with_pcapfile(
+            crb=self, tx_port=intf, count=send_times, loop=0, inter=interval
+        )
+
+    def check_random_pkts(
+        self,
+        portList,
+        pktnum=2000,
+        interval=0.01,
+        allow_miss=True,
+        seq_check=False,
+        params=None,
+    ):
+        """
+        Send several random packets and check that the received packets match.
+        """
+        tx_pkts = {}
+        rx_inst = {}
+        # packet type random between tcp/udp/ipv6
+        random_type = ["TCP", "UDP", "IPv6_TCP", "IPv6_UDP"]
+        for txport, rxport in portList:
+            txIntf = self.get_interface(txport)
+            rxIntf = self.get_interface(rxport)
+            self.logger.info(
+                GREEN("Preparing transmit packets, please wait a few minutes...")
+            )
+            pkt = Packet()
+            pkt.generate_random_pkts(
+                pktnum=pktnum,
+                random_type=random_type,
+                ip_increase=True,
+                random_payload=True,
+                options={"layers_config": params},
+            )
+
+            tx_pkts[txport] = pkt
+            # sniff packets
+            inst = start_tcpdump(
+                self,
+                rxIntf,
+                count=pktnum,
+                filters=[
+                    {"layer": "network", "config": {"srcport": "65535"}},
+                    {"layer": "network", "config": {"dstport": "65535"}},
+                ],
+            )
+            rx_inst[rxport] = inst
+        bg_sessions = list()
+        for txport, _ in portList:
+            txIntf = self.get_interface(txport)
+            bg_sessions.append(
+                self.parallel_transmit_ptks(
+                    pkt=tx_pkts[txport], intf=txIntf, send_times=1, interval=interval
+                )
+            )
+        # Verify all packets
+        sleep(interval * pktnum + 1)
+        timeout = 60
+        for i in bg_sessions:
+            while timeout:
+                try:
+                    i.send_expect("", ">>> ", timeout=1)
+                except Exception as e:
+                    print(e)
+                    self.logger.info("wait for the completion of sending pkts...")
+                    timeout -= 1
+                    continue
+                else:
+                    break
+            else:
+                self.logger.info(
+                    "timeout exceeded, forcibly stopping background packet sending to avoid an endless loop"
+                )
+                Packet.stop_send_pkt_bg(i)
+        prev_id = -1
+        for txport, rxport in portList:
+            p = stop_and_load_tcpdump_packets(rx_inst[rxport])
+            recv_pkts = p.pktgen.pkts
+            # only report when the received count does not match
+            if len(tx_pkts[txport].pktgen.pkts) > len(recv_pkts):
+                self.logger.info(
+                    (
+                        "Pkt number not matched,%d sent and %d received\n"
+                        % (len(tx_pkts[txport].pktgen.pkts), len(recv_pkts))
+                    )
+                )
+                if allow_miss is False:
+                    return False
+
+            # check each received packet content
+            self.logger.info(
+                GREEN("Comparing sniffed packets, please wait a few minutes...")
+            )
+            for idx in range(len(recv_pkts)):
+                try:
+                    l3_type = p.strip_element_layer2("type", p_index=idx)
+                    sip = p.strip_element_layer3("dst", p_index=idx)
+                except Exception as e:
+                    continue
+                # ipv4 packet
+                if l3_type == 2048:
+                    t_idx = convert_ip2int(sip, 4)
+                # ipv6 packet
+                elif l3_type == 34525:
+                    t_idx = convert_ip2int(sip, 6)
+                else:
+                    continue
+
+                if seq_check:
+                    if t_idx <= prev_id:
+                        self.logger.info("Packet %d sequence not correct" % t_idx)
+                        return False
+                    else:
+                        prev_id = t_idx
+
+                if (
+                    compare_pktload(
+                        tx_pkts[txport].pktgen.pkts[idx], recv_pkts[idx], "L4"
+                    )
+                    is False
+                ):
+                    self.logger.warning(
+                        "Pkt received at index %d does not match the original "
+                        "packet at index %d" % (idx, idx)
+                    )
+                    self.logger.info(
+                        "Sent: %s"
+                        % strip_pktload(tx_pkts[txport].pktgen.pkts[idx], "L4")
+                    )
+                    self.logger.info("Recv: %s" % strip_pktload(recv_pkts[idx], "L4"))
+                    return False
+
+        return True
+
+    def tcpdump_sniff_packets(self, intf, count=0, filters=None, lldp_forbid=True):
+        """
+        Wrapper for packet module sniff_packets
+        """
+        inst = start_tcpdump(
+            self, intf=intf, count=count, filters=filters, lldp_forbid=lldp_forbid
+        )
+        return inst
+
+    def load_tcpdump_sniff_packets(self, index="", timeout=1):
+        """
+        Wrapper for packet module load_pcapfile
+        """
+        p = stop_and_load_tcpdump_packets(index, timeout=timeout)
+        return p
+
+    def kill_all(self, killall=False):
+        """
+        Kill all scapy processes and DPDK applications on the tester.
+        """
+        if not self.has_external_traffic_generator():
+            out = self.session.send_command("")
+            if ">>>" in out:
+                self.session.send_expect("quit()", "# ", timeout=3)
+        if killall:
+            super(Tester, self).kill_all()
+
+    def close(self):
+        """
+        Close the SSH session and the IXIA TCL session.
+        """
+        if self.it_uses_external_generator():
+            if self.is_pktgen and self.pktgen:
+                self.pktgen.quit_generator()
+                # only restore ports if trex was started by dts
+                if "start_trex" in list(self.pktgen.conf.keys()):
+                    self.restore_trex_interfaces()
+                self.pktgen = None
+
+        if self.scapy_sessions_li:
+            for i in self.scapy_sessions_li:
+                if i.session.isalive():
+                    i.session.send_expect("quit()", "#", timeout=2)
+                    i.session.close()
+            self.scapy_sessions_li.clear()
+
+        if self.alt_session:
+            self.alt_session.close()
+            self.alt_session = None
+        if self.session:
+            self.session.close()
+            self.session = None
+
+    def crb_exit(self):
+        """
+        Close all resources before CRB exit.
+        """
+        self.close()
+        self.logger.logger_exit()
-- 
2.20.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [RFC PATCH v1 12/18] dts: merge DTS framework/ixia_network/__init__.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
                   ` (10 preceding siblings ...)
  2022-04-06 15:04 ` [RFC PATCH v1 11/18] dts: merge DTS framework/tester.py " Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 13/18] dts: merge DTS framework/ixia_network/ixnet.py " Juraj Linkeš
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/framework/ixia_network/__init__.py | 183 +++++++++++++++++++++++++
 1 file changed, 183 insertions(+)
 create mode 100644 dts/framework/ixia_network/__init__.py

diff --git a/dts/framework/ixia_network/__init__.py b/dts/framework/ixia_network/__init__.py
new file mode 100644
index 0000000000..98f4bcff2e
--- /dev/null
+++ b/dts/framework/ixia_network/__init__.py
@@ -0,0 +1,183 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2021 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+"""
+ixNetwork package
+"""
+import os
+import time
+import traceback
+from pprint import pformat
+
+from .ixnet import IxnetTrafficGenerator
+from .ixnet_config import IxiaNetworkConfig
+
+__all__ = [
+    "IxNetwork",
+]
+
+
+class IxNetwork(IxnetTrafficGenerator):
+    """
+    ixNetwork performance measurement class.
+    """
+
+    def __init__(self, name, config, logger):
+        self.NAME = name
+        self.logger = logger
+        ixiaRef = self.NAME
+        if ixiaRef not in config:
+            return
+        _config = config.get(ixiaRef, {})
+        self.ixiaVersion = _config.get("Version")
+        self.ports = _config.get("Ports")
+        ixia_ip = _config.get("IP")
+        rest_server_ip = _config.get("ixnet_api_server_ip")
+        self.max_retry = int(_config.get("max_retry") or "5")  # times
+        self.logger.debug(locals())
+        rest_config = IxiaNetworkConfig(
+            ixia_ip,
+            rest_server_ip,
+            "11009",
+            [[ixia_ip, p.get("card"), p.get("port")] for p in self.ports],
+        )
+        super(IxNetwork, self).__init__(rest_config, logger)
+        self._traffic_list = []
+        self._result = None
+
+    @property
+    def OUTPUT_DIR(self):
+        # get dts output folder path
+        if self.logger.log_path.startswith(os.sep):
+            output_path = self.logger.log_path
+        else:
+            cur_path = os.sep.join(os.path.realpath(__file__).split(os.sep)[:-2])
+            output_path = os.path.join(cur_path, self.logger.log_path)
+        if not os.path.exists(output_path):
+            os.makedirs(output_path)
+
+        return output_path
+
+    def get_ports(self):
+        """
+        get ixNetwork ports for dts `ports_info`
+        """
+        plist = []
+        for p in self.ports:
+            plist.append(
+                {
+                    "type": "ixia",
+                    "pci": "IXIA:%d.%d" % (p["card"], p["port"]),
+                }
+            )
+        return plist
+
+    def send_ping6(self, pci, mac, ipv6):
+        # IXIA ports cannot originate a ping; report success unconditionally
+        return "64 bytes from"
+
+    def disconnect(self):
+        """Quit from the ixNetwork API server."""
+        self.tear_down()
+        msg = "ixNetwork session closed"
+        self.logger.info(msg)
+
+    def prepare_ixia_network_stream(self, traffic_list):
+        self._traffic_list = []
+        for txPort, rxPort, pcapFile, option in traffic_list:
+            stream = self.configure_streams(pcapFile, option.get("fields_config"))
+            tx_p = self.tg_vports[txPort]
+            rx_p = self.tg_vports[rxPort]
+            self._traffic_list.append((tx_p, rx_p, stream))
+
+    def start(self, options):
+        """start ixNetwork measurement"""
+        test_mode = options.get("method")
+        options["traffic_list"] = self._traffic_list
+        self.logger.debug(pformat(options))
+        if test_mode == "rfc2544_dichotomy":
+            cnt = 0
+            while cnt < self.max_retry:
+                try:
+                    result = self.send_rfc2544_throughput(options)
+                    if result:
+                        break
+                except Exception as e:
+                    msg = "failed to run rfc2544"
+                    self.logger.error(msg)
+                    self.logger.error(traceback.format_exc())
+                cnt += 1
+                msg = "No.{} rerun ixNetwork rfc2544".format(cnt)
+                self.logger.warning(msg)
+                time.sleep(10)
+            else:
+                result = []
+        else:
+            msg = "unsupported measurement method {}".format(test_mode)
+            self.logger.error(msg)
+            self._result = None
+            return None
+        self.logger.info("measure <{}> completed".format(test_mode))
+        self.logger.info(result)
+        self._result = result
+        return result
+
+    def get_rfc2544_stat(self, port_list):
+        """
+        Get RX/TX packet statistics.
+        """
+        if not self._result:
+            return 0, 0, 0
+
+        result = self._result
+        _ixnet_stats = {}
+        for item in result:
+            port_id = int(item.get("Trial")) - 1
+            _ixnet_stats[port_id] = dict(item)
+        port_stat = _ixnet_stats.get(0, {})
+        rx_packets = float(port_stat.get("Agg Rx Count (frames)") or "0.0")
+        tx_packets = float(port_stat.get("Agg Tx Count (frames)") or "0.0")
+        rx_pps = float(port_stat.get("Agg Rx Throughput (fps)") or "0.0")
+        return tx_packets, rx_packets, rx_pps
+
+    def get_stats(self, ports, mode):
+        """
+        Get statistics for the given mode.
+        """
+        methods = {
+            "rfc2544": self.get_rfc2544_stat,
+        }
+        if mode not in methods:
+            msg = "unsupported mode <{0}>".format(mode)
+            raise Exception(msg)
+        # get custom mode stat
+        func = methods.get(mode)
+        stats = func(ports)
+
+        return stats
-- 
2.20.1


^ permalink raw reply	[flat|nested] 19+ messages in thread
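`IxNetwork.start()` above leans on Python's `while`/`else`: the `else` arm runs only when the loop exhausts `max_retry` attempts without hitting `break`, which is how the empty result list is produced. A self-contained sketch of that retry shape follows; `retry_measure`, `flaky_measure`, and `always_fail` are hypothetical illustrations, not part of the patch:

```python
import time


def retry_measure(measure, max_retry=5, delay=0):
    """Retry `measure` up to max_retry times; the while/else `else`
    arm runs only if no attempt succeeded (no `break` executed)."""
    cnt = 0
    while cnt < max_retry:
        try:
            result = measure()
            if result:
                break
        except Exception:
            pass  # a real caller would log the traceback here
        cnt += 1
        time.sleep(delay)
    else:
        result = []  # all retries exhausted
    return result


# A hypothetical flaky measurement: fails twice, then succeeds.
attempts = {"n": 0}


def flaky_measure():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient failure")
    return [{"Agg Rx Count (frames)": "100"}]


def always_fail():
    """Hypothetical measurement that never succeeds."""
    raise RuntimeError("link down")
```

The `break` on a truthy result skips the `else` arm entirely, so callers can distinguish "measurement succeeded" from "retries exhausted" by the returned value alone.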

* [RFC PATCH v1 13/18] dts: merge DTS framework/ixia_network/ixnet.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
                   ` (11 preceding siblings ...)
  2022-04-06 15:04 ` [RFC PATCH v1 12/18] dts: merge DTS framework/ixia_network/__init__.py " Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 14/18] dts: merge DTS framework/ixia_network/ixnet_config.py " Juraj Linkeš
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/framework/ixia_network/ixnet.py | 901 ++++++++++++++++++++++++++++
 1 file changed, 901 insertions(+)
 create mode 100644 dts/framework/ixia_network/ixnet.py

diff --git a/dts/framework/ixia_network/ixnet.py b/dts/framework/ixia_network/ixnet.py
new file mode 100644
index 0000000000..08aaf5687c
--- /dev/null
+++ b/dts/framework/ixia_network/ixnet.py
@@ -0,0 +1,901 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2021 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+"""
+This module is derived from Yulong Pei's ixNetwork tool.
+"""
+
+import csv
+import json
+import os
+import re
+import time
+from collections import OrderedDict
+from datetime import datetime
+
+import requests
+
+from .ixnet_stream import IxnetConfigStream
+
+# local lib deps
+from .packet_parser import PacketParser
+
+
+class IxnetTrafficGenerator(object):
+    """ixNetwork Traffic Generator."""
+
+    json_header = {"content-type": "application/json"}
+
+    def __init__(self, config, logger):
+        # disable SSL warnings
+        requests.packages.urllib3.disable_warnings()
+        self.logger = logger
+        self.tg_ip = config.tg_ip
+        self.tg_ports = config.tg_ports
+        port = config.tg_ip_port or "11009"
+        # id will always be 1 when using windows api server
+        self.api_server = "http://{0}:{1}".format(self.tg_ip, port)
+        self.session = requests.session()
+        self.session_id = self.get_session_id(self.api_server)
+        self.session_url = "{0}/api/v1/sessions/{1}".format(
+            self.api_server, self.session_id
+        )
+        # initialize ixNetwork
+        self.new_blank_config()
+        self.tg_vports = self.assign_ports(self.tg_ports)
+        self.OUTPUT_DIR = None
+
+    def get_session_id(self, api_server):
+        url = "{server}/api/v1/sessions".format(server=api_server)
+        response = self.session.post(url, headers=self.json_header, verify=False)
+        session_id = response.json()["links"][0]["href"].split("/")[-1]
+        msg = "{0}: Session ID is {1}".format(api_server, session_id)
+        self.logger.info(msg)
+        return session_id
+
+    def destroy_config(self, name):
+        json_header = {
+            "content-type": "application/json",
+            "X-HTTP-Method-Override": "DELETE",
+        }
+        response = self.session.post(name, headers=json_header, verify=False)
+        return response
+
+    def __get_ports(self):
+        """Return available tg vports list"""
+        return self.tg_vports
+
+    def disable_port_misdirected(self):
+        msg = "close mismatched flag"
+        self.logger.debug(msg)
+        url = "{0}/ixnetwork/traffic".format(self.session_url)
+        data = {
+            "detectMisdirectedOnAllPorts": False,
+            "disablePortLevelMisdirected": True,
+        }
+        response = self.session.patch(
+            url, data=json.dumps(data), headers=self.json_header, verify=False
+        )
+
+    def delete_session(self):
+        """delete session after test done"""
+        try:
+            url = self.session_url
+            response = self.destroy_config(url)
+            self.logger.debug("STATUS CODE: %s" % response.status_code)
+        except requests.exceptions.RequestException as err_msg:
+            raise Exception("DELETE error: {0}\n".format(err_msg))
+
+    def configure_streams(self, pkt, field_config=None):
+        hParser = PacketParser()
+        hParser._parse_pcap(pkt)
+        hConfig = IxnetConfigStream(
+            hParser.packetLayers, field_config, hParser.framesize
+        )
+        return hConfig.ixnet_packet
+
+    def regenerate_trafficitems(self, trafficItemList):
+        """
+        Parameter
+            trafficItemList: ['/api/v1/sessions/1/ixnetwork/traffic/trafficItem/1', ...]
+        """
+        url = "{0}/ixnetwork/traffic/trafficItem/operations/generate".format(
+            self.session_url
+        )
+        data = {"arg1": trafficItemList}
+        self.logger.info("Regenerating traffic items: %s" % trafficItemList)
+        response = self.session.post(
+            url, data=json.dumps(data), headers=self.json_header, verify=False
+        )
+        self.wait_for_complete(response, url + "/" + response.json()["id"])
+
+    def apply_traffic(self):
+        """Apply the configured traffic."""
+        url = "{0}/ixnetwork/traffic/operations/apply".format(self.session_url)
+        data = {"arg1": f"/api/v1/sessions/{self.session_id}/ixnetwork/traffic"}
+        response = self.session.post(
+            url, data=json.dumps(data), headers=self.json_header, verify=False
+        )
+        self.wait_for_complete(response, url + "/" + response.json()["id"])
+
+    def start_traffic(self):
+        """start the configured traffic."""
+        self.logger.info("Traffic starting...")
+        url = "{0}/ixnetwork/traffic/operations/start".format(self.session_url)
+        data = {"arg1": f"/api/v1/sessions/{self.session_id}/ixnetwork/traffic"}
+        response = self.session.post(
+            url, data=json.dumps(data), headers=self.json_header, verify=False
+        )
+        self.check_traffic_state(
+            expectedState=["started", "startedWaitingForStats"], timeout=45
+        )
+        self.logger.info("Traffic started Successfully.")
+
+    def stop_traffic(self):
+        """stop the configured traffic."""
+        url = "{0}/ixnetwork/traffic/operations/stop".format(self.session_url)
+        data = {"arg1": f"/api/v1/sessions/{self.session_id}/ixnetwork/traffic"}
+        response = self.session.post(
+            url, data=json.dumps(data), headers=self.json_header, verify=False
+        )
+        self.check_traffic_state(expectedState=["stopped", "stoppedWaitingForStats"])
+        time.sleep(5)
+
+    def check_traffic_state(self, expectedState=["stopped"], timeout=45):
+        """
+        Description
+            Check the traffic state for the expected state.
+
+        Traffic states are:
+            startedWaitingForStats, startedWaitingForStreams, started, stopped,
+            stoppedWaitingForStats, txStopWatchExpected, locked, unapplied
+
+        Parameters
+            expectedState = Input a list of expected traffic state.
+                            Example: ['started', startedWaitingForStats']
+            timeout = The amount of seconds you want to wait for the expected traffic state.
+                      Defaults to 45 seconds.
+                      In a situation where you have more than 10 pages of stats, you will
+                      need to increase the timeout time.
+        """
+        if not isinstance(expectedState, list):
+            expectedState = expectedState.split(" ")
+
+        self.logger.info(
+            "check_traffic_state: expecting traffic state {0}".format(expectedState)
+        )
+        for counter in range(1, timeout + 1):
+            url = "{0}/ixnetwork/traffic".format(self.session_url)
+            response = self.session.get(url, headers=self.json_header, verify=False)
+            current_traffic_state = response.json()["state"]
+            self.logger.info(
+                "check_traffic_state: {trafficstate}: Waited {counter}/{timeout} seconds".format(
+                    trafficstate=current_traffic_state, counter=counter, timeout=timeout
+                )
+            )
+            if counter < timeout and current_traffic_state not in expectedState:
+                time.sleep(1)
+                continue
+            if current_traffic_state in expectedState:
+                time.sleep(8)
+                self.logger.info(
+                    "check_traffic_state: got expected [ %s ], Done"
+                    % current_traffic_state
+                )
+                return 0
+
+        raise Exception(
+            "Traffic state did not reach the expected state (%s):" % expectedState
+        )
+
+    def _get_stats(
+        self, viewName="Flow Statistics", csvFile=None, csvEnableFileTimestamp=False
+    ):
+        """
+        sessionUrl: http://10.219.x.x:11009/api/v1/sessions/1/ixnetwork
+
+        csvFile = None or <filename.csv>.
+                  None will not create a CSV file.
+                  Provide a <filename>.csv to record all stats to a CSV file.
+                  Example: _get_stats(sessionUrl, csvFile='Flow_Statistics.csv')
+
+        csvEnableFileTimestamp = True or False. If True, timestamp will be appended to the filename.
+
+        viewName options (Not case sensitive):
+
+           'Port Statistics'
+           'Tx-Rx Frame Rate Statistics'
+           'Port CPU Statistics'
+           'Global Protocol Statistics'
+           'Protocols Summary'
+           'Port Summary'
+           'OSPFv2-RTR Drill Down'
+           'OSPFv2-RTR Per Port'
+           'IPv4 Drill Down'
+           'L2-L3 Test Summary Statistics'
+           'Flow Statistics'
+           'Traffic Item Statistics'
+           'IGMP Host Drill Down'
+           'IGMP Host Per Port'
+           'IPv6 Drill Down'
+           'MLD Host Drill Down'
+           'MLD Host Per Port'
+           'PIMv6 IF Drill Down'
+           'PIMv6 IF Per Port'
+
+        Note: Not all of the viewNames are listed here. You have to get the exact names from
+              the IxNetwork GUI in statistics based on your protocol(s).
+
+        Return you a dictionary of all the stats: statDict[rowNumber][columnName] == statValue
+          Get stats on row 2 for 'Tx Frames' = statDict[2]['Tx Frames']
+        """
+        url = "{0}/ixnetwork/statistics/view".format(self.session_url)
+        viewList = self.session.get(url, headers=self.json_header, verify=False)
+        views = ["{0}/{1}".format(url, str(i["id"])) for i in viewList.json()]
+
+        for view in views:
+            # GetAttribute
+            response = self.session.get(view, headers=self.json_header, verify=False)
+            if response.status_code != 200:
+                raise Exception("getStats: Failed: %s" % response.text)
+            captionMatch = re.match(viewName, response.json()["caption"], re.I)
+            if captionMatch:
+                # viewObj: sessionUrl + /statistics/view/11'
+                viewObj = view
+                break
+        else:
+            raise Exception("getStats: no view matching '%s' found" % viewName)
+
+        self.logger.info("viewName: %s, %s" % (viewName, viewObj))
+
+        try:
+            response = self.session.patch(
+                viewObj,
+                data=json.dumps({"enabled": "true"}),
+                headers=self.json_header,
+                verify=False,
+            )
+        except Exception as e:
+            raise Exception("get_stats error: No stats available")
+
+        for counter in range(0, 31):
+            response = self.session.get(
+                viewObj + "/page", headers=self.json_header, verify=False
+            )
+            totalPages = response.json()["totalPages"]
+            if totalPages != "null":
+                break
+            self.logger.info(
+                "Getting total pages is not ready yet. Waiting %d/30 seconds"
+                % counter
+            )
+            time.sleep(1)
+        else:
+            raise Exception("getStats: failed to get total pages")
+
+        if csvFile is not None:
+            csvFileName = csvFile.replace(" ", "_")
+            if csvEnableFileTimestamp:
+                timestamp = datetime.now().strftime("%H%M%S")
+                if "." in csvFileName:
+                    csvFileNameTemp = csvFileName.split(".")[0]
+                    csvFileNameExtension = csvFileName.split(".")[1]
+                    csvFileName = (
+                        csvFileNameTemp + "_" + timestamp + "." + csvFileNameExtension
+                    )
+                else:
+                    csvFileName = csvFileName + "_" + timestamp
+
+            csvFile = open(csvFileName, "w")
+            csvWriteObj = csv.writer(csvFile)
+
+        # Get the stat column names
+        columnList = response.json()["columnCaptions"]
+        if csvFile is not None:
+            csvWriteObj.writerow(columnList)
+
+        statDict = {}
+        flowNumber = 1
+        # Get the stat values
+        for pageNumber in range(1, totalPages + 1):
+            self.session.patch(
+                viewObj + "/page",
+                data=json.dumps({"currentPage": pageNumber}),
+                headers=self.json_header,
+                verify=False,
+            )
+            response = self.session.get(
+                viewObj + "/page", headers=self.json_header, verify=False
+            )
+            statValueList = response.json()["pageValues"]
+            for row in statValueList:
+                if csvFile is not None:
+                    csvWriteObj.writerow(row[0])
+
+                self.logger.info("Row: %d" % flowNumber)
+                statDict[flowNumber] = {}
+                for index, statValue in enumerate(row[0]):
+                    statName = columnList[index]
+                    statDict[flowNumber].update({statName: statValue})
+                    self.logger.info("%s: %s" % (statName, statValue))
+                flowNumber += 1
+
+        if csvFile is not None:
+            csvFile.close()
+        return statDict
+        # Flow Statistics dictionary output example
+        """
+        Flow: 50
+            Tx Port: Ethernet - 002
+            Rx Port: Ethernet - 001
+            Traffic Item: OSPF T1 to T2
+            Source/Dest Value Pair: 2.0.21.1-1.0.21.1
+            Flow Group: OSPF T1 to T2-FlowGroup-1 - Flow Group 0002
+            Tx Frames: 35873
+            Rx Frames: 35873
+            Frames Delta: 0
+            Loss %: 0
+            Tx Frame Rate: 3643.5
+            Rx Frame Rate: 3643.5
+            Tx L1 Rate (bps): 4313904
+            Rx L1 Rate (bps): 4313904
+            Rx Bytes: 4591744
+            Tx Rate (Bps): 466368
+            Rx Rate (Bps): 466368
+            Tx Rate (bps): 3730944
+            Rx Rate (bps): 3730944
+            Tx Rate (Kbps): 3730.944
+            Rx Rate (Kbps): 3730.944
+            Tx Rate (Mbps): 3.731
+            Rx Rate (Mbps): 3.731
+            Store-Forward Avg Latency (ns): 0
+            Store-Forward Min Latency (ns): 0
+            Store-Forward Max Latency (ns): 0
+            First TimeStamp: 00:00:00.722
+            Last TimeStamp: 00:00:10.568
+        """
+
+    def new_blank_config(self):
+        """
+        Start a new blank configuration.
+        """
+        url = "{0}/ixnetwork/operations/newconfig".format(self.session_url)
+        self.logger.info("newBlankConfig: %s" % url)
+        response = self.session.post(url, verify=False)
+        url = "{0}/{1}".format(url, response.json()["id"])
+        self.wait_for_complete(response, url)
+
+    def wait_for_complete(self, response="", url="", timeout=120):
+        """
+        Wait for an operation progress to complete.
+        response: The POST action response.
+        """
+        if response.json() != "" and response.json()["state"] == "SUCCESS":
+            self.logger.info("State: SUCCESS")
+            return
+
+        if response.json() == []:
+            raise Exception("waitForComplete: response is empty.")
+
+        if "errors" in response.json():
+            raise Exception(response.json()["errors"][0])
+
+        if response.json()["state"] in ["ERROR", "EXCEPTION"]:
+            raise Exception(
+                "WaitForComplete: STATE=%s: %s"
+                % (response.json()["state"], response.text)
+            )
+
+        self.logger.info("%s" % url)
+        self.logger.info("State: %s" % (response.json()["state"]))
+        while (
+            response.json()["state"] == "IN_PROGRESS"
+            or response.json()["state"] == "down"
+        ):
+            if timeout == 0:
+                raise Exception("%s" % response.text)
+            time.sleep(1)
+            response = self.session.get(url, headers=self.json_header, verify=False)
+            self.logger.info("State: %s" % (response.json()["state"]))
+            if response.json()["state"] == "SUCCESS":
+                return
+            timeout = timeout - 1
+
+    def create_vports(self, portList=None, rawTrafficVport=True):
+        """
+        This creates virtual ports based on a portList.
+        portList:  Pass in a list of ports in the format of ixChassisIp, slotNumber, portNumber
+          portList = [[ixChassisIp, '1', '1'],
+                      [ixChassisIp, '2', '1']]
+        rawTrafficVport = For raw Traffic Item src/dest endpoints, vports must be in format:
+                               /api/v1/sessions1/vport/{id}/protocols
+        Next step is to call assign_port.
+        Return: A list of vports
+        """
+        createdVportList = []
+        for index in range(0, len(portList)):
+            url = "{0}/ixnetwork/vport".format(self.session_url)
+
+            card = portList[index][1]
+            port = portList[index][2]
+            portNumber = str(card) + "/" + str(port)
+            self.logger.info("Name: %s" % portNumber)
+            data = {"name": portNumber}
+            response = self.session.post(
+                url, data=json.dumps(data), headers=self.json_header, verify=False
+            )
+            vportObj = response.json()["links"][0]["href"]
+            self.logger.info("createVports: %s" % vportObj)
+            if rawTrafficVport:
+                createdVportList.append(vportObj + "/protocols")
+            else:
+                createdVportList.append(vportObj)
+
+        if createdVportList == []:
+            raise Exception("No vports created")
+
+        self.logger.info("createVports: %s" % createdVportList)
+        return createdVportList
+
+    def assign_ports(self, portList, createVports=True, rawTraffic=True, timeout=90):
+        """
+        Description
+            Use this to assign physical ports to the virtual ports.
+
+        Parameters
+            portList: [ [ixChassisIp, '1','1'], [ixChassisIp, '1','2'] ]
+            vportList: list return by create_vports.
+            timeout: Timeout for port up.
+
+        Syntaxes
+            POST: http://{apiServerIp:port}/api/v1/sessions/{id}/ixnetwork/operations/assignports
+                  data={arg1: [{arg1: ixChassisIp, arg2: 1, arg3: 1}, {arg1: ixChassisIp, arg2: 1, arg3: 2}],
+                        arg2: [],
+                        arg3: ['/api/v1/sessions/{1}/ixnetwork/vport/1',
+                               '/api/v1/sessions/{1}/ixnetwork/vport/2'],
+                        arg4: true}  <-- True will clear port ownership
+                  headers={'content-type': 'application/json'}
+            GET:  http://{apiServerIp:port}/api/v1/sessions/{id}/ixnetwork/operations/assignports/1
+                  data={}
+                  headers={}
+            Expecting:   RESPONSE:  SUCCESS
+        """
+        if createVports:
+            vportList = self.create_vports(portList, rawTrafficVport=False)
+        url = "{0}/ixnetwork/operations/assignports".format(self.session_url)
+        data = {"arg1": [], "arg2": [], "arg3": vportList, "arg4": "true"}
+        for chassis, card, port in portList:
+            data["arg1"].append(
+                {"arg1": str(chassis), "arg2": str(card), "arg3": str(port)}
+            )
+        response = self.session.post(
+            url, data=json.dumps(data), headers=self.json_header, verify=False
+        )
+        self.logger.info("%s" % response.json())
+        url = "{0}/{1}".format(url, response.json()["id"])
+        self.wait_for_complete(response, url)
+
+        for vport in vportList:
+            url = "{0}{1}/l1Config".format(self.api_server, vport)
+            response = self.session.get(url, headers=self.json_header, verify=False)
+            url = url + "/" + response.json()["currentType"]
+            data = {"enabledFlowControl": False}
+            response = self.session.patch(
+                url, data=json.dumps(data), headers=self.json_header, verify=False
+            )
+
+        if rawTraffic:
+            vportList_protocol = []
+            for vport in vportList:
+                vportList_protocol.append(vport + "/protocols")
+            self.logger.info("vports: %s" % vportList_protocol)
+            return vportList_protocol
+        else:
+            self.logger.info("vports: %s" % vportList)
+            return vportList
+
+    def destroy_assign_ports(self, vportList):
+        msg = "release {}".format(vportList)
+        self.logger.info(msg)
+        for vport_url in vportList:
+            url = self.api_server + "/".join(vport_url.split("/")[:-1])
+            self.destroy_config(url)
+
+    def config_config_elements(self, config_element_obj, config_elements):
+        """
+        Parameters
+        config_element_obj: /api/v1/sessions/1/ixnetwork/traffic/trafficItem/{id}/configElement/{id}
+        """
+        url = self.api_server + config_element_obj + "/transmissionControl"
+        if "transmissionType" in config_elements:
+            data = {"type": config_elements["transmissionType"]}
+            self.session.patch(
+                url, data=json.dumps(data), headers=self.json_header, verify=False
+            )
+
+        if "burstPacketCount" in config_elements:
+            data = {"burstPacketCount": int(config_elements["burstPacketCount"])}
+            self.session.patch(
+                url, data=json.dumps(data), headers=self.json_header, verify=False
+            )
+
+        if "frameCount" in config_elements:
+            data = {"frameCount": int(config_elements["frameCount"])}
+            self.session.patch(
+                url, data=json.dumps(data), headers=self.json_header, verify=False
+            )
+
+        if "duration" in config_elements:
+            data = {"duration": int(config_elements["duration"])}
+            self.session.patch(
+                url, data=json.dumps(data), headers=self.json_header, verify=False
+            )
+
+        url = self.api_server + config_element_obj + "/frameRate"
+        if "frameRate" in config_elements:
+            data = {"rate": int(config_elements["frameRate"])}
+            self.session.patch(
+                url, data=json.dumps(data), headers=self.json_header, verify=False
+            )
+
+        if "frameRateType" in config_elements:
+            data = {"type": config_elements["frameRateType"]}
+            self.session.patch(
+                url, data=json.dumps(data), headers=self.json_header, verify=False
+            )
+
+        url = self.api_server + config_element_obj + "/frameSize"
+        if "frameSize" in config_elements:
+            data = {"fixedSize": int(config_elements["frameSize"])}
+            self.session.patch(
+                url, data=json.dumps(data), headers=self.json_header, verify=False
+            )
+
+    def import_json_config_obj(self, data_obj):
+        """
+        Parameter
+            data_obj: The JSON config object.
+        Note
+            arg2 value must be a string of JSON data: '{"xpath": "/traffic/trafficItem[1]", "enabled": false}'
+        """
+        data = {
+            "arg1": "/api/v1/sessions/1/ixnetwork/resourceManager",
+            "arg2": json.dumps(data_obj),
+            "arg3": False,
+        }
+        url = "{0}/ixnetwork/resourceManager/operations/importconfig".format(
+            self.session_url
+        )
+        response = self.session.post(
+            url, data=json.dumps(data), headers=self.json_header, verify=False
+        )
+        url = "{0}/{1}".format(url, response.json()["id"])
+        self.wait_for_complete(response, url)
+
+    def send_rfc2544_throughput(self, options):
+        """Send traffic per RFC2544 throughput test specifications.
+        Send packets at a variable rate, using ``traffic_list`` configuration,
+        until minimum rate at which no packet loss is detected is found.
+        """
+        # new added parameters
+        duration = options.get("duration") or 10
+        initialBinaryLoadRate = max_rate = options.get("max_rate") or 100.0
+        min_rate = options.get("min_rate") or 0.0
+        accuracy = options.get("accuracy") or 0.001
+        permit_loss_rate = options.get("pdr") or 0.0
+        # old parameters
+        traffic_list = options.get("traffic_list")
+        if traffic_list is None:
+            raise Exception("traffic_list is empty.")
+
+        # close port mismatched statistics
+        self.disable_port_misdirected()
+
+        url = "{0}/ixnetwork/traffic/trafficItem".format(self.session_url)
+        response = self.session.get(url, headers=self.json_header, verify=False)
+        if response.json() != []:
+            for item in response.json():
+                url = "{0}{1}".format(self.api_server, item["links"][0]["href"])
+                response = self.destroy_config(url)
+                if response.status_code != 200:
+                    raise Exception("remove trafficitem failed")
+
+        trafficitem_list = []
+        index = 0
+        for traffic in traffic_list:
+            index = index + 1
+            # create trafficitem
+            url = "{0}/ixnetwork/traffic/trafficItem".format(self.session_url)
+            data = {"name": "Traffic Item " + str(index), "trafficType": "raw"}
+            response = self.session.post(
+                url, data=json.dumps(data), headers=self.json_header, verify=False
+            )
+            trafficitem_obj = response.json()["links"][0]["href"]
+            self.logger.info("create traffic item: %s" % trafficitem_obj)
+            trafficitem_list.append(trafficitem_obj)
+            # create endpointset
+            url = "{0}{1}/endpointSet".format(self.api_server, trafficitem_obj)
+            data = {"sources": [traffic[0]], "destinations": [traffic[1]]}
+            response = self.session.post(
+                url, data=json.dumps(data), headers=self.json_header, verify=False
+            )
+            # packet config
+            config_stack_obj = eval(
+                str(traffic[2]).replace(
+                    "trafficItem[1]", "trafficItem[" + str(index) + "]"
+                )
+            )
+            self.import_json_config_obj(config_stack_obj)
+            # get framesize
+            url = "{0}{1}/configElement/1/frameSize".format(
+                self.api_server, trafficitem_obj
+            )
+            response = self.session.get(url, headers=self.json_header, verify=False)
+            frame_size = response.json()["fixedSize"]
+
+        self.regenerate_trafficitems(trafficitem_list)
+
+        # query existing quick test
+        url = "{0}/ixnetwork/quickTest/rfc2544throughput".format(self.session_url)
+        response = self.session.get(url, headers=self.json_header, verify=False)
+        if response.json() != []:
+            for qt in response.json():
+                url = "{0}{1}".format(self.api_server, qt["links"][0]["href"])
+                response = self.destroy_config(url)
+                if response.status_code != 200:
+                    raise Exception("remove quick test failed")
+        # create quick test
+        url = "{0}/ixnetwork/quickTest/rfc2544throughput".format(self.session_url)
+        data = [{"name": "QuickTest1", "mode": "existingMode"}]
+        response = self.session.post(
+            url, data=json.dumps(data), headers=self.json_header, verify=False
+        )
+        quicktest_obj = response.json()["links"][0]["href"]
+        self.logger.info("create quick test: %s" % quicktest_obj)
+        # add trafficitems
+        url = "{0}{1}/trafficSelection".format(self.api_server, quicktest_obj)
+        data = [{"__id__": item_obj} for item_obj in trafficitem_list]
+        response = self.session.post(
+            url, data=json.dumps(data), headers=self.json_header, verify=False
+        )
+        self.logger.info("add traffic item status: %s" % response.content)
+        # modify quick test config
+        url = "{0}{1}/testConfig".format(self.api_server, quicktest_obj)
+        data = {
+            # If Enabled, The minimum size of the frame is used .
+            "enableMinFrameSize": True,
+            # This attribute is the frame size mode for the Quad Gaussian.
+            # Possible values includes:
+            "frameSizeMode": "custom",
+            # The list of the available frame size.
+            "framesizeList": [str(frame_size)],
+            # The minimum delay between successive packets.
+            "txDelay": 5,
+            # Specifies the amount of delay after every transmit
+            "delayAfterTransmit": 5,
+            # sec
+            "duration": duration,
+            # The initial binary value of the load rate
+            "initialBinaryLoadRate": initialBinaryLoadRate,
+            # The upper bound of the iteration rates for each frame size during
+            # a binary search
+            "maxBinaryLoadRate": max_rate,
+            # Specifies the minimum rate of the binary algorithm.
+            "minBinaryLoadRate": min_rate,
+            # The frame loss unit for traffic in binary.
+            # Specifies the resolution of the iteration. The difference between
+            # the real rate transmission in two consecutive iterations, expressed
+            # as a percentage, is compared with the resolution value. When the
+            # difference is smaller than the value specified for the
+            # resolution, the test stops .
+            "resolution": accuracy * 100,
+            # The load unit value in binary.
+            "binaryFrameLossUnit": "%",
+            # The binary tolerance level.
+            "binaryTolerance": permit_loss_rate,
+        }
+        response = self.session.patch(
+            url, data=json.dumps(data), headers=self.json_header, verify=False
+        )
+        if response.status_code != 200:
+            raise Exception("change quick test config failed")
+        # run the quick test
+        url = "{0}{1}/operations/run".format(self.api_server, quicktest_obj)
+        data = {"arg1": quicktest_obj, "arg2": ""}
+        response = self.session.post(
+            url, data=json.dumps(data), headers=self.json_header, verify=False
+        )
+        url = url + "/" + response.json()["id"]
+        state = response.json()["state"]
+        self.logger.info("Quicktest State: %s" % state)
+        while state == "IN_PROGRESS":
+            response = self.session.get(url, headers=self.json_header, verify=False)
+            state = response.json()["state"]
+            self.logger.info("Quicktest State: %s" % state)
+            time.sleep(5)
+
+        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+        copy_to_path = os.sep.join(
+            [self.OUTPUT_DIR, "ixnet" + datetime.now().strftime("%Y%m%d_%H%M%S")]
+        )
+        if not os.path.exists(copy_to_path):
+            os.makedirs(copy_to_path)
+        self.get_quicktest_csvfiles(quicktest_obj, copy_to_path, csvfile="all")
+        qt_result_csv = "{0}/AggregateResults.csv".format(copy_to_path)
+        return self.parse_quicktest_results(qt_result_csv)
+
+    def parse_quicktest_results(self, path_file):
+        """parse csv filte and return quicktest result"""
+        results = OrderedDict()
+
+        if not os.path.exists(path_file):
+            msg = "failed to get result file from windows api server"
+            self.logger.error(msg)
+            return results
+
+        ret_result = []
+        with open(path_file, "r") as f:
+            qt_result = csv.DictReader(f)
+            for row in qt_result:
+                ret_result.append(row)
+                results["framesize"] = row["Framesize"]
+                results["throughput"] = row["Agg Rx Throughput (fps)"]
+                results["linerate%"] = row["Agg Rx Throughput (% Line Rate)"]
+                results["min_latency"] = row["Min Latency (ns)"]
+                results["max_latency"] = row["Max Latency (ns)"]
+                results["avg_latency"] = row["Avg Latency (ns)"]
+
+        return ret_result
+
+    def get_quicktest_resultpath(self, quicktest_obj):
+        """
+        quicktest_obj = /api/v1/sessions/1/ixnetwork/quickTest/rfc2544throughput/2
+        """
+        url = "{0}{1}/results".format(self.api_server, quicktest_obj)
+        response = self.session.get(url, headers=self.json_header, verify=False)
+        return response.json()["resultPath"]
+
+    def get_quicktest_csvfiles(self, quicktest_obj, copy_to_path, csvfile="all"):
+        """
+        Description
+            Copy Quick Test CSV result files to a specified path on either Windows or Linux.
+            Note: Currently only supports copying from Windows.
+        quicktest_obj: The Quick Test handle.
+        copy_to_path: The destination path to copy to.
+                    If copy to Windows: c:\\Results\\Path
+                    If copy to Linux: /home/user1/results/path
+        csvfile: A list of CSV files to get: 'all', one or more CSV files to get:
+                 AggregateResults.csv, iteration.csv, results.csv, logFile.txt, portMap.csv
+        """
+        results_path = self.get_quicktest_resultpath(quicktest_obj)
+        self.logger.info("get_quickTest_csvfiles: %s" % results_path)
+        if csvfile == "all":
+            get_csv_files = [
+                "AggregateResults.csv",
+                "iteration.csv",
+                "results.csv",
+                "logFile.txt",
+                "portMap.csv",
+            ]
+        else:
+            if type(csvfile) is not list:
+                get_csv_files = [csvfile]
+            else:
+                get_csv_files = csvfile
+
+        for each_csvfile in get_csv_files:
+            # Backslash indicates the results resides on a Windows OS.
+            if "\\" in results_path:
+                cnt = 0
+                while cnt < 5:
+                    try:
+                        self.copyfile_windows2linux(
+                            results_path + "\\{0}".format(each_csvfile), copy_to_path
+                        )
+                        break
+                    except Exception as e:
+                        time.sleep(5)
+                        cnt += 1
+                        msg = "No.{} retry to get result from windows".format(cnt)
+                        self.logger.warning(msg)
+                        continue
+            else:
+                # TODO:Copy from Linux to Windows and Linux to Linux.
+                pass
+
+    def copyfile_windows2linux(self, winPathFile, linuxPath, includeTimestamp=False):
+        """
+        Description
+            Copy files from the IxNetwork API Server c: drive to local Linux filesystem.
+            You could also include a timestamp for the destination file.
+        Parameters
+            winPathFile: (str): The full path and filename to retrieve from Windows client.
+            linuxPath: (str): The Linux destination path to put the file to.
+            includeTimestamp: (bool):  If False, each time you copy the same file will be overwritten.
+        Syntax
+            post: /api/v1/sessions/1/ixnetwork/operations/copyfile
+            data: {'arg1': winPathFile, 'arg2': '/api/v1/sessions/1/ixnetwork/files/'+fileName'}
+        """
+        self.logger.info("copyfile From: %s to %s" % (winPathFile, linuxPath))
+        fileName = winPathFile.split("\\")[-1]
+        fileName = fileName.replace(" ", "_")
+        destinationPath = "/api/v1/sessions/1/ixnetwork/files/" + fileName
+        currentTimestamp = datetime.now().strftime("%H%M%S")
+
+        # Step 1 of 2:
+        url = "{0}/ixnetwork/operations/copyfile".format(self.session_url)
+        data = {"arg1": winPathFile, "arg2": destinationPath}
+        response = self.session.post(
+            url, data=json.dumps(data), headers=self.json_header, verify=False
+        )
+
+        # Step 2 of 2:
+        url = "{0}/ixnetwork/files/{1}".format(self.session_url, fileName)
+        requestStatus = self.session.get(
+            url, stream=True, headers=self.json_header, verify=False
+        )
+        if requestStatus.status_code == 200:
+            contents = requestStatus.raw.read()
+
+            if includeTimestamp:
+                tempFileName = fileName.split(".")
+                if len(tempFileName) > 1:
+                    extension = fileName.split(".")[-1]
+                    fileName = (
+                        tempFileName[0] + "_" + currentTimestamp + "." + extension
+                    )
+                else:
+                    fileName = tempFileName[0] + "_" + currentTimestamp
+
+                linuxPath = linuxPath + "/" + fileName
+            else:
+                linuxPath = linuxPath + "/" + fileName
+
+            with open(linuxPath, "wb") as downloadedFileContents:
+                downloadedFileContents.write(contents)
+
+            url = "{0}/ixnetwork/files".format(self.session_url)
+            response = self.session.get(url, headers=self.json_header, verify=False)
+            self.logger.info("A copy of saved file is in: %s" % (winPathFile))
+            self.logger.info(
+                "copyfile_windows2linux: The copyfile is in %s" % linuxPath
+            )
+        else:
+            raise Exception(
+                "copyfile_windows2linux: Failed to download file from IxNetwork API Server."
+            )
+
+    def tear_down(self):
+        """do needed clean up"""
+        self.destroy_assign_ports(self.tg_vports)
+        self.session.close()
-- 
2.20.1



^ permalink raw reply	[flat|nested] 19+ messages in thread

* [RFC PATCH v1 14/18] dts: merge DTS framework/ixia_network/ixnet_config.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
                   ` (12 preceding siblings ...)
  2022-04-06 15:04 ` [RFC PATCH v1 13/18] dts: merge DTS framework/ixia_network/ixnet.py " Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 15/18] dts: merge DTS framework/ixia_network/ixnet_stream.py " Juraj Linkeš
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/framework/ixia_network/ixnet_config.py | 42 ++++++++++++++++++++++
 1 file changed, 42 insertions(+)
 create mode 100644 dts/framework/ixia_network/ixnet_config.py

diff --git a/dts/framework/ixia_network/ixnet_config.py b/dts/framework/ixia_network/ixnet_config.py
new file mode 100644
index 0000000000..5c6aea467f
--- /dev/null
+++ b/dts/framework/ixia_network/ixnet_config.py
@@ -0,0 +1,42 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2021 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+"""
+Misc functions.
+"""
+
+from typing import List, NamedTuple
+
+
+class IxiaNetworkConfig(NamedTuple):
+    ixia_ip: str
+    tg_ip: str
+    tg_ip_port: str
+    tg_ports: List
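Since ``IxiaNetworkConfig`` is a ``NamedTuple``, instances are immutable records with named field access. A short usage sketch (all addresses are illustrative, not real deployment values):

```python
from typing import List, NamedTuple


class IxiaNetworkConfig(NamedTuple):
    ixia_ip: str
    tg_ip: str
    tg_ip_port: str
    tg_ports: List


# build an immutable config record (illustrative values only)
cfg = IxiaNetworkConfig(
    ixia_ip="192.168.0.10",
    tg_ip="192.168.0.11",
    tg_ip_port="443",
    tg_ports=[1, 2],
)
# fields are read-only; derive a variant with _replace()
cfg_http = cfg._replace(tg_ip_port="80")
```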
-- 
2.20.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [RFC PATCH v1 15/18] dts: merge DTS framework/ixia_network/ixnet_stream.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
                   ` (13 preceding siblings ...)
  2022-04-06 15:04 ` [RFC PATCH v1 14/18] dts: merge DTS framework/ixia_network/ixnet_config.py " Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 16/18] dts: merge DTS framework/ixia_network/packet_parser.py " Juraj Linkeš
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/framework/ixia_network/ixnet_stream.py | 366 +++++++++++++++++++++
 1 file changed, 366 insertions(+)
 create mode 100644 dts/framework/ixia_network/ixnet_stream.py

diff --git a/dts/framework/ixia_network/ixnet_stream.py b/dts/framework/ixia_network/ixnet_stream.py
new file mode 100644
index 0000000000..d684530540
--- /dev/null
+++ b/dts/framework/ixia_network/ixnet_stream.py
@@ -0,0 +1,366 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2021 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+from framework.utils import convert_int2ip, convert_ip2int
+
+
+class IxnetConfigStream(object):
+    def __init__(
+        self,
+        packetLayers,
+        field_config=None,
+        frame_size=64,
+        trafficItem=1,
+        configElement=1,
+    ):
+        self.traffic_item_id = f"trafficItem[{trafficItem}]"
+        self.config_element_id = f"configElement[{configElement}]"
+
+        self.packetLayers = packetLayers
+        self.layer_names = list(packetLayers)
+        self.field_config = field_config or {}
+        self.frame_size = frame_size
+
+    def action_key(self, action):
+        if not action:
+            print("action not set, using the default 'inc' action")
+
+        ret = {
+            "inc": "increment",
+            "dec": "decrement",
+        }.get(action or "inc")
+
+        if not ret:
+            msg = f"action <{action}> not supported, using increment action now"
+            print(msg)
+
+        return ret or "increment"
+
+    @property
+    def ethernet(self):
+        layer_name = "Ethernet"
+        default_config = self.packetLayers.get(layer_name)
+
+        index = self.layer_names.index(layer_name) + 1
+        tag = f"{layer_name.lower()}-{index}"
+
+        src_mac = default_config.get("src")
+        dst_mac = default_config.get("dst")
+        # mac src config
+        src_config = {"singleValue": src_mac}
+        src_config[
+            "xpath"
+        ] = f"/traffic/{self.traffic_item_id}/{self.config_element_id}/stack[@alias = '{tag}']/field[@alias = 'ethernet.header.sourceAddress-2']"
+        # mac dst config
+        dst_config = {"singleValue": dst_mac}
+        dst_config[
+            "xpath"
+        ] = f"/traffic/{self.traffic_item_id}/{self.config_element_id}/stack[@alias = '{tag}']/field[@alias = 'ethernet.header.destinationAddress-1']"
+        # ixNetwork stream configuration table
+        element = {
+            "xpath": f"/traffic/{self.traffic_item_id}/{self.config_element_id}/stack[@alias = '{tag}']",
+            "field": [
+                src_config,
+                dst_config,
+            ],
+        }
+        return element
+
+    @property
+    def ip(self):
+        layer_name = "IP"
+        default_config = self.packetLayers.get(layer_name)
+        vm_config = self.field_config.get(layer_name.lower()) or {}
+
+        index = self.layer_names.index(layer_name) + 1
+        tag = f"ipv4-{index}"
+
+        src_ip = default_config.get("src")
+        dst_ip = default_config.get("dst")
+
+        # ip src config
+        ip_src_vm = vm_config.get("src", {})
+        start_ip = ip_src_vm.get("start") or src_ip
+        end_ip = ip_src_vm.get("end") or "255.255.255.255"
+        src_config = (
+            {
+                "startValue": start_ip,
+                "stepValue": convert_int2ip(ip_src_vm.get("step")) or "0.0.0.1",
+                "countValue": str(
+                    abs(convert_ip2int(end_ip) - convert_ip2int(start_ip)) + 1
+                ),
+                "valueType": self.action_key(ip_src_vm.get("action")),
+            }
+            if ip_src_vm
+            else {"singleValue": src_ip}
+        )
+        src_config[
+            "xpath"
+        ] = f"/traffic/{self.traffic_item_id}/{self.config_element_id}/stack[@alias = '{tag}']/field[@alias = 'ipv4.header.srcIp-27']"
+        # ip dst config
+        ip_dst_vm = vm_config.get("dst", {})
+        start_ip = ip_dst_vm.get("start") or dst_ip
+        end_ip = ip_dst_vm.get("end") or "255.255.255.255"
+        dst_config = (
+            {
+                "startValue": start_ip,
+                "stepValue": convert_int2ip(ip_dst_vm.get("step")) or "0.0.0.1",
+                "countValue": str(
+                    abs(convert_ip2int(end_ip) - convert_ip2int(start_ip)) + 1
+                ),
+                "valueType": self.action_key(ip_dst_vm.get("action")),
+            }
+            if ip_dst_vm
+            else {"singleValue": dst_ip}
+        )
+        dst_config[
+            "xpath"
+        ] = f"/traffic/{self.traffic_item_id}/{self.config_element_id}/stack[@alias = '{tag}']/field[@alias = 'ipv4.header.dstIp-28']"
+        # ixNetwork stream configuration table
+        element = {
+            "xpath": f"/traffic/{self.traffic_item_id}/{self.config_element_id}/stack[@alias = '{tag}']",
+            "field": [
+                src_config,
+                dst_config,
+            ],
+        }
+        return element
+
+    @property
+    def ipv6(self):
+        layer_name = "IPv6"
+        default_config = self.packetLayers.get(layer_name)
+        vm_config = self.field_config.get(layer_name.lower()) or {}
+
+        index = self.layer_names.index(layer_name) + 1
+        tag = f"{layer_name.lower()}-{index}"
+
+        src_ip = default_config.get("src")
+        dst_ip = default_config.get("dst")
+        # ip src config
+        ip_src_vm = vm_config.get("src", {})
+        start_ip = ip_src_vm.get("start") or src_ip
+        end_ip = ip_src_vm.get("end") or "ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff"
+        src_config = (
+            {
+                "startValue": start_ip,
+                "stepValue": convert_int2ip(ip_src_vm.get("step"), ip_type=6)
+                or "0:0:0:0:0:0:0:1",
+                "countValue": str(
+                    min(
+                        abs(
+                            convert_ip2int(end_ip, ip_type=6)
+                            - convert_ip2int(start_ip, ip_type=6)
+                        )
+                        + 1,
+                        2147483647,
+                    )
+                ),
+                "valueType": self.action_key(ip_src_vm.get("action")),
+            }
+            if ip_src_vm
+            else {"singleValue": src_ip}
+        )
+        header_src = "srcIP-7"
+        src_config[
+            "xpath"
+        ] = f"/traffic/{self.traffic_item_id}/{self.config_element_id}/stack[@alias = '{tag}']/field[@alias = 'ipv6.header.{header_src}']"
+        # ip dst config
+        ip_dst_vm = vm_config.get("dst", {})
+        start_ip = ip_dst_vm.get("start") or dst_ip
+        end_ip = ip_dst_vm.get("end") or "ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff"
+        dst_config = (
+            {
+                "startValue": start_ip,
+                "stepValue": convert_int2ip(ip_dst_vm.get("step"), ip_type=6)
+                or "0:0:0:0:0:0:0:1",
+                "countValue": str(
+                    min(
+                        abs(
+                            convert_ip2int(end_ip, ip_type=6)
+                            - convert_ip2int(start_ip, ip_type=6)
+                        )
+                        + 1,
+                        2147483647,
+                    )
+                ),
+                "valueType": self.action_key(ip_dst_vm.get("action")),
+            }
+            if ip_dst_vm
+            else {"singleValue": dst_ip}
+        )
+        header_dst = "dstIP-8"
+        dst_config[
+            "xpath"
+        ] = f"/traffic/{self.traffic_item_id}/{self.config_element_id}/stack[@alias = '{tag}']/field[@alias = 'ipv6.header.{header_dst}']"
+        # ixNetwork stream configuration table
+        element = {
+            "xpath": f"/traffic/{self.traffic_item_id}/{self.config_element_id}/stack[@alias = '{tag}']",
+            "field": [
+                src_config,
+                dst_config,
+            ],
+        }
+        return element
+
+    @property
+    def udp(self):
+        layer_name = "UDP"
+        default_config = self.packetLayers.get(layer_name)
+
+        index = self.layer_names.index(layer_name) + 1
+        tag = f"{layer_name.lower()}-{index}"
+
+        sport = default_config.get("sport")
+        dport = default_config.get("dport")
+        # udp src config
+        src_config = {"singleValue": str(sport)}
+        header_src = "srcPort-1"
+        src_config[
+            "xpath"
+        ] = f"/traffic/{self.traffic_item_id}/{self.config_element_id}/stack[@alias = '{tag}']/field[@alias = 'udp.header.{header_src}']"
+        # udp dst config
+        dst_config = {"singleValue": str(dport)}
+        header_dst = "dstPort-2"
+        dst_config[
+            "xpath"
+        ] = f"/traffic/{self.traffic_item_id}/{self.config_element_id}/stack[@alias = '{tag}']/field[@alias = 'udp.header.{header_dst}']"
+        # ixNetwork stream configuration table
+        element = {
+            "xpath": f"/traffic/{self.traffic_item_id}/{self.config_element_id}/stack[@alias = '{tag}']",
+            "field": [
+                src_config,
+                dst_config,
+            ],
+        }
+
+        return element
+
+    @property
+    def tcp(self):
+        layer_name = "TCP"
+        default_config = self.packetLayers.get(layer_name)
+
+        index = self.layer_names.index(layer_name) + 1
+        tag = f"{layer_name.lower()}-{index}"
+
+        sport = default_config.get("sport")
+        dport = default_config.get("dport")
+        # tcp src config
+        src_config = {"singleValue": str(sport)}
+        header_src = "srcPort-1"
+        src_config[
+            "xpath"
+        ] = f"/traffic/{self.traffic_item_id}/{self.config_element_id}/stack[@alias = '{tag}']/field[@alias = 'tcp.header.{header_src}']"
+        # tcp dst config
+        dst_config = {"singleValue": str(dport)}
+        header_dst = "dstPort-2"
+        dst_config[
+            "xpath"
+        ] = f"/traffic/{self.traffic_item_id}/{self.config_element_id}/stack[@alias = '{tag}']/field[@alias = 'tcp.header.{header_dst}']"
+        # ixNetwork stream configuration table
+        element = {
+            "xpath": f"/traffic/{self.traffic_item_id}/{self.config_element_id}/stack[@alias = '{tag}']",
+            "field": [
+                src_config,
+                dst_config,
+            ],
+        }
+
+        return element
+
+    @property
+    def framePayload(self):
+        element = {
+            "xpath": f"/traffic/{self.traffic_item_id}/{self.config_element_id}/framePayload",
+            "type": "incrementByte",
+            "customRepeat": "true",
+            "customPattern": "",
+        }
+        return element
+
+    @property
+    def stack(self):
+        element = [
+            getattr(self, name.lower())
+            for name in self.packetLayers
+            if name.lower() != "raw"
+        ]
+        return element
+
+    @property
+    def frameSize(self):
+        element = {
+            "xpath": f"/traffic/{self.traffic_item_id}/{self.config_element_id}/frameSize",
+            "fixedSize": self.frame_size,
+        }
+        return element
+
+    @property
+    def configElement(self):
+        element = [
+            {
+                "xpath": f"/traffic/{self.traffic_item_id}/{self.config_element_id}",
+                "stack": self.stack,
+                "frameSize": self.frameSize,
+                "framePayload": self.framePayload,
+            }
+        ]
+        return element
+
+    @property
+    def trafficItem(self):
+        element = [
+            {
+                "xpath": f"/traffic/{self.traffic_item_id}",
+                "configElement": self.configElement,
+            }
+        ]
+        return element
+
+    @property
+    def traffic(self):
+        element = {
+            "xpath": "/traffic",
+            "trafficItem": self.trafficItem,
+        }
+        return element
+
+    @property
+    def ixnet_packet(self):
+        element = {
+            "xpath": "/",
+            "traffic": self.traffic,
+        }
+        return element
-- 
2.20.1


^ permalink raw reply	[flat|nested] 19+ messages in thread
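The stream properties in the patch above (tcp, frameSize, configElement, trafficItem, traffic, ixnet_packet) build the ixNetwork REST payload bottom-up: each layer is a dict keyed by its xpath, and higher-level properties nest the lower ones. A minimal standalone sketch of that composition, with illustrative ids and xpaths and no IxNetwork dependency:

```python
# Hypothetical sketch of the bottom-up payload composition used by the
# ixnet_stream properties; ids and field aliases are illustrative only.

def make_stack(traffic_item_id, config_element_id, tag, fields):
    """Build one protocol-stack element keyed by its ixNetwork xpath."""
    base = f"/traffic/{traffic_item_id}/{config_element_id}/stack[@alias = '{tag}']"
    return {
        "xpath": base,
        "field": [
            {"xpath": f"{base}/field[@alias = '{alias}']", "singleValue": str(value)}
            for alias, value in fields
        ],
    }

def make_packet(traffic_item_id, config_element_id, stacks, frame_size):
    """Compose stacks into the full /traffic payload, mirroring ixnet_packet."""
    ce_xpath = f"/traffic/{traffic_item_id}/{config_element_id}"
    config_element = {
        "xpath": ce_xpath,
        "stack": stacks,
        "frameSize": {"xpath": f"{ce_xpath}/frameSize", "fixedSize": frame_size},
    }
    traffic_item = {"xpath": f"/traffic/{traffic_item_id}",
                    "configElement": [config_element]}
    return {"xpath": "/",
            "traffic": {"xpath": "/traffic", "trafficItem": [traffic_item]}}

tcp = make_stack("trafficItem[1]", "configElement[1]", "tcp-3",
                 [("tcp.header.srcPort-1", 1024), ("tcp.header.dstPort-2", 80)])
packet = make_packet("trafficItem[1]", "configElement[1]", [tcp], 64)
```

The nesting order of the dicts matches the xpath hierarchy, so the REST server can apply the whole configuration in one PATCH request.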

* [RFC PATCH v1 16/18] dts: merge DTS framework/ixia_network/packet_parser.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
                   ` (14 preceding siblings ...)
  2022-04-06 15:04 ` [RFC PATCH v1 15/18] dts: merge DTS framework/ixia_network/ixnet_stream.py " Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 17/18] dts: merge DTS nics/__init__.py " Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 18/18] dts: merge DTS nics/net_device.py " Juraj Linkeš
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/framework/ixia_network/packet_parser.py | 96 +++++++++++++++++++++
 1 file changed, 96 insertions(+)
 create mode 100644 dts/framework/ixia_network/packet_parser.py

diff --git a/dts/framework/ixia_network/packet_parser.py b/dts/framework/ixia_network/packet_parser.py
new file mode 100644
index 0000000000..25e18f2e18
--- /dev/null
+++ b/dts/framework/ixia_network/packet_parser.py
@@ -0,0 +1,96 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2021 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+import os
+from collections import OrderedDict
+
+from scapy.all import conf
+from scapy.fields import ConditionalField
+from scapy.packet import NoPayload
+from scapy.packet import Packet as scapyPacket
+from scapy.utils import rdpcap
+
+
+class PacketParser(object):
+    """parse packet full layers information"""
+
+    def __init__(self):
+        self.packetLayers = OrderedDict()
+        self.framesize = 64
+
+    def _parse_packet_layer(self, pkt_object):
+        """parse one packet every layers' fields and value"""
+        if pkt_object is None:
+            return
+
+        self.packetLayers[pkt_object.name] = OrderedDict()
+        for curfield in pkt_object.fields_desc:
+            if isinstance(curfield, ConditionalField) and not curfield._evalcond(
+                pkt_object
+            ):
+                continue
+            field_value = pkt_object.getfieldval(curfield.name)
+            if isinstance(field_value, scapyPacket) or (
+                curfield.islist and curfield.holds_packets and type(field_value) is list
+            ):
+                continue
+            repr_value = curfield.i2repr(pkt_object, field_value)
+            if isinstance(repr_value, str):
+                repr_value = repr_value.replace(
+                    os.linesep, os.linesep + " " * (len(curfield.name) + 4)
+                )
+            self.packetLayers[pkt_object.name][curfield.name] = repr_value
+
+        if isinstance(pkt_object.payload, NoPayload):
+            return
+        else:
+            self._parse_packet_layer(pkt_object.payload)
+
+    def _parse_pcap(self, pcapFile, number=0):
+        """parse one packet content"""
+        self.packetLayers = OrderedDict()
+        pcap_pkts = []
+        if isinstance(pcapFile, str):
+            if not os.path.exists(pcapFile):
+                warning = "{0} does not exist!".format(pcapFile)
+                raise Exception(warning)
+            pcap_pkts = rdpcap(pcapFile)
+        else:
+            pcap_pkts = pcapFile
+        # parse packets' every layers and fields
+        if len(pcap_pkts) == 0:
+            warning = "{0} is empty".format(pcapFile)
+            raise Exception(warning)
+        elif number >= len(pcap_pkts):
+            warning = "{0} is missing No.{1} packet".format(pcapFile, number)
+            raise Exception(warning)
+        else:
+            self._parse_packet_layer(pcap_pkts[number])
+            self.framesize = len(pcap_pkts[number]) + 4
-- 
2.20.1


^ permalink raw reply	[flat|nested] 19+ messages in thread
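PacketParser._parse_packet_layer in the patch above walks a scapy packet layer by layer via its payload chain, recording each layer's fields in an OrderedDict. A simplified, scapy-free sketch of the same walk (FakeLayer is a hypothetical stand-in for a scapy Packet; it skips the ConditionalField and nested-packet handling):

```python
# Simplified sketch of the recursive layer walk in
# PacketParser._parse_packet_layer; FakeLayer and its attributes are
# illustrative, not the scapy API.
from collections import OrderedDict

class FakeLayer:
    def __init__(self, name, fields, payload=None):
        self.name = name          # layer name, e.g. "Ethernet"
        self.fields = fields      # dict of field name -> value
        self.payload = payload    # next layer, or None (scapy uses NoPayload)

def parse_layers(pkt):
    """Return an OrderedDict mapping layer name -> {field: repr(value)}."""
    layers = OrderedDict()
    while pkt is not None:
        layers[pkt.name] = OrderedDict(
            (field, repr(value)) for field, value in pkt.fields.items()
        )
        pkt = pkt.payload
    return layers

pkt = FakeLayer("Ethernet", {"dst": "ff:ff:ff:ff:ff:ff"},
                FakeLayer("IP", {"src": "10.0.0.1", "dst": "10.0.0.2"}))
layers = parse_layers(pkt)
```

The real parser additionally indents multi-line field representations and computes the frame size as the packet length plus 4 bytes of FCS.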

* [RFC PATCH v1 17/18] dts: merge DTS nics/__init__.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
                   ` (15 preceding siblings ...)
  2022-04-06 15:04 ` [RFC PATCH v1 16/18] dts: merge DTS framework/ixia_network/packet_parser.py " Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  2022-04-06 15:04 ` [RFC PATCH v1 18/18] dts: merge DTS nics/net_device.py " Juraj Linkeš
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/nics/__init__.py | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)
 create mode 100644 dts/nics/__init__.py

diff --git a/dts/nics/__init__.py b/dts/nics/__init__.py
new file mode 100644
index 0000000000..ae0043b7ef
--- /dev/null
+++ b/dts/nics/__init__.py
@@ -0,0 +1,30 @@
+#!/usr/bin/python3
+# BSD LICENSE
+#
+# Copyright (c) 2021 PANTHEON.tech s.r.o.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of PANTHEON.tech s.r.o. nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-- 
2.20.1


^ permalink raw reply	[flat|nested] 19+ messages in thread
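The net_device.py patch that follows dispatches OS- and driver-specific implementations by building method names at runtime with getattr (e.g. get_interface_name_linux, then get_interface_name_linux_virtio_pci), falling back to a generic variant when no specific one exists. A minimal sketch of that dispatch pattern, with illustrative names:

```python
# Hedged sketch of the getattr-based OS/driver dispatch used throughout
# net_device.py; class and return values are illustrative placeholders.

class Dispatcher:
    def __init__(self, os_type, driver):
        self.os_type = os_type  # e.g. "linux" or "freebsd"
        self.driver = driver    # e.g. "virtio-pci"

    def get_interface_name(self):
        # Resolve get_interface_name_linux / get_interface_name_freebsd.
        impl = getattr(self, "get_interface_name_%s" % self.os_type)
        return impl(self.driver)

    def get_interface_name_linux(self, driver):
        alias = driver.replace("-", "_")
        # Per-driver override if present, otherwise the generic path.
        impl = getattr(self, "get_interface_name_linux_%s" % alias,
                       self.get_interface_name_linux_generic)
        return impl()

    def get_interface_name_linux_generic(self):
        return "eth0"  # placeholder for the real sysfs lookup

name = Dispatcher("linux", "virtio-pci").get_interface_name()
```

This keeps one public method per operation while letting new OS or driver variants be added as plain methods, without touching the call sites.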

* [RFC PATCH v1 18/18] dts: merge DTS nics/net_device.py to DPDK
  2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
                   ` (16 preceding siblings ...)
  2022-04-06 15:04 ` [RFC PATCH v1 17/18] dts: merge DTS nics/__init__.py " Juraj Linkeš
@ 2022-04-06 15:04 ` Juraj Linkeš
  17 siblings, 0 replies; 19+ messages in thread
From: Juraj Linkeš @ 2022-04-06 15:04 UTC (permalink / raw)
  To: thomas, david.marchand, Honnappa.Nagarahalli, ohilyard, lijuan.tu
  Cc: dev, Juraj Linkeš

---
 dts/nics/net_device.py | 1013 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 1013 insertions(+)
 create mode 100644 dts/nics/net_device.py

diff --git a/dts/nics/net_device.py b/dts/nics/net_device.py
new file mode 100644
index 0000000000..4ef755e055
--- /dev/null
+++ b/dts/nics/net_device.py
@@ -0,0 +1,1013 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+import os
+import re
+import time
+from functools import wraps
+
+import framework.settings as settings
+from framework.crb import Crb
+from framework.settings import HEADER_SIZE, TIMEOUT
+from framework.utils import RED
+
+NICS_LIST = []  # global list for saving nic objects
+
+MIN_MTU = 68
+
+
+def nic_has_driver(func):
+    """
+    Check if the NIC has a driver.
+    """
+
+    @wraps(func)
+    def wrapper(*args, **kwargs):
+        nic_instance = args[0]
+        nic_instance.current_driver = nic_instance.get_nic_driver()
+        if not nic_instance.current_driver:
+            return ""
+        return func(*args, **kwargs)
+
+    return wrapper
+
+
+class NetDevice(object):
+
+    """
+    Abstraction of a network device, which is either a PF or a VF.
+    """
+
+    def __init__(self, crb, domain_id, bus_id, devfun_id):
+        if not isinstance(crb, Crb):
+            raise Exception("  Please input the instance of Crb!!!")
+        self.crb = crb
+        self.domain_id = domain_id
+        self.bus_id = bus_id
+        self.devfun_id = devfun_id
+        self.pci = domain_id + ":" + bus_id + ":" + devfun_id
+        self.pci_id = get_pci_id(crb, domain_id, bus_id, devfun_id)
+        self.default_driver = settings.get_nic_driver(self.pci_id)
+        self.name = settings.get_nic_name(self.pci_id)
+
+        if self.nic_is_pf():
+            self.default_vf_driver = ""
+
+        self.intf_name = "N/A"
+        self.intf2_name = None
+        self.get_interface_name()
+        self.socket = self.get_nic_socket()
+        self.driver_version = ""
+        self.firmware = ""
+        self.pkg = None
+        self.current_driver = None
+
+    def stop(self):
+        pass
+
+    def close(self):
+        pass
+
+    def setup(self):
+        pass
+
+    def __send_expect(self, cmds, expected, timeout=TIMEOUT, alt_session=True):
+        """
+        Wrap the crb's session as a private session for send_expect.
+        """
+        return self.crb.send_expect(
+            cmds, expected, timeout=timeout, alt_session=alt_session
+        )
+
+    def __get_os_type(self):
+        """
+        Get OS type.
+        """
+        return self.crb.get_os_type()
+
+    def nic_is_pf(self):
+        """
+        Check whether the NIC is a PF.
+        """
+        return True
+
+    def get_nic_driver(self):
+        """
+        Get the NIC driver.
+        """
+        return self.crb.get_pci_dev_driver(self.domain_id, self.bus_id, self.devfun_id)
+
+    def get_nic_pkg(self):
+        """
+        Get the NIC pkg.
+        """
+        self.pkg = {"type": "", "version": ""}
+        out = self.__send_expect('dmesg | grep "DDP package" | tail -1', "# ")
+        if "could not load" in out:
+            print(RED(out))
+            print(
+                RED("Warning: The loaded DDP package version may not as you expected")
+            )
+            try:
+                pkg_info = out.split(". ")[1].lower()
+                self.pkg["type"] = re.findall(".*package '(.*)'", pkg_info)[0].strip()
+                self.pkg["version"] = re.findall("version(.*)", pkg_info)[0].strip()
+            except Exception:
+                print(RED("Warning: get pkg info failed"))
+        else:
+            pkg_info = out.split(": ")[-1].lower().split("package version")
+            if len(pkg_info) > 1:
+                self.pkg["type"] = pkg_info[0].strip()
+                self.pkg["version"] = pkg_info[1].strip()
+        return self.pkg
+
+    @nic_has_driver
+    def get_driver_firmware(self):
+        """
+        Get NIC driver and firmware version.
+        """
+        get_driver_firmware = getattr(
+            self, "get_driver_firmware_%s" % self.__get_os_type()
+        )
+        get_driver_firmware()
+
+    def get_driver_firmware_linux(self):
+        """
+        Get NIC driver and firmware version.
+        """
+        rexp = "version:\s.+"
+        pattern = re.compile(rexp)
+        out = self.__send_expect(
+            "ethtool -i {} | grep version".format(self.intf_name), "# "
+        )
+        driver_firmware = pattern.findall(out)
+        if len(driver_firmware) > 1:
+            self.driver_version = driver_firmware[0].split(": ")[-1].strip()
+            self.firmware = driver_firmware[1].split(": ")[-1].strip()
+
+        return self.driver_version, self.firmware
+
+    def get_driver_firmware_freebsd(self):
+        """
+        Get the NIC driver and firmware version.
+        """
+        raise NotImplementedError
+
+    def get_nic_socket(self):
+        """
+        Get socket id of specified pci device.
+        """
+        get_nic_socket = getattr(self, "get_nic_socket_%s" % self.__get_os_type())
+        return get_nic_socket(self.domain_id, self.bus_id, self.devfun_id)
+
+    def get_nic_socket_linux(self, domain_id, bus_id, devfun_id):
+        command = "cat /sys/bus/pci/devices/%s\:%s\:%s/numa_node" % (
+            domain_id,
+            bus_id,
+            devfun_id,
+        )
+        try:
+            out = self.__send_expect(command, "# ")
+            socket = int(out)
+        except Exception:
+            socket = -1
+        return socket
+
+    def get_nic_socket_freebsd(self, domain_id, bus_id, devfun_id):
+        raise NotImplementedError
+
+    @nic_has_driver
+    def get_interface_name(self):
+        """
+        Get interface name of specified pci device.
+        Calling this function updates intf_name every time.
+        """
+        get_interface_name = getattr(
+            self, "get_interface_name_%s" % self.__get_os_type()
+        )
+        out = get_interface_name(
+            self.domain_id, self.bus_id, self.devfun_id, self.current_driver
+        )
+        if "No such file or directory" in out:
+            self.intf_name = "N/A"
+        else:
+            self.intf_name = out
+
+        # not a complete fix for CX3.
+        if len(out.split()) > 1 and self.default_driver == "mlx4_core":
+            self.intf_name = out.split()[0]
+            self.intf2_name = out.split()[1]
+
+        return self.intf_name
+
+    def get_interface2_name(self):
+        """
+        Get interface name of second port of this pci device.
+        """
+        return self.intf2_name
+
+    def get_interface_name_linux(self, domain_id, bus_id, devfun_id, driver):
+        """
+        Get interface name of specified pci device on linux.
+        """
+        driver_alias = driver.replace("-", "_")
+        try:
+            get_interface_name_linux = getattr(
+                self, "get_interface_name_linux_%s" % driver_alias
+            )
+        except Exception as e:
+            generic_driver = "generic"
+            get_interface_name_linux = getattr(
+                self, "get_interface_name_linux_%s" % generic_driver
+            )
+
+        return get_interface_name_linux(domain_id, bus_id, devfun_id)
+
+    def get_interface_name_linux_virtio_pci(self, domain_id, bus_id, devfun_id):
+        """
+        Get virtio device interface name by the default way on linux.
+        """
+        command = "ls --color=never /sys/bus/pci/devices/%s\:%s\:%s/virtio*/net" % (
+            domain_id,
+            bus_id,
+            devfun_id,
+        )
+        return self.__send_expect(command, "# ")
+
+    def get_interface_name_linux_generic(self, domain_id, bus_id, devfun_id):
+        """
+        Get the interface name by the default way on linux.
+        """
+        command = "ls --color=never /sys/bus/pci/devices/%s\:%s\:%s/net" % (
+            domain_id,
+            bus_id,
+            devfun_id,
+        )
+        return self.__send_expect(command, "# ")
+
+    def get_interface_name_freebsd(self, domain_id, bus_id, devfun_id, driver):
+        """
+        Get interface name of specified pci device on Freebsd.
+        """
+        try:
+            get_interface_name_freebsd = getattr(
+                self, "get_interface_name_freebsd_%s" % driver
+            )
+        except Exception as e:
+            generic_driver = "generic"
+            get_interface_name_freebsd = getattr(
+                self, "get_interface_name_freebsd_%s" % generic_driver
+            )
+
+        return get_interface_name_freebsd(domain_id, bus_id, devfun_id)
+
+    def get_interface_name_freebsd_generic(self, domain_id, bus_id, devfun_id):
+        """
+        Get the interface name by the default way on freebsd.
+        """
+        pci_str = "%s:%s:%s" % (domain_id, bus_id, devfun_id)
+        out = self.__send_expect("pciconf -l", "# ")
+        rexp = r"(\w*)@pci0:%s" % pci_str
+        pattern = re.compile(rexp)
+        match = pattern.findall(out)
+        if len(match) == 0:
+            return "No such file"
+        return match[0]
+
+    @nic_has_driver
+    def set_vf_mac_addr(self, vf_idx=0, mac="00:00:00:00:00:01"):
+        """
+        Set mac address of specified vf device.
+        """
+        set_vf_mac_addr = getattr(self, "set_vf_mac_addr_%s" % self.__get_os_type())
+        out = set_vf_mac_addr(self.intf_name, vf_idx, mac)
+
+    def set_vf_mac_addr_linux(self, intf, vf_idx, mac):
+        """
+        Set mac address of specified vf device on linux.
+        """
+        if self.current_driver != self.default_driver:
+            print("Only support when PF bound to default driver")
+            return
+
+        self.__send_expect("ip link set %s vf %d mac %s" % (intf, vf_idx, mac), "# ")
+
+    @nic_has_driver
+    def get_mac_addr(self):
+        """
+        Get mac address of specified pci device.
+        """
+        get_mac_addr = getattr(self, "get_mac_addr_%s" % self.__get_os_type())
+        out = get_mac_addr(
+            self.intf_name,
+            self.domain_id,
+            self.bus_id,
+            self.devfun_id,
+            self.current_driver,
+        )
+        if "No such file or directory" in out:
+            return "N/A"
+        else:
+            return out
+
+    @nic_has_driver
+    def get_intf2_mac_addr(self):
+        """
+        Get mac address of 2nd port of specified pci device.
+        """
+        get_mac_addr = getattr(self, "get_mac_addr_%s" % self.__get_os_type())
+        out = get_mac_addr(
+            self.get_interface2_name(),
+            self.domain_id,
+            self.bus_id,
+            self.devfun_id,
+            self.current_driver,
+        )
+        if "No such file or directory" in out:
+            return "N/A"
+        else:
+            return out
+
+    def get_mac_addr_linux(self, intf, domain_id, bus_id, devfun_id, driver):
+        """
+        Get mac address of specified pci device on linux.
+        """
+        driver_alias = driver.replace("-", "_")
+        try:
+            get_mac_addr_linux = getattr(self, "get_mac_addr_linux_%s" % driver_alias)
+        except Exception as e:
+            generic_driver = "generic"
+            get_mac_addr_linux = getattr(self, "get_mac_addr_linux_%s" % generic_driver)
+
+        return get_mac_addr_linux(intf, domain_id, bus_id, devfun_id, driver)
+
+    def get_mac_addr_linux_generic(self, intf, domain_id, bus_id, devfun_id, driver):
+        """
+        Get MAC by the default way on linux.
+        """
+        command = "cat /sys/bus/pci/devices/%s\:%s\:%s/net/%s/address" % (
+            domain_id,
+            bus_id,
+            devfun_id,
+            intf,
+        )
+        return self.__send_expect(command, "# ")
+
+    def get_mac_addr_linux_virtio_pci(self, intf, domain_id, bus_id, devfun_id, driver):
+        """
+        Get MAC by the default way on linux.
+        """
+        virtio_cmd = (
+            "ls /sys/bus/pci/devices/%s\:%s\:%s/ | grep --color=never virtio"
+            % (domain_id, bus_id, devfun_id)
+        )
+        virtio = self.__send_expect(virtio_cmd, "# ")
+
+        command = "cat /sys/bus/pci/devices/%s\:%s\:%s/%s/net/%s/address" % (
+            domain_id,
+            bus_id,
+            devfun_id,
+            virtio,
+            intf,
+        )
+        return self.__send_expect(command, "# ")
+
+    def get_mac_addr_freebsd(self, intf, domain_id, bus_id, devfun_id, driver):
+        """
+        Get mac address of specified pci device on Freebsd.
+        """
+        try:
+            get_mac_addr_freebsd = getattr(self, "get_mac_addr_freebsd_%s" % driver)
+        except Exception as e:
+            generic_driver = "generic"
+            get_mac_addr_freebsd = getattr(
+                self, "get_mac_addr_freebsd_%s" % generic_driver
+            )
+
+        return get_mac_addr_freebsd(intf, domain_id, bus_id, devfun_id)
+
+    def get_mac_addr_freebsd_generic(self, intf, domain_id, bus_id, devfun_id):
+        """
+        Get the MAC by the default way on Freebsd.
+        """
+        out = self.__send_expect("ifconfig %s" % intf, "# ")
+        rexp = r"ether ([\da-f:]*)"
+        pattern = re.compile(rexp)
+        match = pattern.findall(out)
+        return match[0]
+
+    @nic_has_driver
+    def get_ipv4_addr(self):
+        """
+        Get ipv4 address of specified pci device.
+        """
+        get_ipv4_addr = getattr(self, "get_ipv4_addr_%s" % self.__get_os_type())
+        return get_ipv4_addr(self.intf_name, self.current_driver)
+
+    def get_ipv4_addr_linux(self, intf, driver):
+        """
+        Get ipv4 address of specified pci device on linux.
+        """
+        try:
+            get_ipv4_addr_linux = getattr(self, "get_ipv4_addr_linux_%s" % driver)
+        except Exception as e:
+            generic_driver = "generic"
+            get_ipv4_addr_linux = getattr(
+                self, "get_ipv4_addr_linux_%s" % generic_driver
+            )
+
+        return get_ipv4_addr_linux(intf)
+
+    def get_ipv4_addr_linux_generic(self, intf):
+        """
+        Get IPv4 address by the default way on linux.
+        """
+        out = self.__send_expect(
+            "ip -family inet address show dev %s | awk '/inet/ { print $2 }'" % intf,
+            "# ",
+        )
+        return out.split("/")[0]
+
+    def get_ipv4_addr_freebsd(self, intf, driver):
+        """
+        Get ipv4 address of specified pci device on Freebsd.
+        """
+        try:
+            get_ipv4_addr_freebsd = getattr(self, "get_ipv4_addr_freebsd_%s" % driver)
+        except Exception as e:
+            generic_driver = "generic"
+            get_ipv4_addr_freebsd = getattr(
+                self, "get_ipv4_addr_freebsd_%s" % generic_driver
+            )
+
+        return get_ipv4_addr_freebsd(intf)
+
+    def get_ipv4_addr_freebsd_generic(self, intf):
+        """
+        Get the IPv4 address by the default way on Freebsd.
+        """
+        out = self.__send_expect("ifconfig %s" % intf, "# ")
+        rexp = r"inet ([\d:]*)%"
+        pattern = re.compile(rexp)
+        match = pattern.findall(out)
+        if len(match) == 0:
+            return None
+
+        return match[0]
+
+    @nic_has_driver
+    def enable_ipv6(self):
+        """
+        Enable ipv6 address of specified pci device.
+        """
+        if self.current_driver != self.default_driver:
+            return
+
+        enable_ipv6 = getattr(self, "enable_ipv6_%s" % self.__get_os_type())
+        return enable_ipv6(self.intf_name)
+
+    def enable_ipv6_linux(self, intf):
+        """
+        Enable ipv6 address of specified pci device on linux.
+        """
+        self.__send_expect("sysctl net.ipv6.conf.%s.disable_ipv6=0" % intf, "# ")
+        # FVL interfaces need a down/up cycle to re-enable ipv6
+        if self.default_driver == "i40e":
+            self.__send_expect("ifconfig %s down" % intf, "# ")
+            self.__send_expect("ifconfig %s up" % intf, "# ")
+
+    def enable_ipv6_freebsd(self, intf):
+        self.__send_expect("sysctl net.ipv6.conf.%s.disable_ipv6=0" % intf, "# ")
+        self.__send_expect("ifconfig %s down" % intf, "# ")
+        self.__send_expect("ifconfig %s up" % intf, "# ")
+
+    @nic_has_driver
+    def disable_ipv6(self):
+        """
+        Disable ipv6 address of specified pci device.
+        """
+        if self.current_driver != self.default_driver:
+            return
+        disable_ipv6 = getattr(self, "disable_ipv6_%s" % self.__get_os_type())
+        return disable_ipv6(self.intf_name)
+
+    def disable_ipv6_linux(self, intf):
+        """
+        Disable ipv6 address of specified pci device on linux.
+        """
+        self.__send_expect("sysctl net.ipv6.conf.%s.disable_ipv6=1" % intf, "# ")
+
+    def disable_ipv6_freebsd(self, intf):
+        self.__send_expect("sysctl net.ipv6.conf.%s.disable_ipv6=1" % intf, "# ")
+        self.__send_expect("ifconfig %s down" % intf, "# ")
+        self.__send_expect("ifconfig %s up" % intf, "# ")
+
+    @nic_has_driver
+    def get_ipv6_addr(self):
+        """
+        Get ipv6 address of specified pci device.
+        """
+        get_ipv6_addr = getattr(self, "get_ipv6_addr_%s" % self.__get_os_type())
+        return get_ipv6_addr(self.intf_name, self.current_driver)
+
+    def get_ipv6_addr_linux(self, intf, driver):
+        """
+        Get ipv6 address of specified pci device on linux.
+        """
+        try:
+            get_ipv6_addr_linux = getattr(self, "get_ipv6_addr_linux_%s" % driver)
+        except Exception as e:
+            generic_driver = "generic"
+            get_ipv6_addr_linux = getattr(
+                self, "get_ipv6_addr_linux_%s" % generic_driver
+            )
+
+        return get_ipv6_addr_linux(intf)
+
+    def get_ipv6_addr_linux_generic(self, intf):
+        """
+        Get the IPv6 address by the default way on linux.
+        """
+        out = self.__send_expect(
+            "ip -family inet6 address show dev %s | awk '/inet6/ { print $2 }'" % intf,
+            "# ",
+        )
+        return out.split("/")[0]
+
+    def get_ipv6_addr_freebsd(self, intf, driver):
+        """
+        Get ipv6 address of specified pci device on Freebsd.
+        """
+        try:
+            get_ipv6_addr_freebsd = getattr(self, "get_ipv6_addr_freebsd_%s" % driver)
+        except Exception as e:
+            generic_driver = "generic"
+            get_ipv6_addr_freebsd = getattr(
+                self, "get_ipv6_addr_freebsd_%s" % generic_driver
+            )
+
+        return get_ipv6_addr_freebsd(intf)
+
+    def get_ipv6_addr_freebsd_generic(self, intf):
+        """
+        Get the IPv6 address by the default way on Freebsd.
+        """
+        out = self.__send_expect("ifconfig %s" % intf, "# ")
+        rexp = r"inet6 ([\da-f:]*)%"
+        pattern = re.compile(rexp)
+        match = pattern.findall(out)
+        if len(match) == 0:
+            return None
+
+        return match[0]
+
+    def get_nic_numa(self):
+        """
+        Get numa number of specified pci device.
+        """
+        return self.crb.get_device_numa(self.domain_id, self.bus_id, self.devfun_id)
+
+    def get_card_type(self):
+        """
+        Get card type of specified pci device.
+        """
+        return self.crb.get_pci_dev_id(self.domain_id, self.bus_id, self.devfun_id)
+
+    @nic_has_driver
+    def get_nic_speed(self):
+        """
+        Get the speed of specified pci device.
+        """
+        get_nic_speed = getattr(self, "get_nic_speed_%s" % self.__get_os_type())
+        return get_nic_speed(self.domain_id, self.bus_id, self.devfun_id)
+
+    def get_nic_speed_linux(self, domain_id, bus_id, devfun_id):
+        command = "cat /sys/bus/pci/devices/%s\:%s\:%s/net/*/speed" % (
+            domain_id,
+            bus_id,
+            devfun_id,
+        )
+        nic_speed = self.__send_expect(command, "# ")
+        return nic_speed
+
+    def get_nic_speed_freebsd(self, domain_id, bus_id, devfun_id):
+        raise NotImplementedError
+
+    @nic_has_driver
+    def get_sriov_vfs_pci(self):
+        """
+        Get all SRIOV VF pci bus of specified pci device.
+        """
+        get_sriov_vfs_pci = getattr(self, "get_sriov_vfs_pci_%s" % self.__get_os_type())
+        return get_sriov_vfs_pci(
+            self.domain_id, self.bus_id, self.devfun_id, self.current_driver
+        )
+
+    def get_sriov_vfs_pci_freebsd(self, domain_id, bus_id, devfun_id, driver):
+        """
+        FreeBSD does not support virtualization cases yet.
+        We can implement it later.
+        """
+        pass
+
+    def get_sriov_vfs_pci_linux(self, domain_id, bus_id, devfun_id, driver):
+        """
+        Get the PCI addresses of all SR-IOV VFs of the specified pci device on linux.
+        """
+        try:
+            get_sriov_vfs_pci_linux = getattr(
+                self, "get_sriov_vfs_pci_linux_%s" % driver
+            )
+        except AttributeError:
+            generic_driver = "generic"
+            get_sriov_vfs_pci_linux = getattr(
+                self, "get_sriov_vfs_pci_linux_%s" % generic_driver
+            )
+
+        return get_sriov_vfs_pci_linux(domain_id, bus_id, devfun_id)
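The driver-specific dispatch above follows the getattr-with-generic-fallback pattern used throughout this file; a minimal standalone sketch of that pattern (the class and handler names here are illustrative, not part of DTS):

```python
class Dispatcher:
    """Dispatch to a per-driver handler, falling back to a generic one."""

    def handle_generic(self):
        return "generic"

    def handle_igb_uio(self):
        return "igb_uio"

    def dispatch(self, driver):
        # Python identifiers cannot contain '-', so normalize the name first.
        name = "handle_%s" % driver.replace("-", "_")
        try:
            handler = getattr(self, name)
        except AttributeError:
            handler = self.handle_generic
        return handler()
```

With this pattern, dispatching "igb-uio" resolves to the specific handler, while an unknown driver such as "vfio-pci" silently falls back to the generic one.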
+
+    def get_sriov_vfs_pci_linux_generic(self, domain_id, bus_id, devfun_id):
+        """
+        Get all the VF PCIs of specified PF by the default way on linux.
+        """
+        sriov_numvfs = self.__send_expect(
+            "cat /sys/bus/pci/devices/%s\:%s\:%s/sriov_numvfs"
+            % (domain_id, bus_id, devfun_id),
+            "# ",
+        )
+        sriov_vfs_pci = []
+
+        if "No such file" in sriov_numvfs:
+            return sriov_vfs_pci
+
+        if int(sriov_numvfs) != 0:
+            try:
+                virtfns = self.__send_expect(
+                    "ls --color=never -d /sys/bus/pci/devices/%s\:%s\:%s/virtfn*"
+                    % (domain_id, bus_id, devfun_id),
+                    "# ",
+                )
+                for virtfn in virtfns.split():
+                    vf_uevent = self.__send_expect(
+                        "cat %s" % os.path.join(virtfn, "uevent"), "# "
+                    )
+                    vf_pci = re.search(
+                        r"PCI_SLOT_NAME=(%s+:[0-9a-f]+:[0-9a-f]+\.[0-9a-f]+)"
+                        % domain_id,
+                        vf_uevent,
+                    ).group(1)
+                    sriov_vfs_pci.append(vf_pci)
+            except Exception as e:
+                print(
+                    "Scan linux port [%s:%s:%s] sriov vf failed: %s"
+                    % (domain_id, bus_id, devfun_id, e)
+                )
+
+        return sriov_vfs_pci
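The VF discovery above extracts each VF's PCI address from its uevent file with a regex; a standalone sketch with a fabricated uevent payload (the DRIVER and PCI_SLOT_NAME values are illustrative):

```python
import re

# Fabricated uevent content as it might appear under /sys/.../virtfn*/uevent.
vf_uevent = "DRIVER=iavf\nPCI_CLASS=20000\nPCI_SLOT_NAME=0000:18:02.0\n"
domain_id = "0000"

# Same pattern as get_sriov_vfs_pci_linux_generic: capture the full
# domain:bus:devfn address following PCI_SLOT_NAME=.
vf_pci = re.search(
    r"PCI_SLOT_NAME=(%s+:[0-9a-f]+:[0-9a-f]+\.[0-9a-f]+)" % domain_id,
    vf_uevent,
).group(1)
print(vf_pci)  # 0000:18:02.0
```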
+
+    @nic_has_driver
+    def generate_sriov_vfs(self, vf_num):
+        """
+        Generate the given number of SR-IOV VFs.
+        """
+        if vf_num == 0:
+            self.bind_vf_driver()
+        generate_sriov_vfs = getattr(
+            self, "generate_sriov_vfs_%s" % self.__get_os_type()
+        )
+        generate_sriov_vfs(
+            self.domain_id, self.bus_id, self.devfun_id, vf_num, self.current_driver
+        )
+        if vf_num != 0:
+            self.sriov_vfs_pci = self.get_sriov_vfs_pci()
+
+            vf_pci = self.sriov_vfs_pci[0]
+            addr_array = vf_pci.split(":")
+            domain_id = addr_array[0]
+            bus_id = addr_array[1]
+            devfun_id = addr_array[2]
+
+            self.default_vf_driver = self.crb.get_pci_dev_driver(
+                domain_id, bus_id, devfun_id
+            )
+        else:
+            self.sriov_vfs_pci = []
+        time.sleep(1)
+
+    def generate_sriov_vfs_linux(self, domain_id, bus_id, devfun_id, vf_num, driver):
+        """
+        Generate the given number of SR-IOV VFs.
+        """
+        try:
+            generate_sriov_vfs_linux = getattr(
+                self, "generate_sriov_vfs_linux_%s" % driver
+            )
+        except AttributeError:
+            generic_driver = "generic"
+            generate_sriov_vfs_linux = getattr(
+                self, "generate_sriov_vfs_linux_%s" % generic_driver
+            )
+
+        return generate_sriov_vfs_linux(domain_id, bus_id, devfun_id, vf_num)
+
+    def generate_sriov_vfs_linux_generic(self, domain_id, bus_id, devfun_id, vf_num):
+        """
+        Generate SRIOV VFs by the default way on linux.
+        """
+        nic_driver = self.get_nic_driver()
+
+        if not nic_driver:
+            return None
+
+        vf_reg_file = "sriov_numvfs"
+        vf_reg_path = os.path.join(
+            "/sys/bus/pci/devices/%s:%s:%s" % (domain_id, bus_id, devfun_id),
+            vf_reg_file,
+        )
+        self.__send_expect("echo %d > %s" % (int(vf_num), vf_reg_path), "# ")
+
+    def generate_sriov_vfs_linux_igb_uio(self, domain_id, bus_id, devfun_id, vf_num):
+        """
+        Generate SRIOV VFs by the special way of igb_uio driver on linux.
+        """
+        nic_driver = self.get_nic_driver()
+
+        if not nic_driver:
+            return None
+
+        vf_reg_file = "max_vfs"
+        if self.default_driver == "i40e":
+            regx_reg_path = "find /sys -name %s | grep %s:%s:%s" % (
+                vf_reg_file,
+                domain_id,
+                bus_id,
+                devfun_id,
+            )
+            vf_reg_path = self.__send_expect(regx_reg_path, "# ")
+        else:
+            vf_reg_path = os.path.join(
+                "/sys/bus/pci/devices/%s:%s:%s" % (domain_id, bus_id, devfun_id),
+                vf_reg_file,
+            )
+        self.__send_expect("echo %d > %s" % (int(vf_num), vf_reg_path), "# ")
+
+    def destroy_sriov_vfs(self):
+        """
+        Destroy the SRIOV VFs.
+        """
+        self.generate_sriov_vfs(0)
+
+    def bind_vf_driver(self, pci="", driver=""):
+        """
+        Bind the specified driver to VF.
+        """
+        bind_vf_driver = getattr(self, "bind_driver_%s" % self.__get_os_type())
+        if not driver:
+            if not self.default_vf_driver:
+                print("Must specify a driver because default VF driver is NULL!")
+                return
+            driver = self.default_vf_driver
+
+        if not pci:
+            if not self.sriov_vfs_pci:
+                print("No VFs on the nic [%s]!" % self.pci)
+                return
+            vf_pcis = self.sriov_vfs_pci
+        else:
+            vf_pcis = [pci]
+
+        for vf_pci in vf_pcis:
+            domain_id, bus_id, devfun_id = vf_pci.split(":")
+            bind_vf_driver(domain_id, bus_id, devfun_id, driver)
+
+    def bind_driver(self, driver=""):
+        """
+        Bind specified driver to PF.
+        """
+        bind_driver = getattr(self, "bind_driver_%s" % self.__get_os_type())
+        if not driver:
+            if not self.default_driver:
+                print("Must specify a driver because default driver is NULL!")
+                return
+            driver = self.default_driver
+        ret = bind_driver(self.domain_id, self.bus_id, self.devfun_id, driver)
+        time.sleep(1)
+        return ret
+
+    def bind_driver_linux(self, domain_id, bus_id, devfun_id, driver):
+        """
+        Bind NIC port to specified driver on linux.
+        """
+        driver_alias = driver.replace("-", "_")
+        try:
+            bind_driver_linux = getattr(self, "bind_driver_linux_%s" % driver_alias)
+            return bind_driver_linux(domain_id, bus_id, devfun_id)
+        except AttributeError:
+            driver_alias = "generic"
+            bind_driver_linux = getattr(self, "bind_driver_linux_%s" % driver_alias)
+            return bind_driver_linux(domain_id, bus_id, devfun_id, driver)
+
+    def bind_driver_linux_generic(self, domain_id, bus_id, devfun_id, driver):
+        """
+        Bind NIC port to specified driver by the default way on linux.
+        """
+        new_id = self.pci_id.replace(":", " ")
+        nic_pci_num = ":".join([domain_id, bus_id, devfun_id])
+        self.__send_expect(
+            "echo %s > /sys/bus/pci/drivers/%s/new_id" % (new_id, driver), "# "
+        )
+        self.__send_expect(
+            "echo %s > /sys/bus/pci/devices/%s\:%s\:%s/driver/unbind"
+            % (nic_pci_num, domain_id, bus_id, devfun_id),
+            "# ",
+        )
+        self.__send_expect(
+            "echo %s > /sys/bus/pci/drivers/%s/bind" % (nic_pci_num, driver), "# "
+        )
+        if driver == self.default_driver:
+            itf = self.get_interface_name()
+            self.__send_expect("ifconfig %s up" % itf, "# ")
+            if self.get_interface2_name():
+                itf = self.get_interface2_name()
+                self.__send_expect("ifconfig %s up" % itf, "# ")
+
+    def bind_driver_linux_pci_stub(self, domain_id, bus_id, devfun_id):
+        """
+        Bind NIC port to the pci-stub driver on linux.
+        """
+        new_id = self.pci_id.replace(":", " ")
+        nic_pci_num = ":".join([domain_id, bus_id, devfun_id])
+        self.__send_expect(
+            "echo %s > /sys/bus/pci/drivers/pci-stub/new_id" % new_id, "# "
+        )
+        self.__send_expect(
+            "echo %s > /sys/bus/pci/devices/%s\:%s\:%s/driver/unbind"
+            % (nic_pci_num, domain_id, bus_id, devfun_id),
+            "# ",
+        )
+        self.__send_expect(
+            "echo %s > /sys/bus/pci/drivers/pci-stub/bind" % nic_pci_num, "# "
+        )
+
+    @nic_has_driver
+    def unbind_driver(self, driver=""):
+        """
+        Unbind driver.
+        """
+        unbind_driver = getattr(self, "unbind_driver_%s" % self.__get_os_type())
+        if not driver:
+            driver = "generic"
+        ret = unbind_driver(self.domain_id, self.bus_id, self.devfun_id, driver)
+        time.sleep(1)
+        return ret
+
+    def unbind_driver_linux(self, domain_id, bus_id, devfun_id, driver):
+        """
+        Unbind driver on linux.
+        """
+        driver_alias = driver.replace("-", "_")
+
+        unbind_driver_linux = getattr(self, "unbind_driver_linux_%s" % driver_alias)
+        return unbind_driver_linux(domain_id, bus_id, devfun_id)
+
+    def unbind_driver_linux_generic(self, domain_id, bus_id, devfun_id):
+        """
+        Unbind driver by the default way on linux.
+        """
+        nic_pci_num = ":".join([domain_id, bus_id, devfun_id])
+        cmd = "echo %s > /sys/bus/pci/devices/%s\:%s\:%s/driver/unbind"
+        self.__send_expect(cmd % (nic_pci_num, domain_id, bus_id, devfun_id), "# ")
+
+    def _cal_mtu(self, framesize):
+        return framesize - HEADER_SIZE["eth"]
+
+    def enable_jumbo(self, framesize=0):
+        if self.intf_name == "N/A":
+            print(RED("Enabling jumbo frames requires a kernel interface!"))
+            return
+        if framesize < MIN_MTU:
+            print(RED("Jumbo frame size must be at least %d!" % MIN_MTU))
+            return
+
+        mtu = self._cal_mtu(framesize)
+        cmd = "ifconfig %s mtu %d"
+        self.__send_expect(cmd % (self.intf_name, mtu), "# ")
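For reference, the MTU derivation used by enable_jumbo, assuming HEADER_SIZE["eth"] is 18 (14-byte Ethernet header plus 4-byte CRC, as defined in the DTS settings; treated as an assumption in this sketch):

```python
HEADER_SIZE = {"eth": 18}  # assumed value: Ethernet header (14) + CRC (4)

def cal_mtu(framesize):
    # The MTU covers the L2 payload only, so strip the Ethernet overhead.
    return framesize - HEADER_SIZE["eth"]

print(cal_mtu(9000))  # 8982
print(cal_mtu(1518))  # 1500, the standard Ethernet MTU
```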
+
+
+def get_pci_id(crb, domain_id, bus_id, devfun_id):
+    """
+    Return the PCI vendor:device ID of the device.
+    """
+    command = "cat /sys/bus/pci/devices/%s\:%s\:%s/vendor" % (
+        domain_id,
+        bus_id,
+        devfun_id,
+    )
+    out = crb.send_expect(command, "# ")
+    vendor = out[2:]
+    command = "cat /sys/bus/pci/devices/%s\:%s\:%s/device" % (
+        domain_id,
+        bus_id,
+        devfun_id,
+    )
+    out = crb.send_expect(command, "# ")
+    device = out[2:]
+    return "%s:%s" % (vendor, device)
+
+
+def add_to_list(host, obj):
+    """
+    Add a network device object to the global list.
+    Parameter 'host' is the host IP address, 'obj' is the NetDevice object.
+    """
+    nic = {}
+    nic["host"] = host
+    nic["pci"] = obj.pci
+    nic["port"] = obj
+    NICS_LIST.append(nic)
+
+
+def get_from_list(host, domain_id, bus_id, devfun_id):
+    """
+    Get a network device object from the global list.
+    Parameters are the host IP, PCI domain ID, PCI bus ID and PCI
+    device/function ID.
+    """
+    for nic in NICS_LIST:
+        if host == nic["host"]:
+            pci = ":".join((domain_id, bus_id, devfun_id))
+            if pci == nic["pci"] and nic["port"].crb.session:
+                return nic["port"]
+    return None
+
+
+def remove_from_list(host):
+    """
+    Remove the network device objects of the given host IP from the
+    global list.
+    """
+    for nic in NICS_LIST[:]:
+        if host == nic["host"]:
+            NICS_LIST.remove(nic)
+
+
+def GetNicObj(crb, domain_id, bus_id, devfun_id):
+    """
+    Get a network device object. If the device has already been
+    initialized, return the cached object.
+    """
+    # find an existing NetDevice object
+    obj = get_from_list(crb.crb["My IP"], domain_id, bus_id, devfun_id)
+    if obj:
+        return obj
+
+    # generate NetDevice object
+    obj = NetDevice(crb, domain_id, bus_id, devfun_id)
+
+    # cache the NetDevice object so the next lookup returns it directly
+    add_to_list(crb.crb["My IP"], obj)
+    return obj
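GetNicObj implements a simple list-backed cache keyed on (host, pci); a minimal self-contained sketch of that lookup-or-create pattern (the FakePort type and addresses are stand-ins, not DTS code):

```python
NICS_LIST = []


class FakePort:
    """Stand-in for NetDevice; only carries the identifying field."""

    def __init__(self, pci):
        self.pci = pci


def get_or_create(host, pci):
    # Return the cached port for (host, pci) if one exists...
    for nic in NICS_LIST:
        if nic["host"] == host and nic["pci"] == pci:
            return nic["port"]
    # ...otherwise create it and add it to the cache.
    port = FakePort(pci)
    NICS_LIST.append({"host": host, "pci": pci, "port": port})
    return port


a = get_or_create("10.0.0.1", "0000:18:00.0")
b = get_or_create("10.0.0.1", "0000:18:00.0")  # cache hit
print(a is b)  # True
```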
+
+
+def RemoveNicObj(crb):
+    """
+    Remove network device object.
+    """
+    remove_from_list(crb.crb["My IP"])
-- 
2.20.1




Thread overview: 19+ messages
-- links below jump to the message on this page --
2022-04-06 15:04 [RFC PATCH v1 00/18] merge DTS component files to DPDK Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 01/18] dts: merge DTS framework/crb.py " Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 02/18] dts: merge DTS framework/dut.py " Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 03/18] dts: merge DTS framework/ixia_buffer_parser.py " Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 04/18] dts: merge DTS framework/pktgen.py " Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 05/18] dts: merge DTS framework/pktgen_base.py " Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 06/18] dts: merge DTS framework/pktgen_ixia.py " Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 07/18] dts: merge DTS framework/pktgen_ixia_network.py " Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 08/18] dts: merge DTS framework/pktgen_trex.py " Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 09/18] dts: merge DTS framework/ssh_connection.py " Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 10/18] dts: merge DTS framework/ssh_pexpect.py " Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 11/18] dts: merge DTS framework/tester.py " Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 12/18] dts: merge DTS framework/ixia_network/__init__.py " Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 13/18] dts: merge DTS framework/ixia_network/ixnet.py " Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 14/18] dts: merge DTS framework/ixia_network/ixnet_config.py " Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 15/18] dts: merge DTS framework/ixia_network/ixnet_stream.py " Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 16/18] dts: merge DTS framework/ixia_network/packet_parser.py " Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 17/18] dts: merge DTS nics/__init__.py " Juraj Linkeš
2022-04-06 15:04 ` [RFC PATCH v1 18/18] dts: merge DTS nics/net_device.py " Juraj Linkeš
