test suite reviews and discussions
* [dts] [PATCH V5 1/2] tests/multiprocess_iavf: add vf multiprocess test case
@ 2022-05-25  8:52 Jiale Song
  2022-05-25  8:52 ` [dts] [PATCH V5 2/2] test_plans/multiprocess_iavf: " Jiale Song
  0 siblings, 1 reply; 4+ messages in thread
From: Jiale Song @ 2022-05-25  8:52 UTC (permalink / raw)
  To: dts; +Cc: Jiale Song

Signed-off-by: Jiale Song <songx.jiale@intel.com>
---
 tests/TestSuite_multiprocess_iavf.py | 1946 ++++++++++++++++++++++++++
 1 file changed, 1946 insertions(+)
 create mode 100644 tests/TestSuite_multiprocess_iavf.py

diff --git a/tests/TestSuite_multiprocess_iavf.py b/tests/TestSuite_multiprocess_iavf.py
new file mode 100644
index 00000000..2e301367
--- /dev/null
+++ b/tests/TestSuite_multiprocess_iavf.py
@@ -0,0 +1,1946 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""
+DPDK Test suite.
+Multi-process test for IAVF.
+"""
+
+import copy
+import os
+import random
+import re
+import time
+import traceback
+from collections import OrderedDict
+
+import framework.utils as utils
+from framework.exception import VerifyFailure
+from framework.packet import Packet
+from framework.pktgen import PacketGeneratorHelper
+from framework.pmd_output import PmdOutput
+from framework.test_case import TestCase, check_supported_nic
+from framework.utils import GREEN, RED
+
+from .rte_flow_common import FdirProcessing as fdirprocess
+from .rte_flow_common import RssProcessing as rssprocess
+
+executions = []
+
+
+class TestMultiprocess(TestCase):
+
+    support_nic = ["ICE_100G-E810C_QSFP", "ICE_25G-E810C_SFP", "ICE_25G-E810_XXV_SFP"]
+
+    def set_up_all(self):
+        """
+        Run at the start of each test suite.
+
+        Multiprocess prerequisites.
+        Requirements:
+            OS is not FreeBSD
+            DUT core number >= 4
+            multi_process build pass
+        """
+        # self.verify('bsdapp' not in self.target, "Multiprocess not support freebsd")
+
+        self.verify(len(self.dut.get_all_cores()) >= 4, "Not enough Cores")
+        self.dut_ports = self.dut.get_ports()
+        self.pkt = Packet()
+        self.socket = self.dut.get_numa_id(self.dut_ports[0])
+        extra_option = "-Dexamples='multi_process/client_server_mp/mp_server,multi_process/client_server_mp/mp_client,multi_process/simple_mp,multi_process/symmetric_mp'"
+        self.dut.build_install_dpdk(target=self.target, extra_options=extra_option)
+        self.app_mp_client = self.dut.apps_name["mp_client"]
+        self.app_mp_server = self.dut.apps_name["mp_server"]
+        self.app_simple_mp = self.dut.apps_name["simple_mp"]
+        self.app_symmetric_mp = self.dut.apps_name["symmetric_mp"]
+
+        executions.append({"nprocs": 1, "cores": "1S/1C/1T", "pps": 0})
+        executions.append({"nprocs": 2, "cores": "1S/1C/2T", "pps": 0})
+        executions.append({"nprocs": 2, "cores": "1S/2C/1T", "pps": 0})
+        executions.append({"nprocs": 4, "cores": "1S/2C/2T", "pps": 0})
+        executions.append({"nprocs": 4, "cores": "1S/4C/1T", "pps": 0})
+        executions.append({"nprocs": 8, "cores": "1S/4C/2T", "pps": 0})
+
+        self.eal_param = ""
+        for i in self.dut_ports:
+            self.eal_param += " -a %s" % self.dut.ports_info[i]["pci"]
+
+        self.eal_para = self.dut.create_eal_parameters(cores="1S/2C/1T")
+        # start new session to run secondary
+        self.session_secondary = self.dut.new_session()
+
+        # get dts output path
+        if self.logger.log_path.startswith(os.sep):
+            self.output_path = self.logger.log_path
+        else:
+            cur_path = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
+            self.output_path = os.sep.join([cur_path, self.logger.log_path])
+        # create an instance to set stream field setting
+        self.pktgen_helper = PacketGeneratorHelper()
+        self.dport_info0 = self.dut.ports_info[self.dut_ports[0]]
+        self.pci0 = self.dport_info0["pci"]
+        self.tester_ifaces = [
+            self.tester.get_interface(self.dut.ports_map[port])
+            for port in self.dut_ports
+        ]
+        self.rxq = 1
+        self.session_list = []
+        self.logfmt = "*" * 20
+
+    def set_up(self):
+        """
+        Run before each test case.
+        """
+        pass
+
+    def launch_multi_testpmd(self, proc_type, queue_num, process_num, **kwargs):
+        self.session_list = [
+            self.dut.new_session("process_{}".format(i)) for i in range(process_num)
+        ]
+        self.pmd_output_list = [
+            PmdOutput(self.dut, self.session_list[i]) for i in range(process_num)
+        ]
+        self.dut.init_reserved_core()
+        proc_type_list = []
+        self.out_list = []
+        if isinstance(proc_type, list):
+            proc_type_list = copy.deepcopy(proc_type)
+            proc_type = proc_type_list[0]
+        for i in range(process_num):
+            cores = self.dut.get_reserved_core("2C", socket=0)
+            if i != 0 and proc_type_list:
+                proc_type = proc_type_list[1]
+            eal_param = "--proc-type={} -a {} --log-level=ice,7".format(
+                proc_type, self.pci0
+            )
+            param = "--rxq={0} --txq={0} --num-procs={1} --proc-id={2}".format(
+                queue_num, process_num, i
+            )
+            if kwargs.get("options") is not None:
+                param = "".join([param, kwargs.get("options")])
+            out = self.pmd_output_list[i].start_testpmd(
+                cores=cores,
+                eal_param=eal_param,
+                param=param,
+                timeout=kwargs.get("timeout", 20),
+            )
+            self.out_list.append(out)
+            self.pmd_output_list[i].execute_cmd("set fwd rxonly")
+            self.pmd_output_list[i].execute_cmd("set verbose 1")
+            self.pmd_output_list[i].execute_cmd("start")
+            self.pmd_output_list[i].execute_cmd("clear port stats all")
+
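+    # Illustrative sketch for launch_multi_testpmd() above (comment only, not
+    # executed): with proc_type="auto", queue_num=8, process_num=2 and proc-id 0,
+    # the generated testpmd invocation resembles (core list and PCI address are
+    # hypothetical placeholders):
+    #
+    #   dpdk-testpmd -l <cores> --proc-type=auto -a 0000:18:00.1 \
+    #       --log-level=ice,7 -- --rxq=8 --txq=8 --num-procs=2 --proc-id=0
+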
+    def get_pkt_statistic_process(self, out, **kwargs):
+        """
+        :param out: information received by the testpmd process after sending packets, plus port statistics
+        :return: forward statistic dict, eg: {'rx-packets': 1, 'tx-packets': 0, 'tx-dropped': 1}
+        """
+        p = re.compile(
+            r"Forward\s+Stats\s+for\s+RX\s+Port=\s+{}/Queue=([\s\d+]\d+)\s+.*\n.*RX-packets:\s+(\d+)\s+TX-packets:\s+(\d+)\s+TX-dropped:\s+(\d+)\s".format(
+                kwargs.get("port_id")
+            )
+        )
+        item_name = ["rx-packets", "tx-packets", "tx-dropped"]
+        statistic = p.findall(out)
+        if statistic:
+            rx_pkt_total, tx_pkt_total, tx_drop_total = 0, 0, 0
+            queue_set = set()
+            for item in statistic:
+                queue, rx_pkt, tx_pkt, tx_drop = map(int, item)
+                queue_set.add(queue)
+                rx_pkt_total += rx_pkt
+                tx_pkt_total += tx_pkt
+                tx_drop_total += tx_drop
+            static_dict = {
+                k: v
+                for k, v in zip(item_name, [rx_pkt_total, tx_pkt_total, tx_drop_total])
+            }
+            static_dict["queue"] = queue_set
+            return static_dict
+        else:
+            raise Exception("got wrong output, does not match pattern {}".format(p.pattern))
+
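+    # Illustrative sketch for get_pkt_statistic_process() above (hypothetical
+    # testpmd fragment for demonstration only):
+    #
+    #   Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 0/Queue= 1
+    #   RX-packets: 2              TX-packets: 0             TX-dropped: 0
+    #
+    # get_pkt_statistic_process(out, port_id=0) would then return
+    #   {'rx-packets': 2, 'tx-packets': 0, 'tx-dropped': 0, 'queue': {1}}
+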
+    def random_packet(self, pkt_num):
+        pkt = Packet()
+        pkt.generate_random_pkts(
+            pktnum=pkt_num,
+            dstmac="00:11:22:33:44:55",
+        )
+        pkt.send_pkt(crb=self.tester, tx_port=self.tester_ifaces[0], count=1)
+
+    def specify_packet(self, que_num):
+        # create rule to set queue as one of each process queues
+        rule_str = "flow create 0 ingress pattern eth / ipv4 src is 192.168.{0}.3  / end actions queue index {0} / end"
+        rules = [rule_str.format(i) for i in range(que_num)]
+        fdirprocess(
+            self, self.pmd_output_list[0], self.tester_ifaces, rxq=que_num
+        ).create_rule(rules)
+        # send 1 packet to each queue; each process should receive (queue_num / proc_num) packets
+        pkt = Packet()
+        pkt_num = que_num
+        self.logger.info("packet num:{}".format(pkt_num))
+        packets = [
+            'Ether(dst="00:11:22:33:44:55") / IP(src="192.168.{0}.3", dst="192.168.0.21") / Raw("x" * 80)'.format(
+                i
+            )
+            for i in range(pkt_num)
+        ]
+        pkt.update_pkt(packets)
+        pkt.send_pkt(crb=self.tester, tx_port=self.tester_ifaces[0], count=1)
+
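+    # Illustrative sketch for specify_packet() above: with que_num=4, the rules
+    # created expand to
+    #   flow create 0 ingress pattern eth / ipv4 src is 192.168.0.3  / end actions queue index 0 / end
+    #   flow create 0 ingress pattern eth / ipv4 src is 192.168.1.3  / end actions queue index 1 / end
+    #   ... up to queue index 3, with one matching packet sent per queue.
+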
+    def _multiprocess_data_pass(self, case):
+        que_num, proc_num = case.get("queue_num"), case.get("proc_num")
+        pkt_num = case.setdefault("pkt_num", que_num)
+        step = int(que_num / proc_num)
+        proc_queue = [set(range(i, i + step)) for i in range(0, que_num, step)]
+        queue_dict = {
+            k: v
+            for k, v in zip(
+                ["process_{}".format(i) for i in range(proc_num)], proc_queue
+            )
+        }
+        # start testpmd multi-process
+        self.launch_multi_testpmd(
+            proc_type=case.get("proc_type"), queue_num=que_num, process_num=proc_num
+        )
+        # send random or specify packets
+        packet_func = getattr(self, case.get("packet_type") + "_packet")
+        packet_func(pkt_num)
+        # get output for each process
+        process_static = {}
+        for i in range(len(self.pmd_output_list)):
+            out = self.pmd_output_list[i].execute_cmd("stop")
+            static = self.get_pkt_statistic_process(out, port_id=0)
+            process_static["process_{}".format(i)] = static
+        self.logger.info("process output statistics: {}".format(process_static))
+        # check that each process receives packets, and that each process only uses its own queues
+        for k, v in process_static.items():
+            self.verify(
+                v.get("rx-packets") > 0,
+                "fail:process:{} does not receive packet".format(k),
+            )
+            self.verify(
+                v.get("queue").issubset(queue_dict.get(k)),
+                "fail: {} is not a subset of {}, "
+                "process should use its own queues".format(
+                    v.get("queue"), queue_dict.get(k)
+                ),
+            )
+        self.logger.info("pass: each process receives packets and uses its own queues")
+        # check whether the sum of packets received by all processes is equal to the number of packets sent
+        received_pkts = sum(
+            int(v.get("rx-packets", 0)) for v in process_static.values()
+        )
+        self.verify(
+            received_pkts == pkt_num,
+            "the number of packets received is not equal to packets sent,"
+            "send packet:{}, received packet:{}".format(pkt_num, received_pkts),
+        )
+        self.logger.info(
+            "pass: the number of packets received is {}, equal to packets sent".format(
+                received_pkts
+            )
+        )
+
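+    # Illustrative sketch for _multiprocess_data_pass() above: with queue_num=8
+    # and proc_num=2, step is 4 and the expected queue ownership evaluates to
+    #   {'process_0': {0, 1, 2, 3}, 'process_1': {4, 5, 6, 7}}
+    # so each process must report rx-packets > 0 and only queues from its own set.
+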
+    def check_rss(self, out, **kwargs):
+        """
+        check whether the packets were directed by rss according to the specified parameters
+        :param out: information received by testpmd after sending packets and port statistics
+        :param kwargs: some specified parameters, such as: rxq, stats
+        :return: queue value list
+        usage:
+            check_rss(out, rxq=rxq, stats=stats)
+        """
+        self.logger.info("{0} check rss {0}".format(self.logfmt))
+        rxq = kwargs.get("rxq")
+        p = re.compile(r"RSS\shash=(\w+)\s-\sRSS\squeue=(\w+)")
+        pkt_info = p.findall(out)
+        self.verify(
+            pkt_info,
+            "no information matching the pattern was found, pattern: {}".format(
+                p.pattern
+            ),
+        )
+        pkt_queue = set([int(i[1], 16) for i in pkt_info])
+        if kwargs.get("stats"):
+            self.verify(
+                all([int(i[0], 16) % rxq == int(i[1], 16) for i in pkt_info]),
+                "some pkt not directed by rss.",
+            )
+            self.logger.info((GREEN("pass: all pkts directed by rss")))
+        else:
+            self.verify(
+                not any([int(i[0], 16) % rxq == int(i[1], 16) for i in pkt_info]),
+                "some pkt directed by rss, expect not directed by rss",
+            )
+            self.logger.info((GREEN("pass: no pkt directed by rss")))
+        return pkt_queue
+
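+    # Illustrative sketch for check_rss() above: a hypothetical verbose line such as
+    #   RSS hash=0x5263c3f2 - RSS queue=0x2
+    # is parsed as (hash, queue) = (0x5263c3f2, 0x2); with rxq=16 the packet counts
+    # as directed by rss because 0x5263c3f2 % 16 == 2.
+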
+    def check_mark_id(self, out, check_param, **kwargs):
+        """
+        verify that the mark ID matches the expected value
+        :param out: information received by testpmd after sending packets
+        :param check_param: check item name and value, eg
+                            "check_param": {"port_id": 0, "mark_id": 1}
+        :param kwargs: some specified parameters, eg: stats
+        :return: None
+        usage:
+            check_mark_id(out, check_param, stats=stats)
+        """
+        self.logger.info("{0} check mark id {0}".format(self.logfmt))
+        fdir_scanner = re.compile(r"FDIR matched ID=(0x\w+)")
+        all_mark = fdir_scanner.findall(out)
+        stats = kwargs.get("stats")
+        if stats:
+            mark_list = set(int(i, 16) for i in all_mark)
+            self.verify(
+                all([i == check_param["mark_id"] for i in mark_list]) and mark_list,
+                "failed: some packet mark id of {} not match expect {}".format(
+                    mark_list, check_param["mark_id"]
+                ),
+            )
+            self.logger.info((GREEN("pass: all packets mark id are matched ")))
+        else:
+            # for mismatched packets, verify there is no mark id in the received packet output
+            self.verify(
+                not all_mark, "mark id {} in output, expect no mark id".format(all_mark)
+            )
+            self.logger.info((GREEN("pass: no mark id in output")))
+
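+    # Illustrative sketch for check_mark_id() above: a hypothetical verbose line
+    #   FDIR matched ID=0x4
+    # yields mark_list == {4}, which passes when check_param["mark_id"] is 4; for
+    # mismatched packets no "FDIR matched ID" line should appear at all.
+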
+    def check_drop(self, out, **kwargs):
+        """
+        check the drop number of packets according to the specified parameters
+        :param out: information received by testpmd after sending packets and port statistics
+        :param kwargs: some specified parameters, such as: pkt_num, port_id, stats
+        :return: None
+        usage:
+            check_drop(out, pkt_num=pkt_num, port_id=port_id, stats=stats)
+        """
+        self.logger.info("{0} check drop {0}".format(self.logfmt))
+        pkt_num = kwargs.get("pkt_num")
+        stats = kwargs.get("stats")
+        res = self.get_pkt_statistic(out, **kwargs)
+        self.verify(
+            pkt_num == res["rx-total"],
+            "failed: get wrong amount of packet {}, expected {}".format(
+                res["rx-total"], pkt_num
+            ),
+        )
+        drop_packet_num = res["rx-dropped"]
+        if stats:
+            self.verify(
+                drop_packet_num == pkt_num,
+                "failed: {} packets dropped, expect {} dropped".format(
+                    drop_packet_num, pkt_num
+                ),
+            )
+            self.logger.info(
+                GREEN("pass: drop packet number {} is matched".format(drop_packet_num))
+            )
+        else:
+            self.verify(
+                drop_packet_num == 0 and res["rx-packets"] == pkt_num,
+                "failed: {} packets dropped, expect 0 packets dropped".format(
+                    drop_packet_num
+                ),
+            )
+            self.logger.info(
+                GREEN("pass: drop packet number {} is matched".format(drop_packet_num))
+            )
+
+    @staticmethod
+    def get_pkt_statistic(out, **kwargs):
+        """
+        :param out: information received by testpmd after sending packets and port statistics
+        :return: rx statistic dict, eg: {'rx-packets':1, 'rx-dropped':0, 'rx-total':1}
+        """
+        p = re.compile(
+            r"Forward\sstatistics\s+for\s+port\s+{}\s+.*\n.*RX-packets:\s(\d+)\s+RX-dropped:\s(\d+)\s+RX-total:\s(\d+)\s".format(
+                kwargs.get("port_id")
+            )
+        )
+        item_name = ["rx-packets", "rx-dropped", "rx-total"]
+        statistic = p.findall(out)
+        if statistic:
+            static_dict = {
+                k: v for k, v in zip(item_name, list(map(int, list(statistic[0]))))
+            }
+            return static_dict
+        else:
+            raise Exception(
+                "got wrong output, does not match pattern {}".format(
+                    p.pattern
+                ).replace("\\\\", "\\")
+            )
+
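+    # Illustrative sketch for get_pkt_statistic() above (hypothetical "stop"
+    # summary for demonstration only):
+    #
+    #   ---------------------- Forward statistics for port 0  ----------------------
+    #   RX-packets: 30             RX-dropped: 0             RX-total: 30
+    #
+    # get_pkt_statistic(out, port_id=0) would then return
+    #   {'rx-packets': 30, 'rx-dropped': 0, 'rx-total': 30}
+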
+    def send_pkt_get_output(
+        self, instance_obj, pkts, port_id=0, count=1, interval=0, get_stats=False
+    ):
+        instance_obj.pmd_output.execute_cmd("clear port stats all")
+        tx_port = self.tester_ifaces[port_id]
+        self.logger.info("----------send packet-------------")
+        self.logger.info("{}".format(pkts))
+        if not isinstance(pkts, list):
+            pkts = [pkts]
+        self.pkt.update_pkt(pkts)
+        self.pkt.send_pkt(
+            crb=self.tester,
+            tx_port=tx_port,
+            count=count,
+            interval=interval,
+        )
+        out1 = instance_obj.pmd_output.get_output(timeout=1)
+        if get_stats:
+            out2 = instance_obj.pmd_output.execute_cmd("show port stats all")
+            instance_obj.pmd_output.execute_cmd("stop")
+        else:
+            out2 = instance_obj.pmd_output.execute_cmd("stop")
+        instance_obj.pmd_output.execute_cmd("start")
+        return "".join([out1, out2])
+
+    def check_pkt_num(self, out, **kwargs):
+        """
+        check that the number of received packets matches the expected value
+        :param out: information received by testpmd after sending packets and port statistics
+        :param kwargs: some specified parameters, such as: pkt_num, port_id
+        :return: rx statistic dict
+        """
+        self.logger.info(
+            "{0} check pkt num for port:{1} {0}".format(
+                self.logfmt, kwargs.get("port_id")
+            )
+        )
+        pkt_num = kwargs.get("pkt_num")
+        res = self.get_pkt_statistic(out, **kwargs)
+        res_num = res["rx-total"]
+        self.verify(
+            res_num == pkt_num,
+            "fail: got wrong number of packets, expect packet number {}, got {}".format(
+                pkt_num, res_num
+            ),
+        )
+        self.logger.info(
+            (GREEN("pass: pkt num is {} same as expected".format(pkt_num)))
+        )
+        return res
+
+    def check_queue(self, out, check_param, **kwargs):
+        """
+        verify that queue value matches the expected value
+        :param out: information received by testpmd after sending packets and port statistics
+        :param check_param: check item name and value, eg
+                            "check_param": {"port_id": 0, "queue": 2}
+        :param kwargs: some specified parameters, such as: pkt_num, port_id, stats
+        :return:
+        """
+        self.logger.info("{0} check queue {0}".format(self.logfmt))
+        queue = check_param["queue"]
+        if isinstance(check_param["queue"], int):
+            queue = [queue]
+        patt = re.compile(
+            r"port\s+{}/queue(.+?):\s+received\s+(\d+)\s+packets".format(
+                kwargs.get("port_id")
+            )
+        )
+        res = patt.findall(out)
+        if res:
+            pkt_queue = set([int(i[0]) for i in res])
+            if kwargs.get("stats"):
+                self.verify(
+                    all(q in queue for q in pkt_queue),
+                    "fail: queue id not matched, expect queue {}, got {}".format(
+                        queue, pkt_queue
+                    ),
+                )
+                self.logger.info((GREEN("pass: queue id {} matched".format(pkt_queue))))
+            else:
+                try:
+                    self.verify(
+                        not any(q in queue for q in pkt_queue),
+                        "fail: queue id should not match, {} should not be in {}".format(
+                            pkt_queue, queue
+                        ),
+                    )
+                    self.logger.info(
+                        (GREEN("pass: queue id {} not matched".format(pkt_queue)))
+                    )
+                except VerifyFailure:
+                    self.logger.info(
+                        "queue id {} contains the queue {} specified in the rule, so check"
+                        " whether the packets were directed by rss".format(
+                            pkt_queue, queue
+                        )
+                    )
+                    # for mismatched packets the 'stats' parameter is False; set it to True for the rss check
+                    kwargs["stats"] = True
+                    self.check_rss(out, **kwargs)
+
+        else:
+            raise Exception("got wrong output, not match pattern")
+
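+    # Illustrative sketch for check_queue() above: a hypothetical verbose line
+    #   port 0/queue 2: received 1 packets
+    # gives pkt_queue == {2}; with check_param {"queue": 2} and stats=True the check
+    # passes, while with stats=False it falls back to the rss check.
+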
+    def check_with_param(self, out, pkt_num, check_param, stats=True):
+        """
+        according to each key and value of the check parameters,
+        perform the corresponding verification on the output information
+        :param out: information received by testpmd after sending packets and port statistics
+        :param pkt_num: number of packets sent
+        :param check_param: check item name and value, eg:
+                            "check_param": {"port_id": 0, "mark_id": 1, "queue": 1}
+                            "check_param": {"port_id": 0, "drop": 1}
+        :param stats: effective status of rule, True or False, default is True
+        :return:
+        usage:
+            check_with_param(out, pkt_num, check_param, stats)
+            check_with_param(out, pkt_num, check_param=check_param)
+        """
+        rxq = check_param.get("rxq")
+        port_id = (
+            check_param["port_id"] if check_param.get("port_id") is not None else 0
+        )
+        match_flag = True
+        """
+        check_dict lists the supported check items: the key is the item name and the value is the check priority.
+        The smaller the value, the higher the priority; the default priority is 999. To add a new check item,
+        add it to the dict and implement the corresponding method named 'check_<itemname>', eg: check_queue
+        """
+        self.matched_queue = []
+        default_pri = 999
+        check_dict = {
+            "queue": default_pri,
+            "drop": default_pri,
+            "mark_id": 1,
+            "rss": default_pri,
+        }
+        params = {"port_id": port_id, "rxq": rxq, "pkt_num": pkt_num, "stats": stats}
+        # sort check_param order by priority, from high to low, set priority as 999 if key not in check_dict
+        check_param = OrderedDict(
+            sorted(
+                check_param.items(),
+                key=lambda item: check_dict.get(item[0], default_pri),
+            )
+        )
+        if not check_param.get("drop"):
+            self.check_pkt_num(out, **params)
+        for k in check_param:
+            parameter = copy.deepcopy(params)
+            if k not in check_dict:
+                continue
+            func_name = "check_{}".format(k)
+            try:
+                func = getattr(self, func_name)
+            except AttributeError:
+                emsg = "{}, this function is not implemented, please check!".format(
+                    traceback.format_exc()
+                )
+                raise Exception(emsg)
+            else:
+                # for mismatched packets, if the check item is 'rss', also verify that the packets are distributed by rss
+                if k == "rss" and not stats:
+                    parameter["stats"] = True
+                    match_flag = False
+                res = func(out=out, check_param=check_param, **parameter)
+                if k == "rss" and match_flag:
+                    self.matched_queue.append(res)
+
+    def destroy_rule(self, instance_obj, port_id=0, rule_id=None):
+        rule_id = 0 if rule_id is None else rule_id
+        if not isinstance(rule_id, list):
+            rule_id = [rule_id]
+        for i in rule_id:
+            out = instance_obj.pmd_output.execute_cmd(
+                "flow destroy {} rule {}".format(port_id, i)
+            )
+            p = re.compile(r"Flow rule #(\d+) destroyed")
+            m = p.search(out)
+            self.verify(m, "flow rule {} delete failed".format(rule_id))
+
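+    # Illustrative note for destroy_rule() above: "flow destroy 0 rule 0" is
+    # expected to print a line like "Flow rule #0 destroyed", which the pattern
+    # captures; otherwise the verify() call fails the case.
+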
+    def multiprocess_flow_data(self, case, **pmd_param):
+        que_num, proc_num = pmd_param.get("queue_num"), pmd_param.get("proc_num")
+        # start testpmd multi-process
+        self.launch_multi_testpmd(
+            proc_type=pmd_param.get("proc_type"),
+            queue_num=que_num,
+            process_num=proc_num,
+        )
+        self.pmd_output_list[0].execute_cmd("flow flush 0")
+        check_param = case["check_param"]
+        check_param["rxq"] = pmd_param.get("queue_num")
+        if check_param.get("rss"):
+            for pmd in self.pmd_output_list:
+                pmd.execute_cmd("port config all rss all")
+        fdir_pro = fdirprocess(
+            self,
+            self.pmd_output_list[0],
+            self.tester_ifaces,
+            rxq=pmd_param.get("queue_num"),
+        )
+        fdir_pro.create_rule(case.get("rule"))
+        # send match and mismatch packet
+        packets = [case.get("packet")["match"], case.get("packet")["mismatch"]]
+        for i in range(2):
+            out1 = self.send_pkt_get_output(fdir_pro, packets[i])
+            patt = re.compile(
+                r"port\s+{}/queue(.+?):\s+received\s+(\d+)\s+packets".format(
+                    check_param.get("port_id")
+                )
+            )
+            if patt.findall(out1) and check_param.get("rss"):
+                self.logger.info(
+                    "check whether the packets received by the primary process are distributed by RSS"
+                )
+                self.check_rss(out1, stats=True, **check_param)
+            for proc_pmd in self.pmd_output_list[1:]:
+                out2 = proc_pmd.get_output(timeout=1)
+                out3 = proc_pmd.execute_cmd("stop")
+                out1 = "".join([out1, out2, out3])
+                proc_pmd.execute_cmd("start")
+                if patt.findall(out2) and check_param.get("rss"):
+                    self.logger.info(
+                        "check whether the packets received by the secondary process are distributed by RSS"
+                    )
+                    self.check_rss(out2, stats=True, **check_param)
+            pkt_num = len(packets[i])
+            self.check_with_param(
+                out1,
+                pkt_num=pkt_num,
+                check_param=check_param,
+                stats=(i == 0),
+            )
+
+    def _handle_test(self, tests, instance_obj, port_id=0):
+        for test in tests:
+            if "send_packet" in test:
+                out = self.send_pkt_get_output(
+                    instance_obj, test["send_packet"], port_id
+                )
+                for proc_pmd in self.pmd_output_list[1:]:
+                    out1 = proc_pmd.get_output(timeout=1)
+                    out = "".join([out, out1])
+            if "action" in test:
+                instance_obj.handle_actions(out, test["action"])
+
+    def multiprocess_rss_data(self, case, **pmd_param):
+        que_num, proc_num = pmd_param.get("queue_num"), pmd_param.get("proc_num")
+        # start testpmd multi-process
+        self.launch_multi_testpmd(
+            proc_type=pmd_param.get("proc_type"),
+            queue_num=que_num,
+            process_num=proc_num,
+            options=pmd_param.get("options", None),
+        )
+        self.pmd_output_list[0].execute_cmd("flow flush 0")
+        rss_pro = rssprocess(
+            self,
+            self.pmd_output_list[0],
+            self.tester_ifaces,
+            rxq=pmd_param.get("queue_num"),
+        )
+        rss_pro.error_msgs = []
+        # handle tests
+        tests = case["test"]
+        port_id = case["port_id"]
+        self.logger.info("------------handle test--------------")
+        # validate rule
+        rule = case.get("rule", None)
+        if rule:
+            rss_pro.validate_rule(rule=rule)
+            rule_ids = rss_pro.create_rule(rule=rule)
+            rss_pro.check_rule(rule_list=rule_ids)
+        self._handle_test(tests, rss_pro, port_id)
+        # handle post-test
+        if "post-test" in case:
+            self.logger.info("------------handle post-test--------------")
+            self.destroy_rule(rss_pro, port_id=port_id, rule_id=rule_ids)
+            rss_pro.check_rule(port_id=port_id, stats=False)
+            self._handle_test(case["post-test"], rss_pro, port_id)
+
+        if rss_pro.error_msgs:
+            self.verify(
+                False,
+                " ".join([errs.replace("'", " ") for errs in rss_pro.error_msgs[:500]]),
+            )
+
+    def rte_flow(self, case_list, func_name, **kwargs):
+        """
+        main flow of case:
+            1. iterate the case list and do the below steps:
+                a. get the subcase name and init dict to save result
+                b. call method by func name to execute case step
+                c. record case result and err msg if case failed
+                d. clear flow rule
+            2. calculate the case passing rate according to the result dict
+            3. record case result and pass rate in the case log file
+            4. verify whether the case pass rate is equal to 100, if not, mark the case as failed and raise the err msg
+        :param case_list: case list, each item is a subcase of case
+        :param func_name: handle-case method name, eg:
+                        'flow_rule_operate': a method of the 'FlowRuleProcessing' class,
+                        used to handle flow-rule related suites, such as fdir and switch_filter
+                        'handle_rss_distribute_cases': a method of 'RssProcessing' class,
+                        used to handle rss related suites
+        :return:
+        usage:
+        for flow rule related:
+            rte_flow(caselist, flow_rule_operate)
+        for rss related:
+            rte_flow(caselist, handle_rss_distribute_cases)
+        """
+        if not isinstance(case_list, list):
+            case_list = [case_list]
+        test_results = dict()
+        for case in case_list:
+            case_name = case.get("sub_casename")
+            test_results[case_name] = {}
+            try:
+                self.logger.info("{0} case_name:{1} {0}".format("*" * 20, case_name))
+                func_name(case, **kwargs)
+            except Exception:
+                test_results[case_name]["result"] = "failed"
+                test_results[case_name]["err"] = re.sub(
+                    r"['\r\n]", "", str(traceback.format_exc(limit=1))
+                ).replace("\\\\", "\\")
+                self.logger.info(
+                    (
+                        RED(
+                            "case failed:{}, err:{}".format(
+                                case_name, traceback.format_exc()
+                            )
+                        )
+                    )
+                )
+            else:
+                test_results[case_name]["result"] = "passed"
+                self.logger.info((GREEN("case passed: {}".format(case_name))))
+            finally:
+                self.pmd_output_list[0].execute_cmd("flow flush 0")
+                for sess in self.session_list:
+                    self.dut.close_session(sess)
+        pass_rate = (
+            round(
+                sum(1 for k in test_results if "passed" in test_results[k]["result"])
+                / len(test_results),
+                4,
+            )
+            * 100
+        )
+        self.logger.info(
+            [
+                "{}:{}".format(sub_name, test_results[sub_name]["result"])
+                for sub_name in test_results
+            ]
+        )
+        self.logger.info("pass rate is: {}".format(pass_rate))
+        msg = [
+            "subcase_name:{}:{},err:{}".format(
+                name, test_results[name].get("result"), test_results[name].get("err")
+            )
+            for name in test_results.keys()
+            if "failed" in test_results[name]["result"]
+        ]
+        self.verify(
+            int(pass_rate) == 100,
+            "some subcases failed, detail as below:{}".format(msg),
+        )
+
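+    # Illustrative sketch for rte_flow() above (hypothetical subcase dict, shaped
+    # like the fdir cases defined below):
+    #
+    #   {
+    #       "sub_casename": "mac_ipv4_pay_queue_index",
+    #       "rule": "flow create 0 ingress pattern ... / end actions queue index 62 / mark id 4 / end",
+    #       "packet": {"match": [...], "mismatch": [...]},
+    #       "check_param": {"port_id": 0, "queue": 62, "mark_id": 4},
+    #   }
+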
+    def test_multiprocess_simple_mpbasicoperation(self):
+        """
+        Basic operation.
+        """
+        # Send message from secondary to primary
+        cores = self.dut.get_core_list("1S/2C/1T", socket=self.socket)
+        coremask = utils.create_mask(cores)
+        self.dut.send_expect(
+            self.app_simple_mp + " %s --proc-type=primary" % (self.eal_para),
+            "Finished Process Init",
+            100,
+        )
+        time.sleep(20)
+        coremask = hex(int(coremask, 16) * 0x10).rstrip("L")
+        self.session_secondary.send_expect(
+            self.app_simple_mp + " %s --proc-type=secondary" % (self.eal_para),
+            "Finished Process Init",
+            100,
+        )
+
+        self.session_secondary.send_expect("send hello_primary", ">")
+        out = self.dut.get_session_output()
+        self.session_secondary.send_expect("quit", "# ")
+        self.dut.send_expect("quit", "# ")
+        self.verify(
+            "Received 'hello_primary'" in out, "Message not received on primary process"
+        )
+        # Send message from primary to secondary
+        cores = self.dut.get_core_list("1S/2C/1T", socket=self.socket)
+        coremask = utils.create_mask(cores)
+        self.session_secondary.send_expect(
+            self.app_simple_mp + " %s --proc-type=primary " % (self.eal_para),
+            "Finished Process Init",
+            100,
+        )
+        time.sleep(20)
+        coremask = hex(int(coremask, 16) * 0x10).rstrip("L")
+        self.dut.send_expect(
+            self.app_simple_mp + " %s --proc-type=secondary" % (self.eal_para),
+            "Finished Process Init",
+            100,
+        )
+        self.session_secondary.send_expect("send hello_secondary", ">")
+        out = self.dut.get_session_output()
+        self.session_secondary.send_expect("quit", "# ")
+        self.dut.send_expect("quit", "# ")
+
+        self.verify(
+            "Received 'hello_secondary'" in out,
+            "Message not received on secondary process",
+        )
+
+    def test_multiprocess_simple_mploadtest(self):
+        """
+        Load test of Simple MP application.
+        """
+
+        cores = self.dut.get_core_list("1S/2C/1T", socket=self.socket)
+        coremask = utils.create_mask(cores)
+        self.session_secondary.send_expect(
+            self.app_simple_mp + " %s --proc-type=primary" % (self.eal_para),
+            "Finished Process Init",
+            100,
+        )
+        time.sleep(20)
+        coremask = hex(int(coremask, 16) * 0x10).rstrip("L")
+        self.dut.send_expect(
+            self.app_simple_mp + " %s --proc-type=secondary" % (self.eal_para),
+            "Finished Process Init",
+            100,
+        )
+        stringsSent = 0
+        with open("/usr/share/dict/words", "r") as f:
+            for line in f:
+                line = line.rstrip("\n")
+                self.dut.send_expect("send %s" % line, ">")
+                stringsSent += 1
+                if stringsSent == 3:
+                    break
+
+        time.sleep(5)
+        self.dut.send_expect("quit", "# ")
+        self.session_secondary.send_expect("quit", "# ")
+
+    def test_multiprocess_simple_mpapplicationstartup(self):
+        """
+        Test use of Auto for Application Startup.
+        """
+
+        # Send message from secondary to primary (auto process type)
+        cores = self.dut.get_core_list("1S/2C/1T", socket=self.socket)
+        coremask = utils.create_mask(cores)
+        out = self.dut.send_expect(
+            self.app_simple_mp + " %s --proc-type=auto " % (self.eal_para),
+            "Finished Process Init",
+            100,
+        )
+        self.verify(
+            "EAL: Auto-detected process type: PRIMARY" in out,
+            "The type of process (PRIMARY) was not detected properly",
+        )
+        time.sleep(20)
+        coremask = hex(int(coremask, 16) * 0x10).rstrip("L")
+        out = self.session_secondary.send_expect(
+            self.app_simple_mp + " %s --proc-type=auto" % (self.eal_para),
+            "Finished Process Init",
+            100,
+        )
+        self.verify(
+            "EAL: Auto-detected process type: SECONDARY" in out,
+            "The type of process (SECONDARY) was not detected properly",
+        )
+
+        self.session_secondary.send_expect("send hello_primary", ">")
+        out = self.dut.get_session_output()
+        self.session_secondary.send_expect("quit", "# ")
+        self.dut.send_expect("quit", "# ")
+        self.verify(
+            "Received 'hello_primary'" in out, "Message not received on primary process"
+        )
+
+        # Send message from primary to secondary (auto process type)
+        cores = self.dut.get_core_list("1S/2C/1T", socket=self.socket)
+        coremask = utils.create_mask(cores)
+        out = self.session_secondary.send_expect(
+            self.app_simple_mp + " %s --proc-type=auto" % (self.eal_para),
+            "Finished Process Init",
+            100,
+        )
+        self.verify(
+            "EAL: Auto-detected process type: PRIMARY" in out,
+            "The type of process (PRIMARY) was not detected properly",
+        )
+        time.sleep(20)
+        coremask = hex(int(coremask, 16) * 0x10).rstrip("L")
+        out = self.dut.send_expect(
+            self.app_simple_mp + " %s --proc-type=auto" % (self.eal_para),
+            "Finished Process Init",
+            100,
+        )
+        self.verify(
+            "EAL: Auto-detected process type: SECONDARY" in out,
+            "The type of process (SECONDARY) was not detected properly",
+        )
+        self.session_secondary.send_expect("send hello_secondary", ">", 100)
+        out = self.dut.get_session_output()
+        self.session_secondary.send_expect("quit", "# ")
+        self.dut.send_expect("quit", "# ")
+
+        self.verify(
+            "Received 'hello_secondary'" in out,
+            "Message not received on secondary process",
+        )
+
+    def test_multiprocess_simple_mpnoflag(self):
+        """
+        Multiple processes without "--proc-type" flag.
+        """
+
+        cores = self.dut.get_core_list("1S/2C/1T", socket=self.socket)
+        coremask = utils.create_mask(cores)
+        self.session_secondary.send_expect(
+            self.app_simple_mp + " %s -m 64" % (self.eal_para),
+            "Finished Process Init",
+            100,
+        )
+        coremask = hex(int(coremask, 16) * 0x10).rstrip("L")
+        out = self.dut.send_expect(
+            self.app_simple_mp + " %s" % (self.eal_para), "# ", 100
+        )
+
+        self.verify(
+            "Is another primary process running" in out,
+            "No other primary process detected",
+        )
+
+        self.session_secondary.send_expect("quit", "# ")
+
+    def test_multiprocess_symmetric_mp_packet(self):
+        # run multiple symmetric_mp process
+        portMask = utils.create_mask(self.dut_ports)
+        # launch symmetric_mp, process num is 2
+        proc_num = 2
+        session_list = [
+            self.dut.new_session("process_{}".format(i)) for i in range(proc_num)
+        ]
+        for i in range(proc_num):
+            session_list[i].send_expect(
+                self.app_symmetric_mp
+                + " -l {} -n 4 --proc-type=auto {} -- -p {} --num-procs={} --proc-id={}".format(
+                    i + 1, self.eal_param, portMask, proc_num, i
+                ),
+                "Finished Process Init",
+            )
+        # send packets
+        packet_num = random.randint(20, 256)
+        self.logger.info("packet num:{}".format(packet_num))
+        self.random_packet(packet_num)
+        res = []
+        for session_obj in session_list:
+            try:
+                out = session_obj.send_command("^C")
+            except Exception:
+                self.logger.error("Error occurred: {}".format(traceback.format_exc()))
+            finally:
+                session_obj.close()
+            rx_num = re.search(r"Port 0: RX - (?P<RX>\d+)", out)
+            rx_nums = int(rx_num.group("RX"))
+            self.verify(
+                rx_nums > 0,
+                "fail: {} received packets should be greater than 0, actual is {}".format(
+                    session_obj.name, rx_nums
+                ),
+            )
+            res.append(rx_nums)
+        rx_total = sum(res)
+        self.logger.info("RX total:{}, send packet:{}".format(rx_total, packet_num))
+        self.verify(
+            rx_total >= packet_num,
+            "some packets were not received by symmetric_mp, "
+            "RX total should be greater than or equal to the number of packets sent",
+        )
+
+    def test_multiprocess_server_client_mp_packet(self):
+        # run multiple client_server_mp process
+        portMask = utils.create_mask(self.dut_ports)
+        # launch client_server_mp, client process num is 2
+        proc_num = 2
+        session_list = [
+            self.dut.new_session("process_{}".format(i)) for i in range(proc_num + 1)
+        ]
+        server_session = session_list[-1]
+        # start server
+        server_session.send_expect(
+            self.app_mp_server
+            + " -l 1,2 -n 4 -- -p {} -n {}".format(portMask, proc_num),
+            "Finished Process Init",
+        )
+        # start client
+        for i in range(proc_num):
+            self.dut.init_reserved_core()
+            session_list[i].send_expect(
+                self.app_mp_client
+                + " -l {} -n 4 --proc-type=auto -- -n {}".format(i + 3, i),
+                "Finished Process Init",
+            )
+        # send packets
+        packet_num = random.randint(20, 256)
+        self.logger.info("packet num:{}".format(packet_num))
+        self.random_packet(packet_num)
+        out = server_session.get_session_before(timeout=5)
+        for session_obj in session_list:
+            try:
+                session_obj.send_command("^C")
+            except Exception:
+                self.logger.error("Error occurred: {}".format(traceback.format_exc()))
+            finally:
+                session_obj.close()
+        res = re.search(
+            r"Port \d+\s+-\s+rx:\s+(?P<rx>\d+)\s+tx:.*PORTS", out, re.DOTALL
+        )
+        rx_num = re.findall(r"Client\s+\d\s+-\s+rx:\s+(\d+)", res.group(0))
+        for i in range(proc_num):
+            self.verify(
+                int(rx_num[i]) > 0,
+                "fail: client_{} received packets should be greater than 0, "
+                "actual is {}".format(i, int(rx_num[i])),
+            )
+        rx_total = sum(int(rx) for rx in rx_num)
+        self.logger.info("rx total:{}, send packet:{}".format(rx_total, packet_num))
+        self.verify(
+            rx_total >= packet_num,
+            "some packets were not received by the server_client process, "
+            "RX total should be greater than or equal to the number of packets sent.",
+        )
+
+    # test testpmd multi-process
+    @check_supported_nic(support_nic)
+    def test_multiprocess_auto_process_type_detected(self):
+        # start 2 process
+        self.launch_multi_testpmd("auto", 8, 2)
+        # get the output of each process and check that the detected process type is correct
+        process_type = ["PRIMARY", "SECONDARY"]
+        for i in range(2):
+            self.verify(
+                "Auto-detected process type: {}".format(process_type[i])
+                in self.out_list[i],
+                "the process type is not correct, expect {}".format(process_type[i]),
+            )
+            self.logger.info(
+                "pass: Auto-detected {} process type correctly".format(process_type[i])
+            )
+
+    @check_supported_nic(support_nic)
+    def test_multiprocess_negative_2_primary_process(self):
+        # start 2 primary process
+        try:
+            self.launch_multi_testpmd(["primary", "primary"], 8, 2, timeout=10)
+        except Exception as e:
+            # check second process start failed
+            self.verify(
+                "Is another primary process running?" in e.output,
+                "fail: More than one primary process was started, only one should be started!",
+            )
+            self.logger.info(
+            "pass: only one primary process started successfully, as expected"
+            )
+            return
+        self.verify(False, "fail: 2 primary processes launched successfully, expect launch to fail")
+
+    @check_supported_nic(support_nic)
+    def test_multiprocess_negative_exceed_process_num(self):
+        """
+        If the specified number of processes is exceeded, starting the process will fail
+        """
+        # start 2 process
+        proc_type, queue_num, process_num = "auto", 8, 2
+        self.launch_multi_testpmd(proc_type, queue_num, process_num)
+        # start a process with 'proc-id=2', should start failed
+        pmd_2 = PmdOutput(self.dut, self.dut.new_session("process_2"))
+        self.dut.init_reserved_core()
+        cores = self.dut.get_reserved_core("2C", socket=1)
+        eal_param = "--proc-type={} -a {} --log-level=ice,7".format("auto", self.pci0)
+        param = "--rxq={0} --txq={0} --num-procs={1} --proc-id={2}".format(
+            queue_num, process_num, 2
+        )
+        try:
+            pmd_2.start_testpmd(
+                cores=cores, eal_param=eal_param, param=param, timeout=10
+            )
+        except Exception as e:
+            p = re.compile(
+                r"The\s+multi-process\s+option\s+'proc-id\(\d+\)'\s+should\s+be\s+less\s+than\s+'num-procs\(\d+\)'"
+            )
+            res = p.search(e.output)
+            self.verify(
+                res,
+                "fail: 'multi-process proc-id should be less than num-procs' should be in the output",
+            )
+            self.logger.info(
+                "pass: exceeded the specified number, process launch failed as expected"
+            )
+            return
+        self.verify(
+            False,
+            "fail: exceeded the specified number but the process launched, expect launch to fail",
+        )
+
+    @check_supported_nic(support_nic)
+    def test_multiprocess_proc_type_random_packet(self):
+        case_list = [
+            {
+                "sub_casename": "proc_type_auto_4_process",
+                "queue_num": 16,
+                "proc_num": 4,
+                "proc_type": "auto",
+                "packet_type": "random",
+                "pkt_num": 30,
+            },
+            {
+                "sub_casename": "proc_type_primary_secondary_2_process",
+                "queue_num": 4,
+                "proc_num": 2,
+                "proc_type": ["primary", "secondary"],
+                "packet_type": "random",
+                "pkt_num": 20,
+            },
+        ]
+        self.rte_flow(case_list, self._multiprocess_data_pass)
+
+    @check_supported_nic(support_nic)
+    def test_multiprocess_proc_type_specify_packet(self):
+        case_list = [
+            {
+                "sub_casename": "proc_type_auto_2_process",
+                "queue_num": 8,
+                "proc_num": 2,
+                "proc_type": "auto",
+                "packet_type": "specify",
+            },
+            {
+                "sub_casename": "proc_type_primary_secondary_3_process",
+                "queue_num": 6,
+                "proc_num": 3,
+                "proc_type": ["primary", "secondary"],
+                "packet_type": "specify",
+            },
+        ]
+        self.rte_flow(case_list, self._multiprocess_data_pass)
+
+    @check_supported_nic(support_nic)
+    def test_multiprocess_with_fdir_rule(self):
+        pmd_param = {
+            "queue_num": 64,
+            "proc_num": 2,
+            "proc_type": "auto",
+        }
+        MAC_IPV4_PAY = {
+            "match": [
+                'Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.20",dst="192.168.0.21", proto=255, ttl=2, tos=4) / Raw("x" * 80)'
+            ],
+            "mismatch": [
+                'Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.20",dst="192.168.0.22", proto=255, ttl=2, tos=4) / Raw("x" * 80)',
+                'Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.22",dst="192.168.0.21", proto=255, ttl=2, tos=4) / Raw("x" * 80)',
+                'Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.20",dst="192.168.1.21", proto=255, ttl=2, tos=4) / Raw("x" * 80)',
+                'Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.20",dst="192.168.0.21", proto=1, ttl=2, tos=4) / Raw("x" * 80)',
+                'Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.20",dst="192.168.0.21", proto=255, ttl=3, tos=4) / Raw("x" * 80)',
+                'Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.20",dst="192.168.0.21", proto=255, ttl=2, tos=9) / Raw("x" * 80)',
+            ],
+        }
+        mac_ipv4_pay_queue_index = {
+            "sub_casename": "mac_ipv4_pay_queue_index",
+            "rule": "flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions queue index 62 / mark id 4 / end",
+            "packet": MAC_IPV4_PAY,
+            "check_param": {"port_id": 0, "queue": 62, "mark_id": 4},
+        }
+        mac_ipv4_pay_drop = {
+            "sub_casename": "mac_ipv4_pay_drop",
+            "rule": "flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions drop / mark / end",
+            "packet": MAC_IPV4_PAY,
+            "check_param": {"port_id": 0, "drop": True},
+        }
+        mac_ipv4_pay_rss_queues = {
+            "sub_casename": "mac_ipv4_pay_rss_queues",
+            "rule": "flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions rss queues 31 32 end / mark / end",
+            "packet": MAC_IPV4_PAY,
+            "check_param": {"port_id": 0, "queue": [31, 32]},
+        }
+        mac_ipv4_pay_mark_rss = {
+            "sub_casename": "mac_ipv4_pay_mark_rss",
+            "rule": "flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions mark / rss / end",
+            "packet": MAC_IPV4_PAY,
+            "check_param": {"port_id": 0, "mark_id": 0, "rss": True},
+        }
+        case_list = [
+            mac_ipv4_pay_queue_index,
+            mac_ipv4_pay_drop,
+            mac_ipv4_pay_rss_queues,
+            mac_ipv4_pay_mark_rss,
+        ]
+        self.rte_flow(case_list, self.multiprocess_flow_data, **pmd_param)
+
+    @check_supported_nic(support_nic)
+    def test_multiprocess_with_rss_toeplitz(self):
+        pmd_param = {
+            "queue_num": 32,
+            "proc_num": 2,
+            "proc_type": "auto",
+            "options": " --disable-rss --rxd=384 --txd=384",
+        }
+        mac_ipv4_tcp_toeplitz_basic_pkt = {
+            "ipv4-tcp": [
+                'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)',
+            ],
+        }
+        mac_ipv4_tcp_toeplitz_non_basic_pkt = [
+            'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/UDP(sport=22,dport=23)/("X"*480)',
+            'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IPv6(src="ABAB:910B:6666:3457:8295:3333:1800:2929",dst="CDCD:910A:2222:5498:8475:1111:3900:2020")/TCP(sport=22,dport=23)/Raw("x"*80)',
+        ]
+        mac_ipv4_tcp_l2_src = {
+            "sub_casename": "mac_ipv4_tcp_l2_src",
+            "port_id": 0,
+            "rule": "flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types eth l2-src-only end key_len 0 queues end / end",
+            "test": [
+                {
+                    "send_packet": mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"],
+                    "action": "save_hash",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:53", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.0.3", src="192.168.0.5")/TCP(sport=25,dport=99)/("X"*480)',
+                    "action": "check_hash_same",
+                },
+            ],
+            "post-test": [
+                {
+                    "send_packet": [
+                        mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"][0],
+                    ],
+                    "action": "check_no_hash",
+                },
+            ],
+        }
+        mac_ipv4_tcp_l2_dst = {
+            "sub_casename": "mac_ipv4_tcp_l2_dst",
+            "port_id": 0,
+            "rule": "flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types eth l2-dst-only end key_len 0 queues end / end",
+            "test": [
+                {
+                    "send_packet": mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"],
+                    "action": "save_hash",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:53", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.3", src="192.168.0.5")/TCP(sport=25,dport=99)/("X"*480)',
+                    "action": "check_hash_same",
+                },
+                {
+                    "send_packet": mac_ipv4_tcp_toeplitz_non_basic_pkt,
+                    "action": "check_no_hash",
+                },
+            ],
+            "post-test": [
+                {
+                    "send_packet": [
+                        mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"][0],
+                    ],
+                    "action": "check_no_hash",
+                },
+            ],
+        }
+        mac_ipv4_tcp_l2src_l2dst = {
+            "sub_casename": "mac_ipv4_tcp_l2src_l2dst",
+            "port_id": 0,
+            "rule": "flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types eth end key_len 0 queues end / end",
+            "test": [
+                {
+                    "send_packet": mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"],
+                    "action": "save_hash",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:53", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:53", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.3", src="192.168.0.5")/TCP(sport=25,dport=99)/("X"*480)',
+                    "action": "check_hash_same",
+                },
+                {
+                    "send_packet": mac_ipv4_tcp_toeplitz_non_basic_pkt,
+                    "action": "check_no_hash",
+                },
+            ],
+            "post-test": [
+                {
+                    "send_packet": [
+                        mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"][0],
+                    ],
+                    "action": "check_no_hash",
+                },
+            ],
+        }
+        mac_ipv4_tcp_l3_src = {
+            "sub_casename": "mac_ipv4_tcp_l3_src",
+            "port_id": 0,
+            "rule": "flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp l3-src-only end key_len 0 queues end / end",
+            "test": [
+                {
+                    "send_packet": mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"],
+                    "action": "save_hash",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.1.2")/TCP(sport=22,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:53", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.1.1", src="192.168.0.2")/TCP(sport=32,dport=33)/("X"*480)',
+                    "action": "check_hash_same",
+                },
+            ],
+            "post-test": [
+                {
+                    "send_packet": [
+                        mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"][0],
+                    ],
+                    "action": "check_no_hash",
+                },
+            ],
+        }
+        mac_ipv4_tcp_l3_dst = {
+            "sub_casename": "mac_ipv4_tcp_l3_dst",
+            "port_id": 0,
+            "rule": "flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp l3-dst-only end key_len 0 queues end / end",
+            "test": [
+                {
+                    "send_packet": mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"],
+                    "action": "save_hash",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.1.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:53", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.0.1", src="192.168.1.2")/TCP(sport=32,dport=33)/("X"*480)',
+                    "action": "check_hash_same",
+                },
+            ],
+            "post-test": [
+                {
+                    "send_packet": [
+                        mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"][0],
+                    ],
+                    "action": "check_no_hash",
+                },
+            ],
+        }
+        mac_ipv4_tcp_l3src_l4src = {
+            "sub_casename": "mac_ipv4_tcp_l3src_l4src",
+            "port_id": 0,
+            "rule": "flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp l3-src-only l4-src-only end key_len 0 queues end / end",
+            "test": [
+                {
+                    "send_packet": mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"],
+                    "action": "save_hash",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.1.2")/TCP(sport=22,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=32,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:53", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.1.1", src="192.168.0.2")/TCP(sport=22,dport=33)/("X"*480)',
+                    "action": "check_hash_same",
+                },
+            ],
+            "post-test": [
+                {
+                    "send_packet": [
+                        mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"][0],
+                    ],
+                    "action": "check_no_hash",
+                },
+            ],
+        }
+        mac_ipv4_tcp_l3src_l4dst = {
+            "sub_casename": "mac_ipv4_tcp_l3src_l4dst",
+            "port_id": 0,
+            "rule": "flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp l3-src-only l4-dst-only end key_len 0 queues end / end",
+            "test": [
+                {
+                    "send_packet": mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"],
+                    "action": "save_hash",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.1.2")/TCP(sport=22,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=33)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:53", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.1.1", src="192.168.0.2")/TCP(sport=32,dport=23)/("X"*480)',
+                    "action": "check_hash_same",
+                },
+            ],
+            "post-test": [
+                {
+                    "send_packet": [
+                        mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"][0],
+                    ],
+                    "action": "check_no_hash",
+                },
+            ],
+        }
+        mac_ipv4_tcp_l3dst_l4src = {
+            "sub_casename": "mac_ipv4_tcp_l3dst_l4src",
+            "port_id": 0,
+            "rule": "flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp l3-dst-only l4-src-only end key_len 0 queues end / end",
+            "test": [
+                {
+                    "send_packet": mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"],
+                    "action": "save_hash",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.1.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=32,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:53", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.0.1", src="192.168.1.2")/TCP(sport=22,dport=33)/("X"*480)',
+                    "action": "check_hash_same",
+                },
+            ],
+            "post-test": [
+                {
+                    "send_packet": [
+                        mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"][0],
+                    ],
+                    "action": "check_no_hash",
+                },
+            ],
+        }
+        mac_ipv4_tcp_l3dst_l4dst = {
+            "sub_casename": "mac_ipv4_tcp_l3dst_l4dst",
+            "port_id": 0,
+            "rule": "flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp l3-dst-only l4-dst-only end key_len 0 queues end / end",
+            "test": [
+                {
+                    "send_packet": mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"],
+                    "action": "save_hash",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.1.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=33)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:53", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.0.1", src="192.168.1.2")/TCP(sport=32,dport=23)/("X"*480)',
+                    "action": "check_hash_same",
+                },
+            ],
+            "post-test": [
+                {
+                    "send_packet": [
+                        mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"][0],
+                    ],
+                    "action": "check_no_hash",
+                },
+            ],
+        }
+        mac_ipv4_tcp_l4_src = {
+            "sub_casename": "mac_ipv4_tcp_l4_src",
+            "port_id": 0,
+            "rule": "flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp l4-src-only end key_len 0 queues end / end",
+            "test": [
+                {
+                    "send_packet": mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"],
+                    "action": "save_hash",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=32,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:53", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.1.1", src="192.168.1.2")/TCP(sport=22,dport=33)/("X"*480)',
+                    "action": "check_hash_same",
+                },
+            ],
+            "post-test": [
+                {
+                    "send_packet": [
+                        mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"][0],
+                    ],
+                    "action": "check_no_hash",
+                },
+            ],
+        }
+        mac_ipv4_tcp_l4_dst = {
+            "sub_casename": "mac_ipv4_tcp_l4_dst",
+            "port_id": 0,
+            "rule": "flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp l4-dst-only end key_len 0 queues end / end",
+            "test": [
+                {
+                    "send_packet": mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"],
+                    "action": "save_hash",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=33)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:53", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.1.1", src="192.168.1.2")/TCP(sport=32,dport=23)/("X"*480)',
+                    "action": "check_hash_same",
+                },
+            ],
+            "post-test": [
+                {
+                    "send_packet": [
+                        mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"][0],
+                    ],
+                    "action": "check_no_hash",
+                },
+            ],
+        }
+        mac_ipv4_tcp_all = {
+            "sub_casename": "mac_ipv4_tcp_all",
+            "port_id": 0,
+            "rule": "flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp end key_len 0 queues end / end",
+            "test": [
+                {
+                    "send_packet": mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"],
+                    "action": "save_hash",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=33)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=32,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.1.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.1.2")/TCP(sport=22,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:53", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)',
+                    "action": "check_hash_same",
+                },
+            ],
+            "post-test": [
+                {
+                    "send_packet": [
+                        mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"][0],
+                    ],
+                    "action": "check_no_hash",
+                },
+            ],
+        }
+        mac_ipv4_tcp_ipv4 = {
+            "sub_casename": "mac_ipv4_tcp_ipv4",
+            "port_id": 0,
+            "rule": "flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4 end key_len 0 queues end / end",
+            "test": [
+                {
+                    "send_packet": mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"],
+                    "action": "save_hash",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.1.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.1.2")/TCP(sport=22,dport=23)/("X"*480)',
+                    "action": "check_hash_different",
+                },
+                {
+                    "send_packet": 'Ether(src="00:11:22:33:44:53", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)',
+                    "action": "check_hash_same",
+                },
+            ],
+            "post-test": [
+                {
+                    "send_packet": [
+                        mac_ipv4_tcp_toeplitz_basic_pkt["ipv4-tcp"][0],
+                    ],
+                    "action": "check_no_hash",
+                },
+            ],
+        }
+
+        case_list = [
+            mac_ipv4_tcp_l2_src,
+            mac_ipv4_tcp_l2_dst,
+            mac_ipv4_tcp_l2src_l2dst,
+            mac_ipv4_tcp_l3_src,
+            mac_ipv4_tcp_l3_dst,
+            mac_ipv4_tcp_l3src_l4src,
+            mac_ipv4_tcp_l3src_l4dst,
+            mac_ipv4_tcp_l3dst_l4src,
+            mac_ipv4_tcp_l3dst_l4dst,
+            mac_ipv4_tcp_l4_src,
+            mac_ipv4_tcp_l4_dst,
+            mac_ipv4_tcp_all,
+            mac_ipv4_tcp_ipv4,
+        ]
+        self.rte_flow(case_list, self.multiprocess_rss_data, **pmd_param)
+
+    @check_supported_nic(support_nic)
+    def test_multiprocess_with_rss_symmetric(self):
+        pmd_param = {
+            "queue_num": 64,
+            "proc_num": 2,
+            "proc_type": "auto",
+        }
+        packets = [
+            'Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/("X"*480)',
+            'Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.0.2", src="192.168.0.1")/("X"*480)',
+            'Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="12.168.0.2")/TCP(sport=22,dport=23)/("X"*480)',
+            'Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.0.2", src="12.168.0.1")/TCP(sport=22,dport=23)/("X"*480)',
+        ]
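+        # symmetric_toeplitz should hash src/dst-swapped flows to the same
+        # value: packets[0]/[1] and packets[2]/[3] are IP src<->dst mirror pairs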
+        mac_ipv4_symmetric = {
+            "sub_casename": "mac_ipv4_all",
+            "port_id": 0,
+            "rule": "flow create 0 ingress pattern eth / ipv4 / end actions rss func symmetric_toeplitz types ipv4 end key_len 0 queues end / end",
+            "test": [
+                {
+                    "send_packet": packets[0],
+                    "action": {"save_hash": "ipv4-nonfrag"},
+                },
+                {
+                    "send_packet": packets[1],
+                    "action": {"check_hash_same": "ipv4-nonfrag"},
+                },
+                {
+                    "send_packet": packets[2],
+                    "action": {"save_hash": "ipv4-tcp"},
+                },
+                {
+                    "send_packet": packets[3],
+                    "action": {"check_hash_same": "ipv4-tcp"},
+                },
+            ],
+            "post-test": [
+                {
+                    "send_packet": packets[0],
+                    "action": {"save_or_no_hash": "ipv4-nonfrag-post"},
+                },
+                {
+                    "send_packet": packets[1],
+                    "action": {"check_no_hash_or_different": "ipv4-nonfrag-post"},
+                },
+                {
+                    "send_packet": packets[2],
+                    "action": {"save_or_no_hash": "ipv4-tcp-post"},
+                },
+                {
+                    "send_packet": packets[3],
+                    "action": {"check_no_hash_or_different": "ipv4-tcp-post"},
+                },
+            ],
+        }
+        self.rte_flow(mac_ipv4_symmetric, self.multiprocess_rss_data, **pmd_param)
+
+    def test_perf_multiprocess_performance(self):
+        """
+        Benchmark Multiprocess performance.
+        #"""
+        packet_count = 16
+        self.dut.send_expect("fg", "# ")
+        txPort = self.tester.get_local_port(self.dut_ports[0])
+        rxPort = self.tester.get_local_port(self.dut_ports[1])
+        mac = self.tester.get_mac(txPort)
+        dmac = self.dut.get_mac_address(self.dut_ports[0])
+        tgenInput = []
+
+        # create packets with incrementing src_ip/dst_ip pairs
+        for i in range(packet_count):
+            package = (
+                r'flows = [Ether(src="%s", dst="%s")/IP(src="192.168.1.%d", dst="192.168.1.%d")/("X"*26)]'
+                % (mac, dmac, i + 1, i + 2)
+            )
+            self.tester.scapy_append(package)
+            pcap = os.sep.join([self.output_path, "test_%d.pcap" % i])
+            self.tester.scapy_append('wrpcap("%s", flows)' % pcap)
+            tgenInput.append([txPort, rxPort, pcap])
+        self.tester.scapy_execute()
+
+        # run multiple symmetric_mp processes
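+        # keep only executions whose core spec expands to exactly nprocs cores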
+        validExecutions = []
+        for execution in executions:
+            if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]:
+                validExecutions.append(execution)
+
+        portMask = utils.create_mask(self.dut_ports)
+
+        for n in range(len(validExecutions)):
+            execution = validExecutions[n]
+            # get coreList from execution['cores']
+            coreList = self.dut.get_core_list(execution["cores"], socket=self.socket)
+            # run a set of symmetric_mp instances, as described in the test plan
+            dutSessionList = []
+            for index in range(len(coreList)):
+                dut_new_session = self.dut.new_session()
+                dutSessionList.append(dut_new_session)
+                # add the -a option when the tester and DUT are on the same server
+                dut_new_session.send_expect(
+                    self.app_symmetric_mp
+                    + " -c %s --proc-type=auto %s -- -p %s --num-procs=%d --proc-id=%d"
+                    % (
+                        utils.create_mask([coreList[index]]),
+                        self.eal_param,
+                        portMask,
+                        execution["nprocs"],
+                        index,
+                    ),
+                    "Finished Process Init",
+                )
+
+            # clear streams before add new streams
+            self.tester.pktgen.clear_streams()
+            # run packet generator
+            streams = self.pktgen_helper.prepare_stream_from_tginput(
+                tgenInput, 100, None, self.tester.pktgen
+            )
+            _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+
+            execution["pps"] = pps
+
+            # close all symmetric_mp process
+            self.dut.send_expect("killall symmetric_mp", "# ")
+            # close all dut sessions
+            for dut_session in dutSessionList:
+                self.dut.close_session(dut_session)
+
+        # get rate and mpps data
+        for n in range(len(executions)):
+            self.verify(executions[n]["pps"] is not 0, "No traffic detected")
+        self.result_table_create(
+            [
+                "Num-procs",
+                "Sockets/Cores/Threads",
+                "Num Ports",
+                "Frame Size",
+                "%-age Line Rate",
+                "Packet Rate(mpps)",
+            ]
+        )
+
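+        # the %-age line rate figure below appears to assume a 10Gb/s link and
+        # 64B frames: each frame occupies 84 bytes on the wire (preamble and
+        # IFG included), so 1e8 / (8 * 84) pps corresponds to 1% of line rate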
+        for execution in validExecutions:
+            self.result_table_add(
+                [
+                    execution["nprocs"],
+                    execution["cores"],
+                    2,
+                    64,
+                    execution["pps"] / float(100000000 / (8 * 84)),
+                    execution["pps"] / float(1000000),
+                ]
+            )
+
+        self.result_table_print()
+
+    def test_perf_multiprocess_client_serverperformance(self):
+        """
+        Benchmark Multiprocess client-server performance.
+        """
+        self.dut.kill_all()
+        self.dut.send_expect("fg", "# ")
+        txPort = self.tester.get_local_port(self.dut_ports[0])
+        rxPort = self.tester.get_local_port(self.dut_ports[1])
+        mac = self.tester.get_mac(txPort)
+
+        self.tester.scapy_append(
+            'dmac="%s"' % self.dut.get_mac_address(self.dut_ports[0])
+        )
+        self.tester.scapy_append('smac="%s"' % mac)
+        self.tester.scapy_append(
+            'flows = [Ether(src=smac, dst=dmac)/IP(src="192.168.1.1", dst="192.168.1.1")/("X"*26)]'
+        )
+
+        pcap = os.sep.join([self.output_path, "test.pcap"])
+        self.tester.scapy_append('wrpcap("%s", flows)' % pcap)
+        self.tester.scapy_execute()
+
+        validExecutions = []
+        for execution in executions:
+            if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]:
+                validExecutions.append(execution)
+
+        for execution in validExecutions:
+            coreList = self.dut.get_core_list(execution["cores"], socket=self.socket)
+            # get cores with the socket parameter to specify which cores the DUT uses when the tester and DUT are on the same server
+            coreMask = utils.create_mask(
+                self.dut.get_core_list("1S/1C/1T", socket=self.socket)
+            )
+            portMask = utils.create_mask(self.dut_ports)
+            # specify the mp_server core and add the -a option when the tester and DUT are on the same server
+            self.dut.send_expect(
+                self.app_mp_server
+                + " -n %d -c %s %s -- -p %s -n %d"
+                % (
+                    self.dut.get_memory_channels(),
+                    coreMask,
+                    self.eal_param,
+                    portMask,
+                    execution["nprocs"],
+                ),
+                "Finished Process Init",
+                20,
+            )
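+            # suspend the interactive mp_server with Ctrl+Z, then resume it in
+            # the background so this session can go on to launch the clients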
+            self.dut.send_expect("^Z", "\r\n")
+            self.dut.send_expect("bg", "# ")
+
+            for n in range(execution["nprocs"]):
+                time.sleep(5)
+                # use next core as mp_client core, different from mp_server
+                coreMask = utils.create_mask([str(int(coreList[n]) + 1)])
+                self.dut.send_expect(
+                    self.app_mp_client
+                    + " -n %d -c %s --proc-type=secondary %s -- -n %d"
+                    % (self.dut.get_memory_channels(), coreMask, self.eal_param, n),
+                    "Finished Process Init",
+                )
+                self.dut.send_expect("^Z", "\r\n")
+                self.dut.send_expect("bg", "# ")
+
+            tgenInput = []
+            tgenInput.append([txPort, rxPort, pcap])
+
+            # clear streams before add new streams
+            self.tester.pktgen.clear_streams()
+            # run packet generator
+            streams = self.pktgen_helper.prepare_stream_from_tginput(
+                tgenInput, 100, None, self.tester.pktgen
+            )
+            _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+
+            execution["pps"] = pps
+            self.dut.kill_all()
+            time.sleep(5)
+
+        for n in range(len(executions)):
+            self.verify(executions[n]["pps"] is not 0, "No traffic detected")
+
+        self.result_table_create(
+            [
+                "Server threads",
+                "Server Cores/Threads",
+                "Num-procs",
+                "Sockets/Cores/Threads",
+                "Num Ports",
+                "Frame Size",
+                "%-age Line Rate",
+                "Packet Rate(mpps)",
+            ]
+        )
+
+        for execution in validExecutions:
+            self.result_table_add(
+                [
+                    1,
+                    "1S/1C/1T",
+                    execution["nprocs"],
+                    execution["cores"],
+                    2,
+                    64,
+                    execution["pps"] / float(100000000 / (8 * 84)),
+                    execution["pps"] / float(1000000),
+                ]
+            )
+
+        self.result_table_print()
+
+    def set_fields(self):
+        """set ip protocol field behavior"""
+        fields_config = {
+            "ip": {
+                "src": {"range": 64, "action": "inc"},
+                "dst": {"range": 64, "action": "inc"},
+            },
+        }
+
+        return fields_config
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        if self.session_list:
+            for sess in self.session_list:
+                self.dut.close_session(sess)
+        self.dut.kill_all()
+
+    def tear_down_all(self):
+        """
+        Run after each test suite.
+        """
+        self.dut.kill_all()
-- 
2.17.1



* [dts] [PATCH V5 2/2] test_plans/multiprocess_iavf: add vf multiprocess test case
  2022-05-25  8:52 [dts] [PATCH V5 1/2] tests/multiprocess_iavf: add vf multiprocess test case Jiale Song
@ 2022-05-25  8:52 ` Jiale Song
  2022-05-25  9:03   ` Jiale, SongX
  2022-05-25 11:12   ` lijuan.tu
  0 siblings, 2 replies; 4+ messages in thread
From: Jiale Song @ 2022-05-25  8:52 UTC (permalink / raw)
  To: dts; +Cc: Jiale Song


We do not have iavf multiprocess cases in dts.
Add a new vf multiprocess test suite with 14 new test cases.

Signed-off-by: Jiale Song <songx.jiale@intel.com>
---
 test_plans/index.rst                       |   1 +
 test_plans/multiprocess_iavf_test_plan.rst | 950 +++++++++++++++++++++
 2 files changed, 951 insertions(+)
 create mode 100644 test_plans/multiprocess_iavf_test_plan.rst

diff --git a/test_plans/index.rst b/test_plans/index.rst
index 5867b07b..c1f0054f 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -277,6 +277,7 @@ The following are the test plans for the DPDK DTS automated test system.
     hello_world_test_plan
     keep_alive_test_plan
     multiprocess_test_plan
+    multiprocess_iavf_test_plan
     rxtx_callbacks_test_plan
     skeleton_test_plan
     timer_test_plan
diff --git a/test_plans/multiprocess_iavf_test_plan.rst b/test_plans/multiprocess_iavf_test_plan.rst
new file mode 100644
index 00000000..0adf4bea
--- /dev/null
+++ b/test_plans/multiprocess_iavf_test_plan.rst
@@ -0,0 +1,950 @@
+.. Copyright (c) <2022>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+=======================================
+Sample Application Tests: Multi-Process
+=======================================
+
+Simple MP Application Test
+==========================
+
+Description
+-----------
+
+This test is a basic multi-process test for iavf which demonstrates the basics of sharing
+information between DPDK processes. The same application binary is run
+twice - once as a primary instance, and once as a secondary instance. Messages
+are sent from primary to secondary and vice versa, demonstrating the processes
+are sharing memory and can communicate using rte_ring structures.
+
+Prerequisites
+-------------
+
+If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS.
+When using vfio, use the following commands to load the vfio driver and bind it
+to the device under test::
+
+   modprobe vfio
+   modprobe vfio-pci
+   usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
+
+Assuming that a DPDK build has been set up and the multi-process sample
+applications have been built.
+
+Test Case: Basic operation
+--------------------------
+
+1. To run the application, start one copy of the simple_mp binary in one terminal,
+   passing at least two cores in the coremask, as follows::
+
+       ./x86_64-native-linuxapp-gcc/examples/dpdk-simple_mp -c 3 --proc-type=primary
+
+   The process should start successfully and display a command prompt as follows::
+
+       $ ./x86_64-native-linuxapp-gcc/examples/dpdk-simple_mp -c 3 --proc-type=primary
+       EAL: coremask set to 3
+       EAL: Detected lcore 0 on socket 0
+       EAL: Detected lcore 1 on socket 0
+       EAL: Detected lcore 2 on socket 0
+       EAL: Detected lcore 3 on socket 0
+       ...
+       EAL: Requesting 2 pages of size 1073741824
+       EAL: Requesting 768 pages of size 2097152
+       EAL: Ask a virtual area of 0x40000000 bytes
+       EAL: Virtual area found at 0x7ff200000000 (size = 0x40000000)
+       ...
+       EAL: check igb_uio module
+       EAL: check module finished
+       EAL: Master core 0 is ready (tid=54e41820)
+       EAL: Core 1 is ready (tid=53b32700)
+       Starting core 1
+
+       simple_mp >
+
+2. To run the secondary process to communicate with the primary process, again run the
+   same binary, setting at least two cores in the coremask::
+
+       ./x86_64-native-linuxapp-gcc/examples/dpdk-simple_mp -c C --proc-type=secondary
+
+   Once the process type is specified correctly, the process starts up, displaying largely
+   similar status messages to the primary instance as it initializes. Once again, you will be
+   presented with a command prompt.
+
+3. Once both processes are running, messages can be sent between them using the send
+   command. At any stage, either process can be terminated using the quit command.
+
+   Validate that this is working by sending a message between each process, both from
+   primary to secondary and back again. This is shown below.
+
+   Transcript from the primary - text entered by the user is shown in ``{}``::
+
+       EAL: Master core 10 is ready (tid=b5f89820)
+       EAL: Core 11 is ready (tid=84ffe700)
+       Starting core 11
+       simple_mp > {send hello_secondary}
+       simple_mp > core 11: Received 'hello_primary'
+       simple_mp > {quit}
+
+   Transcript from the secondary - text entered by the user is shown in ``{}``::
+
+       EAL: Master core 8 is ready (tid=864a3820)
+       EAL: Core 9 is ready (tid=85995700)
+       Starting core 9
+       simple_mp > core 9: Received 'hello_secondary'
+       simple_mp > {send hello_primary}
+       simple_mp > {quit}
+
+Test Case: Load test of Simple MP application
+---------------------------------------------
+
+1. Start up the sample application using the commands outlined in steps 1 & 2
+   above.
+
+2. To load test, send a large number of strings (>5000), from the primary instance
+   to the secondary instance, and then from the secondary instance to the primary.
+   [NOTE: A good source of strings to use is /usr/share/dict/words which contains
+   >400000 ascii strings on Fedora 14]
+
+Test Case: Test use of Auto for Application Startup
+---------------------------------------------------
+
+1. Start the primary application as in Test 1, Step 1, except replace
+   ``--proc-type=primary`` with ``--proc-type=auto``
+
+2. Validate that the application prints the line:
+   ``EAL: Auto-detected process type: PRIMARY`` on startup.
+
+3. Start the secondary application as in Test 1, Step 2, except replace
+   ``--proc-type=secondary`` with ``--proc-type=auto``.
+
+4. Validate that the application prints the line:
+   ``EAL: Auto-detected process type: SECONDARY`` on startup.
+
+5. Verify that processes can communicate by sending strings, as in Test 1,
+   Step 3.
+
+Test Case: Test running multiple processes without "--proc-type" flag
+---------------------------------------------------------------------
+
+1. Start up the primary process as in Test 1, Step 1, except omit the
+   ``--proc-type`` flag completely.
+
+2. Validate that process starts up as normal, and returns the ``simple_mp>``
+   prompt.
+
+3. Start up the secondary process as in Test 1, Step 2, except omit the
+   ``--proc-type`` flag.
+
+4. Verify that the process *fails* to start and prints an error message as
+   below::
+
+      "PANIC in rte_eal_config_create():
+      Cannot create lock on '/path/to/.rte_config'. Is another primary process running?"
+
+Symmetric MP Application Test
+=============================
+
+Description
+-----------
+
+This test is a multi-process test which demonstrates how multiple processes can
+work together to perform packet I/O and packet processing in parallel, much as
+other example applications do by using multiple threads. In this example, each
+process reads packets from all network ports being used - though from a different
+RX queue in each case. Those packets are then forwarded by each process which
+sends them out by writing them directly to a suitable TX queue.
+
+Prerequisites
+-------------
+
+Assuming that an Intel DPDK build has been set up and the multi-process sample
+applications have been built. It is also assumed that a traffic generator has
+been configured and plugged in to the NIC ports 0 and 1.
+
+Test Methodology
+----------------
+
+As with the simple_mp example, the first instance of the symmetric_mp process
+must be run as the primary instance, though with a number of other application
+specific parameters also provided after the EAL arguments. These additional
+parameters are:
+
+* -p <portmask>, where portmask is a hexadecimal bitmask of what ports on the
+  system are to be used. For example: -p 3 to use ports 0 and 1 only.
+* --num-procs <N>, where N is the total number of symmetric_mp instances that
+  will be run side-by-side to perform packet processing. This parameter is used to
+  configure the appropriate number of receive queues on each network port.
+* --proc-id <n>, where n is a numeric value in the range 0 <= n < N (number of
+  processes, specified above). This identifies which symmetric_mp instance is being
+  run, so that each process can read a unique receive queue on each network port.
+
+The secondary symmetric_mp instances must also have these parameters specified,
+and the first two must be the same as those passed to the primary instance, or errors
+result.
+
+For example, to run a set of four symmetric_mp instances, running on lcores 1-4, all
+performing level-2 forwarding of packets between ports 0 and 1, the following
+commands can be used (assuming run as root)::
+
+   ./x86_64-native-linuxapp-gcc/examples/dpdk-symmetric_mp -c 2 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=0
+   ./x86_64-native-linuxapp-gcc/examples/dpdk-symmetric_mp -c 4 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=1
+   ./x86_64-native-linuxapp-gcc/examples/dpdk-symmetric_mp -c 8 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=2
+   ./x86_64-native-linuxapp-gcc/examples/dpdk-symmetric_mp -c 10 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=3
+
+To run only 1 or 2 instances, the above parameters to the 1 or 2 instances being
+run should remain the same, except for the ``num-procs`` value, which should be
+adjusted appropriately.
+
+
+Test Case: Function Tests
+-------------------------
+Start 2 symmetric_mp processes and send some packets; the number of packets is a random value between 20 and 256.
+Sum all received packets and check that the total is greater than or equal to the number of sent packets.
+
+1. start 2 processes::
+
+    /dpdk-symmetric_mp  -l 1 -n 4 --proc-type=auto  -a 0000:05:01.0 -a 0000:05:01.1 -- -p 0x3 --num-procs=2 --proc-id=0
+    /dpdk-symmetric_mp  -l 2 -n 4 --proc-type=auto  -a 0000:05:01.0 -a 0000:05:01.1 -- -p 0x3 --num-procs=2 --proc-id=1
+
+2. send some packets; the number of packets is a random value between 20 and 256, and the packet types include
+   IPv4/IPv6 and TCP/UDP, refer to Random_Packet
+
+3. stop all processes and check the output::
+
+    the number of received packets for each process should be greater than 0.
+    the sum of received packets over all processes should be greater than or equal to the number of sent packets
+
+
+Client Server Multiprocess Tests
+================================
+
+Description
+-----------
+
+The client-server sample application demonstrates the ability of Intel DPDK
+to use multiple processes in which a server process performs packet I/O and one
+or multiple client processes perform packet processing. The server process
+controls load balancing on the traffic received from a number of input ports to
+a user-specified number of clients. The client processes forward the received
+traffic, outputting the packets directly by writing them to the TX rings of the
+outgoing ports.
+
+Prerequisites
+-------------
+
+Assuming that an Intel DPDK build has been set up and the multi-process
+sample application has been built.
+Also assuming a traffic generator is connected to the ports "0" and "1".
+
+It is important to run the server application before the client application,
+as the server application manages both the NIC ports with packet transmission
+and reception, as well as shared memory areas and client queues.
+
+Run the Server Application:
+
+- Provide the core mask on which the server process is to run using -c, e.g. -c 3 (bitmask number).
+- Set the number of ports to be engaged using -p, e.g. -p 3 refers to ports 0 & 1.
+- Define the maximum number of clients using -n, e.g. -n 8.
+
+The command line below is an example on how to start the server process on
+logical core 2 to handle a maximum of 8 client processes configured to
+run on socket 0 to handle traffic from NIC ports 0 and 1::
+
+    root@host:mp_server# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_server -c 2 -- -p 3 -n 8
+
+NOTE: If an additional second core is given in the coremask to the server process,
+that second core will be used to print statistics. When benchmarking, only a
+single lcore is needed for the server process.
+
+Run the Client application:
+
+- In another terminal run the client application.
+- Give each client a distinct core mask with -c.
+- Give each client a unique client-id with -n.
+
+Example commands to run 8 client processes are as follows::
+
+   root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 40 --proc-type=secondary -- -n 0 &
+   root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 100 --proc-type=secondary -- -n 1 &
+   root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 400 --proc-type=secondary -- -n 2 &
+   root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 1000 --proc-type=secondary -- -n 3 &
+   root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 4000 --proc-type=secondary -- -n 4 &
+   root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 10000 --proc-type=secondary -- -n 5 &
+   root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 40000 --proc-type=secondary -- -n 6 &
+   root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 100000 --proc-type=secondary -- -n 7 &
+
+Test Case: Function Tests
+-------------------------
+Start the server process and 2 client processes, then send some packets; the number of packets is a random value
+between 20 and 256. Sum all received packets and check that the total is greater than or equal to the number of
+sent packets.
+
+1. start the server process::
+
+    ./dpdk-mp_server  -l 1,2 -n 4 -a 0000:05:01.0 -a 0000:05:01.1 -- -p 0x3 -n 2
+
+2. start 2 client processes::
+
+    ./dpdk-mp_client  -l 3 -n 4 -a 0000:05:01.0 -a 0000:05:01.1 --proc-type=auto -- -n 0
+    ./dpdk-mp_client  -l 4 -n 4 -a 0000:05:01.0 -a 0000:05:01.1 --proc-type=auto -- -n 1
+
+3. send some packets; the number of packets is a random value between 20 and 256, and the packet types include
+   IPv4/IPv6 and TCP/UDP, refer to Random_Packet
+
+4. stop all processes and check the output::
+
+    the number of received packets for each client should be greater than 0.
+    the sum of received packets over all clients should be greater than or equal to the number of sent packets
+
+Testpmd Multi-Process Test
+==========================
+
+Description
+-----------
+
+This is a multi-process test for the Testpmd application, which demonstrates how multiple processes can
+work together to perform packet processing in parallel.
+
+Test Methodology
+----------------
+Testpmd supports specifying the total number of processes and the current process ID.
+Each process owns a subset of Rx and Tx queues.
+The following are the command-line options for testpmd multi-process support::
+
+   primary process:
+   ./dpdk-testpmd -a xxx --proc-type=auto -l 0-1 -- -i --rxq=4 --txq=4 --num-procs=2 --proc-id=0
+
+   secondary process:
+   ./dpdk-testpmd -a xxx --proc-type=auto -l 2-3 -- -i --rxq=4 --txq=4 --num-procs=2 --proc-id=1
+
+   --num-procs:
+      The number of processes which will be used
+   --proc-id:
+      The ID of the current process (ID < num-procs); the ID must be different for the primary and secondary
+      processes, and starts from 0.
+
+All queues are allocated to different processes based on proc_num and proc_id.
+The calculation rule for queue allocation is::
+
+   start(queue start id) = proc_id * nb_q / num_procs
+   end(queue end id) = start + nb_q / num_procs
+
+For example, if testpmd is configured to have 4 Tx and Rx queues, queues 0 and 1 will be used by the primary process and
+queues 2 and 3 will be used by the secondary process.
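+
+A minimal Python sketch of this allocation rule (``queue_range`` is an
+illustrative helper, not a DPDK or testpmd API)::
+
+    def queue_range(proc_id, num_procs, nb_q):
+        # integer arithmetic mirrors the start/end formulas above
+        start = proc_id * nb_q // num_procs
+        end = start + nb_q // num_procs
+        return list(range(start, end))
+
+    assert queue_range(0, 2, 4) == [0, 1]  # primary owns queues 0 and 1
+    assert queue_range(1, 2, 4) == [2, 3]  # secondary owns queues 2 and 3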
+
+Note::
+
+   nb_q is the number of queues.
+   The number of queues should be a multiple of the number of processes. If not, redundant queues will exist after
+   queues are allocated to processes. If RSS is enabled, packet loss occurs when traffic is sent to all processes at
+   the same time: some traffic goes to the redundant queues and cannot be forwarded.
+   All the dev ops are supported in the primary process, while the secondary process is not permitted to allocate or
+   release shared memory.
+   When a secondary process is running, ports in the primary process are not permitted to be stopped.
+   Reconfigure operations are only valid in the primary process.
+   Stats are supported; stats will not change when one process quits and starts again, as the processes share the
+   same buffer to store the stats.
+   Flow rules are maintained at process level:
+      the primary and secondary each have their own flow list (but there is only one flow list in HW). Both can see
+      all the queues, so setting flow rules for the other process is OK. But in testpmd, the primary process is not
+      permitted to receive or transmit packets from a queue allocated to the secondary process, and the same applies
+      to the secondary process.
+
+   Flow API and RSS are supported.
+
+Prerequisites
+-------------
+
+1. Hardware:
+   columbiaville_25g/columbiaville_100g
+
+2. Software:
+   DPDK: http://dpdk.org/git/dpdk
+   scapy: http://www.secdev.org/projects/scapy/
+
+3. Copy specific ice package to /lib/firmware/intel/ice/ddp/ice.pkg
+
+4. Generate 2 VFs on PF and set mac address for vf0::
+
+    echo 2 > /sys/bus/pci/devices/0000:af:00.0/sriov_numvfs
+    ip link set eth7 vf 0 mac 00:11:22:33:44:55
+
+   0000:05:00.0 generates 0000:05:01.0 and 0000:05:01.1
+
+5. Bind VFs to the dpdk driver::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:05:01.0  0000:05:01.1
+
+Default parameters
+------------------
+
+   MAC::
+
+    [Dest MAC]: 00:11:22:33:44:55
+
+   IPv4::
+
+    [Source IP]: 192.168.0.20
+    [Dest IP]: 192.168.0.21
+    [IP protocol]: 255
+    [TTL]: 2
+    [DSCP]: 4
+
+   TCP::
+
+    [Source Port]: 22
+    [Dest Port]: 23
+
+   Random_Packet::
+
+    Ether(dst='00:11:22:33:44:55', src='00:00:20:00:00:00')/IPv6(src='::192.168.0.1', version=6, tc=0, fl=0, dst='::192.168.1.1', hlim=64)/TCP(sport=65535, dport=65535, flags=0)/Raw(),
+    Ether(dst='00:11:22:33:44:55', src='00:00:20:00:00:00')/IP(frag=0, src='192.168.0.1', tos=0, dst='192.168.1.2', version=4, ttl=64, id=1)/UDP(sport=65535, dport=65535)/Raw(),
+    Ether(dst='00:11:22:33:44:55', src='00:00:20:00:00:00')/IPv6(src='::192.168.0.1', version=6, tc=0, fl=0, dst='::192.168.1.3', hlim=64)/UDP(sport=65535, dport=65535)/Raw(),
+    Ether(dst='00:11:22:33:44:55', src='00:00:20:00:00:00')/IPv6(src='::192.168.0.1', version=6, tc=0, fl=0, dst='::192.168.1.4', hlim=64)/UDP(sport=65535, dport=65535)/Raw(),
+    Ether(dst='00:11:22:33:44:55', src='00:00:20:00:00:00')/IPv6(src='::192.168.0.1', version=6, tc=0, fl=0, dst='::192.168.1.5', hlim=64)/TCP(sport=65535, dport=65535, flags=0)/Raw(),
+    Ether(dst='00:11:22:33:44:55', src='00:00:20:00:00:00')/IP(frag=0, src='192.168.0.1', tos=0, dst='192.168.1.15', version=4, ttl=64, id=1)/UDP(sport=65535, dport=65535)/Raw(),
+    Ether(dst='00:11:22:33:44:55', src='00:00:20:00:00:00')/IPv6(src='::192.168.0.1', version=6, tc=0, fl=0, dst='::192.168.1.16', hlim=64)/TCP(sport=65535, dport=65535, flags=0)/Raw(),
+    Ether(dst='00:11:22:33:44:55', src='00:00:20:00:00:00')/IPv6(src='::192.168.0.1', version=6, tc=0, fl=0, dst='::192.168.1.27', hlim=64)/TCP(sport=65535, dport=65535, flags=0)/Raw(),
+    Ether(dst='00:11:22:33:44:55', src='00:00:20:00:00:00')/IP(frag=0, src='192.168.0.1', tos=0, dst='192.168.1.28', version=4, ttl=64, id=1)/TCP(sport=65535, dport=65535, flags=0)/Raw(),
+    Ether(dst='00:11:22:33:44:55', src='00:00:20:00:00:00')/IPv6(src='::192.168.0.1', version=6, tc=0, fl=0, dst='::192.168.1.30', hlim=64)/TCP(sport=65535, dport=65535, flags=0)/Raw()
+
+Test Case: multiprocess proc_type random packet
+===============================================
+
+Subcase 1: proc_type_auto_4_process
+-----------------------------------
+1. Launch the app ``testpmd``, start 4 processes with rxq/txq set as 16 (proc_id:0~3, queue id:0~15) with the following arguments::
+
+   ./dpdk-testpmd -l 1,2 --proc-type=auto -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=16 --txq=16 --num-procs=4 --proc-id=0
+   ./dpdk-testpmd -l 3,4 --proc-type=auto -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=16 --txq=16 --num-procs=4 --proc-id=1
+   ./dpdk-testpmd -l 5,6 --proc-type=auto -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=16 --txq=16 --num-procs=4 --proc-id=2
+   ./dpdk-testpmd -l 7,8 --proc-type=auto -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=16 --txq=16 --num-procs=4 --proc-id=3
+
+2. Send 20 random packets::
+
+    packets are generated by script; the packet types include 'TCP', 'UDP', 'IPv6_TCP' and 'IPv6_UDP', as in
+    Random_Packet (a generation sketch is shown at the end of this subcase)
+
+3. Check whether each process receives 5 packets with the corresponding queue::
+
+    process 0 should receive 5 packets with queue 0~3
+    process 1 should receive 5 packets with queue 4~7
+    process 2 should receive 5 packets with queue 8~11
+    process 3 should receive 5 packets with queue 12~15
+
+4. Check the statistics are correct: the total number of packets received is 20
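+
+A minimal scapy sketch for generating the 20 random packets in step 2 (the
+address ranges, interface name and packet mix are illustrative)::
+
+    import random
+    from scapy.all import Ether, IP, IPv6, TCP, UDP, Raw, sendp
+
+    pkts = []
+    for _ in range(20):
+        # pick an IPv4 or IPv6 layer with a randomized destination
+        l3 = random.choice([
+            IP(src="192.168.0.1", dst="192.168.1.%d" % random.randint(1, 254)),
+            IPv6(src="::192.168.0.1", dst="::192.168.1.%d" % random.randint(1, 254)),
+        ])
+        l4 = random.choice([TCP(sport=65535, dport=65535), UDP(sport=65535, dport=65535)])
+        pkts.append(Ether(dst="00:11:22:33:44:55") / l3 / l4 / Raw())
+    sendp(pkts, iface="ens1f0")  # replace with the tester port connected to the VF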
+
+Subcase 2: proc_type_primary_secondary_2_process
+------------------------------------------------
+1. Launch the app ``testpmd``, start 2 processes with rxq/txq set as 4 (proc_id:0~1, queue id:0~3) with the following arguments::
+
+   ./dpdk-testpmd -l 1,2 --proc-type=primary -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=4 --txq=4 --num-procs=2 --proc-id=0
+   ./dpdk-testpmd -l 3,4 --proc-type=secondary -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=4 --txq=4 --num-procs=2 --proc-id=1
+
+2. Send 20 random packets::
+
+    packets are generated by script; the packet types include 'TCP', 'UDP', 'IPv6_TCP' and 'IPv6_UDP', as in Random_Packet
+
+3. Check whether each process receives 10 packets with the corresponding queue::
+
+    process 0 should receive 10 packets with queue 0~1
+    process 1 should receive 10 packets with queue 2~3
+
+
+4. Check the statistics are correct: the total number of packets received is 20
+
+Test Case: multiprocess proc_type specify packet
+================================================
+
+Subcase 1: proc_type_auto_2_process
+-----------------------------------
+1. Launch the app ``testpmd``, start 2 processes with rxq/txq set as 8 (proc_id:0~1, queue id:0~7) with the following arguments::
+
+   ./dpdk-testpmd -l 1,2 --proc-type=auto -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=8 --txq=8 --num-procs=2 --proc-id=0
+   ./dpdk-testpmd -l 3,4 --proc-type=auto -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=8 --txq=8 --num-procs=2 --proc-id=1
+
+2. Create rules directing packets to each of the process queues::
+
+    flow create 0 ingress pattern eth / ipv4 src is 192.168.0.20  / end actions queue index 0 / end
+    flow create 0 ingress pattern eth / ipv4 src is 192.168.1.20  / end actions queue index 1 / end
+    flow create 0 ingress pattern eth / ipv4 src is 192.168.2.20 / end actions queue index 2 / end
+    flow create 0 ingress pattern eth / ipv4 src is 192.168.3.20 / end actions queue index 3 / end
+    flow create 0 ingress pattern eth / ipv4 src is 192.168.4.20  / end actions queue index 4 / end
+    flow create 0 ingress pattern eth / ipv4 src is 192.168.5.20 / end actions queue index 5 / end
+    flow create 0 ingress pattern eth / ipv4 src is 192.168.6.20 / end actions queue index 6 / end
+    flow create 0 ingress pattern eth / ipv4 src is 192.168.7.20 / end actions queue index 7 / end
+
+3. Send 1 matched packet for each rule::
+
+    Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.20")/("X"*46)
+    Ether(dst="00:11:22:33:44:55")/IP(src="192.168.1.20")/("X"*46)
+    Ether(dst="00:11:22:33:44:55")/IP(src="192.168.2.20")/("X"*46)
+    Ether(dst="00:11:22:33:44:55")/IP(src="192.168.3.20")/("X"*46)
+    Ether(dst="00:11:22:33:44:55")/IP(src="192.168.4.20")/("X"*46)
+    Ether(dst="00:11:22:33:44:55")/IP(src="192.168.5.20")/("X"*46)
+    Ether(dst="00:11:22:33:44:55")/IP(src="192.168.6.20")/("X"*46)
+    Ether(dst="00:11:22:33:44:55")/IP(src="192.168.7.20")/("X"*46)
+
+4. Check whether each process receives 4 packets with the corresponding queue::
+
+    process 0 should receive 4 packets with queue 0~3
+    process 1 should receive 4 packets with queue 4~7
+
+5. Check the statistics are correct: the total number of packets received is 8
+
+Subcase 2: proc_type_primary_secondary_3_process
+------------------------------------------------
+1. Launch the app ``testpmd``, start 3 processes with rxq/txq set as 6 (proc_id:0~2, queue id:0~5) with the following arguments::
+
+   ./dpdk-testpmd -l 1,2 --proc-type=auto -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=6 --txq=6 --num-procs=3 --proc-id=0
+   ./dpdk-testpmd -l 3,4 --proc-type=auto -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=6 --txq=6 --num-procs=3 --proc-id=1
+   ./dpdk-testpmd -l 5,6 --proc-type=auto -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=6 --txq=6 --num-procs=3 --proc-id=2
+
+2. Create rules directing packets to each of the process queues::
+
+    flow create 0 ingress pattern eth / ipv4 src is 192.168.0.20  / end actions queue index 0 / end
+    flow create 0 ingress pattern eth / ipv4 src is 192.168.1.20  / end actions queue index 1 / end
+    flow create 0 ingress pattern eth / ipv4 src is 192.168.2.20 / end actions queue index 2 / end
+    flow create 0 ingress pattern eth / ipv4 src is 192.168.3.20 / end actions queue index 3 / end
+    flow create 0 ingress pattern eth / ipv4 src is 192.168.4.20  / end actions queue index 4 / end
+    flow create 0 ingress pattern eth / ipv4 src is 192.168.5.20 / end actions queue index 5 / end
+
+3. Send 1 matched packet for each rule::
+
+    Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.20")/("X"*46)
+    Ether(dst="00:11:22:33:44:55")/IP(src="192.168.1.20")/("X"*46)
+    Ether(dst="00:11:22:33:44:55")/IP(src="192.168.2.20")/("X"*46)
+    Ether(dst="00:11:22:33:44:55")/IP(src="192.168.3.20")/("X"*46)
+    Ether(dst="00:11:22:33:44:55")/IP(src="192.168.4.20")/("X"*46)
+    Ether(dst="00:11:22:33:44:55")/IP(src="192.168.5.20")/("X"*46)
+
+4. Check whether each process receives 2 packets with the corresponding queue::
+
+    process 0 should receive 2 packets with queue 0~1
+    process 1 should receive 2 packets with queue 2~3
+    process 2 should receive 2 packets with queue 4~5
+
+5. Check the statistics are correct: the total number of packets received is 6
+
+Test Case: test_multiprocess_with_fdir_rule
+===========================================
+
+Launch the app ``testpmd``, start 2 processes with rxq/txq set as 16 (proc_id: 0~1, queue id: 0~15) with the following arguments::
+
+   ./dpdk-testpmd -l 1,2 -n 4 -a 0000:05:01.0 --proc-type=auto  --log-level=ice,7 -- -i --rxq=16 --txq=16  --num-procs=2 --proc-id=0
+   ./dpdk-testpmd -l 3,4 -n 4 -a 0000:05:01.0 --proc-type=auto  --log-level=ice,7 -- -i --rxq=16 --txq=16  --num-procs=2 --proc-id=1
+
+Subcase 1: mac_ipv4_pay_queue_index
+-----------------------------------
+
+1. Create rule::
+
+    flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions queue index 6 / mark id 4 / end
+
+2. Send matched packets, check the packets are distributed to queue 6 with FDIR matched ID=0x4.
+   Send unmatched packets, check the packets are distributed by RSS without FDIR matched ID.
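+
+   A hedged sketch of this check, parsing the verbose RX log of each process
+   (enable it with ``set verbose 1``; the exact line format may vary between
+   DPDK versions)::
+
+    # hedged helper: every matched packet should report queue 6 and mark 0x4
+    import re
+
+    def check_fdir(output, queue=6, mark=0x4):
+        hits = re.findall(r"queue=0x([0-9a-fA-F]+).*FDIR matched ID=0x([0-9a-fA-F]+)", output)
+        return bool(hits) and all(int(q, 16) == queue and int(m, 16) == mark for q, m in hits)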
+
+3. Verify rules can be listed and destroyed::
+
+    testpmd> flow list 0
+
+   Check the rule is listed.
+   Destroy the rule::
+
+    testpmd> flow destroy 0 rule 0
+
+4. Verify the matched packet is now distributed by RSS without FDIR matched ID.
+   Check there is no rule listed.
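+
+   A hedged sketch of the "no rule listed" check, assuming a destroyed rule no
+   longer appears as a numbered row in the ``flow list 0`` output::
+
+    # hedged helper: a listed rule row begins with its numeric rule id
+    import re
+
+    def no_rule_listed(flow_list_output):
+        return not re.search(r"^\d+\s", flow_list_output, re.M)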
+
+Subcase 2: mac_ipv4_pay_rss_queues
+----------------------------------
+1. Create rule::
+
+    flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions rss queues 10 11 end / mark / end
+
+2. Send matched packets, check the packets are distributed to queue 10 or 11.
+   Send unmatched packets, check the packets are distributed by RSS.
+
+3. Repeat step 3 of subcase 1
+
+4. Verify the matched packet is distributed by RSS.
+   Check there is no rule listed.
+
+Subcase 3: mac_ipv4_pay_drop
+----------------------------
+
+1. Create rule::
+
+    flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions drop / mark / end
+
+2. Send matched packets, check the packets are dropped.
+   Send unmatched packets, check the packets are not dropped.
+
+3. Repeat step 3 of subcase 1
+
+4. Verify matched packets are no longer dropped.
+   Check there is no rule listed.
+
+Subcase 4: mac_ipv4_pay_mark_rss
+--------------------------------
+1. Create rule::
+
+    flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions mark / rss / end
+
+2. Send matched packets, check the packets are distributed by RSS with FDIR matched ID=0x0.
+   Send unmatched packets, check the packets are distributed by RSS without FDIR matched ID.
+
+3. Repeat step 3 of subcase 1
+
+4. Verify matched packets are distributed to the same queue without FDIR matched ID.
+   Check there is no rule listed.
+
+Note: steps 2 and 4 need to check whether all packets received by each process are distributed by RSS.
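+
+With --num-procs=2 and 16 queues, each process polls a contiguous block of 8
+queues, so the process that received a packet can be derived from the queue id
+reported in the verbose log. A minimal sketch of that mapping, assuming the
+contiguous block layout implied by the launch arguments above::
+
+    # hedged helper: map a received queue id to the proc-id that polls it
+    def owning_proc(queue_id, num_procs=2, nb_queues=16):
+        queues_per_proc = nb_queues // num_procs  # 8 queues per process here
+        return queue_id // queues_per_proc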
+
+
+Test Case: test_multiprocess_with_rss_toeplitz
+==============================================
+Launch the app ``testpmd``, start 2 processes with queue num set as 16 (proc_id: 0~1, queue id: 0~15) with the following arguments::
+
+    ./dpdk-testpmd -l 1,2 -n 4 -a 0000:05:01.0 --proc-type=auto  --log-level=ice,7 -- -i --rxq=16 --txq=16  --num-procs=2 --proc-id=0
+    ./dpdk-testpmd -l 3,4 -n 4 -a 0000:05:01.0 --proc-type=auto  --log-level=ice,7 -- -i --rxq=16 --txq=16  --num-procs=2 --proc-id=1
+
+All the test cases run the same test steps as below::
+
+    1. validate rule.
+    2. create rule and list rule.
+    3. send a basic hit pattern packet, record the hash value,
+       check the packet is distributed to queues by RSS.
+    4. send a hit pattern packet with a changed input set that is in the rule.
+       check the received packet has a different hash value from the basic packet.
+       check the packet is distributed to queues by RSS.
+    5. send a hit pattern packet with a changed input set that is not in the rule.
+       check the received packet has the same hash value as the basic packet.
+       check the packet is distributed to queues by RSS.
+    6. destroy the rule and list rule.
+    7. send the same packet as step 3.
+       check the received packets have no hash value, and are distributed to queue 0.
+
+    Note: steps 3, 4 and 5 need to check whether all packets received by each process are distributed by RSS.
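+
+A hedged sketch for collecting the hash values compared in steps 3~5 from the
+testpmd verbose RX output (``RSS hash=`` is the usual marker in the verbose
+log, though the format may vary between versions)::
+
+    # hedged helper: RSS hash values in the order the packets were received
+    import re
+
+    def rss_hashes(output):
+        return [int(h, 16) for h in re.findall(r"RSS hash=0x([0-9a-fA-F]+)", output)]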
+
+The basic hit pattern packet is the same for all subcases in this test case.
+ipv4-tcp packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)],iface="ens786f0")
+
+Subcase 1: mac_ipv4_tcp_l2_src
+------------------------------
+1. create rss rule::
+
+    flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types eth l2-src-only end key_len 0 queues end / end
+
+2. hit pattern/defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E1")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)],iface="ens786f0")
+
+3. hit pattern/not defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:27:E0")/IP(dst="192.168.0.3", src="192.168.0.5")/TCP(sport=25,dport=99)/("X"*480)],iface="ens786f0")
+
+Subcase: mac_ipv4_tcp_l2_dst
+----------------------------
+1. create rss rule::
+
+    flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types eth l2-dst-only end key_len 0 queues end / end
+
+2. hit pattern/not defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.3", src="192.168.0.5")/TCP(sport=25,dport=99)/("X"*480)],iface="ens786f0")
+
+Subcase: mac_ipv4_tcp_l2src_l2dst
+---------------------------------
+1. create rss rule::
+
+    flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types eth end key_len 0 queues end / end
+
+2. hit pattern/defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55", dst="68:05:CA:BB:26:E1")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)],iface="ens786f0")
+
+3. hit pattern/not defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.3", src="192.168.0.5")/TCP(sport=25,dport=99)/("X"*480)],iface="ens786f0")
+
+Subcase: mac_ipv4_tcp_l3_src
+----------------------------
+1. create rss rule::
+
+    flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp l3-src-only end key_len 0 queues end / end
+
+2. hit pattern/defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.1.2")/TCP(sport=22,dport=23)/("X"*480)],iface="ens786f0")
+
+3. hit pattern/not defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.1.1", src="192.168.0.2")/TCP(sport=32,dport=33)/("X"*480)],iface="ens786f0")
+
+Subcase: mac_ipv4_tcp_l3_dst
+----------------------------
+1. create rss rule::
+
+    flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp l3-dst-only end key_len 0 queues end / end
+
+2. hit pattern/defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.1.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)],iface="ens786f0")
+
+3. hit pattern/not defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.0.1", src="192.168.1.2")/TCP(sport=32,dport=33)/("X"*480)],iface="ens786f0")
+
+Subcase: mac_ipv4_tcp_l3src_l4src
+---------------------------------
+1. create rss rule::
+
+    flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp l3-src-only l4-src-only end key_len 0 queues end / end
+
+2. hit pattern/defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.1.2")/TCP(sport=22,dport=23)/("X"*480)],iface="ens786f0")
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=32,dport=23)/("X"*480)],iface="ens786f0")
+
+3. hit pattern/not defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.1.1", src="192.168.0.2")/TCP(sport=22,dport=33)/("X"*480)],iface="ens786f0")
+
+Subcase: mac_ipv4_tcp_l3src_l4dst
+---------------------------------
+1. create rss rule::
+
+    flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp l3-src-only l4-dst-only end key_len 0 queues end / end
+
+2. hit pattern/defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.1.2")/TCP(sport=22,dport=23)/("X"*480)],iface="ens786f0")
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=33)/("X"*480)],iface="ens786f0")
+
+3. hit pattern/not defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.1.1", src="192.168.0.2")/TCP(sport=32,dport=23)/("X"*480)],iface="ens786f0")
+
+Subcase: mac_ipv4_tcp_l3dst_l4src
+---------------------------------
+1. create rss rule::
+
+    flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp l3-dst-only l4-src-only end key_len 0 queues end / end
+
+2. hit pattern/defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.1.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)],iface="ens786f0")
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=32,dport=23)/("X"*480)],iface="ens786f0")
+
+3. hit pattern/not defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.0.1", src="192.168.1.2")/TCP(sport=22,dport=33)/("X"*480)],iface="ens786f0")
+
+Subcase: mac_ipv4_tcp_l3dst_l4dst
+---------------------------------
+1. create rss rule::
+
+    flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp l3-dst-only l4-dst-only end key_len 0 queues end / end
+
+2. hit pattern/defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.1.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)],iface="ens786f0")
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=33)/("X"*480)],iface="ens786f0")
+
+3. hit pattern/not defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.0.1", src="192.168.1.2")/TCP(sport=32,dport=23)/("X"*480)],iface="ens786f0")
+
+Subcase: mac_ipv4_tcp_l4_src
+----------------------------
+1. create rss rule::
+
+    flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp l4-src-only end key_len 0 queues end / end
+
+2. hit pattern/defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55", dst="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=32,dport=23)/("X"*480)],iface="ens786f0")
+
+3. hit pattern/not defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.1.1", src="192.168.1.2")/TCP(sport=22,dport=33)/("X"*480)],iface="ens786f0")
+
+Subcase: mac_ipv4_tcp_l4_dst
+----------------------------
+1. create rss rule::
+
+    flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp l4-dst-only end key_len 0 queues end / end
+
+2. hit pattern/defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=33)/("X"*480)],iface="ens786f0")
+
+3. hit pattern/not defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.1.1", src="192.168.1.2")/TCP(sport=32,dport=23)/("X"*480)],iface="ens786f0")
+
+Subcase: mac_ipv4_tcp_ipv4
+--------------------------
+1. create rss rule::
+
+    flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4 end key_len 0 queues end / end
+
+2. hit pattern/defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.1.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)],iface="ens786f0")
+    sendp([Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.1.2")/TCP(sport=22,dport=23)/("X"*480)],iface="ens786f0")
+
+3. hit pattern/not defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:27:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=32,dport=33)/("X"*480)],iface="enp134s0f0")
+
+Subcase: mac_ipv4_tcp_all
+-------------------------
+1. create rss rule::
+
+    flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp end key_len 0 queues end / end
+
+2. hit pattern/defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=33)/("X"*480)],iface="ens786f0")
+    sendp([Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=32,dport=23)/("X"*480)],iface="ens786f0")
+    sendp([Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.1.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)],iface="ens786f0")
+    sendp([Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.1.2")/TCP(sport=22,dport=23)/("X"*480)],iface="ens786f0")
+
+3. hit pattern/not defined input set:
+ipv4-tcp packets::
+
+    sendp([Ether(src="00:11:22:33:44:55", dst="68:05:CA:BB:27:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)],iface="ens786f0")
+
+Test Case: test_multiprocess_with_rss_symmetric
+===============================================
+Launch the app ``testpmd``, start 2 processes with queue num set as 16 (proc_id: 0~1, queue id: 0~15) with the following arguments::
+
+    ./dpdk-testpmd -l 1,2 -n 4 -a 0000:05:01.0 --proc-type=auto  --log-level=ice,7 -- -i --rxq=16 --txq=16  --num-procs=2 --proc-id=0
+    ./dpdk-testpmd -l 3,4 -n 4 -a 0000:05:01.0 --proc-type=auto  --log-level=ice,7 -- -i --rxq=16 --txq=16  --num-procs=2 --proc-id=1
+
+Test steps as below::
+
+    1. validate and create rule.
+    2. set "port config all rss all".
+    3. send hit pattern packets with swapped values of the input set in the rule.
+       check the received packets have the same hash value.
+       check all the packets are distributed to queues by RSS.
+    4. destroy the rule and list rule.
+    5. send the same packets as step 3.
+       check the received packets have no hash value, or have different hash values.
+
+    Note: step 3 needs to check whether all packets received by each process are distributed by RSS.
+
+Subcase: mac_ipv4_symmetric
+---------------------------
+1. create rss rule::
+
+    flow create 0 ingress pattern eth / ipv4 / end actions rss func symmetric_toeplitz types ipv4 end key_len 0 queues end / end
+
+2. hit pattern/defined input set:
+ipv4-nonfrag packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/("X"*480)],iface="ens786f0")
+    sendp([Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.0.2", src="192.168.0.1")/("X"*480)],iface="ens786f0")
+
+ipv4-frag packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2",frag=6)/("X"*480)],iface="ens786f0")
+    sendp([Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.0.2", src="192.168.0.1",frag=6)/("X"*480)],iface="ens786f0")
+
+ipv4-tcp packets::
+
+    sendp([Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.0.1", src="192.168.0.2")/TCP(sport=22,dport=23)/("X"*480)],iface="ens786f0")
+    sendp([Ether(dst="00:11:22:33:44:55", src="68:05:CA:BB:26:E0")/IP(dst="192.168.0.2", src="192.168.0.1")/TCP(sport=22,dport=23)/("X"*480)],iface="ens786f0")
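+
+Each pair of packets above swaps the src/dst tuple, so with ``symmetric_toeplitz``
+the two hashes of a pair must match. A minimal sketch of that check, reusing the
+hypothetical ``rss_hashes()`` helper sketched in the previous test case::
+
+    # hedged check: consecutive packets form swapped-tuple pairs
+    def check_symmetric(hashes):
+        for a, b in zip(hashes[0::2], hashes[1::2]):
+            assert a == b, "symmetric_toeplitz: swapped tuples must hash identically"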
+
+Test Case: test_multiprocess_auto_process_type_detected
+=======================================================
+1. start 2 processes with queue num set as 8 (proc_id: 0~1, queue id: 0~7)::
+
+   ./dpdk-testpmd -l 1,2 --proc-type=auto -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=8 --txq=8 --num-procs=2 --proc-id=0
+   ./dpdk-testpmd -l 3,4 --proc-type=auto -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=8 --txq=8 --num-procs=2 --proc-id=1
+
+2. check the output of each process::
+
+    process 1 output contains 'Auto-detected process type: PRIMARY'
+    process 2 output contains 'Auto-detected process type: SECONDARY'
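+
+A minimal sketch for asserting the detected role of each process from its
+startup log, keyed on the log string quoted above::
+
+    # hedged helper: pull the auto-detected role out of a startup log
+    def detected_role(output):
+        for role in ("PRIMARY", "SECONDARY"):
+            if "Auto-detected process type: %s" % role in output:
+                return role
+        return None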
+
+Test Case: test_multiprocess_negative_2_primary_process
+=======================================================
+1. start 2 processes with queue num set as 4 (proc_id: 0~1, queue id: 0~3)::
+
+   ./dpdk-testpmd -l 1,2 --proc-type=primary -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=4 --txq=4 --num-procs=2 --proc-id=0
+   ./dpdk-testpmd -l 3,4 --proc-type=primary -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=4 --txq=4 --num-procs=2 --proc-id=1
+
+2. check the output of each process::
+
+    process 1 should launch successfully
+    process 2 should fail to launch, and its output should contain 'Is another primary process running?'
+
+Test Case: test_multiprocess_negative_exceed_process_num
+========================================================
+1. start 3 processes, exceeding the specified num-procs of 2::
+
+   ./dpdk-testpmd -l 1,2 --proc-type=primary -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=8 --txq=8 --num-procs=2 --proc-id=0
+   ./dpdk-testpmd -l 3,4 --proc-type=primary -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=8 --txq=8 --num-procs=2 --proc-id=1
+   ./dpdk-testpmd -l 5,6 --proc-type=primary -a 0000:05:01.0  --log-level=ice,7 -- -i --rxq=8 --txq=8 --num-procs=2 --proc-id=2
+
+2. check the output of each process::
+
+    the first and second processes should launch successfully
+    the third process should fail to launch, and its output should contain the following string:
+    'multi-process option proc-id(2) should be less than num-procs(2)'
-- 
2.17.1


* RE: [dts] [PATCH V5 2/2] test_plans/multiprocess_iavf: add vf multiprocess test case
  2022-05-25  8:52 ` [dts] [PATCH V5 2/2] test_plans/multiprocess_iavf: " Jiale Song
@ 2022-05-25  9:03   ` Jiale, SongX
  2022-05-25 11:12   ` lijuan.tu
  1 sibling, 0 replies; 4+ messages in thread
From: Jiale, SongX @ 2022-05-25  9:03 UTC (permalink / raw)
  To: dts

> -----Original Message-----
> From: Jiale, SongX <songx.jiale@intel.com>
> Sent: Wednesday, May 25, 2022 4:52 PM
> To: dts@dpdk.org
> Cc: Jiale, SongX <songx.jiale@intel.com>
> Subject: [dts] [PATCH V5 2/2] test_plans/multiprocess_iavf: add vf multiprocess
> test case
> 
> we do not have iavf multiprocess cases in dts.
> add a new vf multiprocess test suite and 14 new test cases.
> 
> Signed-off-by: Jiale Song <songx.jiale@intel.com>

Acked-by: Peng, Yuan <yuan.peng@intel.com>


* [dts] [PATCH V5 2/2] test_plans/multiprocess_iavf: add vf multiprocess test case
  2022-05-25  8:52 ` [dts] [PATCH V5 2/2] test_plans/multiprocess_iavf: " Jiale Song
  2022-05-25  9:03   ` Jiale, SongX
@ 2022-05-25 11:12   ` lijuan.tu
  1 sibling, 0 replies; 4+ messages in thread
From: lijuan.tu @ 2022-05-25 11:12 UTC (permalink / raw)
  To: dts, Jiale Song; +Cc: Jiale Song

On Wed, 25 May 2022 16:52:23 +0800, Jiale Song <songx.jiale@intel.com> wrote:
> we do not have iavf multiprocess cases in dts.
> add a new vf multiprocess test suite and 14 new test cases.
> 
> Signed-off-by: Jiale Song <songx.jiale@intel.com>


Series applied, thanks
