test suite reviews and discussions
 help / color / mirror / Atom feed
* [dts] [PATCH V2 0/3] add new feature suite
@ 2021-07-01 13:57 Zhimin Huang
  2021-07-01 13:57 ` [dts] [PATCH V2 1/3] tests/rte_flow_common:add common to process fdir Zhimin Huang
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Zhimin Huang @ 2021-07-01 13:57 UTC (permalink / raw)
  To: dts; +Cc: qi.fu, Zhimin Huang

add common code to process rte flow fdir
add new feature test suites

Zhimin Huang (3):
  tests/rte_flow_common:add common to process fdir
  tests/cvl_ip_fragment_rte_flow:add new feature test suite
  tests/cvl_iavf_ip_fragment_rte_flow:add new feature test suite

 ...TestSuite_cvl_iavf_ip_fragment_rte_flow.py | 498 ++++++++++++++++++
 tests/TestSuite_cvl_ip_fragment_rte_flow.py   | 493 +++++++++++++++++
 tests/rte_flow_common.py                      | 219 +++++++-
 3 files changed, 1207 insertions(+), 3 deletions(-)
 create mode 100644 tests/TestSuite_cvl_iavf_ip_fragment_rte_flow.py
 create mode 100644 tests/TestSuite_cvl_ip_fragment_rte_flow.py

-- 
2.17.1


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [dts] [PATCH V2 1/3] tests/rte_flow_common:add common to process fdir
  2021-07-01 13:57 [dts] [PATCH V2 0/3] add new feature suite Zhimin Huang
@ 2021-07-01 13:57 ` Zhimin Huang
  2021-07-01 13:57 ` [dts] [PATCH V2 2/3] tests/cvl_ip_fragment_rte_flow:add new feature test suite Zhimin Huang
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Zhimin Huang @ 2021-07-01 13:57 UTC (permalink / raw)
  To: dts; +Cc: qi.fu, Zhimin Huang

*.add common class to process fdir tests

Signed-off-by: Zhimin Huang <zhiminx.huang@intel.com>
---
 tests/rte_flow_common.py | 219 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 216 insertions(+), 3 deletions(-)

diff --git a/tests/rte_flow_common.py b/tests/rte_flow_common.py
index c9f556f7..42afbfd1 100644
--- a/tests/rte_flow_common.py
+++ b/tests/rte_flow_common.py
@@ -674,9 +674,23 @@ def check_pf_rss_queue(out, count):
     else:
         return False
 
+def send_ipfragment_pkt(test_case, pkts, tx_port):
+    if isinstance(pkts, str):
+        pkts = [pkts]
+    for pkt in pkts:
+        test_case.tester.scapy_session.send_expect(
+            'p=eval("{}")'.format(pkt), '>>> ')
+        if 'IPv6ExtHdrFragment' in pkt:
+            test_case.tester.scapy_session.send_expect(
+                'pkts=fragment6(p, 500)', '>>> ')
+        else:
+            test_case.tester.scapy_session.send_expect(
+                'pkts=fragment(p, fragsize=500)', '>>> ')
+        test_case.tester.scapy_session.send_expect(
+            'sendp(pkts, iface="{}")'.format(tx_port), '>>> ')
 
 class RssProcessing(object):
-    def __init__(self, test_case, pmd_output, tester_ifaces, rxq):
+    def __init__(self, test_case, pmd_output, tester_ifaces, rxq, ipfrag_flag=False):
         self.test_case = test_case
         self.pmd_output = pmd_output
         self.tester_ifaces = tester_ifaces
@@ -697,6 +711,7 @@ class RssProcessing(object):
             'check_no_hash': self.check_no_hash,
         }
         self.error_msgs = []
+        self.ipfrag_flag = ipfrag_flag
 
     def save_hash(self, out, key='', port_id=0):
         hashes, rss_distribute = self.get_hash_verify_rss_distribute(out, port_id)
@@ -858,11 +873,15 @@ class RssProcessing(object):
         return hashes, queues
 
     def send_pkt_get_output(self, pkts, port_id=0, count=1, interval=0):
-        self.pkt.update_pkt(pkts)
         tx_port = self.tester_ifaces[0] if port_id == 0 else self.tester_ifaces[1]
         self.logger.info('----------send packet-------------')
         self.logger.info('{}'.format(pkts))
-        self.pkt.send_pkt(crb=self.test_case.tester, tx_port=tx_port, count=count, interval=interval)
+        if self.ipfrag_flag:
+            count = 2
+            send_ipfragment_pkt(self.test_case, pkts, tx_port)
+        else:
+            self.pkt.update_pkt(pkts)
+            self.pkt.send_pkt(crb=self.test_case.tester, tx_port=tx_port, count=count, interval=interval)
         out = self.pmd_output.get_output(timeout=1)
         pkt_pattern = 'port\s%d/queue\s\d+:\sreceived\s(\d+)\spackets.+?\n.*length=\d{2,}\s' % port_id
         reveived_data = re.findall(pkt_pattern, out)
@@ -1074,3 +1093,197 @@ class RssProcessing(object):
                               .replace('IP(proto=0x2F)/GRE(proto=0x0800)/IP()', 'IPv6(nh=0x2F)/GRE(proto=0x86DD)/IPv6()').replace('mac_ipv4', 'mac_ipv6'))
                          for element in template]
         return ipv6_template
+
+
+class FdirProcessing(object):
+    def __init__(self, test_case, pmd_output, tester_ifaces, rxq, ipfrag_flag=False):
+        self.test_case = test_case
+        self.pmd_output = pmd_output
+        self.tester_ifaces = tester_ifaces
+        self.logger = test_case.logger
+        self.pkt = Packet()
+        self.rxq = rxq
+        self.verify = self.test_case.verify
+        self.ipfrag_flag = ipfrag_flag
+
+    def send_pkt_get_output(self, pkts, port_id=0, count=1, interval=0, drop=False):
+        tx_port = self.tester_ifaces[0] if port_id == 0 else self.tester_ifaces[1]
+        self.logger.info('----------send packet-------------')
+        self.logger.info('{}'.format(pkts))
+        if drop:
+            self.pmd_output.execute_cmd("clear port stats all")
+            time.sleep(1)
+            if self.ipfrag_flag:
+                send_ipfragment_pkt(self.test_case, pkts, tx_port)
+            else:
+                self.pkt.update_pkt(pkts)
+                self.pkt.send_pkt(crb=self.test_case.tester, tx_port=tx_port, count=count, interval=interval)
+            out = self.pmd_output.execute_cmd("stop")
+            self.pmd_output.execute_cmd("start")
+            return out
+        else:
+            if self.ipfrag_flag:
+                count = 2
+                send_ipfragment_pkt(self.test_case, pkts, tx_port)
+            else:
+                self.pkt.update_pkt(pkts)
+                self.pkt.send_pkt(crb=self.test_case.tester, tx_port=tx_port, count=count, interval=interval)
+            out = self.pmd_output.get_output(timeout=1)
+        pkt_pattern = r'port\s%d/queue\s\d+:\sreceived\s(\d+)\spackets.+?\n.*length=\d{2,}\s' % port_id
+        received_data = re.findall(pkt_pattern, out)
+        received_pkts = sum(map(int, received_data))
+        if isinstance(pkts, list):
+            self.verify(received_pkts == len(pkts) * count,
+                        'expect received %d pkts, but got %d instead' % (len(pkts) * count, received_pkts))
+        else:
+            self.verify(received_pkts == 1 * count,
+                        'expect received %d pkts, but got %d instead' % (1 * count, received_pkts))
+        return out
+
+    def check_rule(self, port_id=0, stats=True, rule_list=None):
+        out = self.pmd_output.execute_cmd("flow list %s" % port_id)
+        p = re.compile(r"ID\s+Group\s+Prio\s+Attr\s+Rule")
+        matched = p.search(out)
+        if stats:
+            self.verify(matched, "flow rule on port %s does not exist" % port_id)
+            if rule_list:
+                p2 = re.compile(r"^(\d+)\s")
+                li = out.splitlines()
+                res = list(filter(bool, list(map(p2.match, li))))
+                result = [i.group(1) for i in res]
+                self.verify(set(rule_list).issubset(set(result)),
+                            "check rule list failed. expect %s, result %s" % (rule_list, result))
+        else:
+            if matched:
+                if rule_list:
+                    res_li = [i.split()[0].strip() for i in out.splitlines() if re.match(r'\d', i)]
+                    self.verify(not set(rule_list).issubset(res_li), 'specified rules should not be in the result')
+                else:
+                    raise Exception('expect no rule listed')
+            else:
+                self.verify(not matched, "flow rule on port %s still exists" % port_id)
+
+    def destroy_rule(self, port_id=0, rule_id=None):
+        if rule_id is None:
+            rule_id = 0
+        if isinstance(rule_id, list):
+            for i in rule_id:
+                out = self.test_case.dut.send_command("flow destroy %s rule %s" % (port_id, i), timeout=1)
+                p = re.compile(r"Flow rule #(\d+) destroyed")
+                m = p.search(out)
+                self.verify(m, "failed to destroy flow rule %s" % i)
+        else:
+            out = self.test_case.dut.send_command("flow destroy %s rule %s" % (port_id, rule_id), timeout=1)
+            p = re.compile(r"Flow rule #(\d+) destroyed")
+            m = p.search(out)
+            self.verify(m, "failed to destroy flow rule %s" % rule_id)
+
+    def create_rule(self, rule: (list, str), check_stats=True, msg=None):
+        p = re.compile(r"Flow rule #(\d+) created")
+        rule_list = list()
+        if isinstance(rule, list):
+            for i in rule:
+                out = self.pmd_output.execute_cmd(i, timeout=1)
+                if msg:
+                    self.verify(msg in out, "failed: expect %s in %s" % (msg, out))
+                m = p.search(out)
+                if m:
+                    rule_list.append(m.group(1))
+                else:
+                    rule_list.append(False)
+        elif isinstance(rule, str):
+            out = self.pmd_output.execute_cmd(rule, timeout=1)
+            if msg:
+                self.verify(msg in out, "failed: expect %s in %s" % (msg, out))
+            m = p.search(out)
+            if m:
+                rule_list.append(m.group(1))
+            else:
+                rule_list.append(False)
+        else:
+            raise Exception("unsupported rule type, only accept list or str")
+        if check_stats:
+            self.verify(all(rule_list), "some rules failed to create, result %s" % rule_list)
+        else:
+            self.verify(not any(rule_list), "all rules should have failed to create, result %s" % rule_list)
+        return rule_list
+
+    def validate_rule(self, rule, check_stats=True, check_msg=None):
+        flag = 'Flow rule validated'
+        if isinstance(rule, str):
+            if 'create' in rule:
+                rule = rule.replace('create', 'validate')
+            out = self.pmd_output.execute_cmd(rule, timeout=1)
+            if check_stats:
+                self.verify(flag in out.strip(), "rule %s validation failed, result %s" % (rule, out))
+            else:
+                if check_msg:
+                    self.verify(flag not in out.strip() and check_msg in out.strip(),
+                                "rule %s validation should fail with msg: %s, but got %s" % (rule, check_msg, out))
+                else:
+                    self.verify(flag not in out.strip(), "rule %s validation should fail, result %s" % (rule, out))
+        elif isinstance(rule, list):
+            for r in rule:
+                if 'create' in r:
+                    r = r.replace('create', 'validate')
+                out = self.pmd_output.execute_cmd(r, timeout=1)
+                if check_stats:
+                    self.verify(flag in out.strip(), "rule %s validation failed, result %s" % (r, out))
+                else:
+                    if check_msg:
+                        self.verify(flag not in out.strip() and check_msg in out.strip(),
+                                    "rule %s validation should fail with msg: %s, but got %s" % (r, check_msg, out))
+                    else:
+                        self.verify(flag not in out.strip(), "rule %s validation should fail, result %s" % (r, out))
+
+    def flow_director_validate(self, vectors):
+        """
+        FDIR test: validate/create rule, check matched and unmatched packets, then destroy the rule
+
+        :param vectors: test vectors
+        """
+        test_results = dict()
+        for tv in vectors:
+            try:
+                self.logger.info("====================sub_case: {}=========================".format(tv["name"]))
+                port_id = tv["check_param"]["port_id"] if tv["check_param"].get("port_id") is not None else 0
+                drop = tv["check_param"].get("drop")
+                # create rule
+                self.test_case.dut.send_expect("flow flush %d" % port_id, "testpmd> ", 120)
+                rule_li = self.create_rule(tv["rule"])
+                # send and check match packets
+                out1 = self.send_pkt_get_output(pkts=tv["scapy_str"]["matched"], port_id=port_id, drop=drop)
+                matched_queue = check_mark(out1, pkt_num=len(tv["scapy_str"]["matched"]) * 2 if self.ipfrag_flag else
+                    len(tv["scapy_str"]["matched"]), check_param=tv["check_param"])
+
+                # send and check unmatched packets
+                out2 = self.send_pkt_get_output(pkts=tv["scapy_str"]["unmatched"], port_id=port_id, drop=drop)
+                check_mark(out2, pkt_num=len(tv["scapy_str"]["unmatched"]) * 2 if self.ipfrag_flag else len(
+                    tv["scapy_str"]["unmatched"]), check_param=tv["check_param"], stats=False)
+
+                # list and destroy rule
+                self.check_rule(port_id=tv["check_param"]["port_id"], rule_list=rule_li)
+                self.destroy_rule(rule_id=rule_li, port_id=port_id)
+                # send matched packet
+                out3 = self.send_pkt_get_output(pkts=tv["scapy_str"]["matched"], port_id=port_id, drop=drop)
+                matched_queue2 = check_mark(out3, pkt_num=len(tv["scapy_str"]["matched"]) * 2 if self.ipfrag_flag else len(
+                    tv["scapy_str"]["matched"]), check_param=tv["check_param"], stats=False)
+                if tv["check_param"].get("rss"):
+                    self.verify(matched_queue == matched_queue2 and None not in matched_queue,
+                                "matched packet sent twice was received in different queues")
+                # check no rule exists
+                self.check_rule(port_id=port_id, stats=False)
+                test_results[tv["name"]] = True
+                self.logger.info((GREEN("case passed: %s" % tv["name"])))
+            except Exception as e:
+                self.logger.warning((RED(e)))
+                self.test_case.dut.send_command("flow flush 0", timeout=1)
+                test_results[tv["name"]] = False
+                self.logger.info((GREEN("case failed: %s" % tv["name"])))
+                continue
+        failed_cases = [k for k, v in test_results.items() if not v]
+        self.verify(all(test_results.values()), "{} failed".format(failed_cases))
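
The receive-count check in send_pkt_get_output scrapes testpmd's verbose output with a regex. A standalone sketch of that parsing follows; the sample output below is illustrative, not captured from a real run:

```python
import re

# Illustrative testpmd verbose output: one "received N packets" line per
# burst, followed by a per-packet line carrying the frame length.
sample_out = (
    "port 0/queue 1: received 2 packets \n"
    "  src=00:11:22:33:44:55 - dst=00:11:22:33:55:66 - type=0x0800 - length=680 - nb_segs=1\n"
    "port 0/queue 1: received 1 packets \n"
    "  src=00:11:22:33:44:55 - dst=00:11:22:33:55:66 - type=0x0800 - length=64 - nb_segs=1\n"
)

port_id = 0
# Same pattern as send_pkt_get_output: capture the packet count for this
# port when the following line reports a length of at least two digits.
pkt_pattern = r'port\s%d/queue\s\d+:\sreceived\s(\d+)\spackets.+?\n.*length=\d{2,}\s' % port_id
counts = re.findall(pkt_pattern, sample_out)
received_pkts = sum(map(int, counts))
print(received_pkts)  # 3
```

Note that with a single capture group, re.findall returns plain strings, so the counts can be summed directly; indexing each match with [0] would only take the first digit of multi-digit counts.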
-- 
2.17.1


^ permalink raw reply	[flat|nested] 6+ messages in thread
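
A note on the hardcoded count = 2 in the fragmented path: the suites send Raw('X'*666) payloads and fragment at 500 bytes, which yields exactly two fragments per packet, assuming scapy rounds the fragment size down to a multiple of 8 (fragment offsets are expressed in 8-byte units). The arithmetic, sketched without scapy:

```python
import math

def expected_fragments(payload_len, fragsize):
    """Number of IP fragments a payload of payload_len bytes splits into,
    with fragsize rounded down to a multiple of 8 as scapy's fragment()
    is assumed to do."""
    fragsize = (fragsize // 8) * 8
    return math.ceil(payload_len / fragsize)

# 666 bytes at fragsize=500 (effective 496) -> two fragments per packet,
# which is why send_pkt_get_output doubles the expected receive count.
print(expected_fragments(666, 500))  # 2
```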

* [dts] [PATCH V2 2/3] tests/cvl_ip_fragment_rte_flow:add new feature test suite
  2021-07-01 13:57 [dts] [PATCH V2 0/3] add new feature suite Zhimin Huang
  2021-07-01 13:57 ` [dts] [PATCH V2 1/3] tests/rte_flow_common:add common to process fdir Zhimin Huang
@ 2021-07-01 13:57 ` Zhimin Huang
  2021-07-01 13:57 ` [dts] [PATCH V2 3/3] tests/cvl_iavf_ip_fragment_rte_flow:add " Zhimin Huang
  2021-07-05  1:14 ` [dts] [PATCH V2 0/3] add new feature suite Fu, Qi
  3 siblings, 0 replies; 6+ messages in thread
From: Zhimin Huang @ 2021-07-01 13:57 UTC (permalink / raw)
  To: dts; +Cc: qi.fu, Zhimin Huang

*.add new feature cvl ip fragment rte flow test suite

Signed-off-by: Zhimin Huang <zhiminx.huang@intel.com>
---
 tests/TestSuite_cvl_ip_fragment_rte_flow.py | 493 ++++++++++++++++++++
 1 file changed, 493 insertions(+)
 create mode 100644 tests/TestSuite_cvl_ip_fragment_rte_flow.py

diff --git a/tests/TestSuite_cvl_ip_fragment_rte_flow.py b/tests/TestSuite_cvl_ip_fragment_rte_flow.py
new file mode 100644
index 00000000..acf8e78e
--- /dev/null
+++ b/tests/TestSuite_cvl_ip_fragment_rte_flow.py
@@ -0,0 +1,493 @@
+# BSD LICENSE
+#
+# Copyright(c) 2021 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+from packet import Packet
+from pmd_output import PmdOutput
+from test_case import TestCase
+from utils import GREEN, RED
+import time
+from scapy.all import *
+import rte_flow_common as rfc
+
+LAUNCH_QUEUE = 16
+
+tv_mac_ipv4_frag_fdir_queue_index = {
+    "name": "tv_mac_ipv4_frag_fdir_queue_index",
+    "rule": "flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions queue index 1 / mark / end",
+    "scapy_str": {"matched": ["Ether()/IP(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether()/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "queue": 1, "mark_id": 0}
+}
+
+tv_mac_ipv4_frag_fdir_rss_queues = {
+    "name": "tv_mac_ipv4_frag_fdir_rss_queues",
+    "rule": ["flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions rss queues 2 3 end / mark / end",
+            "flow create 0 ingress pattern eth / ipv4 / end actions rss types ipv4-frag end key_len 0 queues end / end"],
+    "scapy_str": {"matched": ["Ether()/IP(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether()/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "queue": [2, 3], "mark_id": 0}
+}
+
+tv_mac_ipv4_frag_fdir_passthru = {
+    "name": "tv_mac_ipv4_frag_fdir_passthru",
+    "rule": "flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions passthru / mark / end",
+    "scapy_str": {"matched": ["Ether()/IP(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether()/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "mark_id": 0}
+}
+
+tv_mac_ipv4_frag_fdir_drop = {
+    "name": "tv_mac_ipv4_frag_fdir_drop",
+    "rule": "flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions drop / end",
+    "scapy_str": {"matched": ["Ether()/IP(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether()/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "drop": True}
+}
+
+tv_mac_ipv4_frag_fdir_mark_rss = {
+    "name": "tv_mac_ipv4_frag_fdir_mark_rss",
+    "rule": "flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions mark / rss / end",
+    "scapy_str": {"matched": ["Ether()/IP(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether()/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "mark_id": 0, "rss": True}
+}
+
+tv_mac_ipv4_frag_fdir_mark = {
+    "name": "tv_mac_ipv4_frag_fdir_mark",
+    "rule": "flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions mark id 1 / end",
+    "scapy_str": {"matched": ["Ether()/IP(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether()/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "mark_id": 1}
+}
+
+tvs_mac_ipv4_fragment_fdir = [
+    tv_mac_ipv4_frag_fdir_queue_index,
+    tv_mac_ipv4_frag_fdir_rss_queues,
+    tv_mac_ipv4_frag_fdir_passthru,
+    tv_mac_ipv4_frag_fdir_drop,
+    tv_mac_ipv4_frag_fdir_mark_rss,
+    tv_mac_ipv4_frag_fdir_mark,
+]
+
+tvs_mac_ipv4_fragment_fdir_l2dst = [eval(str(element).replace('mac_ipv4_frag', 'mac_ipv4_frag_l2dst')
+                                                     .replace('eth / ipv4 packet_id', 'eth dst is 00:00:00:00:00:01 / ipv4 packet_id')
+                                                     .replace("Ether()", "Ether(dst='00:00:00:00:00:01')")
+                                         )
+                                    for element in tvs_mac_ipv4_fragment_fdir]
+
+tvs_mac_ipv4_frag_fdir_with_l2 = tvs_mac_ipv4_fragment_fdir_l2dst
+
+tvs_mac_ipv4_fragment_fdir_l3src = [eval(str(element).replace('mac_ipv4_frag', 'mac_ipv4_frag_l3src')
+                                                     .replace('ipv4 packet_id', 'ipv4 src is 192.168.1.1 packet_id')
+                                                     .replace("IP(id=47750)", "IP(id=47750, src='192.168.1.1')"))
+                                    for element in tvs_mac_ipv4_fragment_fdir]
+
+tvs_mac_ipv4_fragment_fdir_l3dst = [eval(str(element).replace('mac_ipv4_frag', 'mac_ipv4_frag_l3dst')
+                                                     .replace('ipv4 packet_id', 'ipv4 dst is 192.168.1.2 packet_id')
+                                                     .replace("IP(id=47750)", "IP(id=47750, dst='192.168.1.2')"))
+                                    for element in tvs_mac_ipv4_fragment_fdir]
+
+tvs_mac_ipv4_frag_fdir_with_l3 = tvs_mac_ipv4_fragment_fdir_l3src + tvs_mac_ipv4_fragment_fdir_l3dst
+
+tv_mac_ipv6_frag_fdir_queue_index = {
+    "name": "tv_mac_ipv6_frag_fdir_queue_index",
+    "rule": "flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffffffff packet_id mask 0xffffffff frag_data spec 0x0001 frag_data last 0xffff frag_data mask 0xffff / end actions queue index 1 / mark / end",
+    "scapy_str": {"matched": ["Ether()/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether()/IP(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "queue": 1, "mark_id": 0}
+}
+
+tv_mac_ipv6_frag_fdir_rss_queues = {
+    "name": "tv_mac_ipv6_frag_fdir_rss_queues",
+    "rule": ["flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffffffff packet_id mask 0xffffffff frag_data spec 0x0001 frag_data last 0xffff frag_data mask 0xffff / end actions rss queues 2 3 end / mark / end",
+            "flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext / end actions rss types ipv6-frag end key_len 0 queues end / end"],
+    "scapy_str": {"matched": ["Ether()/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether()/IP(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "queue": [2, 3], "mark_id": 0}
+}
+
+tv_mac_ipv6_frag_fdir_passthru = {
+    "name": "tv_mac_ipv6_frag_fdir_passthru",
+    "rule": "flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffffffff packet_id mask 0xffffffff frag_data spec 0x0001 frag_data last 0xffff frag_data mask 0xffff / end actions passthru / mark / end",
+    "scapy_str": {"matched": ["Ether()/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether()/IP(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "mark_id": 0}
+}
+
+tv_mac_ipv6_frag_fdir_drop = {
+    "name": "tv_mac_ipv6_frag_fdir_drop",
+    "rule": "flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffffffff packet_id mask 0xffffffff frag_data spec 0x0001 frag_data last 0xffff frag_data mask 0xffff / end actions drop / end",
+    "scapy_str": {"matched": ["Ether()/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether()/IP(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "drop": True}
+}
+
+tv_mac_ipv6_frag_fdir_mark_rss = {
+    "name": "tv_mac_ipv6_frag_fdir_mark_rss",
+    "rule": "flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffffffff packet_id mask 0xffffffff frag_data spec 0x0001 frag_data last 0xffff frag_data mask 0xffff / end actions mark / rss / end",
+    "scapy_str": {"matched": ["Ether()/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether()/IP(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "mark_id": 0, "rss": True}
+}
+
+tv_mac_ipv6_frag_fdir_mark = {
+    "name": "tv_mac_ipv6_frag_fdir_mark",
+    "rule": "flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffffffff packet_id mask 0xffffffff frag_data spec 0x0001 frag_data last 0xffff frag_data mask 0xffff / end actions mark id 1 / end",
+    "scapy_str": {"matched": ["Ether()/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether()/IP(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "mark_id": 1}
+}
+
+tvs_mac_ipv6_fragment_fdir = [
+    tv_mac_ipv6_frag_fdir_queue_index,
+    tv_mac_ipv6_frag_fdir_rss_queues,
+    tv_mac_ipv6_frag_fdir_passthru,
+    tv_mac_ipv6_frag_fdir_drop,
+    tv_mac_ipv6_frag_fdir_mark_rss,
+    tv_mac_ipv6_frag_fdir_mark,
+]
+
+tvs_mac_ipv6_fragment_fdir_l2dst = [eval(str(element).replace('mac_ipv6_frag', 'mac_ipv6_frag_l2dst')
+                                                     .replace('eth / ipv6 / ipv6_frag_ext packet_id', 'eth dst is 00:00:00:00:00:01 / ipv6 / ipv6_frag_ext packet_id')
+                                                     .replace("Ether()", "Ether(dst='00:00:00:00:00:01')")
+                                         )
+                                    for element in tvs_mac_ipv6_fragment_fdir]
+
+tvs_mac_ipv6_frag_fdir_with_l2 = tvs_mac_ipv6_fragment_fdir_l2dst
+
+tvs_mac_ipv6_fragment_fdir_l3src = [eval(str(element).replace('mac_ipv6_frag', 'mac_ipv6_frag_l3src')
+                                                     .replace('/ ipv6 /', '/ ipv6 src is 2001::1 /')
+                                                     .replace("IPv6()", "IPv6(src='2001::1')"))
+                                    for element in tvs_mac_ipv6_fragment_fdir]
+
+tvs_mac_ipv6_fragment_fdir_l3dst = [eval(str(element).replace('mac_ipv6_frag', 'mac_ipv6_frag_l3dst')
+                                                     .replace('/ ipv6 /', '/ ipv6 dst is 2001::2 /')
+                                                     .replace("IPv6()", "IPv6(dst='2001::2')"))
+                                    for element in tvs_mac_ipv6_fragment_fdir]
+
+tvs_mac_ipv6_frag_fdir_with_l3 = tvs_mac_ipv6_fragment_fdir_l3src + tvs_mac_ipv6_fragment_fdir_l3dst
+
+tv_rss_basic_packets = {
+    'ipv4_rss_fragment':
+        "Ether(src='00:11:22:33:44:55', dst='00:11:22:33:55:66')/IP(src='192.168.6.11', dst='10.11.12.13', id=47750)/Raw('X'*666)",
+    'ipv6_rss_fragment':
+        "Ether(src='00:11:22:33:44:55', dst='00:11:22:33:55:66')/IPv6(src='CDCD:910A:2222:5498:8475:1111:3900:1537', dst='CDCD:910A:2222:5498:8475:1111:3900:2020')/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"
+}
+
+tv_mac_ipv4_fragment_rss = {
+    'sub_casename': 'tv_mac_ipv4_fragment_rss',
+    'rule': 'flow create 0 ingress pattern eth / ipv4 / end actions rss types ipv4-frag end key_len 0 queues end / end',
+    'port_id': 0,
+    'test': [
+        {
+            'send_packet': tv_rss_basic_packets['ipv4_rss_fragment'],
+            'action': {'save_hash': 'ipv4'},
+        },
+        {
+            'send_packet': tv_rss_basic_packets['ipv4_rss_fragment'].replace('192.168.6.11', '192.168.6.12'),
+            'action': {'check_hash_different': 'ipv4'},
+        },
+        {
+            'send_packet': tv_rss_basic_packets['ipv4_rss_fragment'].replace('10.11.12.13', '10.11.12.14'),
+            'action': {'check_hash_different': 'ipv4'},
+        },
+        {
+            'send_packet': tv_rss_basic_packets['ipv4_rss_fragment'].replace('id=47750', 'id=47751'),
+            'action': {'check_hash_different': 'ipv4'},
+        },
+        {
+            'send_packet': "Ether()/IPv6()/IPv6ExtHdrFragment(id=47751)/Raw('X'*666)",
+            'action': {'check_no_hash': 'ipv4'},
+        },
+    ],
+    'post-test': [
+        {
+            'send_packet': tv_rss_basic_packets['ipv4_rss_fragment'],
+            'action': {'check_no_hash': 'ipv4'},
+        },
+    ]
+}
+
+tv_mac_ipv6_fragment_rss = {
+    'sub_casename': 'tv_mac_ipv6_fragment_rss',
+    'rule': 'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext / end actions rss types ipv6-frag end key_len 0 queues end / end',
+    'port_id': 0,
+    'test': [
+        {
+            'send_packet': tv_rss_basic_packets['ipv6_rss_fragment'],
+            'action': {'save_hash': 'ipv6'},
+        },
+        {
+            'send_packet': tv_rss_basic_packets['ipv6_rss_fragment'].replace('3900:1537', '3900:1538'),
+            'action': {'check_hash_different': 'ipv6'},
+        },
+        {
+            'send_packet': tv_rss_basic_packets['ipv6_rss_fragment'].replace('3900:2020', '3900:2021'),
+            'action': {'check_hash_different': 'ipv6'},
+        },
+        {
+            'send_packet': tv_rss_basic_packets['ipv6_rss_fragment'].replace('id=47750', 'id=47751'),
+            'action': {'check_hash_different': 'ipv6'},
+        },
+        {
+            'send_packet': "Ether()/IP(id=47750)/Raw('X'*666)",
+            'action': {'check_no_hash': 'ipv6'},
+        },
+    ],
+    'post-test': [
+        {
+            'send_packet': tv_rss_basic_packets['ipv6_rss_fragment'],
+            'action': {'check_no_hash': 'ipv6'},
+        },
+    ]
+}
+
+
+class TestCvlIpFragmentRteFlow(TestCase):
+    def set_up_all(self):
+        self.ports = self.dut.get_ports(self.nic)
+
+        # init pkt
+        self.pkt = Packet()
+        # set default app parameter
+        self.pmd_out = PmdOutput(self.dut)
+        self.tester_mac = self.tester.get_mac(0)
+        self.tester_port0 = self.tester.get_local_port(self.ports[0])
+        self.tester_iface0 = self.tester.get_interface(self.tester_port0)
+
+        self.tester.send_expect('ifconfig {} up'.format(self.tester_iface0), '# ')
+        self.param = '--rxq={} --txq={} --disable-rss --txd=384 --rxd=384'.format(LAUNCH_QUEUE, LAUNCH_QUEUE)
+        self.param_fdir = '--rxq={} --txq={}'.format(LAUNCH_QUEUE, LAUNCH_QUEUE)
+        self.cores = self.dut.get_core_list("1S/4C/1T")
+
+        self.ports_pci = [self.dut.ports_info[self.ports[0]]['pci']]
+
+        self.rssprocess = rfc.RssProcessing(self, self.pmd_out, [self.tester_iface0], LAUNCH_QUEUE, ipfrag_flag=True)
+        self.fdirprocess = rfc.FdirProcessing(self, self.pmd_out, [self.tester_iface0], LAUNCH_QUEUE, ipfrag_flag=True)
+
+    def set_up(self):
+        self.dut.bind_interfaces_linux('vfio-pci')
+
+    def launch_testpmd(self, param_fdir=False):
+        """
+        start testpmd with fdir or rss parameters
+
+        :param param_fdir: True to launch testpmd with the fdir parameters, False with the rss parameters
+        """
+        if param_fdir:
+            self.pmd_out.start_testpmd(cores=self.cores, ports=self.ports_pci, param=self.param_fdir)
+        else:
+            self.pmd_out.start_testpmd(cores=self.cores, ports=self.ports_pci, param=self.param)
+        self.dut.send_expect("set fwd rxonly", "testpmd> ")
+        self.dut.send_expect("set verbose 1", "testpmd> ")
+        self.dut.send_expect("start", "testpmd> ")
+
+    def tear_down(self):
+        self.dut.send_expect("quit", "# ")
+        self.dut.kill_all()
+
+    def tear_down_all(self):
+        self.dut.kill_all()
+
+    def test_mac_ipv4_frag_fdir(self):
+        self.launch_testpmd(param_fdir=True)
+        self.fdirprocess.flow_director_validate(tvs_mac_ipv4_fragment_fdir)
+
+    def test_mac_ipv6_frag_fdir(self):
+        self.launch_testpmd(param_fdir=True)
+        self.fdirprocess.flow_director_validate(tvs_mac_ipv6_fragment_fdir)
+
+    def test_mac_ipv4_frag_fdir_with_l2(self):
+        self.launch_testpmd(param_fdir=True)
+        self.fdirprocess.flow_director_validate(tvs_mac_ipv4_frag_fdir_with_l2)
+
+    def test_mac_ipv4_frag_fdir_with_l3(self):
+        self.launch_testpmd(param_fdir=True)
+        self.fdirprocess.flow_director_validate(tvs_mac_ipv4_frag_fdir_with_l3)
+
+    def test_mac_ipv6_frag_fdir_with_l2(self):
+        self.launch_testpmd(param_fdir=True)
+        self.fdirprocess.flow_director_validate(tvs_mac_ipv6_frag_fdir_with_l2)
+
+    def test_mac_ipv6_frag_fdir_with_l3(self):
+        self.launch_testpmd(param_fdir=True)
+        self.fdirprocess.flow_director_validate(tvs_mac_ipv6_frag_fdir_with_l3)
+
+    def test_mac_ipv4_frag_rss(self):
+        self.launch_testpmd(param_fdir=False)
+        self.rssprocess.handle_rss_distribute_cases(tv_mac_ipv4_fragment_rss)
+
+    def test_mac_ipv6_frag_rss(self):
+        self.launch_testpmd(param_fdir=False)
+        self.rssprocess.handle_rss_distribute_cases(tv_mac_ipv6_fragment_rss)
+
+    def test_exclusive_validation(self):
+        result = True
+        result_list = []
+        rule_list_fdir = [
+            'flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 / ipv4 src is 192.168.0.20 / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions queue index 2 / end']
+        pkt_fdir = "Ether(dst='00:11:22:33:44:55')/IP(src='192.168.0.20', id=47750)/Raw('X'*666)"
+
+        self.logger.info('Subcase 1: exclusive validation fdir rule')
+        self.launch_testpmd(param_fdir=True)
+        try:
+            self.rssprocess.create_rule(rule_list_fdir)
+        except Exception as e:
+            self.logger.warning('Subcase 1 failed: %s' % e)
+            result = False
+        hashes, queues = self.rssprocess.send_pkt_get_hash_queues(pkts=pkt_fdir)
+        for queue in queues:
+            if '0x2' != queue:
+                result = False
+                self.logger.error('Error: queue index {} != 2'.format(queue))
+        result_list.append(result)
+        self.dut.send_expect("quit", "# ")
+        self.logger.info("*********subcase test result %s" % result_list)
+
+        self.logger.info('Subcase 2: exclusive validation fdir rule with reversed rule order')
+        result = True
+        self.launch_testpmd(param_fdir=True)
+        try:
+            self.rssprocess.create_rule(rule_list_fdir[::-1])
+        except Exception as e:
+            self.logger.warning('Subcase 2 failed: %s' % e)
+            result = False
+        hashes, queues = self.rssprocess.send_pkt_get_hash_queues(pkts=pkt_fdir)
+        for queue in queues:
+            if '0x2' != queue:
+                result = False
+                self.logger.error('Error: queue index {} != 2'.format(queue))
+        result_list.append(result)
+        self.dut.send_expect("quit", "# ")
+        self.logger.info("*********subcase test result %s" % result_list)
+
+        self.logger.info('Subcase 3: exclusive validation rss rule')
+        result = True
+        self.launch_testpmd()
+        rule_list_rss = [
+            'flow create 0 ingress pattern eth / ipv4 / end actions rss types ipv4 end key_len 0 queues end / end',
+            'flow create 0 ingress pattern eth / ipv4 / end actions rss types ipv4-frag end key_len 0 queues end / end']
+        pkt_rss = ["Ether()/IP(id=47750)/Raw('X'*666)",
+               "Ether()/IP(id=47751)/Raw('X'*666)"]
+        try:
+            self.rssprocess.create_rule(rule_list_rss)
+        except Exception as e:
+            self.logger.warning('Subcase 3 failed: %s' % e)
+            result = False
+        hashes1, queues1 = self.rssprocess.send_pkt_get_hash_queues(pkts=pkt_rss[0])
+        hashes2, queues2 = self.rssprocess.send_pkt_get_hash_queues(pkts=pkt_rss[1])
+        if hashes1[0] != hashes1[1] and hashes2[0] != hashes2[1]:
+            result = False
+            self.logger.error("hash value is incorrect")
+        if hashes1[0] == hashes2[0]:
+            result = False
+            self.logger.error("hash value is incorrect")
+        result_list.append(result)
+        self.dut.send_expect("quit", "# ")
+        self.logger.info("*********subcase test result %s" % result_list)
+
+        self.logger.info('Subcase 4: exclusive validation rss rule with reversed rule order')
+        result = True
+        self.launch_testpmd()
+        try:
+            self.rssprocess.create_rule(rule_list_rss[::-1])
+        except Exception as e:
+            self.logger.warning('Subcase 4 failed: %s' % e)
+            result = False
+        hashes1, queues1 = self.rssprocess.send_pkt_get_hash_queues(pkts=pkt_rss[0])
+        hashes2, queues2 = self.rssprocess.send_pkt_get_hash_queues(pkts=pkt_rss[1])
+        if hashes1[0] != hashes1[1] and hashes2[0] != hashes2[1]:
+            result = False
+            self.logger.error("hash value is incorrect")
+        if hashes1[0] != hashes2[0]:
+            result = False
+            self.logger.error("hash value is incorrect")
+        result_list.append(result)
+        self.dut.send_expect("quit", "# ")
+        self.logger.info("*********subcase test result %s" % result_list)
+        self.verify(all(result_list) is True, 'sub-case failed {}'.format(result_list))
+
+    def test_negative_case(self):
+        negative_rules = [
+            'flow create 0 ingress pattern eth / ipv6 packet_id is 47750 fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 47750 fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions queue index 300 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 0x10000 fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 47750 fragment_offset spec 0x2 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 47750 fragment_offset spec 0x2000 fragment_offset last 0x1 fragment_offset mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 47750 fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xf / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 47750 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 47750 fragment_offset spec 0x2000 fragment_offset mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 47750 fragment_offset spec 0x2000 fragment_offset last 0x1fff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 0x10000 / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 47750 / end actions queue index 300 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id spec 0xfff packet_id last 0x0 packet_id mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id last 0xffff packet_id mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 / ipv6_frag_ext packet_id is 47750 frag_data spec 0x0001 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id is 47750 frag_data spec 0xfff8 frag_data last 0x0001 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / frag_data spec 0x0001 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id is 47750 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id is 47750 frag_data spec 0x0001 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id is 47750 frag_data spec 0x0001 frag_data last 0xfff8 / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id is 47750 frag_data spec 0x0001 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 300 / end',
+            'flow create 0 ingress pattern eth / ipv4 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff frag_data spec 0x0001 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0xffff packet_id last 0x0 packet_id mask 0xffff frag_data spec 0x0001 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff frag_data spec 0xfff8 frag_data last 0x0001 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / packet_id last 0xffff packet_id mask 0xffff frag_data spec 0x0001 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id mask 0xffff frag_data spec 0x0001 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffff frag_data spec 0x0001 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff frag_data spec 0x0001 frag_data last 0xfff8 / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff frag_data spec 0x0001 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 / ipv6_frag_ext packet_id is 47750 / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id is 0x10000 / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / end actions rss types ipv4-frag end key_len 0 queues end / end',
+            'flow create 0 ingress pattern eth / ipv4 / ipv6_frag_ext / end actions rss types ipv6-frag end key_len 0 queues end / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext / end actions rss types ipv4-frag end key_len 0 queues end / end',
+        ]
+        self.launch_testpmd()
+        self.rssprocess.create_rule(negative_rules, check_stats=False)
\ No newline at end of file
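Many of the negative rules above fail for simple range reasons: the IPv4 identification field is 16 bits wide (so `packet_id is 0x10000` is out of range), and a queue index must be below the configured rxq count (16 in this suite, so `queue index 300` is rejected). A minimal standalone sketch of such bound checks, for illustration only (these helpers are not the PMD's actual validation code):

```python
# Illustrative range checks mirroring two reasons rules above are rejected:
# the IPv4 packet_id field is 16 bits, and a queue index must be < rxq.
LAUNCH_QUEUE = 16

def packet_id_in_range(packet_id: int) -> bool:
    # the IPv4 identification field is 16 bits wide
    return 0 <= packet_id <= 0xFFFF

def queue_index_valid(index: int, rxq: int = LAUNCH_QUEUE) -> bool:
    # a flow rule can only steer to a queue that testpmd configured
    return 0 <= index < rxq

assert packet_id_in_range(47750)
assert not packet_id_in_range(0x10000)   # rejected: exceeds 16 bits
assert queue_index_valid(1)
assert not queue_index_valid(300)        # rejected: only rxq=16 queues exist
```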
-- 
2.17.1


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [dts] [PATCH V2 3/3] tests/cvl_iavf_ip_fragment_rte_flow:add new feature test suite
  2021-07-01 13:57 [dts] [PATCH V2 0/3] add new feature suite Zhimin Huang
  2021-07-01 13:57 ` [dts] [PATCH V2 1/3] tests/rte_flow_common:add common to process fdir Zhimin Huang
  2021-07-01 13:57 ` [dts] [PATCH V2 2/3] tests/cvl_ip_fragment_rte_flow:add new feature test suite Zhimin Huang
@ 2021-07-01 13:57 ` Zhimin Huang
  2021-07-05  1:14 ` [dts] [PATCH V2 0/3] add new feature suite Fu, Qi
  3 siblings, 0 replies; 6+ messages in thread
From: Zhimin Huang @ 2021-07-01 13:57 UTC (permalink / raw)
  To: dts; +Cc: qi.fu, Zhimin Huang

*.add new feature cvl iavf ip fragment rte flow test suite

Signed-off-by: Zhimin Huang <zhiminx.huang@intel.com>
---
 ...TestSuite_cvl_iavf_ip_fragment_rte_flow.py | 498 ++++++++++++++++++
 1 file changed, 498 insertions(+)
 create mode 100644 tests/TestSuite_cvl_iavf_ip_fragment_rte_flow.py

diff --git a/tests/TestSuite_cvl_iavf_ip_fragment_rte_flow.py b/tests/TestSuite_cvl_iavf_ip_fragment_rte_flow.py
new file mode 100644
index 00000000..b0e41a1b
--- /dev/null
+++ b/tests/TestSuite_cvl_iavf_ip_fragment_rte_flow.py
@@ -0,0 +1,498 @@
+# BSD LICENSE
+#
+# Copyright(c) 2021 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+from packet import Packet
+from pmd_output import PmdOutput
+from test_case import TestCase
+import re
+from utils import GREEN, RED
+import time
+from scapy.all import *
+import rte_flow_common as rfc
+
+LAUNCH_QUEUE = 16
+
+tv_mac_ipv4_frag_fdir_queue_index = {
+    "name": "tv_mac_ipv4_frag_fdir_queue_index",
+    "rule": "flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions queue index 1 / mark / end",
+    "scapy_str": {"matched": ["Ether(dst='00:11:22:33:55:66')/IP(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether(dst='00:11:22:33:55:66')/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "queue": 1, "mark_id": 0}
+}
+
+tv_mac_ipv4_frag_fdir_rss_queues = {
+    "name": "tv_mac_ipv4_frag_fdir_rss_queues",
+    "rule": ["flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions rss queues 2 3 end / mark / end",
+            "flow create 0 ingress pattern eth / ipv4 / end actions rss types ipv4-frag end key_len 0 queues end / end"],
+    "scapy_str": {"matched": ["Ether(dst='00:11:22:33:55:66')/IP(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether(dst='00:11:22:33:55:66')/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "queue": [2, 3], "mark_id": 0}
+}
+
+tv_mac_ipv4_frag_fdir_passthru = {
+    "name": "tv_mac_ipv4_frag_fdir_passthru",
+    "rule": "flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions passthru / mark / end",
+    "scapy_str": {"matched": ["Ether(dst='00:11:22:33:55:66')/IP(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether(dst='00:11:22:33:55:66')/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "mark_id": 0}
+}
+
+tv_mac_ipv4_frag_fdir_drop = {
+    "name": "tv_mac_ipv4_frag_fdir_drop",
+    "rule": "flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions drop / end",
+    "scapy_str": {"matched": ["Ether(dst='00:11:22:33:55:66')/IP(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether(dst='00:11:22:33:55:66')/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "drop": True}
+}
+
+tv_mac_ipv4_frag_fdir_mark_rss = {
+    "name": "tv_mac_ipv4_frag_fdir_mark_rss",
+    "rule": "flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions mark / rss / end",
+    "scapy_str": {"matched": ["Ether(dst='00:11:22:33:55:66')/IP(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether(dst='00:11:22:33:55:66')/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "mark_id": 0, "rss": True}
+}
+
+tv_mac_ipv4_frag_fdir_mark = {
+    "name": "tv_mac_ipv4_frag_fdir_mark",
+    "rule": "flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions mark id 1 / end",
+    "scapy_str": {"matched": ["Ether(dst='00:11:22:33:55:66')/IP(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether(dst='00:11:22:33:55:66')/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "mark_id": 1}
+}
+
+tvs_mac_ipv4_fragment_fdir = [
+    tv_mac_ipv4_frag_fdir_queue_index,
+    tv_mac_ipv4_frag_fdir_rss_queues,
+    tv_mac_ipv4_frag_fdir_passthru,
+    tv_mac_ipv4_frag_fdir_drop,
+    tv_mac_ipv4_frag_fdir_mark_rss,
+    tv_mac_ipv4_frag_fdir_mark,
+]
+
+tvs_mac_ipv4_fragment_fdir_l3src = [eval(str(element).replace('mac_ipv4_frag', 'mac_ipv4_frag_l3src')
+                                                     .replace('ipv4 packet_id', 'ipv4 src is 192.168.1.1 packet_id')
+                                                     .replace("IP(id=47750)", "IP(id=47750, src='192.168.1.1')"))
+                                    for element in tvs_mac_ipv4_fragment_fdir]
+
+tvs_mac_ipv4_fragment_fdir_l3dst = [eval(str(element).replace('mac_ipv4_frag', 'mac_ipv4_frag_l3dst')
+                                                     .replace('ipv4 packet_id', 'ipv4 dst is 192.168.1.2 packet_id')
+                                                     .replace("IP(id=47750)", "IP(id=47750, dst='192.168.1.2')"))
+                                    for element in tvs_mac_ipv4_fragment_fdir]
+
+tvs_mac_ipv4_frag_fdir_with_l3 = tvs_mac_ipv4_fragment_fdir_l3src + tvs_mac_ipv4_fragment_fdir_l3dst
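The l3src/l3dst vectors above are derived textually rather than copied by hand: each base vector is serialized with `str()`, patched with `replace()`, and rebuilt with `eval()`, yielding an independent dict. A standalone sketch of the same pattern (the dict below is a trimmed stand-in, not one of the suite's actual vectors):

```python
# Sketch of the str()/replace()/eval() derivation used for the *_l3src vectors:
# serialize the base dict, patch the name, rule, and scapy packet textually,
# then eval() the result back into a new, independent dict.
base_tv = {
    "name": "tv_mac_ipv4_frag_fdir_queue_index",
    "rule": "flow create 0 ingress pattern eth / ipv4 packet_id spec 0 / end actions queue index 1 / end",
    "scapy_str": {"matched": ["Ether()/IP(id=47750)/Raw('X'*666)"]},
}

derived_tv = eval(str(base_tv)
                  .replace('mac_ipv4_frag', 'mac_ipv4_frag_l3src')
                  .replace('ipv4 packet_id', 'ipv4 src is 192.168.1.1 packet_id')
                  .replace("IP(id=47750)", "IP(id=47750, src='192.168.1.1')"))

# the derived vector is a new dict; the base vector is untouched
assert derived_tv is not base_tv
assert base_tv["name"] == "tv_mac_ipv4_frag_fdir_queue_index"
```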
+
+tv_mac_ipv6_frag_fdir_queue_index = {
+    "name": "tv_mac_ipv6_frag_fdir_queue_index",
+    "rule": "flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffffffff packet_id mask 0xffffffff frag_data spec 0x0001 frag_data last 0xffff frag_data mask 0xffff / end actions queue index 1 / mark / end",
+    "scapy_str": {"matched": ["Ether(dst='00:11:22:33:55:66')/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether(dst='00:11:22:33:55:66')/IP(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "queue": 1, "mark_id": 0}
+}
+
+tv_mac_ipv6_frag_fdir_rss_queues = {
+    "name": "tv_mac_ipv6_frag_fdir_rss_queues",
+    "rule": ["flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffffffff packet_id mask 0xffffffff frag_data spec 0x0001 frag_data last 0xffff frag_data mask 0xffff / end actions rss queues 2 3 end / mark / end",
+            "flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext / end actions rss types ipv6-frag end key_len 0 queues end / end"],
+    "scapy_str": {"matched": ["Ether(dst='00:11:22:33:55:66')/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether(dst='00:11:22:33:55:66')/IP(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "queue": [2, 3], "mark_id": 0}
+}
+
+tv_mac_ipv6_frag_fdir_passthru = {
+    "name": "tv_mac_ipv6_frag_fdir_passthru",
+    "rule": "flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffffffff packet_id mask 0xffffffff frag_data spec 0x0001 frag_data last 0xffff frag_data mask 0xffff / end actions passthru / mark / end",
+    "scapy_str": {"matched": ["Ether(dst='00:11:22:33:55:66')/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether(dst='00:11:22:33:55:66')/IP(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "mark_id": 0}
+}
+
+tv_mac_ipv6_frag_fdir_drop = {
+    "name": "tv_mac_ipv6_frag_fdir_drop",
+    "rule": "flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffffffff packet_id mask 0xffffffff frag_data spec 0x0001 frag_data last 0xffff frag_data mask 0xffff / end actions drop / end",
+    "scapy_str": {"matched": ["Ether(dst='00:11:22:33:55:66')/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether(dst='00:11:22:33:55:66')/IP(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "drop": True}
+}
+
+tv_mac_ipv6_frag_fdir_mark_rss = {
+    "name": "tv_mac_ipv6_frag_fdir_mark_rss",
+    "rule": "flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffffffff packet_id mask 0xffffffff frag_data spec 0x0001 frag_data last 0xffff frag_data mask 0xffff / end actions mark / rss / end",
+    "scapy_str": {"matched": ["Ether(dst='00:11:22:33:55:66')/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether(dst='00:11:22:33:55:66')/IP(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "mark_id": 0, "rss": True}
+}
+
+tv_mac_ipv6_frag_fdir_mark = {
+    "name": "tv_mac_ipv6_frag_fdir_mark",
+    "rule": "flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffffffff packet_id mask 0xffffffff frag_data spec 0x0001 frag_data last 0xffff frag_data mask 0xffff / end actions mark id 1 / end",
+    "scapy_str": {"matched": ["Ether(dst='00:11:22:33:55:66')/IPv6()/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"],
+                  "unmatched": ["Ether(dst='00:11:22:33:55:66')/IP(id=47750)/Raw('X'*666)"]
+                  },
+    "check_param": {"port_id": 0, "rxq": LAUNCH_QUEUE, "mark_id": 1}
+}
+
+tvs_mac_ipv6_fragment_fdir = [
+    tv_mac_ipv6_frag_fdir_queue_index,
+    tv_mac_ipv6_frag_fdir_rss_queues,
+    tv_mac_ipv6_frag_fdir_passthru,
+    tv_mac_ipv6_frag_fdir_drop,
+    tv_mac_ipv6_frag_fdir_mark_rss,
+    tv_mac_ipv6_frag_fdir_mark,
+]
+
+tvs_mac_ipv6_fragment_fdir_l3src = [eval(str(element).replace('mac_ipv6_frag', 'mac_ipv6_frag_l3src')
+                                                     .replace('/ ipv6 /', '/ ipv6 src is 2001::1 /')
+                                                     .replace("IPv6()", "IPv6(src='2001::1')"))
+                                    for element in tvs_mac_ipv6_fragment_fdir]
+
+tvs_mac_ipv6_fragment_fdir_l3dst = [eval(str(element).replace('mac_ipv6_frag', 'mac_ipv6_frag_l3dst')
+                                                     .replace('/ ipv6 /', '/ ipv6 dst is 2001::2 /')
+                                                     .replace("IPv6()", "IPv6(dst='2001::2')"))
+                                    for element in tvs_mac_ipv6_fragment_fdir]
+
+tvs_mac_ipv6_frag_fdir_with_l3 = tvs_mac_ipv6_fragment_fdir_l3src + tvs_mac_ipv6_fragment_fdir_l3dst
+
+tv_rss_basic_packets = {
+    'ipv4_rss_fragment':
+        "Ether(src='00:11:22:33:44:55', dst='00:11:22:33:55:66')/IP(src='192.168.6.11', dst='10.11.12.13', id=47750)/Raw('X'*666)",
+    'ipv6_rss_fragment':
+        "Ether(src='00:11:22:33:44:55', dst='00:11:22:33:55:66')/IPv6(src='CDCD:910A:2222:5498:8475:1111:3900:1537', dst='CDCD:910A:2222:5498:8475:1111:3900:2020')/IPv6ExtHdrFragment(id=47750)/Raw('X'*666)"
+}
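The RSS sub-cases below vary exactly one header field of the base packet string per step, so a changed hash can be attributed to that field. A minimal standalone sketch of that derivation (the names here are illustrative):

```python
# Each RSS variant changes exactly one header field of the base packet
# string, so a hash difference isolates that field's contribution.
base_pkt = ("Ether(src='00:11:22:33:44:55', dst='00:11:22:33:55:66')/"
            "IP(src='192.168.6.11', dst='10.11.12.13', id=47750)/Raw('X'*666)")

field_changes = [
    ('192.168.6.11', '192.168.6.12'),   # vary only the source IP
    ('10.11.12.13', '10.11.12.14'),     # vary only the destination IP
    ('id=47750', 'id=47751'),           # vary only the IP packet id
]

variants = [base_pkt.replace(old, new) for old, new in field_changes]

# every variant differs from the base in exactly the field it changed
for (old, new), pkt in zip(field_changes, variants):
    assert old not in pkt and new in pkt
    assert pkt != base_pkt
```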
+
+tv_mac_ipv4_fragment_rss = {
+    'sub_casename': 'tv_mac_ipv4_fragment_rss',
+    'rule': 'flow create 0 ingress pattern eth / ipv4 / end actions rss types ipv4-frag end key_len 0 queues end / end',
+    'port_id': 0,
+    'test': [
+        {
+            'send_packet': tv_rss_basic_packets['ipv4_rss_fragment'],
+            'action': {'save_hash': 'ipv4'},
+        },
+        {
+            'send_packet': tv_rss_basic_packets['ipv4_rss_fragment'].replace('192.168.6.11', '192.168.6.12'),
+            'action': {'check_hash_different': 'ipv4'},
+        },
+        {
+            'send_packet': tv_rss_basic_packets['ipv4_rss_fragment'].replace('10.11.12.13', '10.11.12.14'),
+            'action': {'check_hash_different': 'ipv4'},
+        },
+        {
+            'send_packet': tv_rss_basic_packets['ipv4_rss_fragment'].replace('id=47750', 'id=47751'),
+            'action': {'check_hash_different': 'ipv4'},
+        },
+        {
+            'send_packet': "Ether()/IPv6()/IPv6ExtHdrFragment(id=47751)/Raw('X'*666)",
+            'action': {'check_no_hash': 'ipv4'},
+        },
+    ],
+    'post-test': [
+        {
+            'send_packet': tv_rss_basic_packets['ipv4_rss_fragment'],
+            'action': {'check_no_hash': 'ipv4'},
+        },
+    ]
+}
+
+tv_mac_ipv6_fragment_rss = {
+    'sub_casename': 'tv_mac_ipv6_fragment_rss',
+    'rule': 'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext / end actions rss types ipv6-frag end key_len 0 queues end / end',
+    'port_id': 0,
+    'test': [
+        {
+            'send_packet': tv_rss_basic_packets['ipv6_rss_fragment'],
+            'action': {'save_hash': 'ipv6'},
+        },
+        {
+            'send_packet': tv_rss_basic_packets['ipv6_rss_fragment'].replace('3900:1537', '3900:1538'),
+            'action': {'check_hash_different': 'ipv6'},
+        },
+        {
+            'send_packet': tv_rss_basic_packets['ipv6_rss_fragment'].replace('3900:2020', '3900:2021'),
+            'action': {'check_hash_different': 'ipv6'},
+        },
+        {
+            'send_packet': tv_rss_basic_packets['ipv6_rss_fragment'].replace('id=47750', 'id=47751'),
+            'action': {'check_hash_different': 'ipv6'},
+        },
+        {
+            'send_packet': "Ether()/IP(id=47750)/Raw('X'*666)",
+            'action': {'check_no_hash': 'ipv6'},
+        },
+    ],
+    'post-test': [
+        {
+            'send_packet': tv_rss_basic_packets['ipv6_rss_fragment'],
+            'action': {'check_no_hash': 'ipv6'},
+        },
+    ]
+}
+
+
+class TestCvlIavfIpFragmentRteFlow(TestCase):
+    def set_up_all(self):
+        self.ports = self.dut.get_ports(self.nic)
+
+        # init pkt
+        self.pkt = Packet()
+        # set default app parameter
+        self.pmd_out = PmdOutput(self.dut)
+        self.tester_mac = self.tester.get_mac(0)
+        self.tester_port0 = self.tester.get_local_port(self.ports[0])
+        self.tester_iface0 = self.tester.get_interface(self.tester_port0)
+
+        self.tester.send_expect('ifconfig {} up'.format(self.tester_iface0), '# ')
+        self.param = '--rxq={} --txq={} --disable-rss --txd=384 --rxd=384'.format(LAUNCH_QUEUE, LAUNCH_QUEUE)
+        self.param_fdir = '--rxq={} --txq={}'.format(LAUNCH_QUEUE, LAUNCH_QUEUE)
+        self.cores = self.dut.get_core_list("1S/4C/1T")
+        self.setup_1pf_vfs_env()
+
+        self.ports_pci = [self.dut.ports_info[self.ports[0]]['pci']]
+
+        self.rssprocess = rfc.RssProcessing(self, self.pmd_out, [self.tester_iface0], LAUNCH_QUEUE, ipfrag_flag=True)
+        self.fdirprocess = rfc.FdirProcessing(self, self.pmd_out, [self.tester_iface0], LAUNCH_QUEUE, ipfrag_flag=True)
+
+    def set_up(self):
+        pass
+
+    def setup_1pf_vfs_env(self):
+        """
+        create vf and set vf mac
+        """
+        self.dut.bind_interfaces_linux('ice')
+        self.pf_interface = self.dut.ports_info[0]['intf']
+        self.dut.send_expect("ifconfig {} up".format(self.pf_interface), "# ")
+        self.dut.generate_sriov_vfs_by_port(self.ports[0], 1, driver=self.kdriver)
+        self.dut.send_expect('ip link set {} vf 0 mac 00:11:22:33:55:66'.format(self.pf_interface), '# ')
+        self.vf_port = self.dut.ports_info[0]['vfs_port']
+        self.verify(len(self.vf_port) != 0, "VF create failed")
+        self.vf_driver = self.get_suite_cfg()['vf_driver']
+        if self.vf_driver is None:
+            self.vf_assign_method = 'vfio-pci'
+        self.vf_port[0].bind_driver(self.vf_driver)
+
+        self.vf_ports_pci = [self.vf_port[0].pci]
+
+    def launch_testpmd(self, param_fdir=False):
+        """
+        start testpmd on the vf with either the fdir or the rss parameter set
+
+        :param param_fdir: True to launch with the fdir parameters, False to launch with the rss parameters
+        """
+        param = self.param_fdir if param_fdir else self.param
+        self.pmd_out.start_testpmd(cores=self.cores, ports=self.vf_ports_pci, param=param)
+        self.dut.send_expect("set fwd rxonly", "testpmd> ")
+        self.dut.send_expect("set verbose 1", "testpmd> ")
+        self.dut.send_expect("start", "testpmd> ")
+
+    def destroy_testpmd_and_vf(self):
+        """
+        destroy the vfs created on each port
+        """
+        for port_id in self.ports:
+            self.dut.destroy_sriov_vfs_by_port(port_id)
+
+    def tear_down(self):
+        self.dut.send_expect("quit", "# ")
+        self.dut.kill_all()
+
+    def tear_down_all(self):
+        self.destroy_testpmd_and_vf()
+        self.dut.kill_all()
+
+    def test_iavf_mac_ipv4_frag_fdir(self):
+        self.launch_testpmd(param_fdir=True)
+        self.fdirprocess.flow_director_validate(tvs_mac_ipv4_fragment_fdir)
+
+    def test_iavf_mac_ipv6_frag_fdir(self):
+        self.launch_testpmd(param_fdir=True)
+        self.fdirprocess.flow_director_validate(tvs_mac_ipv6_fragment_fdir)
+
+    def test_iavf_mac_ipv4_frag_fdir_with_l3(self):
+        self.launch_testpmd(param_fdir=True)
+        self.fdirprocess.flow_director_validate(tvs_mac_ipv4_frag_fdir_with_l3)
+
+    def test_iavf_mac_ipv6_frag_fdir_with_l3(self):
+        self.launch_testpmd(param_fdir=True)
+        self.fdirprocess.flow_director_validate(tvs_mac_ipv6_frag_fdir_with_l3)
+
+    def test_iavf_mac_ipv4_frag_rss(self):
+        self.launch_testpmd(param_fdir=False)
+        self.rssprocess.handle_rss_distribute_cases(tv_mac_ipv4_fragment_rss)
+
+    def test_iavf_mac_ipv6_frag_rss(self):
+        self.launch_testpmd(param_fdir=False)
+        self.rssprocess.handle_rss_distribute_cases(tv_mac_ipv6_fragment_rss)
+
+    def test_exclusive_validation(self):
+        result = True
+        result_list = []
+        rule_list_fdir = [
+            'flow create 0 ingress pattern eth / ipv4 src is 192.168.0.20 / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions queue index 2 / end']
+        pkt_fdir = "Ether()/IP(src='192.168.0.20', id=47750)/Raw('X'*666)"
+
+        self.logger.info('Subcase 1: exclusive validation fdir rule')
+        self.launch_testpmd(param_fdir=True)
+        try:
+            self.rssprocess.create_rule(rule_list_fdir)
+        except Exception as e:
+            self.logger.warning('Subcase 1 failed: %s' % e)
+            result = False
+        hashes, queues = self.rssprocess.send_pkt_get_hash_queues(pkts=pkt_fdir)
+        for queue in queues:
+            if '0x2' != queue:
+                result = False
+                self.logger.error('Error: queue index {} != 2'.format(queue))
+        result_list.append(result)
+        self.dut.send_expect("quit", "# ")
+        self.logger.info("*********subcase test result %s" % result_list)
+
+        self.logger.info('Subcase 2: exclusive validation fdir rule with reversed rule order')
+        result = True
+        self.launch_testpmd(param_fdir=True)
+        try:
+            self.rssprocess.create_rule(rule_list_fdir[::-1])
+        except Exception as e:
+            self.logger.warning('Subcase 2 failed: %s' % e)
+            result = False
+        hashes, queues = self.rssprocess.send_pkt_get_hash_queues(pkts=pkt_fdir)
+        for queue in queues:
+            if '0x2' != queue:
+                result = False
+                self.logger.error('Error: queue index {} != 2'.format(queue))
+        result_list.append(result)
+        self.dut.send_expect("quit", "# ")
+        self.logger.info("*********subcase test result %s" % result_list)
+
+        self.logger.info('Subcase 3: exclusive validation rss rule')
+        result = True
+        self.launch_testpmd()
+        rule_list_rss = [
+            'flow create 0 ingress pattern eth / ipv4 / end actions rss types ipv4 end key_len 0 queues end / end',
+            'flow create 0 ingress pattern eth / ipv4 / end actions rss types ipv4-frag end key_len 0 queues end / end']
+        pkt_Rss = ["Ether()/IP(id=47750)/Raw('X'*666)",
+                   "Ether()/IP(id=47751)/Raw('X'*666)"]
+        try:
+            self.rssprocess.create_rule(rule_list_rss)
+        except Exception as e:
+            self.logger.warning('Subcase 3 failed: %s' % e)
+            result = False
+        hashes1, queues1 = self.rssprocess.send_pkt_get_hash_queues(pkts=pkt_Rss[0])
+        hashes2, queues2 = self.rssprocess.send_pkt_get_hash_queues(pkts=pkt_Rss[1])
+        if hashes1[0] != hashes1[1] and hashes2[0] != hashes2[1]:
+            result = False
+            self.logger.error("hash value is incorrect")
+        if hashes1[0] == hashes2[0]:
+            result = False
+            self.logger.error("hash value is incorrect")
+        result_list.append(result)
+        self.dut.send_expect("quit", "# ")
+        self.logger.info("*********subcase test result %s" % result_list)
+
+        self.logger.info('Subcase 4: exclusive validation rss rule with reversed rule order')
+        result = True
+        self.launch_testpmd()
+        try:
+            self.rssprocess.create_rule(rule_list_rss[::-1])
+        except Exception as e:
+            self.logger.warning('Subcase 4 failed: %s' % e)
+            result = False
+        hashes1, queues1 = self.rssprocess.send_pkt_get_hash_queues(pkts=pkt_Rss[0])
+        hashes2, queues2 = self.rssprocess.send_pkt_get_hash_queues(pkts=pkt_Rss[1])
+        if hashes1[0] != hashes1[1] and hashes2[0] != hashes2[1]:
+            result = False
+            self.logger.error("hash value is incorrect")
+        if hashes1[0] != hashes2[0]:
+            result = False
+            self.logger.error("hash value is incorrect")
+        result_list.append(result)
+        self.dut.send_expect("quit", "# ")
+        self.logger.info("*********subcase test result %s" % result_list)
+        self.verify(all(result_list), 'sub-case failed {}'.format(result_list))
+
+    def test_negative_case(self):
+        negative_rules = [
+            'flow create 0 ingress pattern eth / ipv6 packet_id is 47750 fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 47750 fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions queue index 300 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 0x10000 fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 47750 fragment_offset spec 0x2 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 47750 fragment_offset spec 0x2000 fragment_offset last 0x1 fragment_offset mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 47750 fragment_offset spec 0x2000 fragment_offset last 0x1fff fragment_offset mask 0xf / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 47750 fragment_offset last 0x1fff fragment_offset mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 47750 fragment_offset spec 0x2000 fragment_offset mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 47750 fragment_offset spec 0x2000 fragment_offset last 0x1fff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 0x10000 / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id is 47750 / end actions queue index 300 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id spec 0xfff packet_id last 0x0 packet_id mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id last 0xffff packet_id mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 / ipv6_frag_ext packet_id is 47750 frag_data spec 0x0001 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id is 47750 frag_data spec 0xfff8 frag_data last 0x0001 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / frag_data spec 0x0001 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id is 47750 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id is 47750 frag_data spec 0x0001 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id is 47750 frag_data spec 0x0001 frag_data last 0xfff8 / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id is 47750 frag_data spec 0x0001 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 300 / end',
+            'flow create 0 ingress pattern eth / ipv4 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff frag_data spec 0x0001 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0xffff packet_id last 0x0 packet_id mask 0xffff frag_data spec 0x0001 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff frag_data spec 0xfff8 frag_data last 0x0001 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / packet_id last 0xffff packet_id mask 0xffff frag_data spec 0x0001 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id mask 0xffff frag_data spec 0x0001 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffff frag_data spec 0x0001 frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff frag_data last 0xfff8 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff frag_data spec 0x0001 frag_data last 0xfff8 / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff frag_data spec 0x0001 frag_data mask 0xffff / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv4 / ipv6_frag_ext packet_id is 47750 / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id is 0x10000 / end actions queue index 1 / end',
+            'flow create 0 ingress pattern eth / ipv6 / end actions rss types ipv4-frag end key_len 0 queues end / end',
+            'flow create 0 ingress pattern eth / ipv4 / ipv6_frag_ext / end actions rss types ipv6-frag end key_len 0 queues end / end',
+            'flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext / end actions rss types ipv4-frag end key_len 0 queues end / end',
+        ]
+        self.launch_testpmd()
+        self.rssprocess.create_rule(negative_rules, check_stats=False)
\ No newline at end of file
-- 
2.17.1



* Re: [dts] [PATCH V2 0/3] add new feature suite
  2021-07-01 13:57 [dts] [PATCH V2 0/3] add new feature suite Zhimin Huang
                   ` (2 preceding siblings ...)
  2021-07-01 13:57 ` [dts] [PATCH V2 3/3] tests/cvl_iavf_ip_fragment_rte_flow:add " Zhimin Huang
@ 2021-07-05  1:14 ` Fu, Qi
  2021-07-26  6:22   ` Tu, Lijuan
  3 siblings, 1 reply; 6+ messages in thread
From: Fu, Qi @ 2021-07-05  1:14 UTC (permalink / raw)
  To: Huang, ZhiminX, dts


> -----Original Message-----
> From: Huang, ZhiminX <zhiminx.huang@intel.com>
> Sent: Thursday, July 1, 2021 9:58 PM
> To: dts@dpdk.org
> Cc: Fu, Qi <qi.fu@intel.com>; Huang, ZhiminX <zhiminx.huang@intel.com>
> Subject: [dts] [PATCH V2 0/3] add new feature suite
> 
> add rte flow process
> add new feature test suite
> 
> Zhimin Huang (3):
>   tests/rte_flow_common:add common to process fdir
>   tests/cvl_ip_fragment_rte_flow:add new feature test suite
>   tests/cvl_iavf_ip_fragment_rte_flow:add new feature test suite
> 

Acked-by: Fu, Qi <qi.fu@intel.com>


* Re: [dts] [PATCH V2 0/3] add new feature suite
  2021-07-05  1:14 ` [dts] [PATCH V2 0/3] add new feature suite Fu, Qi
@ 2021-07-26  6:22   ` Tu, Lijuan
  0 siblings, 0 replies; 6+ messages in thread
From: Tu, Lijuan @ 2021-07-26  6:22 UTC (permalink / raw)
  To: Fu, Qi, Huang, ZhiminX, dts



> -----Original Message-----
> From: dts <dts-bounces@dpdk.org> On Behalf Of Fu, Qi
> Sent: July 5, 2021 9:14
> To: Huang, ZhiminX <zhiminx.huang@intel.com>; dts@dpdk.org
> Subject: Re: [dts] [PATCH V2 0/3] add new feature suite
> 
> 
> > -----Original Message-----
> > From: Huang, ZhiminX <zhiminx.huang@intel.com>
> > Sent: Thursday, July 1, 2021 9:58 PM
> > To: dts@dpdk.org
> > Cc: Fu, Qi <qi.fu@intel.com>; Huang, ZhiminX <zhiminx.huang@intel.com>
> > Subject: [dts] [PATCH V2 0/3] add new feature suite
> >
> > add rte flow process
> > add new feature test suite
> >
> > Zhimin Huang (3):
> >   tests/rte_flow_common:add common to process fdir
> >   tests/cvl_ip_fragment_rte_flow:add new feature test suite
> >   tests/cvl_iavf_ip_fragment_rte_flow:add new feature test suite
> >
> 
> Acked-by: Fu, Qi <qi.fu@intel.com>

applied


end of thread, other threads:[~2021-07-26  6:22 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-01 13:57 [dts] [PATCH V2 0/3] add new feature suite Zhimin Huang
2021-07-01 13:57 ` [dts] [PATCH V2 1/3] tests/rte_flow_common:add common to process fdir Zhimin Huang
2021-07-01 13:57 ` [dts] [PATCH V2 2/3] tests/cvl_ip_fragment_rte_flow:add new feature test suite Zhimin Huang
2021-07-01 13:57 ` [dts] [PATCH V2 3/3] tests/cvl_iavf_ip_fragment_rte_flow:add " Zhimin Huang
2021-07-05  1:14 ` [dts] [PATCH V2 0/3] add new feature suite Fu, Qi
2021-07-26  6:22   ` Tu, Lijuan
