test suite reviews and discussions
* [dts] [PATCH V1 0/3] generic_flow_api: add two test cases
@ 2021-08-25 18:30 Lingli Chen
  2021-08-25 18:30 ` [dts] [PATCH V1 1/3] conf/test_case_checklist: add two cases about nic not support Lingli Chen
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Lingli Chen @ 2021-08-25 18:30 UTC (permalink / raw)
  To: dts; +Cc: Lingli Chen

Lingli Chen (3):
  conf/test_case_checklist: add two cases about nic not support
  test_plans/generic_flow_api: add two test cases
  tests/generic_flow_api: add two test cases

 conf/test_case_checklist.json             | 38 ++++++++++++++++++
 test_plans/generic_flow_api_test_plan.rst | 46 ++++++++++++++++++++++
 tests/TestSuite_generic_flow_api.py       | 48 +++++++++++++++++++++++
 3 files changed, 132 insertions(+)

-- 
2.32.0


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [dts] [PATCH V1 1/3] conf/test_case_checklist: add two cases about nic not support
  2021-08-25 18:30 [dts] [PATCH V1 0/3] generic_flow_api: add two test cases Lingli Chen
@ 2021-08-25 18:30 ` Lingli Chen
  2021-08-25 18:30 ` [dts] [PATCH V1 2/3] test_plans/generic_flow_api: add two test cases Lingli Chen
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Lingli Chen @ 2021-08-25 18:30 UTC (permalink / raw)
  To: dts; +Cc: Lingli Chen

Add checklist entries for two new cases that some NICs do not support.

Signed-off-by: Lingli Chen <linglix.chen@intel.com>
---
 conf/test_case_checklist.json | 38 +++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/conf/test_case_checklist.json b/conf/test_case_checklist.json
index 4726aac3..8cd47dd1 100644
--- a/conf/test_case_checklist.json
+++ b/conf/test_case_checklist.json
@@ -3689,5 +3689,43 @@
              "Bug ID": "",
              "Comments": "NIC not support this case"
          }
+    ],
+    "create_same_rule_after_destroy": [
+         {
+             "OS": [
+                 "ALL"
+             ],
+             "NIC": [
+                 "sageville",
+                 "sagepond",
+                 "springville",
+                 "powerville",
+                 "foxville"
+             ],
+             "Target": [
+                 "ALL"
+             ],
+             "Bug ID": "",
+             "Comments": "NIC not support this case"
+         }
+    ],
+    "create_different_rule_after_destroy": [
+         {
+             "OS": [
+                 "ALL"
+             ],
+             "NIC": [
+                 "sageville",
+                 "sagepond",
+                 "springville",
+                 "powerville",
+                 "foxville"
+             ],
+             "Target": [
+                 "ALL"
+             ],
+             "Bug ID": "",
+             "Comments": "NIC not support this case"
+         }
     ]
 }
-- 
2.32.0
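The checklist entries in patch 1 are consulted by the DTS framework to skip cases on unsupported NICs; a simplified, stdlib-only sketch of that lookup follows. This is illustrative only, not the actual DTS implementation, and `case_supported` is a hypothetical helper name.

```python
# Simplified sketch of consulting a test_case_checklist.json entry.
# Not the DTS framework's real logic; illustrative only.
checklist = {
    "create_same_rule_after_destroy": [
        {
            "OS": ["ALL"],
            "NIC": ["sageville", "sagepond", "springville",
                    "powerville", "foxville"],
            "Target": ["ALL"],
            "Bug ID": "",
            "Comments": "NIC not support this case",
        },
    ],
}

def case_supported(case: str, nic: str, os_name: str = "linux") -> bool:
    """Return False if a checklist entry blocks this case on the given NIC/OS."""
    for entry in checklist.get(case, []):
        nic_hit = "ALL" in entry["NIC"] or nic in entry["NIC"]
        os_hit = "ALL" in entry["OS"] or os_name in entry["OS"]
        if nic_hit and os_hit:
            return False
    return True

print(case_supported("create_same_rule_after_destroy", "foxville"))       # False
print(case_supported("create_same_rule_after_destroy", "fortville_25g"))  # True
```

A listed NIC is skipped; any NIC absent from every matching entry runs the case.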



* [dts] [PATCH V1 2/3] test_plans/generic_flow_api: add two test cases
  2021-08-25 18:30 [dts] [PATCH V1 0/3] generic_flow_api: add two test cases Lingli Chen
  2021-08-25 18:30 ` [dts] [PATCH V1 1/3] conf/test_case_checklist: add two cases about nic not support Lingli Chen
@ 2021-08-25 18:30 ` Lingli Chen
  2021-08-30  6:06   ` Lin, Xueqin
  2021-08-25 18:30 ` [dts] [PATCH V1 3/3] tests/generic_flow_api: " Lingli Chen
  2021-08-26  2:14 ` [dts] [PATCH V1 0/3] generic_flow_api: " Chen, LingliX
  3 siblings, 1 reply; 8+ messages in thread
From: Lingli Chen @ 2021-08-25 18:30 UTC (permalink / raw)
  To: dts; +Cc: Lingli Chen

Add two new test cases: create same rule after destroy, and create different rule after destroy.

Signed-off-by: Lingli Chen <linglix.chen@intel.com>
---
 test_plans/generic_flow_api_test_plan.rst | 46 +++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/test_plans/generic_flow_api_test_plan.rst b/test_plans/generic_flow_api_test_plan.rst
index 71f16187..d1fa33bd 100644
--- a/test_plans/generic_flow_api_test_plan.rst
+++ b/test_plans/generic_flow_api_test_plan.rst
@@ -1996,3 +1996,49 @@ Test case: Dual vlan(QinQ)
 
   3). send the packet as in step 2 with a changed ivlan id; the hash value and queue value output by testpmd on the DUT
   should be different from the values in step 1) and step 2).
+
+Test case: create same rule after destroy
+=========================================
+
+1. Launch the app ``testpmd`` with the following arguments::
+
+        ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1,2,3,4,5,6,7,8 -n 4 -- -i --disable-rss --rxq=16 --txq=16
+        testpmd> set fwd rxonly
+        testpmd> set verbose 1
+        testpmd> start
+
+2. Create a rule, destroy it, then create the same rule again::
+
+        testpmd>flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end
+        testpmd>flow destroy 0 rule 0
+        testpmd>flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end
+
+3. Send matched and mismatched packets to check whether the rule works::
+
+    pkt1 = Ether()/IP()/UDP(sport=32)/Raw('x' * 20)
+    pkt2 = Ether()/IP()/UDP(dport=32)/Raw('x' * 20)
+
+    Verify that the matched pkt1 is received on queue 2 and the mismatched pkt2 on queue 0.
+
+Test case: create different rule after destroy
+==============================================
+
+1. Launch the app ``testpmd`` with the following arguments::
+
+        ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1,2,3,4,5,6,7,8 -n 4 -- -i --disable-rss --rxq=16 --txq=16
+        testpmd> set fwd rxonly
+        testpmd> set verbose 1
+        testpmd> start
+
+2. Create a rule, destroy it, then create a different rule::
+
+        testpmd>flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end
+        testpmd>flow destroy 0 rule 0
+        testpmd>flow create 0 ingress pattern eth / ipv4 / udp dst is 32 / end actions queue index 2 / end
+
+3. Send matched and mismatched packets to check whether the rule works::
+
+    pkt1 = Ether()/IP()/UDP(sport=32)/Raw('x' * 20)
+    pkt2 = Ether()/IP()/UDP(dport=32)/Raw('x' * 20)
+
+    Verify that the matched pkt2 is received on queue 2 and the mismatched pkt1 on queue 0.
-- 
2.32.0
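The plan's match/mismatch packets differ only in which UDP port carries the value 32. As a stdlib-only sketch, this expands Scapy's `UDP(sport=32)` shorthand into the raw 8-byte UDP header the NIC actually matches on (checksum left at zero for brevity):

```python
import struct

def udp_header(sport: int, dport: int, payload_len: int) -> bytes:
    """Build an 8-byte UDP header: src port, dst port, length, checksum (0)."""
    return struct.pack("!HHHH", sport, dport, 8 + payload_len, 0)

# pkt1-style header: source port 32, so it hits the 'udp src is 32' rule.
h1 = udp_header(sport=32, dport=1024, payload_len=20)
# pkt2-style header: destination port 32, so it misses that rule.
h2 = udp_header(sport=1024, dport=32, payload_len=20)

print(struct.unpack("!H", h1[:2])[0])   # 32 (source port field)
print(struct.unpack("!H", h2[2:4])[0])  # 32 (destination port field)
```

The flow rule inspects only the first two bytes (source port) for `udp src is 32`, which is why pkt2 lands on the default queue.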



* [dts] [PATCH V1 3/3] tests/generic_flow_api: add two test cases
  2021-08-25 18:30 [dts] [PATCH V1 0/3] generic_flow_api: add two test cases Lingli Chen
  2021-08-25 18:30 ` [dts] [PATCH V1 1/3] conf/test_case_checklist: add two cases about nic not support Lingli Chen
  2021-08-25 18:30 ` [dts] [PATCH V1 2/3] test_plans/generic_flow_api: add two test cases Lingli Chen
@ 2021-08-25 18:30 ` Lingli Chen
  2021-08-30  6:06   ` Lin, Xueqin
  2021-08-26  2:14 ` [dts] [PATCH V1 0/3] generic_flow_api: " Chen, LingliX
  3 siblings, 1 reply; 8+ messages in thread
From: Lingli Chen @ 2021-08-25 18:30 UTC (permalink / raw)
  To: dts; +Cc: Lingli Chen

Add two test cases: create same rule after destroy, and create different rule after destroy.

Signed-off-by: Lingli Chen <linglix.chen@intel.com>
---
 tests/TestSuite_generic_flow_api.py | 48 +++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/tests/TestSuite_generic_flow_api.py b/tests/TestSuite_generic_flow_api.py
index e8171589..f64a5be5 100644
--- a/tests/TestSuite_generic_flow_api.py
+++ b/tests/TestSuite_generic_flow_api.py
@@ -2351,6 +2351,54 @@ class TestGeneric_flow_api(TestCase):
         rule_num = extrapkt_rulenum['rulenum']
         self.verify_rulenum(rule_num)
 
+    def test_create_same_rule_after_destroy(self):
+
+        self.pmdout.start_testpmd("%s" % self.cores, "--disable-rss --rxq=%d --txq=%d" % (MAX_QUEUE+1, MAX_QUEUE+1))
+        self.dut.send_expect("set fwd rxonly", "testpmd> ", 20)
+        self.dut.send_expect("set verbose 1", "testpmd> ", 20)
+        self.dut.send_expect("start", "testpmd> ", 20)
+        time.sleep(2)
+
+        self.dut.send_expect(
+            "flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end", "created")
+
+        out = self.dut.send_expect("flow destroy 0 rule 0", "testpmd> ")
+        p = re.compile(r"Flow rule #(\d+) destroyed")
+        m = p.search(out)
+        self.verify(m, "flow rule 0 delete failed")
+
+        self.dut.send_expect(
+            "flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end", "created")
+
+        self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(sport=32)/Raw("x" * 20)' % self.pf_mac)
+        self.verify_result("pf", expect_rxpkts="1", expect_queue="2", verify_mac=self.pf_mac)
+        self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(dport=32)/Raw("x" * 20)' % self.pf_mac)
+        self.verify_result("pf", expect_rxpkts="1", expect_queue="0", verify_mac=self.pf_mac)
+
+    def test_create_different_rule_after_destroy(self):
+
+        self.pmdout.start_testpmd("%s" % self.cores, "--disable-rss --rxq=%d --txq=%d" % (MAX_QUEUE+1, MAX_QUEUE+1))
+        self.dut.send_expect("set fwd rxonly", "testpmd> ", 20)
+        self.dut.send_expect("set verbose 1", "testpmd> ", 20)
+        self.dut.send_expect("start", "testpmd> ", 20)
+        time.sleep(2)
+
+        self.dut.send_expect(
+            "flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end", "created")
+
+        out = self.dut.send_expect("flow destroy 0 rule 0", "testpmd> ")
+        p = re.compile(r"Flow rule #(\d+) destroyed")
+        m = p.search(out)
+        self.verify(m, "flow rule 0 delete failed")
+
+        self.dut.send_expect(
+            "flow create 0 ingress pattern eth / ipv4 / udp dst is 32 / end actions queue index 2 / end", "created")
+
+        self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(sport=32)/Raw("x" * 20)' % self.pf_mac)
+        self.verify_result("pf", expect_rxpkts="1", expect_queue="0", verify_mac=self.pf_mac)
+        self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(dport=32)/Raw("x" * 20)' % self.pf_mac)
+        self.verify_result("pf", expect_rxpkts="1", expect_queue="2", verify_mac=self.pf_mac)
+
     def tear_down(self):
         """
         Run after each test case.
-- 
2.32.0
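The destroy check in patch 3 keys on testpmd's `Flow rule #N destroyed` confirmation line; a minimal standalone sketch of that regex logic (not part of the patch, and `rule_destroyed` is a hypothetical helper name):

```python
import re

def rule_destroyed(output: str, rule_id: int = 0) -> bool:
    """Return True if testpmd output confirms the given flow rule was destroyed."""
    m = re.search(r"Flow rule #(\d+) destroyed", output)
    return bool(m) and int(m.group(1)) == rule_id

print(rule_destroyed("Flow rule #0 destroyed"))  # True
print(rule_destroyed("Bad arguments"))           # False
```

The suite asserts on this match before re-creating the rule, so a silent destroy failure aborts the case early instead of producing a confusing queue mismatch later.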



* Re: [dts] [PATCH V1 0/3] generic_flow_api: add two test cases
  2021-08-25 18:30 [dts] [PATCH V1 0/3] generic_flow_api: add two test cases Lingli Chen
                   ` (2 preceding siblings ...)
  2021-08-25 18:30 ` [dts] [PATCH V1 3/3] tests/generic_flow_api: " Lingli Chen
@ 2021-08-26  2:14 ` Chen, LingliX
  2021-09-03  4:51   ` Tu, Lijuan
  3 siblings, 1 reply; 8+ messages in thread
From: Chen, LingliX @ 2021-08-26  2:14 UTC (permalink / raw)
  To: dts

[-- Attachment #1: Type: text/plain, Size: 311 bytes --]


> -----Original Message-----
> From: Chen, LingliX <linglix.chen@intel.com>
> Sent: Thursday, August 26, 2021 2:31 AM
> To: dts@dpdk.org
> Cc: Chen, LingliX <linglix.chen@intel.com>
> Subject: [dts][PATCH V1 0/3] generic_flow_api: add two test cases
> 
Tested-by: Lingli Chen <linglix.chen@intel.com>

[-- Attachment #2: TestGeneric_flow_api.log --]
[-- Type: application/octet-stream, Size: 39048 bytes --]

27/08/2021 13:05:24                            dts: 
TEST SUITE : TestGeneric_flow_api
27/08/2021 13:05:24                            dts: NIC :        fortville_25g
27/08/2021 13:05:24             dut.10.240.183.207: 
27/08/2021 13:05:24                         tester: 
27/08/2021 13:05:27           TestGeneric_flow_api: Test Case test_create_different_rule_after_destroy Begin
27/08/2021 13:05:27             dut.10.240.183.207: 
27/08/2021 13:05:27                         tester: 
27/08/2021 13:05:27             dut.10.240.183.207: kill_all: called by dut and has no prefix list.
27/08/2021 13:05:28             dut.10.240.183.207: x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1,2,3,4,5,6,7,8 -n 4 -a 0000:3b:00.0 -a 0000:3b:00.1 --file-prefix=dpdk_31492_20210827130458   -- -i --disable-rss --rxq=16 --txq=16
27/08/2021 13:05:29             dut.10.240.183.207: EAL: Detected 112 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/dpdk_31492_20210827130458/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 1048576 kB hugepages reported
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_i40e (8086:158b) device: 0000:3b:00.0 (socket 0)
i40e_GLQF_reg_init(): i40e device 0000:3b:00.0 changed global register [0x002689a0]. original: 0x00000000, new: 0x00000029 
i40e_GLQF_reg_init(): i40e device 0000:3b:00.0 changed global register [0x00268ca4]. original: 0x00001840, new: 0x00009420 
i40e_aq_debug_write_global_register(): i40e device 0000:3b:00.0 changed global register [0x0026c7a0]. original: 0xa8, after: 0x28
EAL: Probe PCI driver: net_i40e (8086:158b) device: 0000:3b:00.1 (socket 0)
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 3C:FD:FE:D2:6E:64
Configuring Port 1 (socket 0)
Port 1: 3C:FD:FE:D2:6E:65
Checking link statuses...
Done
27/08/2021 13:05:39             dut.10.240.183.207: set fwd rxonly
27/08/2021 13:05:39             dut.10.240.183.207: 
Set rxonly packet forwarding mode
27/08/2021 13:05:39             dut.10.240.183.207: set verbose 1
27/08/2021 13:05:40             dut.10.240.183.207: 
Change verbose level from 0 to 1
27/08/2021 13:05:40             dut.10.240.183.207: start
27/08/2021 13:05:40             dut.10.240.183.207: 
rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 0) -> TX P=1/Q=1 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=2 (socket 0) -> TX P=1/Q=2 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=2 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=3 (socket 0) -> TX P=1/Q=3 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=3 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=4 (socket 0) -> TX P=1/Q=4 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=4 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=5 (socket 0) -> TX P=1/Q=5 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=5 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=6 (socket 0) -> TX P=1/Q=6 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=6 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=7 (socket 0) -> TX P=1/Q=7 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=7 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=8 (socket 0) -> TX P=1/Q=8 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=8 (socket 0) -> TX P=0/Q=8 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=9 (socket 0) -> TX P=1/Q=9 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=9 (socket 0) -> TX P=0/Q=9 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=10 (socket 0) -> TX P=1/Q=10 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=10 (socket 0) -> TX P=0/Q=10 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=11 (socket 0) -> TX P=1/Q=11 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=11 (socket 0) -> TX P=0/Q=11 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=12 (socket 0) -> TX P=1/Q=12 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=12 (socket 0) -> TX P=0/Q=12 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=13 (socket 0) -> TX P=1/Q=13 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=13 (socket 0) -> TX P=0/Q=13 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=14 (socket 0) -> TX P=1/Q=14 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=14 (socket 0) -> TX P=0/Q=14 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=15 (socket 0) -> TX P=1/Q=15 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=15 (socket 0) -> TX P=0/Q=15 (socket 0) peer=02:00:00:00:00:00

  rxonly packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
  port 1: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
27/08/2021 13:05:42             dut.10.240.183.207: flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end
27/08/2021 13:05:42             dut.10.240.183.207: 
27/08/2021 13:05:42             dut.10.240.183.207: flow destroy 0 rule 0
27/08/2021 13:05:42             dut.10.240.183.207: 
Flow rule #0 destroyed
27/08/2021 13:05:42             dut.10.240.183.207: flow create 0 ingress pattern eth / ipv4 / udp dst is 32 / end actions queue index 2 / end
27/08/2021 13:05:42             dut.10.240.183.207: 
27/08/2021 13:05:44             dut.10.240.183.207: 
testpmd> port 0/queue 0: received 1 packets
  src=00:00:00:00:00:00 - dst=3C:FD:FE:D2:6E:64 - type=0x0800 - length=62 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 

27/08/2021 13:05:44             dut.10.240.183.207: stop
27/08/2021 13:05:44             dut.10.240.183.207: 
Telling cores to ...
Waiting for lcores to finish...

  ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
  RX-packets: 1              TX-packets: 0              TX-dropped: 0             

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
27/08/2021 13:05:44           TestGeneric_flow_api: pf: 
testpmd> port 0/queue 0: received 1 packets
  src=00:00:00:00:00:00 - dst=3C:FD:FE:D2:6E:64 - type=0x0800 - length=62 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 

27/08/2021 13:05:46             dut.10.240.183.207: start
27/08/2021 13:05:46             dut.10.240.183.207: 
rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 0) -> TX P=1/Q=1 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=2 (socket 0) -> TX P=1/Q=2 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=2 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=3 (socket 0) -> TX P=1/Q=3 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=3 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=4 (socket 0) -> TX P=1/Q=4 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=4 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=5 (socket 0) -> TX P=1/Q=5 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=5 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=6 (socket 0) -> TX P=1/Q=6 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=6 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=7 (socket 0) -> TX P=1/Q=7 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=7 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=8 (socket 0) -> TX P=1/Q=8 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=8 (socket 0) -> TX P=0/Q=8 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=9 (socket 0) -> TX P=1/Q=9 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=9 (socket 0) -> TX P=0/Q=9 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=10 (socket 0) -> TX P=1/Q=10 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=10 (socket 0) -> TX P=0/Q=10 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=11 (socket 0) -> TX P=1/Q=11 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=11 (socket 0) -> TX P=0/Q=11 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=12 (socket 0) -> TX P=1/Q=12 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=12 (socket 0) -> TX P=0/Q=12 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=13 (socket 0) -> TX P=1/Q=13 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=13 (socket 0) -> TX P=0/Q=13 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=14 (socket 0) -> TX P=1/Q=14 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=14 (socket 0) -> TX P=0/Q=14 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=15 (socket 0) -> TX P=1/Q=15 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=15 (socket 0) -> TX P=0/Q=15 (socket 0) peer=02:00:00:00:00:00

  rxonly packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
  port 1: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
27/08/2021 13:05:48             dut.10.240.183.207:  port 0/queue 2: received 1 packets
  src=00:00:00:00:00:00 - dst=3C:FD:FE:D2:6E:64 - type=0x0800 - length=62 - nb_segs=1 - FDIR matched hash=0x0 ID=0x0  - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x2
  ol_flags: PKT_RX_FDIR PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 

27/08/2021 13:05:48             dut.10.240.183.207: stop
27/08/2021 13:05:48             dut.10.240.183.207: 
Telling cores to ...
Waiting for lcores to finish...

  ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 1/Queue= 2 -------
  RX-packets: 1              TX-packets: 0              TX-dropped: 0             

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
27/08/2021 13:05:48           TestGeneric_flow_api: pf:  port 0/queue 2: received 1 packets
  src=00:00:00:00:00:00 - dst=3C:FD:FE:D2:6E:64 - type=0x0800 - length=62 - nb_segs=1 - FDIR matched hash=0x0 ID=0x0  - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x2
  ol_flags: PKT_RX_FDIR PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 

27/08/2021 13:05:50             dut.10.240.183.207: start
27/08/2021 13:05:50             dut.10.240.183.207: 
rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 0) -> TX P=1/Q=1 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=2 (socket 0) -> TX P=1/Q=2 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=2 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=3 (socket 0) -> TX P=1/Q=3 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=3 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=4 (socket 0) -> TX P=1/Q=4 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=4 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=5 (socket 0) -> TX P=1/Q=5 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=5 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=6 (socket 0) -> TX P=1/Q=6 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=6 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=7 (socket 0) -> TX P=1/Q=7 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=7 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=8 (socket 0) -> TX P=1/Q=8 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=8 (socket 0) -> TX P=0/Q=8 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=9 (socket 0) -> TX P=1/Q=9 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=9 (socket 0) -> TX P=0/Q=9 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=10 (socket 0) -> TX P=1/Q=10 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=10 (socket 0) -> TX P=0/Q=10 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=11 (socket 0) -> TX P=1/Q=11 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=11 (socket 0) -> TX P=0/Q=11 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=12 (socket 0) -> TX P=1/Q=12 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=12 (socket 0) -> TX P=0/Q=12 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=13 (socket 0) -> TX P=1/Q=13 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=13 (socket 0) -> TX P=0/Q=13 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=14 (socket 0) -> TX P=1/Q=14 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=14 (socket 0) -> TX P=0/Q=14 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=15 (socket 0) -> TX P=1/Q=15 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=15 (socket 0) -> TX P=0/Q=15 (socket 0) peer=02:00:00:00:00:00

  rxonly packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
  port 1: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
27/08/2021 13:05:50           TestGeneric_flow_api: Test Case test_create_different_rule_after_destroy Result PASSED:
27/08/2021 13:05:50             dut.10.240.183.207: quit
27/08/2021 13:05:51             dut.10.240.183.207: 
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Stopping port 0...
Stopping ports...
Done

Stopping port 1...
Stopping ports...

Port 0: link state change event
Done

Shutting down port 0...
Closing ports...

Port 1: link state change event
Port 0 is closed
Done

Shutting down port 1...
Closing ports...
Port 1 is closed
Done

Bye...
27/08/2021 13:05:53             dut.10.240.183.207: kill_all: called by dut and prefix list has value.
27/08/2021 13:05:54             dut.10.240.183.207: There are some dpdk process not free hugepage
27/08/2021 13:05:54             dut.10.240.183.207: **************************************
27/08/2021 13:05:54             dut.10.240.183.207: lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/125/gvfs
      Output information may be incomplete.
27/08/2021 13:05:54             dut.10.240.183.207: **************************************
27/08/2021 13:05:54           TestGeneric_flow_api: Test Case test_create_same_rule_after_destroy Begin
27/08/2021 13:05:54             dut.10.240.183.207:  
27/08/2021 13:05:54                         tester: 
27/08/2021 13:05:54             dut.10.240.183.207: kill_all: called by dut and has no prefix list.
27/08/2021 13:05:55             dut.10.240.183.207: x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1,2,3,4,5,6,7,8 -n 4 -a 0000:3b:00.0 -a 0000:3b:00.1 --file-prefix=dpdk_31492_20210827130458   -- -i --disable-rss --rxq=16 --txq=16
27/08/2021 13:05:56             dut.10.240.183.207: EAL: Detected 112 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/dpdk_31492_20210827130458/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 1048576 kB hugepages reported
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_i40e (8086:158b) device: 0000:3b:00.0 (socket 0)
EAL: Probe PCI driver: net_i40e (8086:158b) device: 0000:3b:00.1 (socket 0)
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 3C:FD:FE:D2:6E:64
Configuring Port 1 (socket 0)
Port 1: 3C:FD:FE:D2:6E:65
Checking link statuses...
Done
27/08/2021 13:06:06             dut.10.240.183.207: set fwd rxonly
27/08/2021 13:06:07             dut.10.240.183.207: 
Set rxonly packet forwarding mode
27/08/2021 13:06:07             dut.10.240.183.207: set verbose 1
27/08/2021 13:06:07             dut.10.240.183.207: 
Change verbose level from 0 to 1
27/08/2021 13:06:07             dut.10.240.183.207: start
27/08/2021 13:06:07             dut.10.240.183.207: 
rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 0) -> TX P=1/Q=1 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=2 (socket 0) -> TX P=1/Q=2 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=2 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=3 (socket 0) -> TX P=1/Q=3 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=3 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=4 (socket 0) -> TX P=1/Q=4 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=4 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=5 (socket 0) -> TX P=1/Q=5 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=5 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=6 (socket 0) -> TX P=1/Q=6 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=6 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=7 (socket 0) -> TX P=1/Q=7 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=7 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=8 (socket 0) -> TX P=1/Q=8 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=8 (socket 0) -> TX P=0/Q=8 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=9 (socket 0) -> TX P=1/Q=9 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=9 (socket 0) -> TX P=0/Q=9 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=10 (socket 0) -> TX P=1/Q=10 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=10 (socket 0) -> TX P=0/Q=10 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=11 (socket 0) -> TX P=1/Q=11 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=11 (socket 0) -> TX P=0/Q=11 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=12 (socket 0) -> TX P=1/Q=12 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=12 (socket 0) -> TX P=0/Q=12 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=13 (socket 0) -> TX P=1/Q=13 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=13 (socket 0) -> TX P=0/Q=13 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=14 (socket 0) -> TX P=1/Q=14 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=14 (socket 0) -> TX P=0/Q=14 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=15 (socket 0) -> TX P=1/Q=15 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=15 (socket 0) -> TX P=0/Q=15 (socket 0) peer=02:00:00:00:00:00

  rxonly packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
  port 1: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
27/08/2021 13:06:09             dut.10.240.183.207: flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end
27/08/2021 13:06:09             dut.10.240.183.207: 
27/08/2021 13:06:09             dut.10.240.183.207: flow destroy 0 rule 0
27/08/2021 13:06:09             dut.10.240.183.207: 
Flow rule #0 destroyed
27/08/2021 13:06:09             dut.10.240.183.207: flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end
27/08/2021 13:06:09             dut.10.240.183.207: 
27/08/2021 13:06:11             dut.10.240.183.207: 
testpmd> port 0/queue 2: received 1 packets
  src=00:00:00:00:00:00 - dst=3C:FD:FE:D2:6E:64 - type=0x0800 - length=62 - nb_segs=1 - FDIR matched hash=0x0 ID=0x0  - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x2
  ol_flags: PKT_RX_FDIR PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 

27/08/2021 13:06:11             dut.10.240.183.207: stop
27/08/2021 13:06:11             dut.10.240.183.207: 
Telling cores to ...
Waiting for lcores to finish...

  ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 1/Queue= 2 -------
  RX-packets: 1              TX-packets: 0              TX-dropped: 0             

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
27/08/2021 13:06:11           TestGeneric_flow_api: pf: 
testpmd> port 0/queue 2: received 1 packets
  src=00:00:00:00:00:00 - dst=3C:FD:FE:D2:6E:64 - type=0x0800 - length=62 - nb_segs=1 - FDIR matched hash=0x0 ID=0x0  - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x2
  ol_flags: PKT_RX_FDIR PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 

27/08/2021 13:06:13             dut.10.240.183.207: start
27/08/2021 13:06:13             dut.10.240.183.207: 
rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 0) -> TX P=1/Q=1 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=2 (socket 0) -> TX P=1/Q=2 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=2 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=3 (socket 0) -> TX P=1/Q=3 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=3 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=4 (socket 0) -> TX P=1/Q=4 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=4 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=5 (socket 0) -> TX P=1/Q=5 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=5 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=6 (socket 0) -> TX P=1/Q=6 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=6 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=7 (socket 0) -> TX P=1/Q=7 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=7 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=8 (socket 0) -> TX P=1/Q=8 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=8 (socket 0) -> TX P=0/Q=8 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=9 (socket 0) -> TX P=1/Q=9 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=9 (socket 0) -> TX P=0/Q=9 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=10 (socket 0) -> TX P=1/Q=10 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=10 (socket 0) -> TX P=0/Q=10 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=11 (socket 0) -> TX P=1/Q=11 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=11 (socket 0) -> TX P=0/Q=11 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=12 (socket 0) -> TX P=1/Q=12 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=12 (socket 0) -> TX P=0/Q=12 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=13 (socket 0) -> TX P=1/Q=13 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=13 (socket 0) -> TX P=0/Q=13 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=14 (socket 0) -> TX P=1/Q=14 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=14 (socket 0) -> TX P=0/Q=14 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=15 (socket 0) -> TX P=1/Q=15 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=15 (socket 0) -> TX P=0/Q=15 (socket 0) peer=02:00:00:00:00:00

  rxonly packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
  port 1: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
27/08/2021 13:06:15             dut.10.240.183.207:  port 0/queue 0: received 1 packets
  src=00:00:00:00:00:00 - dst=3C:FD:FE:D2:6E:64 - type=0x0800 - length=62 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 

27/08/2021 13:06:15             dut.10.240.183.207: stop
27/08/2021 13:06:15             dut.10.240.183.207: 
Telling cores to ...
Waiting for lcores to finish...

  ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
  RX-packets: 1              TX-packets: 0              TX-dropped: 0             

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
27/08/2021 13:06:15           TestGeneric_flow_api: pf:  port 0/queue 0: received 1 packets
  src=00:00:00:00:00:00 - dst=3C:FD:FE:D2:6E:64 - type=0x0800 - length=62 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 

27/08/2021 13:06:17             dut.10.240.183.207: start
27/08/2021 13:06:17             dut.10.240.183.207: 
rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 0) -> TX P=1/Q=1 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=2 (socket 0) -> TX P=1/Q=2 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=2 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=3 (socket 0) -> TX P=1/Q=3 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=3 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=4 (socket 0) -> TX P=1/Q=4 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=4 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=5 (socket 0) -> TX P=1/Q=5 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=5 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=6 (socket 0) -> TX P=1/Q=6 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=6 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=7 (socket 0) -> TX P=1/Q=7 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=7 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=8 (socket 0) -> TX P=1/Q=8 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=8 (socket 0) -> TX P=0/Q=8 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=9 (socket 0) -> TX P=1/Q=9 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=9 (socket 0) -> TX P=0/Q=9 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=10 (socket 0) -> TX P=1/Q=10 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=10 (socket 0) -> TX P=0/Q=10 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=11 (socket 0) -> TX P=1/Q=11 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=11 (socket 0) -> TX P=0/Q=11 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=12 (socket 0) -> TX P=1/Q=12 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=12 (socket 0) -> TX P=0/Q=12 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=13 (socket 0) -> TX P=1/Q=13 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=13 (socket 0) -> TX P=0/Q=13 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=14 (socket 0) -> TX P=1/Q=14 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=14 (socket 0) -> TX P=0/Q=14 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=15 (socket 0) -> TX P=1/Q=15 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=15 (socket 0) -> TX P=0/Q=15 (socket 0) peer=02:00:00:00:00:00

  rxonly packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
  port 1: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
27/08/2021 13:06:17           TestGeneric_flow_api: Test Case test_create_same_rule_after_destroy Result PASSED:
27/08/2021 13:06:17             dut.10.240.183.207: quit
27/08/2021 13:06:18             dut.10.240.183.207: 
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Stopping port 0...
Stopping ports...
Done

Stopping port 1...
Stopping ports...

Port 0: link state change event
Done

Shutting down port 0...
Closing ports...

Port 1: link state change event
Port 0 is closed
Done

Shutting down port 1...
Closing ports...
Port 1 is closed
Done

Bye...
27/08/2021 13:06:20             dut.10.240.183.207: kill_all: called by dut and prefix list has value.
27/08/2021 13:06:21             dut.10.240.183.207: There are some dpdk process not free hugepage
27/08/2021 13:06:21             dut.10.240.183.207: **************************************
27/08/2021 13:06:21             dut.10.240.183.207: lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/125/gvfs
      Output information may be incomplete.
27/08/2021 13:06:21             dut.10.240.183.207: **************************************
27/08/2021 13:06:21                            dts: 
TEST SUITE ENDED: TestGeneric_flow_api

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dts] [PATCH V1 3/3] tests/generic_flow_api: add two test cases
  2021-08-25 18:30 ` [dts] [PATCH V1 3/3] tests/generic_flow_api: " Lingli Chen
@ 2021-08-30  6:06   ` Lin, Xueqin
  0 siblings, 0 replies; 8+ messages in thread
From: Lin, Xueqin @ 2021-08-30  6:06 UTC (permalink / raw)
  To: Chen, LingliX, dts; +Cc: Chen, LingliX


> -----Original Message-----
> From: dts <dts-bounces@dpdk.org> On Behalf Of Lingli Chen
> Sent: Thursday, August 26, 2021 2:31 AM
> To: dts@dpdk.org
> Cc: Chen, LingliX <linglix.chen@intel.com>
> Subject: [dts] [PATCH V1 3/3] tests/generic_flow_api: add two test cases
> 
> add two test cases
> 
> Signed-off-by: Lingli Chen <linglix.chen@intel.com>
Acked-by: Xueqin Lin <xueqin.lin@intel.com>
> ---
>  tests/TestSuite_generic_flow_api.py | 48 +++++++++++++++++++++++++++++
>  1 file changed, 48 insertions(+)
> 
> diff --git a/tests/TestSuite_generic_flow_api.py b/tests/TestSuite_generic_flow_api.py
> index e8171589..f64a5be5 100644
> --- a/tests/TestSuite_generic_flow_api.py
> +++ b/tests/TestSuite_generic_flow_api.py
> @@ -2351,6 +2351,54 @@ class TestGeneric_flow_api(TestCase):
>          rule_num = extrapkt_rulenum['rulenum']
>          self.verify_rulenum(rule_num)
> 
> +    def test_create_same_rule_after_destroy(self):
> +
> +        self.pmdout.start_testpmd("%s" % self.cores, "--disable-rss --rxq=%d --txq=%d" % (MAX_QUEUE+1, MAX_QUEUE+1))
> +        self.dut.send_expect("set fwd rxonly", "testpmd> ", 20)
> +        self.dut.send_expect("set verbose 1", "testpmd> ", 20)
> +        self.dut.send_expect("start", "testpmd> ", 20)
> +        time.sleep(2)
> +
> +        self.dut.send_expect(
> +            "flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end", "created")
> +
> +        out = self.dut.send_expect("flow destroy 0 rule 0", "testpmd> ")
> +        p = re.compile(r"Flow rule #(\d+) destroyed")
> +        m = p.search(out)
> +        self.verify(m, "flow rule 0 delete failed")
> +
> +        self.dut.send_expect(
> +            "flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end", "created")
> +
> +        self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(sport=32)/Raw("x" * 20)' % self.pf_mac)
> +        self.verify_result("pf", expect_rxpkts="1", expect_queue="2", verify_mac=self.pf_mac)
> +        self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(dport=32)/Raw("x" * 20)' % self.pf_mac)
> +        self.verify_result("pf", expect_rxpkts="1", expect_queue="0", verify_mac=self.pf_mac)
> +
> +    def test_create_different_rule_after_destroy(self):
> +
> +        self.pmdout.start_testpmd("%s" % self.cores, "--disable-rss --rxq=%d --txq=%d" % (MAX_QUEUE+1, MAX_QUEUE+1))
> +        self.dut.send_expect("set fwd rxonly", "testpmd> ", 20)
> +        self.dut.send_expect("set verbose 1", "testpmd> ", 20)
> +        self.dut.send_expect("start", "testpmd> ", 20)
> +        time.sleep(2)
> +
> +        self.dut.send_expect(
> +            "flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end", "created")
> +
> +        out = self.dut.send_expect("flow destroy 0 rule 0", "testpmd> ")
> +        p = re.compile(r"Flow rule #(\d+) destroyed")
> +        m = p.search(out)
> +        self.verify(m, "flow rule 0 delete failed")
> +
> +        self.dut.send_expect(
> +            "flow create 0 ingress pattern eth / ipv4 / udp dst is 32 / end actions queue index 2 / end", "created")
> +
> +        self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(sport=32)/Raw("x" * 20)' % self.pf_mac)
> +        self.verify_result("pf", expect_rxpkts="1", expect_queue="0", verify_mac=self.pf_mac)
> +        self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(dport=32)/Raw("x" * 20)' % self.pf_mac)
> +        self.verify_result("pf", expect_rxpkts="1", expect_queue="2", verify_mac=self.pf_mac)
> +
>      def tear_down(self):
>          """
>          Run after each test case.
> --
> 2.32.0
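Both new tests confirm the destroy step by matching testpmd's "Flow rule #N destroyed" confirmation line before re-creating a rule. A minimal, standalone sketch of that parsing step (the `rule_destroyed` helper name is mine, not part of the patch, which calls `re.search` via `self.verify` inline):

```python
import re

# testpmd prints "Flow rule #<id> destroyed" after a successful
# "flow destroy <port> rule <id>" command; the test treats the absence
# of this line as a failed delete.
DESTROYED = re.compile(r"Flow rule #(\d+) destroyed")

def rule_destroyed(output):
    """Return the destroyed rule id parsed from testpmd output, or None."""
    m = DESTROYED.search(output)
    return int(m.group(1)) if m else None

print(rule_destroyed("Flow rule #0 destroyed"))   # 0
print(rule_destroyed("Caught error type 2"))      # None
```

Keying on the confirmation line (rather than just the prompt returning) is what lets the test distinguish a silently ignored `flow destroy` from a real delete.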


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dts] [PATCH V1 2/3] test_plans/generic_flow_api: add two test cases
  2021-08-25 18:30 ` [dts] [PATCH V1 2/3] test_plans/generic_flow_api: add two test cases Lingli Chen
@ 2021-08-30  6:06   ` Lin, Xueqin
  0 siblings, 0 replies; 8+ messages in thread
From: Lin, Xueqin @ 2021-08-30  6:06 UTC (permalink / raw)
  To: Chen, LingliX, dts; +Cc: Chen, LingliX


> -----Original Message-----
> From: dts <dts-bounces@dpdk.org> On Behalf Of Lingli Chen
> Sent: Thursday, August 26, 2021 2:31 AM
> To: dts@dpdk.org
> Cc: Chen, LingliX <linglix.chen@intel.com>
> Subject: [dts] [PATCH V1 2/3] test_plans/generic_flow_api: add two test
> cases
> 
> add two new test cases: create same rule after destroy / create different
> rule after destroy.
> 
> Signed-off-by: Lingli Chen <linglix.chen@intel.com>
Acked-by: Xueqin Lin <xueqin.lin@intel.com>
> ---
>  test_plans/generic_flow_api_test_plan.rst | 46 +++++++++++++++++++++++
>  1 file changed, 46 insertions(+)
> 
> diff --git a/test_plans/generic_flow_api_test_plan.rst b/test_plans/generic_flow_api_test_plan.rst
> index 71f16187..d1fa33bd 100644
> --- a/test_plans/generic_flow_api_test_plan.rst
> +++ b/test_plans/generic_flow_api_test_plan.rst
> @@ -1996,3 +1996,49 @@ Test case: Dual vlan(QinQ)
> 
>     3). send packet as step 2 with changed ivlan id, got hash value and queue value that output from the testpmd on DUT, the value should be
>     different with the values in step 2 & step 1) & step 2).
> +
> +Test case: create same rule after destroy
> +=========================================
> +
> +1. Launch the app ``testpmd`` with the following arguments::
> +
> +        ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1,2,3,4,5,6,7,8 -n 4 -- -i --disable-rss --rxq=16 --txq=16
> +        testpmd> set fwd rxonly
> +        testpmd> set verbose 1
> +        testpmd> start
> +
> +2. Create the same rule again after destroying it::
> +
> +        testpmd>flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end
> +        testpmd>flow destroy 0 rule 0
> +        testpmd>flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end
> +
> +3. Send matching and non-matching packets to check that the rule works::
> +
> +    pkt1 = Ether()/IP()/UDP(sport=32)/Raw('x' * 20)
> +    pkt2 = Ether()/IP()/UDP(dport=32)/Raw('x' * 20)
> +
> +    verify that matching pkt1 goes to queue 2, and non-matching pkt2 goes to queue 0.
> +
> +Test case: create different rule after destroy
> +==============================================
> +
> +1. Launch the app ``testpmd`` with the following arguments::
> +
> +        ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1,2,3,4,5,6,7,8 -n 4 -- -i --disable-rss --rxq=16 --txq=16
> +        testpmd> set fwd rxonly
> +        testpmd> set verbose 1
> +        testpmd> start
> +
> +2. Create a different rule after destroying the first::
> +
> +        testpmd>flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end
> +        testpmd>flow destroy 0 rule 0
> +        testpmd>flow create 0 ingress pattern eth / ipv4 / udp dst is 32 / end actions queue index 2 / end
> +
> +3. Send matching and non-matching packets to check that the rule works::
> +
> +    pkt1 = Ether()/IP()/UDP(sport=32)/Raw('x' * 20)
> +    pkt2 = Ether()/IP()/UDP(dport=32)/Raw('x' * 20)
> +
> +    verify that matching pkt2 goes to queue 2, and non-matching pkt1 goes to queue 0.
> --
> 2.32.0
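The queue expectations in step 3 of both test cases follow one rule: with `--disable-rss`, traffic that no flow rule matches stays on queue 0, while a packet matching the re-created rule is steered to the rule's queue (here, queue 2). An illustrative model of that dispatch (the `expected_queue` function is hypothetical, not part of DTS or DPDK):

```python
def expected_queue(sport=None, dport=None,
                   rule_field="sport", rule_port=32, rule_queue=2):
    """Model which RX queue a UDP packet should land in under the
    test-plan rule.  With RSS disabled, unmatched traffic stays on
    queue 0; a matching packet is steered to the rule's queue."""
    value = sport if rule_field == "sport" else dport
    return rule_queue if value == rule_port else 0

# "create same rule after destroy": rule matches udp src 32
print(expected_queue(sport=32))                      # 2  (pkt1 matches)
print(expected_queue(dport=32))                      # 0  (pkt2 does not)

# "create different rule after destroy": rule matches udp dst 32
print(expected_queue(dport=32, rule_field="dport"))  # 2  (pkt2 matches)
print(expected_queue(sport=32, rule_field="dport"))  # 0  (pkt1 does not)
```

This is why the expected queues for pkt1 and pkt2 swap between the two test cases: the packets are identical, but the re-created rule keys on `src` in one case and `dst` in the other.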


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dts] [PATCH V1 0/3] generic_flow_api: add two test cases
  2021-08-26  2:14 ` [dts] [PATCH V1 0/3] generic_flow_api: " Chen, LingliX
@ 2021-09-03  4:51   ` Tu, Lijuan
  0 siblings, 0 replies; 8+ messages in thread
From: Tu, Lijuan @ 2021-09-03  4:51 UTC (permalink / raw)
  To: Chen, LingliX, dts



> -----Original Message-----
> From: dts <dts-bounces@dpdk.org> On Behalf Of Chen, LingliX
> Sent: August 26, 2021 10:15
> To: dts@dpdk.org
> Subject: Re: [dts] [PATCH V1 0/3] generic_flow_api: add two test cases
> 
> 
> > -----Original Message-----
> > From: Chen, LingliX <linglix.chen@intel.com>
> > Sent: Thursday, August 26, 2021 2:31 AM
> > To: dts@dpdk.org
> > Cc: Chen, LingliX <linglix.chen@intel.com>
> > Subject: [dts][PATCH V1 0/3] generic_flow_api: add two test cases
> >
> Tested-by: Lingli Chen <linglix.chen@intel.com>

Applied, thanks

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2021-09-03  4:51 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-08-25 18:30 [dts] [PATCH V1 0/3] generic_flow_api: add two test cases Lingli Chen
2021-08-25 18:30 ` [dts] [PATCH V1 1/3] conf/test_case_checklist: add two cases about nic not support Lingli Chen
2021-08-25 18:30 ` [dts] [PATCH V1 2/3] test_plans/generic_flow_api: add two test cases Lingli Chen
2021-08-30  6:06   ` Lin, Xueqin
2021-08-25 18:30 ` [dts] [PATCH V1 3/3] tests/generic_flow_api: " Lingli Chen
2021-08-30  6:06   ` Lin, Xueqin
2021-08-26  2:14 ` [dts] [PATCH V1 0/3] generic_flow_api: " Chen, LingliX
2021-09-03  4:51   ` Tu, Lijuan
