test suite reviews and discussions
* [dts] [PATCH v2 1/2] tests/multiprocess: add new cases according to testplan
@ 2022-12-27 17:35 Song Jiale
  2022-12-27 17:35 ` [dts] [PATCH v2 2/2] test_plans/multiprocess: add 2 cases Song Jiale
  0 siblings, 1 reply; 4+ messages in thread
From: Song Jiale @ 2022-12-27 17:35 UTC (permalink / raw)
  To: dts; +Cc: Song Jiale

Add two cases according to the test plan.

Signed-off-by: Song Jiale <songx.jiale@intel.com>
---

v2:
-optimize the check_port_status method.

 tests/TestSuite_multiprocess.py | 98 ++++++++++++++++++++++++++++++++-
 1 file changed, 97 insertions(+), 1 deletion(-)

diff --git a/tests/TestSuite_multiprocess.py b/tests/TestSuite_multiprocess.py
index 4fdc8c27..55e5d555 100644
--- a/tests/TestSuite_multiprocess.py
+++ b/tests/TestSuite_multiprocess.py
@@ -757,6 +757,19 @@ class TestMultiprocess(TestCase):
             "some subcases failed, detail as below:{}".format(msg),
         )
 
+    def check_port_status(self, pmd_output, port_id, status=True):
+        port_status = pmd_output.get_port_link_status(port_id)
+        if status:
+            self.verify(
+                port_status == "up",
+                "The expected link state is up, but the actual status is down",
+            )
+        else:
+            self.verify(
+                port_status == "down",
+                "The expected link state is down, but the actual status is up",
+            )
+
     def test_multiprocess_simple_mpbasicoperation(self):
         """
         Basic operation.
@@ -1691,6 +1704,90 @@ class TestMultiprocess(TestCase):
         }
         self.rte_flow(mac_ipv4_symmetric, self.multiprocess_rss_data, **pmd_param)
 
+    def test_multiprocess_port_stop(self):
+        packets = [
+            'Ether(dst="00:11:22:33:44:55", src="52:00:00:00:00:00")/IP()/Raw(load="P"*20)',
+        ]
+        # start testpmd multi-process
+        self.launch_multi_testpmd(
+            proc_type="auto",
+            queue_num=8,
+            process_num=2,
+        )
+        for pmd_output in self.pmd_output_list:
+            pmd_output.execute_cmd("stop")
+        # stop port 0 in the primary process
+        self.pmd_output_list[0].execute_cmd("port stop 0")
+        self.pmd_output_list[1].execute_cmd("start")
+        fdir_pro = fdirprocess(
+            self,
+            self.pmd_output_list[1],
+            self.tester_ifaces,
+            rxq=8,
+        )
+        out = self.send_pkt_get_output(fdir_pro, packets, port_id=0, count=1)
+        # Check that no packet was received
+        self.check_pkt_num(out, port_id=0, pkt_num=0)
+        for pmd_output in self.pmd_output_list:
+            pmd_output.quit()
+
+        # start testpmd multi-process
+        self.launch_multi_testpmd(
+            proc_type="auto",
+            queue_num=8,
+            process_num=2,
+        )
+        for pmd_output in self.pmd_output_list:
+            pmd_output.execute_cmd("stop")
+        # stop port 0 in the secondary process
+        self.pmd_output_list[1].execute_cmd("port stop 0")
+        self.pmd_output_list[0].execute_cmd("start")
+        fdir_pro = fdirprocess(
+            self,
+            self.pmd_output_list[0],
+            self.tester_ifaces,
+            rxq=8,
+        )
+        out = self.send_pkt_get_output(fdir_pro, packets, port_id=0, count=1)
+        # Check that one packet was received in primary process
+        self.check_pkt_num(out, port_id=0, pkt_num=len(packets))
+
+    def test_multiprocess_port_reset(self):
+        # start testpmd multi-process
+        self.launch_multi_testpmd(
+            proc_type="auto",
+            queue_num=8,
+            process_num=2,
+        )
+        for pmd_output in self.pmd_output_list:
+            pmd_output.execute_cmd("stop")
+            self.check_port_status(pmd_output, port_id=0, status=True)
+        # stop and reset port 0 in the primary process
+        self.pmd_output_list[0].execute_cmd("port stop 0")
+        self.pmd_output_list[0].execute_cmd("port reset 0")
+        # Check that the link status of port 0 is 'down' in both the primary and secondary processes
+        self.check_port_status(self.pmd_output_list[0], port_id=0, status=False)
+        self.check_port_status(self.pmd_output_list[1], port_id=0, status=False)
+
+        for pmd_output in self.pmd_output_list:
+            pmd_output.quit()
+
+        # start testpmd multi-process
+        self.launch_multi_testpmd(
+            proc_type="auto",
+            queue_num=8,
+            process_num=2,
+        )
+        for pmd_output in self.pmd_output_list:
+            pmd_output.execute_cmd("stop")
+            self.check_port_status(pmd_output, port_id=0, status=True)
+        # stop and reset port 0 in the secondary process
+        self.pmd_output_list[1].execute_cmd("port stop 0")
+        self.pmd_output_list[1].execute_cmd("port reset 0")
+        # Check that the link status of port 0 is 'up' in both the primary and secondary processes
+        self.check_port_status(self.pmd_output_list[0], port_id=0, status=True)
+        self.check_port_status(self.pmd_output_list[1], port_id=0, status=True)
+
     def test_perf_multiprocess_performance(self):
         """
         Benchmark Multiprocess performance.
@@ -1926,4 +2023,3 @@ class TestMultiprocess(TestCase):
         Run after each test suite.
         """
         self.dut.kill_all()
-        pass
-- 
2.25.1



* [dts] [PATCH v2 2/2] test_plans/multiprocess: add 2 cases
  2022-12-27 17:35 [dts] [PATCH v2 1/2] tests/multiprocess: add new cases according to testplan Song Jiale
@ 2022-12-27 17:35 ` Song Jiale
  2022-12-28  3:32   ` Ling, Jin
  0 siblings, 1 reply; 4+ messages in thread
From: Song Jiale @ 2022-12-27 17:35 UTC (permalink / raw)
  To: dts; +Cc: Song Jiale, Jin Ling

In DPDK multi-process, all hardware operations performed by a secondary process are invalid,
so the 'port stop' and 'port reset' actions only take effect in the primary process.
Add two cases to test this.

Signed-off-by: Jin Ling <jin.ling@intel.com>
---
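
For reference, a minimal C sketch (illustrative only, not part of this patch or
of DTS) of how an application might guard ethdev control-path calls so that only
the primary process touches the hardware; the helper name and error handling are
assumptions:

    #include <stdint.h>

    #include <rte_eal.h>
    #include <rte_ethdev.h>

    /* Stop and reset a port only from the primary process; a secondary
     * process skips the calls, since its hardware operations would not
     * take effect. */
    static int
    stop_and_reset_port(uint16_t port_id)
    {
        int ret;

        if (rte_eal_process_type() != RTE_PROC_PRIMARY)
            return 0; /* secondary process: leave the port untouched */

        ret = rte_eth_dev_stop(port_id);
        if (ret != 0)
            return ret;

        return rte_eth_dev_reset(port_id);
    }

A caller in the primary process would invoke it after traffic is stopped; in a
secondary process it is a no-op.
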
 test_plans/multiprocess_test_plan.rst | 138 ++++++++++++++++++++++++++
 1 file changed, 138 insertions(+)

diff --git a/test_plans/multiprocess_test_plan.rst b/test_plans/multiprocess_test_plan.rst
index bfef1ca9..6520243e 100644
--- a/test_plans/multiprocess_test_plan.rst
+++ b/test_plans/multiprocess_test_plan.rst
@@ -17,6 +17,9 @@ twice - once as a primary instance, and once as a secondary instance. Messages
 are sent from primary to secondary and vice versa, demonstrating the processes
 are sharing memory and can communicate using rte_ring structures.
 
+In DPDK multi-process, all hardware operations performed by a secondary process are invalid,
+so the ``port stop`` and ``port reset`` actions only take effect in the primary process.
+
 Prerequisites
 -------------
 
@@ -969,3 +972,138 @@ Test Case: test_multiprocess_negative_exceed_process_num
     the first and second processes should be launched successfully
     the third process should be launched failed and output should contain the following string:
     'multi-process option proc-id(2) should be less than num-procs(2)'
+
+Test Case: test_multiprocess_port_stop
+======================================
+Subcase 1: secondary_port_stop
+------------------------------
+test steps
+~~~~~~~~~~
+
+1. Launch the app ``testpmd`` and start the primary and secondary processes with the following arguments::
+
+   ./dpdk-testpmd -l 1,2 --proc-type=auto -a 0000:17:00.0  --log-level=ice,7 -- -i --rxq=8 --txq=8  --num-procs=2 --proc-id=0
+   ./dpdk-testpmd -l 3,4 --proc-type=auto -a 0000:17:00.0  --log-level=ice,7 -- -i --rxq=8 --txq=8  --num-procs=2 --proc-id=1
+
+2. stop port 0 in the secondary process and start forwarding in the primary process::
+
+    secondary process:
+      testpmd> port stop 0
+
+    primary process:
+      testpmd> set fwd rxonly
+      testpmd> set verbose 1
+      testpmd> start
+
+3. send 1 packet from scapy::
+
+    >>> sendp([Ether(dst="B4:96:91:BB:64:54", src="52:00:00:00:00:00")/IP()/Raw(load="P"*20)], iface="ens6")
+
+expected result
+~~~~~~~~~~~~~~~
+
+Check that one packet was received::
+
+   primary process:
+      testpmd> port 0/queue 0: received 1 packets
+
+      testpmd> stop
+
+      ---------------------- Forward statistics for port 0  ----------------------
+        RX-packets: 1              RX-dropped: 0             RX-total: 1
+        TX-packets: 0              TX-dropped: 0             TX-total: 0
+      ----------------------------------------------------------------------------
+
+Subcase 2: primary_port_stop
+----------------------------
+test steps
+~~~~~~~~~~
+
+1. Launch the app ``testpmd`` and start the primary and secondary processes with the following arguments::
+
+   ./dpdk-testpmd -l 1,2 --proc-type=auto -a 0000:17:00.0  --log-level=ice,7 -- -i --rxq=8 --txq=8  --num-procs=2 --proc-id=0
+   ./dpdk-testpmd -l 3,4 --proc-type=auto -a 0000:17:00.0  --log-level=ice,7 -- -i --rxq=8 --txq=8  --num-procs=2 --proc-id=1
+
+2. stop port 0 in the primary process and start forwarding in the secondary process::
+
+    primary process:
+      testpmd> port stop 0
+
+    secondary process:
+      testpmd> set fwd rxonly
+      testpmd> set verbose 1
+      testpmd> start
+
+3. send 1 packet from scapy::
+
+    >>> sendp([Ether(dst="B4:96:91:BB:64:54", src="52:00:00:00:00:00")/IP()/Raw(load="P"*20)], iface="ens6")
+
+
+expected result
+~~~~~~~~~~~~~~~
+
+Check that no packet was received::
+
+   secondary process:
+      testpmd> stop
+
+      Telling cores to stop...
+      Waiting for lcores to finish...
+
+      ---------------------- Forward statistics for port 0  ----------------------
+      RX-packets: 0              RX-dropped: 0             RX-total: 0
+      TX-packets: 0              TX-dropped: 0             TX-total: 0
+      ----------------------------------------------------------------------------
+
+Test Case: test_multiprocess_port_reset
+========================================
+Subcase 1: primary_port_reset
+------------------------------
+test steps
+~~~~~~~~~~
+
+1. Launch the app ``testpmd`` and start the primary and secondary processes with the following arguments::
+
+    ./dpdk-testpmd -l 1,2 --proc-type=auto -a 0000:17:00.0  --log-level=ice,7 -- -i --rxq=8 --txq=8 --num-procs=2 --proc-id=0
+    ./dpdk-testpmd -l 3,4 --proc-type=auto -a 0000:17:00.0  --log-level=ice,7 -- -i --rxq=8 --txq=8 --num-procs=2 --proc-id=1
+
+
+2. stop and reset port 0 in the primary process::
+
+    primary process:
+      testpmd> port stop 0
+      testpmd> port reset 0
+
+expected result
+~~~~~~~~~~~~~~~
+
+Check in both the primary and secondary processes that the link status of
+port 0 is ``down`` (the stop and reset issued from the primary process take
+effect)::
+
+    testpmd> show port info 0
+
+Subcase 2: secondary_port_reset
+-------------------------------
+test steps
+~~~~~~~~~~
+
+1. Launch the app ``testpmd`` and start the primary and secondary processes with the following arguments::
+
+    ./dpdk-testpmd -l 1,2 --proc-type=auto -a 0000:17:00.0  --log-level=ice,7 -- -i --rxq=8 --txq=8 --num-procs=2 --proc-id=0
+    ./dpdk-testpmd -l 3,4 --proc-type=auto -a 0000:17:00.0  --log-level=ice,7 -- -i --rxq=8 --txq=8 --num-procs=2 --proc-id=1
+
+2. stop and reset port 0 in the secondary process::
+
+    secondary process:
+      testpmd> port stop 0
+      testpmd> port reset 0
+
+expected result
+~~~~~~~~~~~~~~~
+
+Check in both the primary and secondary processes that the link status of
+port 0 is still ``up`` (the stop and reset issued from the secondary process
+do not take effect)::
+
+    testpmd> show port info 0
\ No newline at end of file
-- 
2.25.1



* RE: [dts] [PATCH v2 2/2] test_plans/multiprocess: add 2 cases
  2022-12-27 17:35 ` [dts] [PATCH v2 2/2] test_plans/multiprocess: add 2 cases Song Jiale
@ 2022-12-28  3:32   ` Ling, Jin
  0 siblings, 0 replies; 4+ messages in thread
From: Ling, Jin @ 2022-12-28  3:32 UTC (permalink / raw)
  To: Jiale, SongX, dts



> -----Original Message-----
> From: Jiale, SongX <songx.jiale@intel.com>
> Sent: December 28, 2022 1:36 AM
> To: dts@dpdk.org
> Cc: Jiale, SongX <songx.jiale@intel.com>; Ling, Jin <jin.ling@intel.com>
> Subject: [dts] [PATCH v2 2/2] test_plans/multiprocess: add 2 cases
> 
> In DPDK multi-process, all hardware operations performed by a secondary
> process are invalid, so the 'port stop' and 'port reset' actions only take
> effect in the primary process.
> Add two cases to test this.
> 
> Signed-off-by: Jin Ling <jin.ling@intel.com>
> ---
>  test_plans/multiprocess_test_plan.rst | 138 ++++++++++++++++++++++++++
>  1 file changed, 138 insertions(+)
> 
Acked-by: Jin Ling <jin.ling@intel.com>

