* [dts][PATCH V1 0/2] modify re-run times from 100 to 10
@ 2023-03-28  8:55 Wei Ling
  2023-03-28  8:55 ` [dts][PATCH V1 1/2] test_plans/virtio_event_idx_interrupt: " Wei Ling
  2023-03-28  8:55 ` [dts][PATCH V1 2/2] tests/virtio_event_idx_interrupt: " Wei Ling
  0 siblings, 2 replies; 6+ messages in thread
From: Wei Ling @ 2023-03-28  8:55 UTC (permalink / raw)
  To: dts; +Cc: Wei Ling

Modify re-run times from 100 to 10 to reduce run time in the test plan and
test suite.

Wei Ling (2):
  test_plans/virtio_event_idx_interrupt: modify re-run times from 100 to
    10
  tests/virtio_event_idx_interrupt: modify re-run times from 100 to 10

 .../virtio_event_idx_interrupt_test_plan.rst  | 30 ++++++---
 tests/TestSuite_virtio_event_idx_interrupt.py | 67 ++++++++-----------
 2 files changed, 49 insertions(+), 48 deletions(-)

-- 
2.25.1



* [dts][PATCH V1 1/2] test_plans/virtio_event_idx_interrupt: modify re-run times from 100 to 10
  2023-03-28  8:55 [dts][PATCH V1 0/2] modify re-run times from 100 to 10 Wei Ling
@ 2023-03-28  8:55 ` Wei Ling
  2023-03-28  8:55 ` [dts][PATCH V1 2/2] tests/virtio_event_idx_interrupt: " Wei Ling
  1 sibling, 0 replies; 6+ messages in thread
From: Wei Ling @ 2023-03-28  8:55 UTC (permalink / raw)
  To: dts; +Cc: Wei Ling

Modify re-run times from 100 to 10 to reduce run time.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 .../virtio_event_idx_interrupt_test_plan.rst  | 30 ++++++++++++-------
 1 file changed, 20 insertions(+), 10 deletions(-)

diff --git a/test_plans/virtio_event_idx_interrupt_test_plan.rst b/test_plans/virtio_event_idx_interrupt_test_plan.rst
index ad7c3780..5a121544 100644
--- a/test_plans/virtio_event_idx_interrupt_test_plan.rst
+++ b/test_plans/virtio_event_idx_interrupt_test_plan.rst
@@ -23,8 +23,9 @@ Test Case 1: Compare interrupt times with and without split ring virtio event id
 1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i
-    --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost \
+	--vdev 'net_vhost,iface=vhost-net,queues=1' \
+	-- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
 2. Launch VM::
@@ -56,7 +57,9 @@ Test Case 2: Split ring virtio-pci driver reload test
 1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost \
+	--vdev 'net_vhost,iface=vhost-net,queues=1' \
+	-- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
 2. Launch VM::
@@ -86,7 +89,7 @@ Test Case 2: Split ring virtio-pci driver reload test
     ifconfig [ens3] 1.1.1.2
     tcpdump -i [ens3]
 
-6. Rerun step4 and step5 100 times to check event idx workable after driver reload.
+6. Rerun step4 and step5 10 times to check that event idx works after driver reload.
 
 Test Case 3: Wake up split ring virtio-net cores with event idx interrupt mode 16 queues test
 =============================================================================================
@@ -94,7 +97,9 @@ Test Case 3: Wake up split ring virtio-net cores with event idx interrupt mode 1
 1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost \
+	--vdev 'net_vhost,iface=vhost-net,queues=16' \
+	-- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
     testpmd>start
 
 2. Launch VM::
@@ -129,8 +134,9 @@ Test Case 4: Compare interrupt times with and without packed ring virtio event i
 1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i
-    --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost \
+	--vdev 'net_vhost,iface=vhost-net,queues=1' \
+	-- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
 2. Launch VM::
@@ -162,7 +168,9 @@ Test Case 5: Packed ring virtio-pci driver reload test
 1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost \
+	--vdev 'net_vhost,iface=vhost-net,queues=1' \
+	-- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
 2. Launch VM::
@@ -192,7 +200,7 @@ Test Case 5: Packed ring virtio-pci driver reload test
     ifconfig [ens3] 1.1.1.2
     tcpdump -i [ens3]
 
-6. Rerun step4 and step5 100 times to check event idx workable after driver reload.
+6. Rerun step4 and step5 10 times to check that event idx works after driver reload.
 
 Test Case 6: Wake up packed ring virtio-net cores with event idx interrupt mode 16 queues test
 ==============================================================================================
@@ -200,7 +208,9 @@ Test Case 6: Wake up packed ring virtio-net cores with event idx interrupt mode
 1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost \
+	--vdev 'net_vhost,iface=vhost-net,queues=16' \
+	-- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
     testpmd>start
 
 2. Launch VM::
-- 
2.25.1



* [dts][PATCH V1 2/2] tests/virtio_event_idx_interrupt: modify re-run times from 100 to 10
  2023-03-28  8:55 [dts][PATCH V1 0/2] modify re-run times from 100 to 10 Wei Ling
  2023-03-28  8:55 ` [dts][PATCH V1 1/2] test_plans/virtio_event_idx_interrupt: " Wei Ling
@ 2023-03-28  8:55 ` Wei Ling
  2023-04-04  3:13   ` He, Xingguang
  2023-04-11  8:57   ` lijuan.tu
  1 sibling, 2 replies; 6+ messages in thread
From: Wei Ling @ 2023-03-28  8:55 UTC (permalink / raw)
  To: dts; +Cc: Wei Ling

1. Modify re-run times from 100 to 10 to reduce run time.
2. Use the pmd_output API instead of send_expect() to start testpmd.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
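A minimal sketch of the send_expect() -> PmdOutput replacement described
in point 2 above, assuming the framework.pmd_output interface exercised
in the diff below (session name and testpmd parameters are illustrative):

    from framework.pmd_output import PmdOutput

    # Open a dedicated DUT session and wrap it in the PmdOutput helper.
    vhost_user = self.dut.new_session(suite="vhost-user")
    vhost_user_pmd = PmdOutput(self.dut, vhost_user)

    # start_testpmd() assembles and launches the full testpmd command
    # line, replacing the hand-built string previously passed to
    # send_expect().
    vhost_user_pmd.start_testpmd(
        cores=self.core_list,  # as returned by self.dut.get_core_list()
        eal_param="--vdev 'net_vhost,iface=vhost-net,queues=1'",
        param="--nb-cores=1 --txd=1024 --rxd=1024",
        prefix="vhost-user",
        fixed_prefix=True,
        ports=[self.port_pci],
    )
    vhost_user_pmd.execute_cmd("start")  # was: send_expect("start", "testpmd> ", 30)
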
 tests/TestSuite_virtio_event_idx_interrupt.py | 67 ++++++++-----------
 1 file changed, 29 insertions(+), 38 deletions(-)

diff --git a/tests/TestSuite_virtio_event_idx_interrupt.py b/tests/TestSuite_virtio_event_idx_interrupt.py
index 620cf794..bfc44cb4 100644
--- a/tests/TestSuite_virtio_event_idx_interrupt.py
+++ b/tests/TestSuite_virtio_event_idx_interrupt.py
@@ -2,17 +2,12 @@
 # Copyright(c) 2019 Intel Corporation
 #
 
-"""
-DPDK Test suite.
-Virtio idx interrupt need test with l3fwd-power sample
-"""
-
 import _thread
 import re
 import time
 
-import framework.utils as utils
 from framework.pktgen import PacketGeneratorHelper
+from framework.pmd_output import PmdOutput
 from framework.test_case import TestCase
 from framework.virt_common import VM
 
@@ -22,8 +17,6 @@ class TestVirtioIdxInterrupt(TestCase):
         """
         Run at the start of each test suite.
         """
-        self.queues = 1
-        self.nb_cores = 1
         self.dut_ports = self.dut.get_ports()
         self.verify(len(self.dut_ports) >= 1, "Insufficient ports for testing")
         self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
@@ -32,7 +25,7 @@ class TestVirtioIdxInterrupt(TestCase):
         )
         self.dst_mac = self.dut.get_mac_address(self.dut_ports[0])
         self.base_dir = self.dut.base_dir.replace("~", "/root")
-        self.pf_pci = self.dut.ports_info[0]["pci"]
+        self.port_pci = self.dut.ports_info[0]["pci"]
         self.out_path = "/tmp"
         out = self.tester.send_expect("ls -d %s" % self.out_path, "# ")
         if "No such file or directory" in out:
@@ -41,7 +34,8 @@ class TestVirtioIdxInterrupt(TestCase):
         self.pktgen_helper = PacketGeneratorHelper()
         self.app_testpmd_path = self.dut.apps_name["test-pmd"]
         self.testpmd_name = self.app_testpmd_path.split("/")[-1]
-        self.device_str = None
+        self.vhost_user = self.dut.new_session(suite="vhost-user")
+        self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user)
 
     def set_up(self):
         """
@@ -52,7 +46,6 @@ class TestVirtioIdxInterrupt(TestCase):
         self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#")
         self.dut.send_expect("killall -s INT qemu-system-x86_64", "#")
         self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#")
-        self.vhost = self.dut.new_session(suite="vhost")
 
     def get_core_mask(self):
         self.core_config = "1S/%dC/1T" % (self.nb_cores + 1)
@@ -62,39 +55,38 @@ class TestVirtioIdxInterrupt(TestCase):
         )
         self.core_list = self.dut.get_core_list(self.core_config)
 
-    def start_vhost_testpmd(self, dmas=None, mode=False):
+    def start_vhost_testpmd(self):
         """
         start the testpmd on vhost side
         """
-        # get the core mask depend on the nb_cores number
         self.get_core_mask()
-        testcmd = self.app_testpmd_path + " "
-        vdev = [
-            "net_vhost,iface=%s/vhost-net,queues=%d " % (self.base_dir, self.queues)
-        ]
-        eal_params = self.dut.create_eal_parameters(
-            cores=self.core_list, prefix="vhost", ports=[self.pf_pci], vdevs=vdev
+        eal_param = "--vdev 'net_vhost,iface=%s/vhost-net,queues=%d'" % (
+            self.base_dir,
+            self.queues,
         )
-        para = " -- -i --nb-cores=%d --txd=1024 --rxd=1024 --rxq=%d --txq=%d" % (
+        param = "--nb-cores=%d --txd=1024 --rxd=1024 --rxq=%d --txq=%d" % (
             self.nb_cores,
             self.queues,
             self.queues,
         )
-        command_line = testcmd + eal_params + para
-        self.vhost.send_expect(command_line, "testpmd> ", 30)
-        self.vhost.send_expect("start", "testpmd> ", 30)
+        self.vhost_user_pmd.start_testpmd(
+            cores=self.core_list,
+            eal_param=eal_param,
+            param=param,
+            prefix="vhost-user",
+            fixed_prefix=True,
+            ports=[self.port_pci],
+        )
+        self.vhost_user_pmd.execute_cmd("start")
 
-    def start_vms(self, packed=False, mode=False, set_target=False, bind_dev=False):
+    def start_vms(self, packed=False):
         """
         start qemus
         """
         self.vm = VM(self.dut, "vm0", "vhost_sample")
         vm_params = {}
         vm_params["driver"] = "vhost-user"
-        if mode:
-            vm_params["opt_path"] = "%s/vhost-net,%s" % (self.base_dir, mode)
-        else:
-            vm_params["opt_path"] = "%s/vhost-net" % self.base_dir
+        vm_params["opt_path"] = "%s/vhost-net" % self.base_dir
         vm_params["opt_mac"] = "00:11:22:33:44:55"
         opt_args = (
             "mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on"
@@ -107,7 +99,7 @@ class TestVirtioIdxInterrupt(TestCase):
         vm_params["opt_settings"] = opt_args
         self.vm.set_vm_device(**vm_params)
         try:
-            self.vm_dut = self.vm.start(set_target=set_target, bind_dev=bind_dev)
+            self.vm_dut = self.vm.start(set_target=False, bind_dev=False)
             if self.vm_dut is None:
                 raise Exception("Set up VM ENV failed")
         except Exception as e:
@@ -202,7 +194,7 @@ class TestVirtioIdxInterrupt(TestCase):
         """
         check each queue has receive packets on vhost side
         """
-        out = self.vhost.send_expect("stop", "testpmd> ", 60)
+        out = self.vhost_user_pmd.execute_cmd("stop")
         print(out)
         for queue_index in range(0, self.queues):
             queue = re.search("Port= 0/Queue=\s*%d" % queue_index, out)
@@ -217,14 +209,14 @@ class TestVirtioIdxInterrupt(TestCase):
                 "The queue %d rx-packets or tx-packets is 0 about " % queue_index
                 + "rx-packets:%d, tx-packets:%d" % (rx_packets, tx_packets),
             )
-        self.vhost.send_expect("clear port stats all", "testpmd> ", 60)
+        self.vhost_user_pmd.execute_cmd("clear port stats all")
 
     def stop_all_apps(self):
         """
         close all vms
         """
         self.vm.stop()
-        self.vhost.send_expect("quit", "#", 20)
+        self.vhost_user_pmd.quit()
 
     def test_perf_split_ring_virito_pci_driver_reload(self):
         """
@@ -233,9 +225,9 @@ class TestVirtioIdxInterrupt(TestCase):
         self.queues = 1
         self.nb_cores = 1
         self.start_vhost_testpmd()
-        self.start_vms()
+        self.start_vms(packed=False)
         self.config_virito_net_in_vm()
-        res = self.check_packets_after_reload_virtio_device(reload_times=100)
+        res = self.check_packets_after_reload_virtio_device(reload_times=10)
         self.verify(res is True, "Should increase the wait times of ixia")
         self.stop_all_apps()
 
@@ -248,7 +240,7 @@ class TestVirtioIdxInterrupt(TestCase):
         self.queues = 16
         self.nb_cores = 16
         self.start_vhost_testpmd()
-        self.start_vms()
+        self.start_vms(packed=False)
         self.config_virito_net_in_vm()
         self.start_to_send_packets(delay=15)
         self.check_each_queue_has_packets_info_on_vhost()
@@ -263,7 +255,7 @@ class TestVirtioIdxInterrupt(TestCase):
         self.start_vhost_testpmd()
         self.start_vms(packed=True)
         self.config_virito_net_in_vm()
-        res = self.check_packets_after_reload_virtio_device(reload_times=100)
+        res = self.check_packets_after_reload_virtio_device(reload_times=10)
         self.verify(res is True, "Should increase the wait times of ixia")
         self.stop_all_apps()
 
@@ -286,7 +278,6 @@ class TestVirtioIdxInterrupt(TestCase):
         """
         Run after each test case.
         """
-        self.dut.close_session(self.vhost)
         self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#")
         self.dut.send_expect("killall -s INT qemu-system-x86_64", "#")
 
@@ -294,4 +285,4 @@ class TestVirtioIdxInterrupt(TestCase):
         """
         Run after each test suite.
         """
-        pass
+        self.dut.close_session(self.vhost_user)
-- 
2.25.1



* RE: [dts][PATCH V1 2/2] tests/virtio_event_idx_interrupt: modify re-run times from 100 to 10
  2023-03-28  8:55 ` [dts][PATCH V1 2/2] tests/virtio_event_idx_interrupt: " Wei Ling
@ 2023-04-04  3:13   ` He, Xingguang
  2023-04-11  8:57   ` lijuan.tu
  1 sibling, 0 replies; 6+ messages in thread
From: He, Xingguang @ 2023-04-04  3:13 UTC (permalink / raw)
  To: Ling, WeiX, dts; +Cc: Ling, WeiX

> -----Original Message-----
> From: Wei Ling <weix.ling@intel.com>
> Sent: Tuesday, March 28, 2023 4:56 PM
> To: dts@dpdk.org
> Cc: Ling, WeiX <weix.ling@intel.com>
> Subject: [dts][PATCH V1 2/2] tests/virtio_event_idx_interrupt: modify re-run
> times from 100 to 10
> 
> 1. Modify re-run times from 100 to 10 to reduce run time.
> 2. Use the pmd_output API instead of send_expect() to start testpmd.
> 
> Signed-off-by: Wei Ling <weix.ling@intel.com>
> ---

Acked-by: Xingguang He <xingguang.he@intel.com>


* [dts][PATCH V1 2/2] tests/virtio_event_idx_interrupt: modify re-run times from 100 to 10
  2023-03-28  8:55 ` [dts][PATCH V1 2/2] tests/virtio_event_idx_interrupt: " Wei Ling
  2023-04-04  3:13   ` He, Xingguang
@ 2023-04-11  8:57   ` lijuan.tu
  1 sibling, 0 replies; 6+ messages in thread
From: lijuan.tu @ 2023-04-11  8:57 UTC (permalink / raw)
  To: dts, Wei Ling; +Cc: Wei Ling

On Tue, 28 Mar 2023 16:55:57 +0800, Wei Ling <weix.ling@intel.com> wrote:
> 1. Modify re-run times from 100 to 10 to reduce run time.
> 2. Use the pmd_output API instead of send_expect() to start testpmd.
> 
> Signed-off-by: Wei Ling <weix.ling@intel.com>


Series applied, thanks


* [dts][PATCH V1 0/2] modify re-run times from 100 to 10
@ 2023-03-28  7:40 Wei Ling
  0 siblings, 0 replies; 6+ messages in thread
From: Wei Ling @ 2023-03-28  7:40 UTC (permalink / raw)
  To: dts; +Cc: Wei Ling

Modify re-run times from 100 to 10 to reduce run time in the test plan and
test suite.

Wei Ling (2):
  test_plans/virtio_event_idx_interrupt_cbdma: modify re-run times from
    100 to 10
  tests/virtio_event_idx_interrupt_cbdma: modify re-run times from 100
    to 10

 ...io_event_idx_interrupt_cbdma_test_plan.rst | 122 +++++++++---------
 ...tSuite_virtio_event_idx_interrupt_cbdma.py |   9 +-
 2 files changed, 63 insertions(+), 68 deletions(-)

-- 
2.25.1



